Artificial Intelligence 2025

Last Updated May 22, 2025

Finland

Law and Practice

Authors



Borenius is a leading independent Finnish law firm with over 120 lawyers. It has invested significantly in its global network of top-tier international law firms. The firm has offices in Finland, London, and New York, enabling it to provide excellent service to both domestic and international clients. Borenius’ technology and data practice comprises over 20 skilled lawyers, making it one of the largest teams in the field in the Nordics. The team is well equipped to advise clients on the most demanding technology-related assignments and to provide practical and strategic advice that adds value to its clients’ businesses and operations. The firm has recently advised clients on complex R&D projects, a large insurance company on the procurement of a data services platform, and universities and other public sector entities on complex data protection matters.

In Finland, there are currently no specific laws that solely govern artificial intelligence (AI) or machine learning. However, as the EU’s Artificial Intelligence Act (AI Act) entered into force on 1 August 2024, Finland is in the process of aligning national legislation with the provisions of the AI Act. Additionally, there are several laws and regulations that may apply to AI and its applications in various domains, such as privacy, data protection and intellectual property.

  • Data Protection: The General Data Protection Regulation (GDPR) applies to the processing of personal data, including AI and ML models that use personal data. The Data Protection Act (1050/2018) complements the GDPR and regulates the processing of personal data in Finland.
  • Liability: Finland applies strict liability rules to product liability, which may extend to AI and its applications. If an AI system causes harm or damage, the company or organisation responsible for the AI may be held liable.
  • Intellectual Property: The Finnish Copyright Act (404/1961), the Finnish Patents Act (550/1967) and the Finnish Trade Secrets Act (595/2018) are technologically neutral and therefore apply to AI and its applications, protecting the rights of the creators of AI technologies.
  • Discrimination: The Finnish Non-discrimination Act (1325/2014) prohibits discrimination based on race, gender, religion and other protected grounds, including discrimination based on AI-driven decision-making.

In addition to the above, the Finnish government has expressed its plan to implement AI systems and automated decision-making in government activities.

It should be borne in mind that the most recent technologies and AI solutions may not have been publicly disclosed, as companies often seek to maintain a competitive edge by keeping their innovations confidential. As a result, the industry use of AI is discussed mainly on a general level. However, based on our experience, AI solutions are already in use in various industries, such as retail, banking, energy, entertainment, logistics and manufacturing. These industries commonly employ AI to optimise processes and improve the accuracy of data output.

A good example of the opportunities created by AI and automation is the “Industrial Internet of Things”, which can be used by companies in different industrial sectors to improve and optimise their operations. Today’s industrial machinery constantly generates data which, together with data from customers, can be used to optimise production volumes, for example. When all of this happens automatically, it may affect the position of the employees who perform the same tasks. This also creates a whole new set of opportunities for cloud service providers to offer companies data pools for such uses. Such data can then potentially be used for training AI systems and producing more accurate and relevant results – eg, in the retail industry. Also, as a consequence of recent developments in the field of generative AI solutions, which are often based on large language models (such as ChatGPT), companies in different industries are starting to explore opportunities to use and integrate AI into their business operations.

The Finnish government is actively involved in facilitating the adoption and advancement of AI for industry use through various strategies and initiatives.

Finland has engaged multiple ministries, such as the Ministry of Economic Affairs and Employment, Ministry of Justice, and the Ministry of Finance, in drafting AI-related policies and guidelines. These efforts aim to increase AI usage in a safe and responsible manner, with a strong emphasis on digitalisation, economic growth, and ethical AI deployment. The AI 4.0 programme, for instance, is a government initiative designed to accelerate business digitalisation and strengthen Finland’s position in digital and AI advancements.

Furthermore, Finland’s engagement in AI is also characterised by its focus on open data and the ethical use of AI. The Finnish Centre for Artificial Intelligence (FCAI) emphasises the importance of adopting ethical guidelines, new methods of data collection, provision of high-quality open government data, and involving the public in discussions around AI.

Business Finland, a Finnish governmental funding agency, also supports the development of AI technology through key programmes such as the joint Research, Development and Innovation Programme ICT 2023, which funds high-quality scientific research that is expected to have both scientific and societal impact.

Finland’s approach to AI regulation is primarily conservative. Its proposal for new national AI-specific legislation, the Act on the Supervision of Certain AI Systems (in Finnish: Laki eräiden tekoälyjärjestelmien valvonnasta), complements the AI Act and existing laws by providing for market surveillance, oversight, and penalties as required by the AI Act.

The regulatory perspective does not explicitly distinguish generative and predictive AI from other forms of AI. Instead, Finland emphasises ensuring safety and compliance with EU legislation, rather than introducing entirely new legal concepts for different types of AI. As the AI-specific national legislation has not yet become applicable, the focus has been on supporting AI development and adoption through various programmes and preparing the national legislation discussed in more detail in 3.7 Proposed AI-Specific Legislation and Regulations.

As AI-specific legislation in Finland is currently mostly absent, Finland’s AI-specific regulation relies on government-issued policies and guidelines, such as the ethical guidelines for the use of AI.

Under the Administrative Procedure Act (434/2003), an authority may make an automated decision in a case that does not involve matters which, in the authority’s prior assessment, would require case-by-case consideration. It is essential to note that automated decision-making tools may therefore not exercise discretion in decision-making. As a result, the reform of the law does not allow for the use of overly advanced AI. Decision-making can therefore be automatic, but not autonomous.

Additionally, the Finnish government has issued a draft proposal introducing new national legislation on AI, including the Act on the Supervision of Certain AI Systems (in Finnish: Laki eräiden tekoälyjärjestelmien valvonnasta) and revisions to existing laws to supplement the requirements of the AI Act. The Act on the Supervision of Certain AI Systems becomes applicable as of 2 August 2025. Please see 3.7 Proposed AI-Specific Legislation and Regulations for further information.

In late 2020, the Finnish Ministry of Economic Affairs and Employment established a steering group to devise a plan to accelerate AI adoption and further the so-called “fourth industrial revolution” in Finland. The Artificial Intelligence 4.0 programme was aimed at fostering the development and deployment of AI and digital technologies, with a special focus on small and medium-sized enterprises (SMEs) in the manufacturing sector. The programme’s final report, released in December 2022, outlines 11 specific actions designed to position Finland as a leader in the twin transition by 2030.

The Finnish government also launched the National Artificial Intelligence Programme, AuroraAI, in 2020; the programme concluded towards the end of 2022. The project’s key contribution was the creation of the AuroraAI network, an AI-driven technical framework that facilitates the exchange of information and interoperability among various services and platforms.

In October 2024, the Ministry of Finance formed a co-operation group, supported by a wider public-sector network, to unify generative AI pilot projects, share best practices, and foster collaboration across ministries. This effort builds on the earlier ministerial AI community established in 2023, complements Finland’s previous AI initiatives such as AuroraAI, and supports future policy development by integrating lessons learned into broader national deployments of generative AI.

The entire field of AI regulation in Finland is to a very large extent defined by the EU Artificial Intelligence Act and the upcoming supplementary national legislation. The EU AI Act entered into force on 1 August 2024, and Finland is currently in the process of aligning its national legislation with the AI Act.

Throughout the EU legislative process, Finland has taken a positive view of AI-related regulation and generally supports responsible AI development.

Under the EU AI Act, an “artificial intelligence system” is defined as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This is largely in line with the definition used by the OECD in its updated “Recommendation of the Council on Artificial Intelligence” from November 2023.

We have not identified any notable conflicts between national legislation and the AI Act.

There is no applicable information in this jurisdiction.

As a member of the European Union, Finland is bound by the EU’s regulations and directives. The Finnish government has accordingly stated its strategy of avoiding national legislation – in particular, laws that conflict with EU law – and focusing instead on implementing EU-level legislation.

In terms of data protection laws, Finland adheres to the EU’s General Data Protection Regulation (GDPR) as well as the Data Act (EU 2023/2854), adopted in December 2023. Additionally, Finland implements the EU’s Directive on Copyright and Related Rights in the Digital Single Market (EU 2019/790), which contains specific provisions on text and data mining that are essential for AI research and development.

In light of the EU Artificial Intelligence Act (AI Act), Finland is introducing supplementary legislation that focuses on establishing a risk-based regulatory framework, designating supervisory authorities, and setting out enforcement mechanisms. The goal is to ensure AI systems do not compromise security, privacy, or fundamental rights, while still promoting innovation and trust in AI.

The AI Act aims to mitigate potential risks associated with AI technologies, such as privacy breaches and discrimination, ensuring AI’s beneficial societal and economic use. Finnish businesses in the AI sector will need to adapt to these new requirements, especially for high-risk AI systems, which may include enhanced transparency, data governance, and accountability measures. This regulatory framework could also drive innovation, pushing companies to develop compliant, ethical AI solutions.

On 24 October 2024, the government released a draft proposal introducing new legislation, including the Act on the Supervision of Certain AI Systems (in Finnish: Laki eräiden tekoälyjärjestelmien valvonnasta), together with amendments to existing laws that complement and specify the EU AI Act’s requirements, establishing a comprehensive framework for the supervision and enforcement of AI systems, especially high-risk AI systems. Most importantly, the Act on the Supervision of Certain AI Systems designates the relevant market surveillance authorities, a process for notifying authorities, and the single point of contact for co-operation with EU bodies. The proposal remains under review and is expected to enter into force on 2 August 2025.

In Finland, there are no judgments or court decisions relating to AI yet, most likely owing to the scarcity of AI-specific legislation.

The only relevant decisions have come from the Deputy Data Protection Ombudsman and the National Non-Discrimination and Equality Tribunal.

Deputy Data Protection Ombudsman

The Deputy Data Protection Ombudsman has handled AI-related issues in two decisions concerning the use of an automated decision-making tool. Both decisions concern a tool designed to identify patients whose treatment should be adjusted and to refer them for the right treatment. In the decisions, the Deputy Data Protection Ombudsman assessed whether the tool was making automated individual decisions within the meaning of Article 22 of the GDPR and whether the data controller was acting in accordance with the GDPR. Because the cases concerned data protection issues rather than AI as such, the discussion of AI is limited. However, the Deputy Data Protection Ombudsman raised concerns that the algorithm used in the tool could discriminate against patients who would be excluded from specific proactive healthcare interventions as a result of an assessment based on the profiling performed by the algorithm. Nevertheless, this consideration was not relevant in the decisions.

The National Non-Discrimination and Equality Tribunal

In brief, the National Non-Discrimination and Equality Tribunal’s decision concerned discrimination in access to credit services. The person involved had applied for credit to finance his purchases when buying household goods from an online shop. However, the credit company did not grant the credit. The decision was based on credit ratings produced by credit reference agencies using statistical methods. The ratings did not take the applicant’s actual ability to pay into account but were based on general statistics relating to, for example, place of residence, gender, language and age. Had the Finnish-speaking male applicant been a Swedish-speaking female, he would have been granted credit.

The algorithm, which produced credit ratings based on the above-mentioned data, was deemed discriminatory. The AI was not advanced in this case either, but a mere algorithm profiling clients based on predefined information.

Reasons for the Lack of Case Law

There are various reasons for the lack of court rulings. For example, the anticipation of EU legislation and the scarcity of specific legislation might have created a situation where there have not been any questions on the interpretation of law that would require settlement in court. Another reason could be a lack of litigation. If there are no disputes on the use of AI, there is also no need for legal proceedings that would produce case law. Also, if potential disputes have been settled or resolved through alternative dispute resolution mechanisms, they would not have generated court rulings.

In Finland, regulation is prepared mainly in the various ministries, which draft legislative proposals for the government; the government then submits them to parliament. The ministries may propose any kind of new legislation, or amendments to existing laws, with effect throughout Finland. Agencies and standard-setting bodies issuing decrees and soft law are further discussed in 6. Standard-Setting Bodies.

So far, regarding AI-related regulation, the most active ministries have been the Ministry of Economic Affairs and Employment, the Ministry of Justice and the Ministry of Finance. The Ministry of Justice prepared the legislation on automated decision-making in public authorities. The Information Management Board, which supervises the use of AI by public authorities, acts under the Ministry of Finance. The Ministry of Economic Affairs and Employment, in turn, set up the now finalised Artificial Intelligence 4.0 programme, which aimed to accelerate business digitalisation.

There are several non-binding recommendations and policy documents that guide ethical and responsible AI use. The Finnish Ministry of Economic Affairs and Employment has published frameworks encouraging transparency, accountability, and data protection in AI-driven services, while also promoting innovation. These guidelines apply broadly to various AI deployments, emphasising ethical considerations, fairness, and societal impact, rather than focusing on a single sector. Although not legally mandatory, they serve as practical guidelines for organisations, shaping industry best practices and influencing future regulatory developments.

Finland has not yet seen high-profile enforcement actions targeting AI specifically, and there are no known cases in which large fines were imposed solely for AI-related infringements. The Finnish Data Protection Ombudsman and the Finnish Competition and Consumer Authority (FCCA) have both indicated that they keep a watchful eye on AI-driven activities, but they have primarily relied on general data protection or competition law when taking action. So far, orders to remedy data processing issues or adjust business practices have been the main outcomes, rather than heavy fines. Although no AI-specific cases are currently pending at a large scale, regulators have stressed that future sanctions could become more significant if organisations breach fundamental principles of data protection, competition rules, or consumer protection in their AI deployments.

The Finnish Standards Association (SFS) is the national standardisation organisation in Finland. In 2018, the SFS established a national standardising group SFS/SR 315 to develop standards related to AI. SFS/SR 315 currently focuses on the Finnish concepts and terms of AI, reference architecture, ethical and societal aspects of AI and AI management systems. Members of the group are also involved in producing and commenting on the content of both European and international standards.

AI-Related Soft Law

In addition to the SFS, there are several public bodies in Finland that provide AI-related guidance and soft law. For example, the Finnish Centre for Artificial Intelligence (FCAI) is a community of AI experts in Finland, initiated by Aalto University, the University of Helsinki, and the VTT Technical Research Centre of Finland. It provides research-based knowledge and guidance on AI and its applications to academia, industry and government organisations.

Finnish supervisory authorities also have a role in the field of AI soft law. The Data Protection Ombudsman supervises compliance with data protection legislation, which naturally applies to processing of personal data by AI systems. The Non-Discrimination Ombudsman supervises compliance with non-discrimination provisions in the use of AI and algorithms. As previously mentioned, the Deputy Data Protection Ombudsman has issued decisions concerning automated decision-making. Furthermore, the Non-Discrimination Ombudsman took a case concerning automated decision-making in lending to the National Non-Discrimination and Equality Tribunal in 2017.

Although recommendations and guidance by these bodies are not legally binding, they provide valuable guidance and recommendations for the development, deployment and use of AI in Finland.

Standardisation in Finland is closely connected to international work. 97% of the standards approved in Finland are of international origin. International standards are sometimes complemented by nationally developed standards. The most important international standard-setting bodies affecting Finland include ISO, CEN, IEC and CENELEC.

In spring 2023, new legislation on automated decision-making by public authorities entered into force. Until then, the use of AI by public authorities had required special legislation for practically every type of decision. Even so, the Finnish Tax Administration and the Social Insurance Institution, which both make millions of administrative decisions a year, had already been using automated decision-making based on special legislation.

The new, general legislation allows public authorities to use automated decision-making without special legislation as long as the use of AI tools complies with the law. That naturally makes the use of AI easier for administrative bodies, as decision-specific legislation is no longer needed, and AI systems will therefore most likely be used more widely in the near future. Indeed, the preparatory works of the new legislation state that general legislation allowing automated decision-making is needed due to the increased use of, and demand for, AI.

However, the use of AI enabled by the new legislation is not overly advanced. It only allows automated decision-making in situations that do not require any discretion. The algorithm must refer a matter to a human being if it cannot be resolved without deliberation. As a result, AI cannot be used, for example, in hiring government employees, as the hiring process always requires discretion and cannot be based solely on indisputable facts.

The Finnish government also published new guidelines in February 2025 on how generative AI should be utilised responsibly in public administration as a supportive tool. Although the guidance allows a wider range of AI functionalities to improve governmental tasks, there are clear recommendations that generative AI should not be integrated into workflows that require legal or discretionary judgement. Significant emphasis is placed on transparency and accountability, with the recommendation that authorities state openly when AI has been used.

As there was no general legislation or guidelines concerning the use of AI before the ones mentioned in 7.1 Government Use of AI, and Finnish governmental authorities have only recently started using AI services and applications with more sophisticated AI, there are not yet any judicial decisions or rulings on government use of AI. Because the new legislation and guidelines were introduced quite recently, and practical AI use in public administration is still at an early stage, there are no pending cases concerning AI use in governmental bodies at the time of writing.

The Finnish Transport and Communications Agency and the National Cyber Security Centre have published a study on cyberattacks enabled by AI. Although the threat of AI-enabled cyberattacks is currently still considered low, it is acknowledged that the intelligent automation provided by AI systems will enhance traditional cyberattacks by increasing their speed, scale, coverage and personalised targeting, thus increasing their overall success. AI can also make attacks more sophisticated, tailored, malleable and harder to detect. AI attacks can include targeted data phishing, impersonation and imitation, and better hiding of malware activity. A slow or ineffective response to an advanced AI attack may allow the attacker to penetrate even deeper into systems or networks before being caught.

The study states that cybersecurity must therefore become more automated to respond to AI-enabled cyberattacks. Only automated defence systems will be able to match the speed of AI-enabled attacks. These defence systems will need AI-based decision-making to detect and respond to such attacks.

In Finland, there is currently no legislation specifically on the security risks posed by AI. The development of national legislation on AI has been in a kind of wait-and-see mode, as when the AI Act becomes fully applicable, it will take precedence over national legislation. It remains to be seen whether national legislation on cybersecurity will also start to evolve once the AI Act has become fully applicable.

An example of national legislation on cybersecurity is the Act on the Operation of the Government Security Network, which requires important governmental bodies to have a security network in place to ensure that their communications and operations are uninterrupted even in exceptional situations. However, it does not contain any AI-specific provisions.

Generative AI technology has recently attracted a lot of attention in Finland due to its potential to improve businesses’ efficiency and productivity. However, as with any technology, it comes with potential risks that must be considered. These are some of the risks that are currently being discussed in the industry:

  • Confidentiality and Intellectual Property Risks: Since generative AI models often absorb user-input data to improve the model over time, they could end up exposing private or proprietary information to the public. The risk of others accessing sensitive information increases the more an organisation uses the technology.
  • Inaccuracies and Hallucination: Even when generative AI models are used correctly, there is always a risk that they generate false or misleading content. Generative AI is strongly associated with the so-called hallucination problem: the AI convincingly presents things that are completely untrue. False claims can be easy to trust when they are presented in a very credible way. Use of inaccurate and untrue outputs may also potentially lead to defamation and, consequently, criminal sanctions.
  • Copyright: Who owns content once it is run through generative AI applications? Licenses and terms vary between different AI tools. However, it can be extremely complicated to determine copyright between the original rights holder of input data, the AI tool operator and another user claiming AI-generated content as their own.
  • Deepfakes: With the widespread use of deepfake content, problems such as manipulation of the public as well as attacks on personal rights and sensitive information are becoming more common. AI-generated images and videos can look extremely realistic, making them difficult for humans or even machines to detect. This material can be used to cause harm to the reputation of a company or its executives. Cybercriminals can also use generative AI to create more sophisticated phishing scams or credentials to hack into systems.
  • Attacks on Datasets: The use of generative AI also poses additional cybersecurity risks such as data poisoning, which involves manipulating the data used to train the models, and adversarial attacks. Adversarial attacks attempt to deceive generative AI models by feeding them malicious inputs, which could lead to incorrect outputs and potentially harm businesses or individuals relying on these outputs.

In Finland, the rights of data subjects, including rectification and deletion of personal data, are safeguarded under the General Data Protection Regulation (GDPR). This EU-wide legislation mandates the correction of inaccurate personal data (“right to rectification”) and allows individuals to request the deletion of their personal data under specific conditions (“right to be forgotten”). For AI applications, rectifying inaccurate data does not necessitate altering the AI model but involves correcting the erroneous information. Deletion requests typically require removing the individual’s data from the dataset without needing to delete the entire AI model, provided it does not compromise the model’s integrity.

Concerns primarily focus on the data used for training these AI systems and the data generated from user interactions. For example, applications like ChatGPT, which utilise extensive datasets potentially containing personal information, have faced challenges for possibly processing personal data, such as IP addresses, without user consent or clear guidelines for data deletion or restriction. Furthermore, AI-generated data introduces issues around data integrity, the obligation for data deletion, and limitations on data use, leaving users without assurances that their information will not be misused or inaccurately stored. To mitigate these data protection risks, the use of closed or proprietary datasets that either exclude personal data or comply with data protection laws is suggested as a safer alternative.

In Finland, the integration of AI into legal practice is currently increasing, thus transforming the traditional methodologies, notably in document analysis, legal research, predictive analytics, contract review, and automated client services. The adoption of AI tools like Legora in the practice of law exemplifies the ongoing evolution within the legal industry, highlighting a shift towards more efficient, accurate, and accessible legal services. Legora has also been implemented at Borenius to help its professionals by aggregating knowledge and simplifying legal workflows.

In general, AI or an algorithm cannot be held legally liable in Finland even if the damage is directly caused by it. That is because the legal entity doctrine has not been extended beyond natural and legal persons and only a legal entity recognised by law can be held liable for damages. Figuratively, AI can be compared to any tool. It does not matter whether a construction worker causes damage with a hammer or by their own hand – liability lies with the worker in both cases. Thus, the user of, for example, AI, an algorithm, or automated decision-making is always the one liable for the possible damages caused by the tool. For example, in most cases, a doctor is responsible for any diagnosis and treatment given, so in this respect the responsibility of the involved algorithm in decision-making itself is disregarded. Also, with regard to the activities of the authorities, even if the algorithm makes an actual administrative decision completely independently, the liability will lie with the official. For the same reasons, it does not matter which participant uses AI in supply chains, for instance, because the liability lies with the user.

However, the Finnish Chancellor of Justice has stated that, with the increase in automated decision-making, questions of apportionment of liability are central and that regulation and rules are needed as soon as possible. Liability issues related to AI algorithms have arisen in a number of health technology and autonomous car-related issues in particular but may also relate to contractual and product liability issues where the AI algorithm is involved in decision-making in one way or another. Still, at the time of writing, there is no extensive legislation on the matter in Finland.

Although the user is liable for damages caused by AI, the risk can be insured. Many insurance companies operating in Finland offer insurance for ICT services, which in most cases covers direct damages caused when using AI.

Although AI-related liability issues have not yet been comprehensively addressed through legislation, some legislation on the matter does exist. The issue was tackled for the first time at the level of law in the new chapters added to the Act on Information Management in Public Administration (906/2019) in 2023. As was the case before the new legislation, pursuant to the new provisions a machine or an AI cannot be held legally responsible for its decisions. Automated decision-making must be treated as an instrument or tool for which the user is ultimately responsible.

The European Commission was previously working on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) but has since abandoned the proposal on 12 February 2025. The Commission is currently assessing whether another proposal or another type of approach should be chosen.

Currently, non-contractual liability for AI systems is also regulated by the EU Product Liability Directive, whose revised version expressly extends to software, and under which liability extends to manufacturers of defective components.

The use of algorithms and machine learning has become increasingly prevalent in decision-making processes across many industries. However, there is growing recognition of the potential for algorithmic bias, which refers to the systematic and discriminatory effects of algorithmic decision-making systems on specific groups or individuals.

In Finland, the Avoiding AI Biases project was implemented as part of the 2021 government plan, which aimed to map the risks to fundamental rights and non-discrimination posed by machine learning-based AI systems that are either currently in use or planned for use in Finland. The project developed an assessment framework for non-discriminatory AI applications, helping to identify and manage risks of discrimination and promote equality in the use of AI. The framework can also help companies ensure that their AI systems are compliant with non-discrimination laws.
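One common heuristic that such assessment frameworks build on is comparing favourable-outcome rates between groups. The sketch below is purely illustrative and is not part of the project's published framework; the group labels, sample decisions and the four-fifths (0.8) threshold are assumptions used only to show the idea:

```python
# Illustrative sketch only: a simple "four-fifths rule" disparate impact
# check, a common heuristic for flagging possible algorithmic bias.
# Group labels, decisions, and the 0.8 threshold are assumed for illustration.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(outcomes, reference_group):
    """Ratio of each group's favourable-outcome rate to the reference group's."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical favourable (1) / unfavourable (0) decisions per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 80% favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% favourable
}

ratios = disparate_impact_ratio(decisions, reference_group="group_a")
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```

A ratio below the chosen threshold does not prove discrimination; it merely flags a decision pattern for closer legal and statistical review.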

In the public sector, healthcare and state grant decision-making have been identified as areas where bias can create significant risk. In the private sector, credit scoring and hiring practices are considered high-risk areas for algorithmic bias.

In terms of industry efforts to address bias, several companies in Finland have established their own ethical guidelines for the use of AI. For example, certain companies, such as Nokia, have developed their own AI ethics frameworks that aim to ensure that their AI systems are transparent, trustworthy and free of bias. Similarly, the Finnish Tax Administration has created a set of ethical guidelines for the use of AI.

The principle of data minimisation is a particularly challenging aspect of data protection regulation from the perspective of AI technology, at least as long as the efficiency of machine learning algorithms depends on the availability of large amounts of data. Because machine learning requires large datasets, companies utilising personal data in machine learning run a greater risk of using data for purposes other than those for which it was collected, of processing information on individuals outside the scope of data collection, and of storing data for longer than necessary. Also, as authorities in other countries have pointed out (eg, regarding ChatGPT in Italy), it may be difficult for manufacturers and users of AI systems to identify and apply the correct legal basis for the processing of personal data.

However, the use of AI technology in the protection of personal data offers several benefits. AI technology can be used as a form of privacy-enhancing technology to help organisations comply with data protection by design obligations. For example, AI can be used to create synthetic data which replicates patterns and statistical properties of personal data. This can be processed in lieu of personal data.
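As a purely illustrative sketch of this idea (the attribute names and distributions are assumptions; real synthetic-data generators are far more sophisticated, and re-identification risk must still be assessed), per-attribute summary statistics can be fitted to real records and fresh values sampled from them:

```python
# Illustrative sketch only: generating synthetic values that preserve the
# mean and standard deviation of each numeric attribute, so the synthetic
# records (tied to no real individual) can be processed in lieu of personal
# data. The attribute names and distributions are assumptions.
import random
import statistics

random.seed(0)

# Hypothetical numeric attributes recorded for 200 individuals.
real = {
    "age": [random.gauss(40, 10) for _ in range(200)],
    "monthly_income": [random.gauss(3200, 600) for _ in range(200)],
}

def synthesise(column, n):
    """Sample n fresh values from a normal distribution fitted to one attribute."""
    mu = statistics.fmean(column)
    sigma = statistics.stdev(column)
    return [random.gauss(mu, sigma) for _ in range(n)]

synthetic = {name: synthesise(values, 200) for name, values in real.items()}

for name in real:
    print(f"{name}: real mean {statistics.fmean(real[name]):.1f}, "
          f"synthetic mean {statistics.fmean(synthetic[name]):.1f}")
```

The printed means should roughly coincide, while no synthetic row corresponds to any real individual.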

AI can also be used to minimise the risk of privacy breaches, for example by automating the encryption of personal data and thereby reducing human error. The use of AI technology can also increase efficiency and accuracy in detecting and responding to potential data breaches. However, implementing such measures may come at the cost of hindering business operations and access to data, and data breaches can still occur despite stringent security measures.

Another issue arises from the processing of personal data and machine-generated data without direct human supervision. While automated data processing can increase efficiency and speed, it may also perpetuate implicit biases and discrimination that may not be immediately apparent. Without direct human supervision, errors and mistakes made by AI systems may go undetected, leading to adverse outcomes for individuals. The Finnish context of this issue is further discussed in 11.1 Algorithmic Bias.

AI has powered the use of biometric technologies, including facial recognition applications, which are increasingly used for verification, identification and categorisation purposes. Facial recognition technologies are inherently legally problematic, as they directly affect fundamental rights such as the protection of private life, the protection of personal data and the right to personal integrity, which are protected both at the EU and constitutional level. Facial recognition technology is built on the processing of biometric data, therefore it encompasses the processing of special categories of personal data under the GDPR.

The processing of biometric data as data belonging to special categories of personal data is in principle prohibited under the GDPR without consent or direct justification under the GDPR or other legislation. In addition, certain measures or contractual procedures may be required. As a result, the use of facial recognition technology also requires a legal basis under the GDPR, such as explicit consent, a statutory task or a public interest.

Companies using facial recognition technology must ensure that they comply with all applicable laws and regulations, obtain necessary consents and protect the biometric information they collect. Failure to do so may result in fines and reputational harm.

Based on the wording of the EU AI Act, all remote biometric identification systems are considered “high-risk AI systems” subject to strict requirements (unless falling under the prohibition in Article 5 of the AI Act), except when the AI system is intended to be used for biometric verification whose sole purpose is to confirm that a natural person is the person they claim to be. The Act also sets forth specific transparency obligations for systems that are not considered high-risk.

The use of chatbots and other AI technologies to substitute for services rendered by natural persons is regulated by the GDPR and national privacy laws in Finland. The GDPR requires that individuals be informed when their personal data is being processed, including when AI is used to process that data.

It is provided under the Act on the Provision of Digital Services (306/2019) that public authorities may provide advice to customers using service automation, such as chatbots, only when, inter alia, the user is:

  • informed that they are exchanging messages with the service automation;
  • offered the possibility of contacting a natural person within the authority to continue the service; and
  • offered the possibility of recording the exchange of messages with the service automation.

The use of technologies to make undisclosed suggestions or to manipulate consumer behaviour, such as dark patterns, may be considered an unfair commercial practice under the Finnish Consumer Protection Act. Dark patterns refer to various methods of designing the structure of websites, software, mobile applications or other user interfaces so as to deceive consumers or otherwise cause them to do something they did not originally intend to do. For example, companies may not take automatic measures that incur additional costs for the consumer; under the Consumer Protection Act, the consumer’s explicit consent to the additional costs must be sought. Dark patterns are supervised by the Finnish Competition and Consumer Authority (FCCA).

Adopting AI in procurement introduces risks that must be addressed in contracts between customers and AI suppliers, particularly for AI as a Service (AIaaS) models. Key considerations include:

  • Data Privacy and Security: Contracts must include strict data protection measures, defining how sensitive information is safeguarded, access permissions, and breach protocols.
  • Bias and Decision-Making: Agreements should mandate regular audits and bias mitigation, with suppliers providing transparency about their AI training datasets and corrective actions for identified biases.
  • Transparency and Explainability: Contracts need to require a degree of explainability from AI systems, ensuring suppliers can clarify the AI’s decision-making processes.
  • Data Quantity and Accuracy: For internal AI, agreements should address the need for substantial, high-quality data for AI training, including data quality assessment and improvement strategies.
  • Performance Guarantees: Contracts should outline expected AI performance metrics and outcomes, including remedies for failing to meet these standards due to inaccurate predictions or biased results.

Digitalisation has been strongly linked to employment in Finland. AI in employment may lead to cost savings, shorter processing times and better recruitment decisions, but it may also pose considerable risks, including with regard to applicants’ rights to privacy and equal treatment. Firstly, data protection legislation must be considered when developing AI-based employment tools and when processing personal data. Secondly, the Non-Discrimination Act and the Act on Equality between Women and Men (609/1986) constrain the integration of AI in the employment context. Furthermore, the Employment Contracts Act (55/2001) requires that employees be treated equally and imposes obligations on the employment relationship that AI-driven processes would not necessarily satisfy.

When it comes to the question of liability, the Non-Discrimination Ombudsman has stated that the parties responsible for AI systems and the parties using them, such as employers, are always responsible for ensuring that their activities are in accordance with the non-discrimination rules.

In addition to general data protection legislation, the Act on the Protection of Privacy in Working Life (759/2004) applies to the processing of employees’ personal data in Finland. For example, it is permissible to locate employees if the employer has a valid reason to do so. In principle, location data cannot be used to monitor obligations under employment law, such as working time. However, it is possible to monitor and track working time if the employee performs all or most of their work elsewhere than at the employer’s premises. In such cases, the employer must determine the purpose of the technical monitoring of employees and the matter must be handled in a procedure referred to in the Act on Cooperation within Undertakings (1333/2021).

AI can be used to understand how to make work more efficient and safer. A study carried out by the Finnish Institute of Occupational Health sought to understand the connection between irregular working hours and sickness absences and accidents. AI was able to identify different working time patterns that were linked to the risk of accidents. The results showed that AI helps to understand how to make working conditions safer and thus make work more efficient.

Finland has the potential to be a trendsetter in the use of AI in digital platform companies, as Finland has expertise, capabilities and developers in the field of new technology applications. As previously stated, one of the objectives of the AI 4.0 programme was to increase the use of AI in SMEs. One way to achieve this is to provide systems where companies and organisations can build their own AI applications.

One of Finland’s largest grocery store chains launched a home delivery service where a robot delivers the food from the store to the customer. The delivery is ordered via a mobile app, after which the groceries are packed and put on board a robot. The robot uses AI to plan the route and to detect obstacles, people and vehicles. The service functions in limited areas but shows that different AI-based systems have found a footing in Finland.

Financial services have significantly benefited from the use of AI, for example, AI-based customer service has been used for a long time in the financial sector. AI can also be used in lending decisions, as it can quickly assess the conditions for granting a loan or credit and then make the decision. Such automated decision-making must comply with the requirements of the Finnish Data Protection Act and the GDPR. In such cases, the financial services companies must inform the credit applicant of the existence of automated decision-making and profiling and provide relevant information on the logic behind the processing, as well as on the significance of this processing and its possible consequences for the applicant.

One solution to the growing shortage of resources in the Finnish healthcare system is the extensive use of AI in healthcare. However, a very complex regulatory framework – ie, the legislation on medical devices, data protection, public authorities’ automated decision-making, and information management in social welfare and healthcare – must be taken into account when using AI in healthcare. Even though the purpose of the framework is to improve the safety of use, it slows down the integration of AI into healthcare.

A significant amount of social and health data, for example on patient treatment, examinations, and prescriptions, is recorded in the national information system services in Finland. With the help of AI, this existing data could be used for preventive care, to improve the quality of care or to achieve cost-effectiveness and efficiency. However, the use of AI for the above-mentioned activities is constrained by national legislation, such as the Act on the Status and Rights of Patients (785/1992), the Act on the Processing of Client Data in Healthcare and Social Welfare (703/2023) and the Act on the Secondary Use of Health and Social Data (552/2019).

The Deputy Data Protection Ombudsman has stated that AI may be used in healthcare. However, in order to use AI-enabled products and services in healthcare, they must pass extensive examinations testing their algorithms and demonstrating product safety as well as clinical and analytical validity. In the testing phase, the EU regulation of medical devices and Finnish data processing laws, such as the Act on the Secondary Use of Health and Social Data and the Biobank Act (688/2012), must be taken into account.

For instance, chatbots are widely used as the first point of contact between patients and healthcare providers. With AI, this service could be more personalised based on natural interaction between a patient and AI. While AI is interacting with the patient, it could identify potential problems and offer advice on them and generate suggestions for referrals or prescriptions. However, pursuant to the Act on the Provision of Digital Services, an authority must ensure in advance the appropriateness of the information and advice generated by AI. Thus, such a provision hinders the full utilisation of AI in healthcare in the public sector.

For example, in relation to the use of the electronic patient data management system (Apotti) in the largest hospital district in Finland, the aim is to integrate AI into the PDM system in the near future. AI could be used to identify work tasks that could be partially or fully automated. For example, it could be used to generate patient records based on a conversation between a doctor and a patient. In addition, AI can make use of available datasets, for example by comparing medical publications with patient records, to generate different treatment recommendations to help doctors in planning patient care.

Regulations for the use of AI in autonomous vehicles are being aligned with EU standards, particularly under the framework regulation on motor vehicles (EU) 2018/858. Proposed updates to the Finnish Traffic Law include restrictions on the use of communication devices by drivers to prevent distractions. Liability for accidents involving autonomous vehicles follows general liability principles, with various parties potentially held accountable, from insurance companies to contractors and individuals, based on negligence or reckless behaviour affecting public safety. Finland is also making efforts towards international harmonisation of autonomous vehicle regulations to comply with EU directives, including SERMI certification and CE marking.

Frameworks addressing AI algorithms, data privacy, cybersecurity, and vehicle performance are under development, emphasising cybersecurity and data protection in public IT procurement processes. Legislation is also being prepared to incorporate environmental impact considerations in public vehicle procurements. While ethical considerations for AI decision-making in critical situations are not explicitly covered, existing professional obligations in sectors like healthcare may provide some guidance on balancing public and ethical interests.

In Finland, the Government Decree on the Safe Use and Inspection of Work Equipment regulates the use of machines, tools, and other technical devices, as well as their combinations (work equipment) in work as specified in the Occupational Safety and Health Act. The adoption of new technology is also expected to improve worker safety. Additionally, the General Product Safety Regulation (GPSR) entered into force in December 2024. As a result, if AI contributes to defects in products or processes, manufacturers, system integrators, and software developers can be held accountable.

Professionals using AI are expected to uphold high standards of liability and responsibility, ensuring that AI tools are reliable and appropriately supervised. Confidentiality remains paramount, with a requirement for AI systems to comply with stringent data protection standards, safeguarding sensitive client information. Intellectual property rights concerning AI technologies necessitate careful consideration to ensure lawful use and respect for existing IP rights.

Client consent is critical, especially for services involving personal data processing by AI, demanding transparency and informed agreement from clients. Moreover, professionals must ensure their AI practices comply with Finnish laws and regulations, including the GDPR for data protection, reflecting Finland’s commitment to high ethical and legal standards in the integration of AI into professional services.

Generative AI significantly challenges the intellectual property rights (IPR) landscape, particularly copyright law’s human-centric authorship criteria. To be protected, works must originate from a human’s creative effort, a concept extended to inventions and designs. The rise of generative AI, capable of creating valuable outputs, prompts questions about their ownership and protectability under IPR. While contract law might address some issues, others could require legislative updates or new legal interpretations. Currently, IPR frameworks are hesitant to acknowledge AI as the creator for copyright or patent right purposes. Additionally, using generative AI involves risks, such as potential copyright infringement if the AI is trained on or generates outputs using unauthorised copyrighted materials. The legal boundaries around such uses, including for educational purposes, remain unclear, signalling a transformative period for IPR amidst the evolution of AI technologies.

Can AI Technology be an Inventor for Patent Purposes?

In early 2020, the European Patent Office (EPO) took a position on whether AI can be an inventor or patentee. The EPO’s decision concerned an AI system called DABUS, and it was unequivocally negative – DABUS could not be the inventor. The EPO argued that the European Patent Convention (EPC) requires the inventor named in the application to be a natural person and not a machine, and that the designation of the inventor is a necessary formal condition for the grant of a patent, without which the substantive conditions for patentability cannot be examined. Another main reason for rejecting the applications was that, in the EPO’s view, a patent confers rights on the inventor of the patentable invention, and such rights cannot be granted to “non-persons” such as computer programs. Although an inventor’s rights can be transferred, an AI cannot transfer the rights granted to it, because it is not a legal entity in which rights could vest in the first place.

Copyright v Trade Secret as Protection for AI Algorithms

Copyright has traditionally been viewed as the primary means of safeguarding software, such as computer programs, in Finland. However, copyright protects only the expression of, for example, an AI tool – its underlying source code. Proving in court that the code has been copied is very difficult, and the source code may not even have been copied as such, but rather “imitated”. As a result, copyright does not effectively prevent competitors from developing their own versions of such an AI tool, potentially eroding any competitive advantage that may have been gained. According to the Finnish Trade Secrets Act, which is based on the 2016 Trade Secrets Directive, a trade secret is information that is confidential, has commercial value due to its confidential nature, and is subject to reasonable measures to ensure its confidentiality. In many cases, information related to AI technologies possessed by a company will meet these conditions, making the algorithm a protected trade secret under the Act regardless of its implementation or expression, unlike copyright.

Technical Instruction

The Act further provides for the concept of a technical instruction, which applies under Finnish law and stands independent of the Directive. A technical instruction is a technical guideline or operations model that can be used in the course of business. An AI algorithm can also be considered a technical instruction. The protection of technical instructions is activated when such instructions are disclosed confidentially in certain circumstances. If a party has received an algorithm confidentially under these circumstances, they are not allowed to use or disclose it without authorisation. As a result, even in situations where an AI algorithm cannot be protected as a trade secret, it may still qualify for protection as a technical instruction.

In Finland, there is ongoing debate over whether AI-generated art or written works merit copyright protection. Finnish law requires originality and independence, and purely AI-generated outputs, which reflect the operation of the system rather than independent human creative effort, do not meet this standard. Even if AI reaches a degree of autonomy, Finnish law and case law confirm that copyright can only belong to a natural person, ruling out ownership by the AI. Consequently, copyright resides with the individuals who contributed to the AI’s creation or operation, although the question of responsibility grows more complex as AI gains independence.

One of the main issues related to creating works and products using OpenAI is the ownership of intellectual property rights. The language models developed by OpenAI are trained on vast amounts of text and other data from the internet, which may include copyrighted materials or other protected works. In principle, the users of OpenAI technology who create works or products are responsible for ensuring that they have the necessary rights to use any third-party materials that may be incorporated into their creations.

The Finnish Competition and Consumer Authority (FCCA) increasingly monitors AI-focused “acqui-hires,” especially when large tech entities gain key AI talent from innovative start-ups that stay below conventional merger thresholds. The FCCA aligns with EU-wide efforts to ensure these transactions do not stifle emerging competition or undermine incentives to innovate. Similarly, the rise of algorithmic pricing and potential collusion concerns the Finnish authorities, which keep watch for any evidence that AI-powered pricing systems create anti-competitive outcomes in domestic markets. Data-driven market dominance is another area of growing focus, with Finnish regulators paying close attention to situations in which leading firms could use exclusive access to AI-critical data to shut out equally innovative (and smaller) competitors. This vigilance reflects Finland’s broader commitment to maintaining a balanced environment for AI development, one in which both traditional and nascent competitors can compete on fair terms.

Existing cybersecurity legislation in Finland, such as the Act on Information Management in Public Administration and the EU’s GDPR, provides a framework for protecting data and systems against cyber threats. These laws are increasingly relevant to AI, given AI’s reliance on large datasets and complex algorithms.

New legislation, including the EU AI Act, also addresses AI-specific cybersecurity concerns. Article 15 of the AI Act states that high-risk AI systems must be designed and developed so that they achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their entire life cycle. Additionally, the NIS2 Directive, implemented in Finnish legislation through the Act on Cybersecurity (in Finnish: Kyberturvallisuuslaki), aims to strengthen and harmonise the level of cybersecurity by improving the capabilities of critical sectors. NIS2 does not mandate the use of AI but strongly encourages it as an innovative tool for enhancing cybersecurity in critical sectors.

Companies in Finland, as elsewhere in the EU, must improve their ESG performance yet currently have no explicit AI ESG reporting rules. Instead, they must align with existing regulations, including EU-level directives and the GDPR. While AI can enhance resource and energy efficiency, its large energy consumption raises environmental concerns, prompting Finnish authorities to promote green computing. AI also fosters inclusivity, but policymakers remain alert to possible biases and demand clearer governance and ethical oversight. Future laws may establish stricter requirements for AI in ESG reporting, aiming to balance its benefits with potential negative consequences.

In Finland, when implementing AI best practices across organisations, key considerations include:

  • compliance with Finnish and EU regulations, including GDPR, and sector-specific laws;
  • adoption of ethical AI guidelines focusing on fairness, transparency, non-discrimination, and accountability;
  • establishment of robust data governance policies for data quality, privacy, and security;
  • identification and mitigation of AI deployment risks, including biases and errors;
  • creation of transparent AI systems with explainable decisions;
  • assurance of AI system security, reliability, and resilience; and
  • investment in employee training for AI technology proficiency.

Practical advice for effective AI implementation:

  • start with pilot projects and gradually expand;
  • leverage existing AI frameworks and tools for guidance;
  • engage with industry, academia, and regulators for insights on standards and best practices;
  • maintain records of AI development and deployment for compliance and auditing;
  • include stakeholder input in AI system development and deployment;
  • continuously monitor and update AI systems according to evolving best practices; and
  • consult with experts for guidance on AI regulation and implementation.

By adhering to these considerations and practical steps, Finnish organisations can ensure their AI practices are responsible, compliant, and aligned with both national and international standards.

Borenius

Eteläesplanadi 2
00130 Helsinki
Finland

+358 20 713 3136

erkko.korhonen@borenius.com
www.borenius.com

Trends and Developments


Authors



Borenius is a leading independent Finnish law firm with over 120 lawyers and a significant investment in a global network of top-tier international law firms. The firm has offices in Finland, London, and New York, enabling it to provide excellent service to both domestic and international clients. Borenius’ technology and data practice comprises over 20 skilled lawyers, making it one of the biggest teams in the field in the Nordics. The team is well equipped to advise clients on the most demanding technology-related assignments and to provide practical and strategic advice that adds value to its clients’ businesses and operations. The firm has recently been advising clients in matters involving complex R&D projects, the procurement of a data services platform by a large insurance company, and universities and other public sector entities in complex data protection matters.

General Framework

Finland has continued its efforts to become a vanguard in leveraging and developing AI-related tools and functions, both in the public and private sectors. The government and public authorities have introduced and continued to implement programmes aimed at accelerating the adoption and development of AI, identifying multiple concrete avenues of application to enhance societal efficiency and operational fluency. Below are some of the most notable objectives in relation to the government’s strategic approach:

  • Finland aims to be a sustainable winner in the twin transition (digital and green) by 2030, maintaining the importance of AI advancements for the transition.
  • As an overarching vision, Finland aims to become a global leader in the application and the use of AI.
  • The government aims to emphasise responsible technology-neutral regulation to better retain flexibility and adaptability with fast-paced digital developments.
  • The government plans to continue and expand the ongoing process of adopting automated decision-making using AI in the public sector.
  • The government endeavours to influence AI-related EU legislation in a manner that minimises the need for national legislation.

To achieve these objectives, the public sector – in many instances in co-operation with the private sector – has initiated several programmes and strategies. Among others, Finland was one of the first countries to introduce a national Artificial Intelligence Strategy, which was launched in 2017 with the aim of making Finland a trusted and secure pioneer in the digital economy by 2025 and promoting a leadership position for the country in the application of AI. The strategy was subsequently followed by multiple other programmes striving toward a similar goal, such as the National Artificial Intelligence programme (AuroraAI) and the Artificial Intelligence 4.0 programme (2020-2023).

Funding has also been identified as necessary to accelerate progress towards these goals. Consequently, Business Finland, a Finnish governmental funding agency, supports the industry through several funding programmes, such as the AI Business Programme, with the stated aim of accelerating the development, growth and internationalisation of Finnish AI companies with a budget of over EUR200 million (2018–2021). As a more recent example, the Technology Industries of Finland, a Finnish business and labour market lobbying organisation, has established a network called AI Finland as part of its EUR13.2 million investment in AI. The network is open to all sectors and aims to advance the utilisation and development of AI in Finland. Its mission is, for its part, to help Finland become a leading country in the application and development of AI by connecting AI-related demand and supply, creating structures for information sharing and serving as an active trailblazer.

EU AI Act – National Legislative Amendments Underway

Following the current government’s strategy to refrain from unwarranted national AI regulation so as to remain in conformity with the EU’s initiatives, the EU AI Act ((EU) 2024/1689), which entered into force on 1 August 2024, remains the most influential piece of product safety legislation in Finland regarding AI. The AI Act purports to offer a balanced, proportionate, and horizontal regulatory approach to AI. It addresses the associated risks and challenges whilst minimising constraints on technological development and the cost of introducing AI solutions to the market. Finland shares the EU’s vision in this respect and, for example, actively participated in the legislative process regarding the AI Act in the EU.

Nevertheless, Finland did not refrain from presenting differing views on the contents of the proposed AI Act. Along with other countries, Finland successfully proposed that the scope should not cover systems that simply follow predefined rules and instructions without any discretion or alteration of their operational logic. In particular, Finland objected to the inclusion of so-called rule-based automated decision-making in the scope of the Regulation. One of the main reasons was that Finland already had functioning systems in the public sector that benefitted from automated decisions – the tax authorities, for example, have used them from the 1980s onward – and the AI Act would in this respect possibly have added to the administrative burden.

Pursuant to the adoption of the AI Act, Finland has initiated the process of aligning its national legislation with the provisions of the AI Act. For this task, the Minister of Economic Affairs set up a working group for the national implementation of the AI Act (the AIA working group) on 24 April 2024. As part of the first phase of the process, the government published its draft proposal for legislation complementing the AI Act on 24 October 2024. The proposal introduces new legislation and amendments to existing laws to establish a comprehensive framework for the supervision and enforcement of AI systems, particularly high-risk AI systems. It designates the national authorities responsible for enforcing the AI Act’s provisions, establishes mechanisms for imposing sanctions, and ensures, in conjunction with the AI Act, that AI systems do not compromise human safety, health or fundamental rights. It is worth noting that the proposed regulatory changes are meant to complement and specify the obligations set out in the AI Act rather than to enact differentiated legislation – national discretion is exercised only where the AI Act specifically provides for it.

Most notably, the mentioned government proposal introduces the Act on the Supervision of Certain AI Systems (in Finnish “Laki eräiden tekoälyjärjestelmien valvonnasta”), which, inter alia, designates the relevant market surveillance authorities, notifying authorities and the single point of contact for co-operation with the EU bodies. The draft government proposal is currently under review and subject to changes, and the relevant complementing legislation is expected to become applicable on 2 August 2025.

The second phase of the process focuses on ensuring that at least one national regulatory test environment (“sandbox”) for AI is operational by 2 August 2026, as required by the AI Act, allowing organisations to test their systems under supervision and promote both legal certainty and innovation. These test environments can be established in co-operation with other member states or by participating in existing sandboxes, as long as they offer an equivalent level of national coverage. The AIA working group’s preliminary view is that new legislation may be needed to support these sandboxes, and relevant competent authorities and stakeholders will be involved in the process.

Public Sector Seeks Avenues for the Adoption of AI Into its Processes

The active pursuit of a leadership position in the field of AI and the generally forward-looking attitude towards its use have not resulted in significant new national legislation on AI. Outside the initiatives to implement the obligations of the AI Act, other amendments and proposals remain isolated. This is in part due to the recent volume of EU-level legislation and the effort to minimise the amount of national legislation needed.

Nevertheless, the potential uses for AI are increasingly being considered in various public sector entities where AI may not be at the forefront, but possible instances for its use are acknowledged or otherwise taken into consideration. In this context, on 20 September 2024, the Ministry of Finance of Finland published a report on experiments with the use of generative AI in public administration. These trials on the use of generative AI are a part of the government-encouraged efforts to make public sector processes more efficient through the incorporation of AI.

In one instance, AI was used in the preliminary stage of law-making to review existing preparatory works, for which Finnish LLMs were trained on legislative texts and related material. Perhaps more interestingly, LLMs were also used in another experiment to produce summaries from the pool of statements gathered from various stakeholders and committees during the preparatory legislative process. In connection with this, the LLMs were also used to evaluate the level of support for a given legislative proposal. Though implemented with varying levels of success, these experiments indicate that the public sector is keen to identify practical avenues for the use of AI.

As described previously, the many rule-based automated decision-making processes already in use in the public sector remain outside the scope of the AI Act. However, some uncertainty exists on a national level concerning how the AI Act relates to legislative changes affecting automated decision-making. Though automation was already incorporated in public sector decision-making, the relevant national legislation was amended to explicitly allow automated decision-making in conformity with the General Data Protection Regulation (GDPR).

The AI Act, on the other hand, focuses on systems that rely on AI to sort data, make predictions, or perform tasks in ways that go beyond simple rule-based procedures. While traditional automated processes can still operate under earlier legal frameworks, any procedure that relies on more advanced AI techniques – particularly those that might be categorised as “high-risk” under the AI Act – would need to follow the upcoming requirements.
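The legal distinction drawn above can be made concrete with a purely illustrative sketch – not drawn from any actual Finnish system, with all names and thresholds invented for the example. A rule-based process applies predefined rules deterministically, while an AI-based process derives its decision logic from data, so its operational logic can change whenever the data changes:

```python
# Illustrative only: the distinction between rule-based automated
# decision-making and AI-driven decision-making. All names, figures
# and thresholds below are invented for this sketch.

def rule_based_decision(income: int, threshold: int = 1500) -> bool:
    """Grant a benefit iff income is below a fixed, predefined threshold.

    Same input, same outcome, fully explainable - the kind of system
    the AI Act leaves outside its scope.
    """
    return income < threshold

def learned_decision(income: int, past_cases: list[tuple[int, bool]]) -> bool:
    """Grant a benefit based on a boundary derived from past case data.

    The decision logic here is not predefined but learned - a minimal
    stand-in for a machine-learning model whose behaviour shifts with
    its training data.
    """
    granted = [inc for inc, ok in past_cases if ok]
    denied = [inc for inc, ok in past_cases if not ok]
    # Midpoint between the mean granted and mean denied incomes acts
    # as a data-derived decision boundary.
    boundary = (sum(granted) / len(granted) + sum(denied) / len(denied)) / 2
    return income < boundary
```

The first function stays stable until the legislature changes the threshold; the second changes whenever new cases are added, which is precisely the adaptive behaviour that can pull a system into the AI Act's scope.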

The sometimes-vague line between rule-based automated decision-making and (generative) AI-driven decision-making in the public sector has also recently been reviewed by the Finnish Chancellor of Justice. In his decision of 2 February 2025, the Chancellor of Justice on his own initiative assessed the use of AI in automated decision-making in the Social Insurance Institution of Finland (“Kela”). Kela uses rule-based automated decision-making, for example, in routine decisions not involving case-by-case deliberation related to social benefits.

According to the Chancellor of Justice, Kela’s current use of AI primarily serves as a support tool rather than a direct mechanism for making final decisions. While Kela employs AI-driven functionalities such as text analytics or data classification as part of the preparatory process for the automated decisions, these do not themselves produce binding outcomes for applicants. The Chancellor of Justice highlighted that Kela’s automated decision-making relies on predefined legal rules instead of machine-learning models, ensuring transparency and predictability in the decision process, and thus found no suspicion of unlawful conduct on Kela’s part.

In any case, generative AI – rather than making final decisions – is increasingly being used as a support tool in the work of public sector entities. In connection with this, the Ministry of Finance of Finland published Guidelines on using generative AI to support and assist work in public administration on 27 February 2025 (VN/6190/2025). The Guidelines aim to encourage the use of generative AI in the workplace and improve the efficiency of the public sector. According to the Guidelines, generative AI should be used in a responsible and transparent manner, ensuring that any AI-generated content is verified by the responsible public official before it is incorporated into final documents. Users should adhere to ethical principles and relevant legal frameworks, including the AI Act, to mitigate risks such as hallucination, bias and violations of privacy or intellectual property rights. While the guidance strongly encourages leveraging generative AI for efficiency in tasks like content creation, data analysis and customer service, it also emphasises the importance of recognising its limitations and maintaining human oversight.

Overall, owing to Finland’s existing progressive approach to tools related to digital governance, the country is in a good position to adapt to the new requirements under the AI Act while incorporating AI into its functions. Nonetheless, for now, the role of AI tools in the public sector remains mostly supportive.

Private Sector Transitions From Experiments to Commercial Use – Regulatory Uncertainty Remains a Bottleneck

Private sector actors in Finland are increasingly transitioning from isolated proofs of concept to scaled commercial applications of AI, reflecting a growing realisation that AI can boost both efficiency and revenue generation. These developments in Nordic companies are supported by the AI Finland network established by the Technology Industries of Finland. Based on the network’s Nordic State of AI report of 2025, prepared jointly with AMD Silo AI, organisations in several industries report using AI – many with positive results – to predict machine outages, optimise resource allocation and inform maintenance efforts, thereby lowering downtime and operating costs.

At the same time, many are also exploring new AI-driven services, such as logistics analytics and patient monitoring solutions, though these ventures often require careful alignment with core business strategies and data repositories. As stakeholders scale up AI projects, the importance of clear return-on-investment metrics has intensified, prompting a need for robust frameworks that measure both financial impact and workforce productivity gains.

However, as identified in the Nordic State of AI report of 2025, regulatory and political uncertainty regarding AI remains a persistent hindrance to its wider adoption in the private sector as well. The differing approaches adopted by the EU, the USA and China, for example, add to the complexity for businesses operating in this area. Companies see meeting existing and geographically relevant legal mandates as crucial, yet keeping pace with emerging international frameworks and best practices is just as important. These evolving standards shape not only official regulatory compliance but also the demands of partners, customers and suppliers in a globally interconnected marketplace. Given AI’s dual role as both an instrument and a point of focus in geopolitical affairs, continually tracking policy changes is seen as an essential pillar of any well-grounded AI investment strategy. In particular, this may leave smaller companies on the Finnish market reluctant to make proportionally large investments in AI implementation.

Copyright Issues With AI Persist as a Point of Contention

A central question regarding AI usage and copyright in Finland is whether training large language models (LLMs) qualifies as mere text and data mining (TDM) or constitutes a broader act of copying that requires express permission from rights holders. The Finnish copyright regime has recently been evaluated in relation to AI by the Finnish Copyright Delegation’s AI working group under the Finnish Ministry of Education and Culture, which published its final report on 28 October 2024. According to the final report, although the Finnish Copyright Act aims to address TDM by offering an opt-out for commercial usage, there remains ambiguity over when these uses transition into prohibited acts of reproduction or adaptation. Stakeholders fear that such uncertainty may allow large-scale, unauthorised exploitation of protected content, especially when generative AI systems train on vast text or image datasets.

Further complicating matters is the limited awareness of how to implement a legally valid “narrow or broad” opt-out mechanism and how to ensure its enforceability in the training context. This issue is particularly relevant to sectors whose copyrighted material is prone to being used in AI training, such as the media and education sectors. Actors in the media sector, for example, have voiced significant difficulties in ensuring that their material is not unlawfully used by so-called crawlers that mine data from their websites.
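By way of illustration – this sketch is not drawn from the working group’s report – one widely used machine-readable opt-out signal is a robots.txt directive addressed to AI training crawlers, which a compliant crawler can check with Python’s standard library. The crawler name “ExampleAIBot” and the publisher URL are assumptions for the example:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher's robots.txt reserving its content from an
# AI training crawler ("ExampleAIBot" is an invented name) while leaving
# ordinary crawlers unaffected.
robots_txt = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

url = "https://publisher.example/articles/1"

# A compliant training crawler should consult the directives before fetching:
ai_bot_allowed = parser.can_fetch("ExampleAIBot", url)  # opted out
search_allowed = parser.can_fetch("SearchBot", url)     # covered by the * record
```

The enforceability problem the stakeholders describe is visible even in this sketch: the signal only binds crawlers that choose to consult it, which is one reason rights holders are calling for stronger, legally backed reservation mechanisms.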

Moreover, the Finnish legal framework currently lacks explicit, unified rules for handling scenarios where AI-generated outputs contain substantial portions of copyrighted works or imitations of an artist’s style or persona. This gap potentially undermines creators’ ability to control the commercial exploitation of their output, while also making it challenging for legitimate AI developers to confirm compliance.

One proposed solution, discussed by rights holders, is the introduction of a dedicated collective licensing scheme for large-scale AI training, akin to agreement-based extended licences under Finnish law. In parallel, the final report by the Finnish Copyright Delegation’s AI working group recommends clarifying the distinction between lawful TDM and full-scale reproduction within the meaning of the Finnish Copyright Act by adopting guidelines that require machine-readable signals for prohibiting uses in datasets. These measures would help creators maintain clearer control of their works while granting AI developers a straightforward path to secure licences.

Another approach suggested by some participants is to strengthen oversight through administrative bodies that could monitor compliance with copyright rules relevant to AI. Most prominently, enhanced transparency requirements in the AI Act are expected to significantly support the enforcement of Finnish copyright laws by obliging developers to disclose the sources of their training data. Combined, the proposed solutions underscore a growing consensus that Finnish legislation may require greater specificity and robust licensing frameworks in order to protect rights holders without stifling AI innovation.

Ethical Discussion Around AI in Finland Intensifies as Fairness, Transparency and Accountability Remain Top Priorities

In addition to new laws, guidelines and the growing number of practical AI experiments in both the public and private sectors, the ethical aspects of AI use have become a major point of discussion in Finland. Questions about fairness, transparency, and accountability are increasingly important for policymakers, non-governmental organisations, universities, and businesses. The Finnish government recently published Ethical Guidelines for AI that outline the ethical foundations for using AI in public administration. The guidelines emphasise that the public sector operates with a mandate from Finnish citizens, using public funds and striving for the common good. Therefore, they require officials to ensure impartial and accountable use of AI, maintain transparency, protect citizens’ rights, and uphold trust in governance, underlining that public authorities remain ultimately responsible for any AI systems they employ. Similarly, some private sector actors have also come up with their own ethical guidelines for the use of AI, reflecting the principles set out by the government.

In academic and research settings, the Finnish Centre for Artificial Intelligence (FCAI) directs many cross-disciplinary projects that unite technical, legal, and social science views. Several of these projects provide structures for accountability, fairness, and focused design strategies to keep AI systems in line with core ethical standards. Joint efforts between universities, research institutes, and private companies concentrate on improving AI explainability, reducing algorithmic biases, and measuring how AI-based tools affect different parts of Finnish society. This close collaboration not only strengthens research expertise but also encourages a community that actively connects ethical principles with real-world applications.

A particularly sensitive issue is the possibility of AI unintentionally worsening existing biases in society. Many Finnish companies, for example, now use AI systems to speed up recruitment processes. While these tools can improve efficiency, there is a concern that automated screening might reproduce human biases hidden in training data.

In Finland, the Data Protection Ombudsman has emphasised the need to design and audit AI systems so as to protect individual rights and equality. According to the Data Protection Ombudsman, AI can bring many benefits to various fields, but solutions must be developed ethically, safely and to everyone’s advantage. The Ombudsman’s office wishes to foster responsible AI innovation by protecting personal data and upholding data protection legislation. As a result, Finnish organisations are urged to adopt bias detection and mitigation methods to reduce the risk of unfair discrimination against certain job applicants or employees.
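One simple form such a bias check can take – offered here as an illustrative sketch, not as guidance from the Ombudsman, and with entirely invented data – is a comparison of selection rates between applicant groups processed by an automated screening tool:

```python
# Illustrative only: comparing selection rates between two applicant
# groups. The outcome data is invented for this example.

def selection_rate(outcomes: list[int]) -> float:
    """Share of applicants selected (1 = advanced, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # assumed outcomes for group A
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # assumed outcomes for group B

ratio = selection_rate(group_b) / selection_rate(group_a)

# A common rule of thumb (the "four-fifths rule") treats a ratio below
# 0.8 as a potential indicator of adverse impact warranting review.
flagged = ratio < 0.8
```

A check of this kind does not establish discrimination by itself, but flagged disparities are exactly the trigger for the closer human review that the Ombudsman’s approach calls for.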

Transparency and accountability are also central themes in public debate. The worry is that if the government makes important administrative or welfare decisions using AI without sufficient oversight, trust in public administration may suffer. This concern follows Finland’s long tradition of open data and clarity in administration. Even though the country scores highly on transparency, the move towards more complex AI solutions, such as continually evolving machine learning algorithms, introduces new challenges. The public wants clear signs that government agencies will carefully check for risks and be cautious before fully automating critical public services. Much of this concern ties into the need for AI systems to be explainable enough to allow audits and human intervention when necessary.

Privacy and data protection also feature prominently in Finland’s debates on AI ethics. Although Finland generally has strong data protection rules that meet EU standards under the General Data Protection Regulation (GDPR), the evolving nature of AI raises questions about what counts as personal data, how it may be used for AI training, and in which contexts such use is allowed. A wide range of stakeholders, from municipal leaders to start-up founders, consult the Data Protection Ombudsman to determine best practices, particularly where sensitive data is involved. Finland’s strong respect for personal integrity is regularly tested by AI’s sometimes unpredictable developments, prompting calls for ongoing legislative and policy updates to keep interpretations of privacy adequate.

Another ethical question arises over the possible displacement of human workers in fields where AI can take on repetitive or operational tasks. Some Finnish labour unions recognise innovation’s benefits for competitiveness but also stress fair transition measures for workers who need retraining. This highlights a broader question about how to manage workforce shifts without undermining Finland’s well-known welfare system. The government and various stakeholders have responded by promoting lifelong learning programmes, showing the widely shared view that people should continually update their skills as technology advances.

Finally, there is a moral debate about delegating decisions to AI systems when personal freedoms, fundamental rights, or complex value judgements are at stake. In a country that values equality and social harmony, there are calls for strict oversight where AI might directly affect lives, such as in healthcare diagnoses or judicial sentencing recommendations. This echoes Finland’s culture of consensus-building and gradual experimentation, where new solutions are tested on a smaller scale before being adopted more widely.

Overall, discussions about AI ethics in Finland show a strong interplay between innovation, social welfare values, and democratic processes that encourage open conversation. Finland’s laws largely follow the AI Act, but the focus on transparency, fairness, and human oversight also reflects the country’s established political culture. As AI grows more advanced and becomes more common across industries, these ethical considerations will likely guide how Finland uses technology to improve efficiency and productivity, while still upholding equality, privacy, and public well-being.

Borenius

Eteläesplanadi 2
00130 Helsinki
Finland

+358 20 713 3136

erkko.korhonen@borenius.com www.borenius.com