Artificial Intelligence 2024

Last Updated May 28, 2024

Finland

Law and Practice

Authors



Borenius is a leading independent Finnish law firm with over 120 lawyers and has invested significantly in its global network of top-tier international law firms. The firm has offices in Finland, London, and New York, enabling it to provide excellent service to both domestic and international clients. Borenius’ technology and data practice comprises over 20 skilled lawyers, making it one of the largest teams in the field in the Nordics. The team is well equipped to advise clients on the most demanding technology-related assignments and to provide practical, strategic advice that adds value to its clients’ businesses and operations. The firm has recently advised clients on complex R&D projects, advised a large insurance company on the procurement of a data services platform, and assisted universities and other public sector entities in complex data protection matters.

In Finland, there are currently no specific laws that solely govern artificial intelligence (AI) or machine learning. However, there are several laws and regulations that may apply to AI and its applications in various domains, such as privacy, data protection and intellectual property.

  • Data Protection: The General Data Protection Regulation (GDPR) applies to the processing of personal data, including AI and ML models that use personal data. The Data Protection Act (1050/2018) regulates the processing of personal data in Finland.
  • Liability: Finland follows a strict liability rule that applies to product liability, which may include AI and its applications. If an AI system causes harm or damage, the company or organisation responsible for the AI may be held liable.
  • Intellectual Property: The Finnish Copyright Act (404/1961), the Finnish Patents Act (550/1967) and the Finnish Trade Secrets Act (595/2018) apply to AI and its applications, protecting the rights of the creators of AI technologies.
  • Discrimination: The Finnish Non-discrimination Act (1325/2014) prohibits discrimination based on race, gender, religion and other protected grounds, including discrimination based on AI-driven decision-making.

In addition to the above, the Finnish government has expressed its plan to implement AI systems and automated decision-making in government activities.

It should be noted that the most recent technologies and AI solutions may not yet have been publicly disclosed, because companies wish to gain a competitive advantage from their inventions. As a result, industry use of AI is discussed here mainly on a general level. However, based on our experience, AI solutions are already in use in various industries, such as retail, banking, energy, entertainment, logistics and manufacturing. These industries commonly employ AI to optimise processes and improve the accuracy of data output.

A good example of the opportunities created by AI and automation is the “Industrial Internet of Things”, which companies in different industrial sectors can use to improve and optimise their operations. Today’s industrial machinery constantly generates data which, together with data from customers, can be used to optimise production volumes, for example. When all of this happens automatically, it may affect the position of the employees who perform the same tasks. This also creates a whole new set of opportunities for cloud service providers to offer companies data pools for such uses. Such data can then potentially be used to train AI systems and produce more accurate and relevant results – eg, in the retail industry. Also, as a consequence of recent developments in generative AI solutions, which are often based on large language models (such as ChatGPT), companies in different industries are starting to explore opportunities to use and integrate AI in their business operations.

The Finnish government has engaged multiple ministries, such as the Ministry of Economic Affairs and Employment, Ministry of Justice, and the Ministry of Finance, in drafting AI-related policies and guidelines. These efforts aim to increase AI usage in a safe and responsible manner, with a strong emphasis on digitalisation, economic growth, and ethical AI deployment. The AI 4.0 programme, for instance, is a government initiative designed to accelerate business digitalisation and strengthen Finland’s position in digital and AI advancements.

Furthermore, Finland’s engagement in AI is also characterised by its focus on open data and the ethical use of AI. The Finnish Centre for Artificial Intelligence (FCAI) emphasises the importance of adopting ethical guidelines, new methods of data collection, provision of high-quality open government data, and involving the public in discussions around AI.

Business Finland, a Finnish governmental funding agency, also supports the development of AI technology through key programmes, such as the joint Research, Development and Innovation Programme ICT 2023, the aim of which is to fund high-quality scientific research that is expected to have both a scientific and a social impact.

In Finland, the approach to regulating AI is characterised by a mix of strategic initiatives aimed at fostering innovation and setting the stage for future legislation, particularly in response to evolving EU regulation. As no AI-specific national legislation has been developed, the focus is mainly on supporting AI development and adoption through various programmes and on preparing for the implementation of upcoming EU AI regulation.

Additionally, Finland is actively promoting collaborations within the AI sector through various support instruments and reforms. Initiatives like Business Finland’s programmes and the AuroraAI programme aim to create new business opportunities, foster digital transformation in industries, and prepare for an ethical society in the age of AI. These efforts underline Finland’s commitment to leveraging AI for societal benefit while preparing the regulatory and infrastructural groundwork for its ethical and sustainable use.

Under the Administrative Procedure Act (434/2003), an authority may make an automated decision in a case that does not involve matters which, in the authority’s prior assessment, would require case-by-case consideration. It is essential to note that automated decision-making tools therefore do not exercise discretion in decision-making. As a result, the reform does not allow the use of more advanced AI. Decision-making can therefore be automatic, but not autonomous.

As AI-specific legislation in Finland is mostly absent, pending the finalisation of the EU’s Artificial Intelligence Act and the proposed Artificial Intelligence Liability Directive, Finland’s AI-specific regulation relies on government-issued policies and guidelines, such as the ethical guidelines for the use of AI.

In late 2020, the Finnish Ministry of Economic Affairs and Employment established a steering group to devise a plan to accelerate AI adoption and further the so-called “fourth industrial revolution” in Finland. The Artificial Intelligence 4.0 programme was aimed at fostering the development and deployment of AI and digital technologies, with a special focus on small and medium-sized enterprises (SMEs) in the manufacturing sector. The programme’s final report, released in December 2022, outlines 11 specific actions designed to position Finland as a leader in the twin transition by 2030.

Additionally, the Finnish Government launched the National Artificial Intelligence Programme, AuroraAI, in 2020, which concluded towards the end of 2022. The project’s key contribution was the creation of the AuroraAI network, an AI-driven technical framework that facilitates the exchange of information and interoperability among various services and platforms.

The entire field of AI regulation in Finland will to a very large extent be defined by the proposed EU Artificial Intelligence Act (2021/0106 (COD)) and the national legislation implementing the proposed Artificial Intelligence Liability Directive (2022/0303 (COD)). Both of these pieces of legislation are currently pending in the EU.

Finland has throughout the legislative process of these proposals had a positive view of AI-related regulation in the EU and generally supports responsible AI development.

Under the proposed EU AI Act, an “artificial intelligence system” is defined as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. This is to a great extent in line with the definition used by the OECD in its updated “Recommendation of the Council on Artificial Intelligence” from November 2023.

There are some minor conflicts between the AI Act proposal and existing legislation in Finland. Namely, the automated decision-making by authorities enabled and permitted under the Administrative Procedure Act could fall under the scope of the proposed AI Act.

There is no applicable information in this jurisdiction.

As a member of the European Union, Finland is influenced by the EU’s regulations and directives. Thus, the Finnish government has stated its strategy of avoiding the enactment of national legislation, in particular laws that conflict with EU law, focusing instead on implementing EU-level legislation.

In terms of data protection laws, Finland adheres to the EU’s General Data Protection Regulation (GDPR) as well as the Data Act (EU 2023/2854) adopted in December 2023. Additionally, Finland follows the EU’s Directive on Copyright and Related Rights in the Digital Single Market (EU 2019/790), which incorporates specific provisions for text and data mining, essential for AI research and development.

Finland is closely watching the development of AI legislation at the EU level, particularly the EU Artificial Intelligence Act, which is poised to become a significant regulatory framework affecting AI deployment within the EU, including Finland. The proposed legislation aims to mitigate potential risks associated with AI technologies, such as privacy breaches and discrimination, ensuring AI’s beneficial societal and economic use. Finnish businesses in the AI sector will need to adapt to these new requirements, especially for high-risk AI systems, which may include enhanced transparency, data governance, and accountability measures. This regulatory framework could also drive innovation, pushing companies to develop compliant, ethical AI solutions.

In Finland, there are no judgments or court decisions yet relating to AI, most likely due to the limited amount of AI-specific legislation.

The only relevant decisions have come from the Deputy Data Protection Ombudsman and the National Non-Discrimination and Equality Tribunal.

Deputy Data Protection Ombudsman

The Deputy Data Protection Ombudsman has addressed AI-related issues in two decisions concerning the use of an automated decision-making tool. Both decisions concern a tool designed to identify patients whose treatment should be specified and to refer them to the right treatment. In the decisions, the Deputy Data Protection Ombudsman assessed whether the tool was making automated individual decisions within the meaning of Article 22 of the GDPR and whether the data controller was acting in accordance with the GDPR. Because the cases concerned data protection issues rather than AI as such, there is little discussion of AI. However, the Deputy Data Protection Ombudsman raised concerns that the algorithm used in the tool could discriminate against patients who would be excluded from specific proactive healthcare interventions as a result of an assessment based on the profiling performed by the algorithm. Nevertheless, this consideration was not decisive in the decisions.

The National Non-Discrimination and Equality Tribunal

In brief, the National Non-Discrimination and Equality Tribunal’s decision concerned discrimination in access to credit services. The person involved had applied for credit to finance purchases of household goods from an online shop, but the credit company did not grant the credit. The decision was based on credit ratings produced by credit reference agencies using statistical methods. The ratings did not take the applicant’s actual ability to pay into account but were based on general statistics relating to, for example, place of residence, gender, language and age. Had the Finnish-speaking male applicant been a Swedish-speaking female, he would have been granted credit.

The algorithm, using the above-mentioned data and making credit ratings based on it, was deemed to be discriminatory. The AI was not advanced in this case either, but a mere algorithm profiling clients based on set information.

Reasons for the Lack of Case Law

There are various reasons for the lack of court rulings. For example, the anticipation of EU legislation and the scarcity of specific legislation might have created a situation where there have not been any questions on the interpretation of law that would require settlement in court. Another reason could be a lack of litigation. If there are no disputes on the use of AI, there is also no need for legal proceedings that would produce case law. Also, if potential disputes have been settled or resolved through alternative dispute resolution mechanisms, they would not have generated court rulings.

The decisions discussed in 4.1 Judicial Decisions did not formulate any definition of AI. Since the decisions concerned GDPR compliance and discrimination, the GDPR’s concept of an automated individual decision was used. A decision based solely on automated processing refers to a situation where no human is involved in the decision-making process. For example, if an automated process produces a recommendation that a human merely takes into account together with other factors in making the final decision, the decision is not based solely on automated processing.

In Finland, regulation is prepared mainly in the different ministries. They draft law proposals for the government, which then passes them on to parliament. The ministries are authorised to propose any kind of new legislation or amendments to existing laws valid throughout Finland. Agencies and standard-setting bodies issuing decrees and soft law are further discussed in 6. Standard-Setting Bodies.

So far, regarding AI-related regulation, the most active ministries have been the Ministry of Economic Affairs and Employment, the Ministry of Justice and the Ministry of Finance. The Ministry of Justice prepared the legislation on automated decision-making in public authorities. The Information Management Board, which supervises the use of AI by public authorities, acts under the Ministry of Finance. The Ministry of Economic Affairs and Employment, in turn, set up the now finalised Artificial Intelligence 4.0 programme, which aimed to accelerate business digitalisation.

The definitions of AI technologies used by different agencies are very similar. They all emphasise three features, namely: autonomy, ability to learn and high performance. AI enables machines, devices, software, systems and services to act in a meaningful way, adapting to the task and the situation, almost like a human.

Nevertheless, existing legislation does not yet cover the use of AI described in the definition. As stated in 3. AI-Specific Legislation and Directives and 7. Government Use of AI, the current legislation only concerns the use of automated decision-making, which must not involve any independent consideration. The legislation does not, however, restrict the use of AI in businesses, as the national legislation only applies to public authorities. As a result, companies can use AI much more freely in their operations as long as they acknowledge their responsibilities and liability.

The main regulatory objective of the AI legislation and of the authorities issuing legislation and supervising legal compliance is to increase the use of AI in Finland in a safe and responsible manner. For example, the preparatory works of the above-mentioned legislation state that its objective is to enable the use of automated decision-making more widely and thereby increase the use of technology. At the same time, the Information Management Board ensures that authorities using AI are acting lawfully.

The objective of the Artificial Intelligence 4.0 programme under the Ministry of Economic Affairs and Employment was to strengthen digitalisation and economic growth, as well as encourage co-operation between different sectors, increase investment in digitalisation and improve digital skills. The vision of the programme was “[i]n 2030, Finland will be a sustainable winner in the twin transition”. Twin transition means responding to the challenges of industrial digitalisation and the green transition at the same time.

AI-specific legislation is still mostly lacking in Finland, which is why there are also no law enforcement actions relating specifically to the use of AI technologies. The remedies for breaching the legislation on use of AI can be imposed mainly under the provisions of the Tort Liability Act (412/1974), the Constitution of Finland (731/1999) (public authorities), the Criminal Code (39/1889) or the GDPR. Moreover, if an authority gives an order or prohibitive decision, it can be enforced with a conditional fine under the Act on Conditional Fines.

In the National Non-Discrimination and Equality Tribunal’s decision mentioned in 4.1 Judicial Decisions, the tribunal imposed a conditional fine of EUR100,000 on the credit company. Currently, there are no other law enforcement actions pending.

The Finnish Standards Association (SFS) is the national standardisation organisation in Finland. In 2018, the SFS established a national standardisation group, SFS/SR 315, to develop standards related to AI. SFS/SR 315 currently focuses on Finnish AI concepts and terminology, reference architecture, the ethical and societal aspects of AI, and AI management systems. Members of the group are also involved in producing and commenting on the content of both European and international standards.

AI-related Soft Law

In addition to the SFS, there are several public bodies in Finland that provide AI-related guidance and soft law. For example, the Finnish Centre for Artificial Intelligence (FCAI) is a community of AI experts in Finland, initiated by Aalto University, the University of Helsinki, and the VTT Technical Research Centre of Finland. It provides research-based knowledge and guidance on AI and its applications to academia, industry and government organisations.

Finnish supervisory authorities also have a role in the field of AI soft law. The Data Protection Ombudsman supervises compliance with data protection legislation, which naturally applies to processing of personal data by AI systems. The Non-Discrimination Ombudsman supervises compliance with non-discrimination provisions in the use of AI and algorithms. As previously mentioned, the Deputy Data Protection Ombudsman has issued decisions concerning automated decision-making. Furthermore, the Non-Discrimination Ombudsman took a case concerning automated decision-making in lending to the National Non-Discrimination and Equality Tribunal in 2017.

Although the recommendations and guidance issued by these bodies are not legally binding, they provide valuable direction for the development, deployment and use of AI in Finland.

Standardisation in Finland is closely connected to international work. 97% of the standards approved in Finland are of international origin. International standards are sometimes complemented by nationally developed standards. The most important international standard-setting bodies affecting Finland include ISO, CEN, IEC and CENELEC.

Finland is currently at a pivotal juncture regarding government use of AI. In spring 2023, new legislation on automated decision-making by public authorities entered into force. Until then, the use of AI by public authorities required special legislation for practically every different type of decision. Even so, the Finnish Tax Administration and the Social Insurance Institution, for example, which each make millions of administrative decisions a year, have already been using automated decision-making based on special legislation.

The new, general legislation allows public authorities to use automated decision-making without special legislation as long as the use of AI tools complies with the law. That naturally makes the use of AI easier for administrative bodies, as decision-specific legislation is no longer needed; hence AI systems will most likely be used more widely in the near future. Indeed, the preparatory works of the new legislation state that general legislation allowing automated decision-making is needed due to the increased use of and demand for AI.

However, the use of AI enabled by the new legislation is not overly advanced. It only allows automated decision-making in situations that do not require any discretionary consideration. The algorithm must transfer the matter to a human being if it cannot be resolved without such consideration. As a result, AI cannot be used, for example, in hiring government employees, as the hiring process always requires consideration and cannot be based solely on non-disputable facts.

As there was no general legislation concerning the use of AI before the legislation mentioned in 8.1 Government Use of AI, and as Finnish governmental authorities are not yet using any services or applications based on more sophisticated AI, there are no judicial decisions or rulings on government use of AI. Due to the lack of legislation and use, there are also no pending cases concerning AI use in governmental bodies at the time of writing.

The Finnish Transport and Communications Agency and the National Cyber Security Centre have published a study on cyberattacks enabled by AI. Although the threat of AI-enabled cyberattacks is currently still considered low, it is acknowledged that the intelligent automation provided by AI systems will enhance traditional cyberattacks by increasing their speed, scale, coverage and personalised targeting, thus increasing their overall success. AI can also make attacks more sophisticated, tailored, malleable and harder to detect. AI-enabled attacks can include targeted phishing, impersonation and imitation, and better hiding of malware activity. A slow or ineffective response to an advanced AI attack may allow the attacker to penetrate even deeper into systems or networks before being caught.

The study states that cybersecurity must therefore become more automated to respond to AI-enabled cyberattacks. Only automated defence systems will be able to match the speed of AI-enabled attacks. These defence systems will need AI-based decision-making to detect and respond to such attacks.

In Finland, there is currently no legislation specifically addressing the security risks posed by AI. The development of national AI legislation has been in a wait-and-see mode, as the EU Commission’s proposal for the AI Act has been published and, once the Act enters into force, it will take precedence over national legislation. It remains to be seen whether national legislation on cybersecurity will also start to evolve once the AI Act has entered into force.

An example of national legislation on cybersecurity is the Act on the Operation of the Government Security Network, which requires important governmental bodies to have a security network in place to ensure that their communications and operations are uninterrupted even in exceptional situations. However, it does not contain any AI-specific provisions.

Generative AI technology has recently attracted a lot of attention in Finland due to its potential to improve businesses’ efficiency and productivity. However, as with any technology, it comes with potential risks that must be considered. These are some of the risks that are currently being discussed in the industry:

  • Confidentiality and intellectual property risks: Since generative AI models often absorb user-inputted data to improve the model over time, they could end up exposing private or proprietary information to the public. The risk of others accessing sensitive information increases the more an organisation uses the technology.
  • Inaccuracies and hallucination: Even when generative AI models are used correctly, there is always a risk that they generate false or misleading content. Generative AI is strongly associated with the so-called hallucination problem: the AI convincingly presents things that are completely untrue. False claims can be easy to trust when they are presented in a credible way. For example, the most talked-about generative AI model, ChatGPT, has been documented to hallucinate facts. Use of inaccurate and untrue outputs may also potentially lead to defamation and, consequently, criminal sanctions.
  • Copyright: Who owns content once it has been run through a generative AI application? Licences and terms vary between different AI tools. However, it can be extremely complicated to determine copyright ownership between the original rights holder of the input data, the AI tool operator and another user claiming AI-generated content as their own.
  • Deepfakes: With the widespread use of deepfake content, problems such as manipulation of the public as well as attacks on personal rights and sensitive information are becoming more common. AI-generated images and videos can look extremely realistic, making them difficult for humans or even machines to detect. This material can be used to cause harm to the reputation of a company or its executives. Cybercriminals can also use generative AI to create more sophisticated phishing scams or credentials to hack into systems.
  • Attacks on datasets: The use of generative AI also poses additional cybersecurity risks such as data poisoning, which involves manipulating the data used to train the models, and adversarial attacks. Adversarial attacks attempt to deceive generative AI models by feeding them malicious inputs, which could lead to incorrect outputs and potentially harm businesses or individuals relying on these outputs.

Generative AI significantly challenges the intellectual property rights (IPR) landscape, particularly copyright law’s human-centric authorship criteria. To be protected, works must originate from a human’s creative effort, a concept extended to inventions and designs. The rise of generative AI, capable of creating valuable outputs, prompts questions about their ownership and protectability under IPR. While contract law might address some issues, others could require legislative updates or new legal interpretations. Currently, IPR frameworks are hesitant to acknowledge AI as the creator for copyright or patent right purposes. Additionally, using generative AI involves risks, such as potential copyright infringement if the AI is trained on or generates outputs using unauthorised copyrighted materials. The legal boundaries around such uses, including for educational purposes, remain unclear, signalling a transformative period for IPR amidst the evolution of AI technologies.

In Finland, the rights of data subjects, including rectification and deletion of personal data, are safeguarded under the General Data Protection Regulation (GDPR). This EU-wide legislation mandates the correction of inaccurate personal data (“right to rectification”) and allows individuals to request the deletion of their personal data under specific conditions (“right to be forgotten”). For AI applications, rectifying inaccurate data does not necessitate altering the AI model but involves correcting the erroneous information. Deletion requests typically require removing the individual’s data from the dataset without needing to delete the entire AI model, provided it does not compromise the model’s integrity.

Concerns primarily focus on the data used for training these AI systems and the data generated from user interactions. For example, applications like ChatGPT, which utilise extensive datasets potentially containing personal information, have faced challenges for possibly processing personal data, such as IP addresses, without user consent or clear guidelines for data deletion or restriction. Furthermore, AI-generated data introduces issues around data integrity, the obligation for data deletion, and limitations on data use, leaving users without assurances that their information will not be misused or inaccurately stored. To mitigate these data protection risks, the use of closed or proprietary datasets that either exclude personal data or comply with data protection laws is suggested as a safer alternative.

In Finland, the integration of AI into legal practice is increasing, transforming traditional methodologies, notably in document analysis, legal research, predictive analytics, contract review and automated client services. The adoption of AI tools like Leya in the practice of law exemplifies the ongoing evolution within the legal industry, highlighting a shift towards more efficient, accurate and accessible legal services. Leya has also been implemented at Borenius to help its professionals by aggregating knowledge and simplifying legal workflows.

In general, AI or an algorithm cannot be held legally liable in Finland even if damage is directly caused by it. This is because the legal entity doctrine has not been extended beyond natural and legal persons, and only a legal entity recognised by law can be held liable for damages. Figuratively, AI can be compared to any tool: it does not matter whether a construction worker causes damage with a hammer or by their own hand, as liability lies with the worker in both cases. Thus, the user of, for example, AI, an algorithm or automated decision-making is always liable for any damage caused by the tool. For example, in most cases, a doctor is responsible for any diagnosis and treatment given, so the role of an algorithm involved in the decision-making is disregarded. Likewise, with regard to the activities of the authorities, even if an algorithm makes an administrative decision completely independently, liability will lie with the official. For the same reasons, it does not matter which participant in a supply chain, for instance, uses AI, because liability lies with the user.

However, the Finnish Chancellor of Justice has stated that, with the increase in automated decision-making, questions of apportionment of liability are central and that regulation and rules are needed as soon as possible. Liability issues related to AI algorithms have arisen in a number of health technology and autonomous car-related issues in particular but may also relate to contractual and product liability issues where the AI algorithm is involved in decision-making in one way or another. Still, at the time of writing, there is no extensive legislation on the matter in Finland.

Although the user is liable for damage caused by AI, the risk can be insured. Many insurance companies operating in Finland offer insurance for ICT services, which in most cases covers the direct damage caused when using AI.

Although AI-related liability issues have not been comprehensively addressed through legislation, there is some legislation on the matter. The issue was tackled for the first time in the new chapters added to the Act on Information Management in Public Administration (906/2019) in 2023, the first act to address these liability issues at the level of law. As was the case before the new legislation, pursuant to the new provisions a machine or AI cannot be held legally responsible for its decisions. Automated decision-making must be treated as an instrument or tool for which the user is ultimately responsible.

In September 2022, the European Commission issued a proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), which aims to make it easier to hold the tortfeasor liable by easing the burden of proof in situations where it is difficult for the injured party to prove a causal link between the AI system and the damage. The AI Liability Directive also obliges Member States to ensure that courts are empowered to order the disclosure of relevant evidence when a specific high-risk AI system is suspected of having caused damage. The Directive does not interfere with national laws on who can be held liable for damage caused by AI, but it does seek to prevent AI users from hiding behind the AI they use in order to avoid liability. At the time of writing, the Directive is not yet in force and no national legislation has been adopted on its basis.

The use of algorithms and machine learning has become increasingly prevalent in decision-making processes across many industries. However, there is growing recognition of the potential for algorithmic bias, which refers to the systematic and discriminatory effects of algorithmic decision-making systems on specific groups or individuals.

In Finland, the Avoiding AI Biases project was implemented as part of the 2021 government plan, which aimed to map the risks to fundamental rights and non-discrimination posed by machine learning-based AI systems that are either currently in use or planned for use in Finland. The project developed an assessment framework for non-discriminatory AI applications, helping to identify and manage risks of discrimination and promote equality in the use of AI. The framework can also help companies ensure that their AI systems are compliant with non-discrimination laws.

In the public sector, healthcare and state grant decision-making have been identified as areas where bias can create significant risk. In the private sector, credit scoring and hiring practices are considered high-risk areas for algorithmic bias.

In terms of industry efforts to address bias, several companies in Finland have established their own ethical guidelines for the use of AI. For example, certain companies (such as Nokia) have developed their own AI ethics framework that aims to ensure that their AI systems are transparent, trustworthy, and free of bias. Similarly, the Finnish Tax Authority has created a set of ethical guidelines for the use of AI.

The principle of data minimisation is a particularly challenging aspect of data protection regulation from the perspective of AI technology, at least as long as the efficiency of machine learning algorithms depends on the availability of large amounts of data. Because machine learning requires large datasets, companies utilising personal data in connection with machine learning face a greater risk of using data for purposes other than those for which it was collected, processing data on individuals outside the scope of the original collection and storing data for longer than necessary. Also, as authorities in other countries have pointed out (eg, regarding ChatGPT in Italy), it may be difficult for manufacturers and users of AI systems to identify and apply the legal basis for the processing of personal data.

However, the use of AI technology in the protection of personal data offers several benefits. AI technology can be used as a form of privacy-enhancing technology to help organisations comply with data protection by design obligations. For example, AI can be used to create synthetic data which replicates patterns and statistical properties of personal data. This can be processed in lieu of personal data.

AI can also be used to minimise the risk of privacy breaches, for example by encrypting personal data, reducing human error. The use of AI technology can also increase efficiency and accuracy in detecting and responding to potential data breaches. However, implementing such measures may come at the cost of hindering business operations and access to data, and data breaches can still occur despite stringent security measures.

Another issue arises from the processing of personal data and machine-generated data without direct human supervision. While automated data processing can increase efficiency and speed, it may also perpetuate implicit biases and discrimination that may not be immediately apparent. Without direct human supervision, errors and mistakes made by AI systems may go undetected, leading to adverse outcomes for individuals. The Finnish context of this issue is further discussed in 11.1 Algorithmic Bias.

AI has powered the use of biometric technologies, including facial recognition applications, which are increasingly used for verification, identification and categorisation purposes. Facial recognition technologies are inherently legally problematic, as they directly affect fundamental rights such as the protection of private life, the protection of personal data and the right to personal integrity, which are protected at both the EU and constitutional level. Facial recognition technology is built on the processing of biometric data and therefore involves the processing of special categories of personal data under the GDPR.

As biometric data belongs to the special categories of personal data, its processing is in principle prohibited under the GDPR unless there is explicit consent or another justification under the GDPR or other legislation. In addition, certain safeguards or contractual arrangements may be required. As a result, the use of facial recognition technology also requires a legal basis under the GDPR, such as explicit consent, a statutory task or a public interest.

Companies using facial recognition technology must ensure that they comply with all applicable laws and regulations, obtain necessary consents and protect the biometric information they collect. Failure to do so may result in fines and reputational harm.

Based on the current wording of the proposed EU AI Act, all remote biometric identification systems will be considered “high-risk AI systems” subject to strict requirements, except when the AI system is intended to be used for biometric verification whose sole purpose is to confirm that a natural person is the person they claim to be. The Act will also set forth specific transparency obligations for systems that would not be considered high-risk.

AI-related legislation in Finland is still relatively thin on the ground, but the most recent development is the general legislation on automated decision-making by public authorities, which allows the use of automated decision-making more widely than the previous special legislation. The new legislation permits automated decisions only in cases that do not require any discretion or analysis, which means that the technologies used cannot be particularly sophisticated. Under the laws, liability for the use of automated decision-making lies with the authority using the technology.

Companies, on the other hand, may use automated decision-making technologies much more freely than authorities, as there is no specific national legislation restricting it. The requirements and responsibilities for companies are laid down in the GDPR. In addition to the basic principles of processing personal data, companies using automated decision-making are bound by a special provision under which the subject of the automated decision or profiling has the right not to be subject to such decisions or profiling; in that case, the decision must be made by a human. This does not apply if the subject has, for example, given explicit consent to the automated decisions.

Companies acting as data controllers may be obliged to carry out an assessment of the impact of the envisaged processing operations on the protection of personal data prior to the processing in connection with automated profiling and decision-making. Pursuant to a statement by the Finnish Data Protection Ombudsman, the same applies when a company uses biometric data: when biometric data is used for the evaluation or scoring of an individual, automated decision-making or systematic monitoring, the company must perform an impact assessment.

Under general tort law, the person or entity using AI or automated decision-making technologies is responsible for their use and liable for the potential damages caused when using such technologies. On the other hand, liability and administrative fines for breaching provisions of the GDPR are laid down in the GDPR. The companies breaching the legislation and causing damage to others by using automated decision-making bear the risk of being punished or obliged to pay damages for their actions.

The use of chatbots and other AI technologies to substitute for services rendered by natural persons is regulated by the GDPR and national privacy laws in Finland. The GDPR requires that individuals be informed when their personal data is being processed, including when AI is used to process that data.

It is provided under the Act on the Provision of Digital Services (306/2019) that public authorities may provide advice to customers using service automation, such as chatbots, only when, inter alia, the user is:

  • informed that they are exchanging messages with the service automation;
  • offered the possibility of contacting a natural person within the authority to continue the service; and
  • offered the possibility of recording the exchange of messages with the service automation.

The use of technologies to make undisclosed suggestions or manipulate the behaviour of consumers, such as dark patterns, may be considered an unfair commercial practice under the Finnish Consumer Protection Act. Dark patterns refer to various methods of designing the structure of websites, software, mobile applications or other user interfaces to deceive or otherwise cause consumers to do something that they did not originally intend to do. For example, companies may not take automatic measures that incur additional costs for the consumer; the consumer’s explicit consent to the additional costs must be obtained under the Consumer Protection Act. Dark patterns are supervised by the Finnish Competition and Consumer Authority (FCCA).

The use of AI algorithms in pricing decisions has led to concerns over cartels and collusion in the Finnish market. The FCCA has released a report covering the legal implications of different types of collusion caused by algorithms. The report distinguishes between explicit collusion, where anti-competitive conduct is carried out using an algorithm, and tacit collusion, where algorithms lead to similar pricing without any agreement between competitors. The report highlights the challenges of recognising and intervening in tacit collusion, as it does not involve communication between competitors. The report also criticises the suggestion of assessing algorithmic collusion from the perspective of competition law-based price signalling.

The key legal question is how to intervene in tacit collusion caused by pricing algorithms. The report suggests that determining whether intervention is possible through interpretation practices at the EU level or directly through law will be crucial.

The report highlights the challenges of regulating algorithms and the importance of carefully considering the right approach under competition law. As such, the use of AI in pricing decisions requires careful consideration of the risks and benefits and active co-operation among stakeholders to establish effective regulatory frameworks.

Adopting AI in procurement introduces risks that must be addressed in contracts between customers and AI suppliers, particularly for AI as a Service (AIaaS) models. Key considerations include:

  • Data privacy and security: Contracts must include strict data protection measures, defining how sensitive information is safeguarded, access permissions, and breach protocols.
  • Bias and decision-making: Agreements should mandate regular audits and bias mitigation, with suppliers providing transparency about their AI training datasets and corrective actions for identified biases.
  • Transparency and explainability: Contracts need to require a degree of explainability from AI systems, ensuring suppliers can clarify the AI’s decision-making processes.
  • Data quantity and accuracy: For internal AI, agreements should address the need for substantial, high-quality data for AI training, including data quality assessment and improvement strategies.
  • Performance guarantees: Contracts should outline expected AI performance metrics and outcomes, including remedies for failing to meet these standards due to inaccurate predictions or biased results.

Digitalisation has been strongly linked to employment in Finland. AI in employment may lead to cost savings, shorter processing times and better recruitment decisions, but it may also pose considerable risks, including with regard to applicants’ rights to privacy and equal treatment. Firstly, data protection legislation must be considered when developing AI-based employment tools and when processing personal data. Secondly, the Non-Discrimination Act and the Act on Equality between Women and Men (609/1986) constrain the integration of AI in the employment context. Furthermore, the Employment Contracts Act (55/2001) requires that employees be treated equally and imposes obligations on the employment relationship that AI-driven processes would not necessarily meet.

When it comes to the question of liability, the Non-Discrimination Ombudsman has stated that the parties responsible for AI systems and the parties using them, such as employers, are always responsible for ensuring that their activities are in accordance with the non-discrimination rules.

In addition to general data protection legislation, the Act on the Protection of Privacy in Working Life (759/2004) applies to the processing of employees’ personal data in Finland. For example, it is permissible to locate employees if the employer has a valid reason to do so. In principle, location data cannot be used to monitor obligations under employment law, such as working time. However, it is possible to monitor and track working time if the employee performs all or most of their work elsewhere than at the employer’s premises. In such cases, the employer must determine the purpose of the technical monitoring of employees and the matter must be handled in a procedure referred to in the Act on Cooperation within Undertakings (1333/2021).

AI can be used to understand how to make work more efficient and safer. A study carried out by the Finnish Institute of Occupational Health sought to understand the connection between irregular working hours and sickness absences and accidents. AI was able to identify different working time patterns that were linked to the risk of accidents. The results showed that AI helps to understand how to make working conditions safer and thus make work more efficient.

Finland has the potential to be a trendsetter in the use of AI in digital platform companies, as Finland has expertise, capabilities and developers in the field of new technology applications. As previously stated, one of the objectives of the AI 4.0 programme was to increase the use of AI in SMEs. One way to achieve this is to provide systems where companies and organisations can build their own AI applications.

One of Finland’s largest grocery store chains launched a home delivery service where a robot delivers the food from the store to the customer. The delivery is ordered via a mobile app, after which the groceries are packed and put on board a robot. The robot uses AI to plan the route and to detect obstacles, people and vehicles. The service functions in a very limited area but shows that different AI-based systems have found a footing in Finland.

Financial services have benefited significantly from the use of AI; for example, AI-based customer service has long been used in the financial sector. AI can also be used in lending decisions, as it can quickly assess the conditions for granting a loan or credit and then make the decision. Such automated decision-making must comply with the requirements of the Finnish Data Protection Act and the GDPR. In such cases, financial services companies must inform the credit applicant of the existence of automated decision-making and profiling and provide relevant information on the logic behind the processing, as well as on the significance of this processing and its possible consequences for the applicant.

One solution to the growing shortage of resources in the Finnish healthcare system is the extensive use of AI in healthcare. However, a very complex regulatory framework – ie, the legislation on medical devices, data protection, public authorities’ automated decision-making and information management in social welfare and healthcare – must be taken into account when using AI in healthcare. Even though the purpose of the framework is to improve the safety of use, it slows down the integration of AI into healthcare.

A significant amount of social and health data, for example on patient treatment, examinations, and prescriptions, is recorded in the national information system services in Finland. With the help of AI, this existing data could be used for preventive care, to improve the quality of care or to achieve cost-effectiveness and efficiency. However, the use of AI for the above-mentioned activities is constrained by national legislation, such as the Act on the Status and Rights of Patients (785/1992), the Act on the Processing of Client Data in Healthcare and Social Welfare (703/2023) and the Act on the Secondary Use of Health and Social Data (552/2019).

The Deputy Data Protection Ombudsman has stated that AI may be used in healthcare. However, in order to be used in healthcare, AI-enabled products and services must pass extensive examinations that test their algorithms and demonstrate product safety as well as clinical and analytical validity. In the testing phase, the EU regulation of medical devices and Finnish data processing laws, such as the Act on the Secondary Use of Health and Social Data and the Biobank Act (688/2012), must be taken into account.

For instance, chatbots are widely used as the first point of contact between patients and healthcare providers. With AI, this service could be made more personal through natural interaction between a patient and the AI. While interacting with the patient, the AI could identify potential problems, offer advice on them and generate suggestions for referrals or prescriptions. However, pursuant to the Act on the Provision of Digital Services, an authority must ensure in advance the appropriateness of the information and advice generated by AI. Such a provision thus hinders the full utilisation of AI in public sector healthcare.

For example, in relation to the use of the electronic patient data management system (Apotti) in the largest hospital district in Finland, the aim is to integrate AI into the PDM system in the near future. AI could be used to identify work tasks that could be partially or fully automated. For example, it could be used to generate patient records based on a conversation between a doctor and a patient. In addition, AI can make use of available datasets, for example by comparing medical publications with patient records, to generate different treatment recommendations to help doctors in planning patient care.

Regulations for the use of AI in autonomous vehicles are being aligned with EU standards, particularly under the framework regulation on motor vehicles (EU) 2018/858. Proposed updates to the Finnish Traffic Law include restrictions on the use of communication devices by drivers to prevent distractions. Liability for accidents involving autonomous vehicles follows general liability principles, with various parties potentially held accountable, from insurance companies to contractors and individuals, based on negligence or reckless behaviour affecting public safety. Finland is also making efforts towards international harmonisation of autonomous vehicle regulations to comply with EU directives, including SERMI certification and CE marking.

Frameworks addressing AI algorithms, data privacy, cybersecurity, and vehicle performance are under development, emphasising cybersecurity and data protection in public IT procurement processes. Legislation is also being prepared to incorporate environmental impact considerations in public vehicle procurements. While ethical considerations for AI decision-making in critical situations are not explicitly covered, existing professional obligations in sectors like healthcare may provide some guidance on balancing public and ethical interests.

In Finland, the Act of 12 June 2008 regulates the use of machines, tools and other technical devices, as well as their combinations (work equipment), in work as specified in the Occupational Safety and Health Act. This regulation has been in force since 2008 and is being updated to better reflect current technology and work practices, as well as to fully utilise new types of devices. The adoption of new technology is also expected to improve worker safety.

The regulations governing the use of AI in manufacturing, including product safety and liability, would be influenced by these updates to ensure that AI systems integrated into manufacturing processes meet the required standards for quality, safety, and performance of products. The impact on the workforce and issues related to data privacy and security would also be addressed under the broader scope of occupational safety and health regulations, which are being adapted to accommodate new technologies and work practices.

Professionals using AI are expected to uphold high standards of liability and responsibility, ensuring that AI tools are reliable and appropriately supervised. Confidentiality remains paramount, with a requirement for AI systems to comply with stringent data protection standards, safeguarding sensitive client information. Intellectual property rights concerning AI technologies necessitate careful consideration to ensure lawful use and respect for existing IP rights.

Client consent is critical, especially for services involving personal data processing by AI, demanding transparency and informed agreement from clients. Moreover, professionals must ensure their AI practices comply with Finnish laws and regulations, including the GDPR for data protection, reflecting Finland’s commitment to high ethical and legal standards in the integration of AI into professional services.

Can AI Technology Be an Inventor for Patent Purposes?

In early 2020, the European Patent Office (EPO) took a position on whether AI can be an inventor or patentee. The EPO’s decision concerned an AI system called DABUS, and it was unequivocally negative: DABUS could not be the inventor. The EPO argued that the European Patent Convention (EPC) requires the inventor named in the application to be a natural person rather than a machine, and that the designation of the inventor is a necessary formal condition for the grant of a patent, without which the substantive conditions for patentability cannot be examined. Another main reason for rejecting the applications was that, in the EPO’s view, a patent confers rights on the inventor of the patentable invention, which cannot be granted to “non-persons” such as computer programs. Although an inventor’s rights can be transferred, an AI cannot transfer rights granted to it, because it is not a legal entity in which rights could vest in the first place.

Copyright v Trade Secret as Protection for AI Algorithms

Copyright has traditionally been viewed as the primary means of safeguarding software, such as computer programs, in Finland. However, copyright protects only the expression of, for example, an AI tool – its underlying source code. Proving in court that the code has been copied is very difficult, and a competitor may not even have copied the source code as such, but rather “imitated” it. As a result, copyright does not effectively prevent competitors from developing their own versions of such an AI tool, potentially eroding any competitive advantage that may have been gained. According to the Finnish Trade Secrets Act, which is based on the 2016 Trade Secrets Directive, a trade secret is information that is confidential, has commercial value due to its confidential nature and is subject to reasonable measures to ensure its confidentiality. In many cases, AI-related information possessed by a company will meet these conditions, making the algorithm a protected trade secret under the Act regardless of its implementation or expression – unlike copyright.

Technical Instruction

The Act further provides for the concept of a technical instruction, which applies under Finnish law and stands independent of the Directive. A technical instruction is a technical guideline or operations model that can be used in the course of business. An AI algorithm can also be considered a technical instruction. The protection of technical instructions is activated when such instructions are disclosed confidentially in certain circumstances. If a party has received an algorithm confidentially under these circumstances, they are not allowed to use or disclose it without authorisation. As a result, even in situations where an AI algorithm cannot be protected as a trade secret, it may still qualify for protection as a technical instruction.

In Finland, there has been considerable debate about the extent to which AI-generated works of art and works of authorship should be protected under copyright law. Copyright protection for AI-generated works requires that the work meets the threshold of originality and independence, like any other work of authorship. However, since AI often operates in a limited capacity and relies on human input for creativity, it is uncertain whether works produced by AI applications can be considered original and independent enough to warrant copyright protection.

As AI technology develops towards higher levels of autonomy, this threshold could be met at least theoretically, giving rise to the complex issue of who would hold the copyright for such work. If the AI works autonomously in its creative work without any human guidance, could the copyright belong to the AI itself?

The current answer in the Finnish legal system is no. Copyright can only be held by a natural person as can be interpreted from the Finnish copyright law and as confirmed in case law. Granting copyright to a machine would go against the fundamental understanding of our legal system regarding which entities can have legal rights or obligations. Therefore, although the concept of machine copyright is intriguing, it is currently not permissible.

Instead, the copyright holder of AI-generated works should be the individuals who contributed to the AI application, with the AI considered a tool used by the artist in their work. However, the more autonomous the AI’s creative function becomes, the more complex this determination becomes.

One of the main issues related to creating works and products with OpenAI’s technology is the ownership of intellectual property rights. The language models developed by OpenAI are trained on vast amounts of text and other data from the internet, which may include copyrighted materials or other protected works. In principle, users of OpenAI’s technology who create works or products are responsible for ensuring that they have the necessary rights to use any third-party materials that may be incorporated into their creations.

Pursuant to the Finnish Limited Liability Companies Act (624/2006), the management of the company shall act with due care and promote the interests of the company. If a company wishes to use AI-based tools, the management must familiarise itself with the use of AI in detail and take the necessary steps to ensure that the use of such tools does not cause damage to the company.

The board must be aware of existing and upcoming AI legislation and how sector-specific regulation and data protection requirements apply to the use of AI-based tools in the company. The board is responsible for deciding whether AI should be utilised in the company and for ensuring that it is used safely. Therefore, the board should consider, among other things, the following issues to fulfil their due diligence obligations.

  • The board should set goals for the use of AI in the company, such as making work more efficient through faster decision-making. This should be followed by an impact assessment on how to take into account potential negative impacts and consequences for employees and stakeholders.
  • The board should ensure that employees have sufficient awareness of the use of AI-based systems and how they should act if they perceive any biased decisions made by AI.
  • The board itself should regularly review the use of AI and take appropriate action if it finds that AI-based decisions are contrary to the goals for the use of AI or the company’s policies.

In Finland, when implementing AI best practices across organisations, key considerations include:

  • compliance with Finnish and EU regulations, including GDPR, and sector-specific laws;
  • adoption of ethical AI guidelines focusing on fairness, transparency, non-discrimination, and accountability;
  • establishment of robust data governance policies for data quality, privacy, and security;
  • identification and mitigation of AI deployment risks, including biases and errors;
  • creation of transparent AI systems with explainable decisions;
  • assurance of AI system security, reliability, and resilience; and
  • investment in employee training for AI technology proficiency.

Practical advice for effective AI implementation:

  • start with pilot projects and gradually expand;
  • leverage existing AI frameworks and tools for guidance;
  • engage with industry, academia, and regulators for insights on standards and best practices;
  • maintain records of AI development and deployment for compliance and auditing;
  • include stakeholder input in AI system development and deployment;
  • continuously monitor and update AI systems according to evolving best practices; and
  • consult with experts for guidance on AI regulation and implementation.

By adhering to these considerations and practical steps, Finnish organisations can ensure their AI practices are responsible, compliant, and aligned with both national and international standards.

Borenius

Eteläesplanadi 2
00130 Helsinki
Finland

+358 20 713 3136

erkko.korhonen@borenius.com
www.borenius.com

Trends and Developments


Authors



Borenius is a leading independent Finnish law firm with over 120 lawyers and has invested significantly in its global network of top-tier international law firms. The firm has offices in Finland, London, and New York, enabling it to provide excellent service to both domestic and international clients. Borenius’ technology and data practice comprises over 20 skilled lawyers, making it one of the biggest teams in the field in the Nordics. The team is well equipped to advise clients on the most demanding technology-related assignments and to provide practical and strategic advice that adds value to its clients’ businesses and operations. The firm has recently advised clients on complex R&D projects, a large insurance company on the procurement of a data services platform, and universities and other public sector entities on complex data protection matters.

General Framework

Finland is making significant strides towards becoming a frontrunner in the digital economy, with a particular emphasis on the development and use of artificial intelligence (AI). The government and public authorities have introduced various programmes and guidelines to foster the growth and adoption of AI, recognising its potential to transform both the economy and society. Below, we detail some of the primary objectives set forth by the Finnish government:

  • Finland aims to become a trusted and secure pioneer in the digital economy by 2025.
  • The Ministry of Economic Affairs and Employment has also set a goal to retain and attract the best talent and professionals in the field.
  • Broadly speaking, Finland aims to become a global leader in the application and use of AI in both the public and private sectors.
  • The government plans to make the use of AI possible in healthcare in, eg, shift planning, prevention, self-care and care activities.
  • The government wants to ensure technology-neutral legislation that enables the responsible use of automation and new technologies such as AI on a large scale. It plans to conduct a “TechFit” mapping to identify and address gaps and shortcomings, as well as barriers to technologies and automation, in legislation and policies.
  • The government wants to actively influence EU legislation on the platform economy, AI, data and digitalisation in a way that minimises additional national legislation.
  • The government wants to enable automated decisions by public authorities using AI.

To achieve these objectives, the Finnish government and public authorities have implemented several programmes and guidelines, including:

  • the national Artificial Intelligence Strategy, launched in 2017 (Finland was one of the first countries to launch such a strategy);
  • the Artificial Intelligence 4.0 programme, designed to promote the development and adoption of AI and other digital technologies, with a particular focus on small and medium-sized enterprises in the manufacturing sector (2020-2022);
  • the National Artificial Intelligence Programme AuroraAI, which developed an AI-powered technical solution and resulted in the AuroraAI network in 2022;
  • building a strong and distinctive digital economy, where close collaboration between the public and private sectors is essential;
  • leveraging Finland’s strengths, such as its highly motivated research groups that specialise in emerging sectors, such as unsupervised learning, its vibrant start-up field and the close co-operation between research institutions and companies;
  • working to strengthen the technological capacity of the public sector and to further develop public-private partnerships; and
  • the key role played by the Finnish Centre for Artificial Intelligence (FCAI) in promoting Finland’s strengths on the global stage.

Business Finland, a Finnish governmental funding agency, supports the development of AI technology through several key funding programmes:

  • the AI Business Programme, which boosted the development, growth and internationalisation of Finnish AI companies with a total budget of over EUR200 million (2018–2021); and
  • ICT 2023: Frontier AI Technologies, a joint Research, Development and Innovation Programme of the Academy of Finland and Business Finland, which funds high-quality scientific research that is also expected to have a wider scientific and social impact.

AI Legislation

EU legislation

The European Commission’s Proposal of 21 April 2021 on harmonised rules for artificial intelligence and amending certain EU legislative acts (COM/2021/206 final, the “Proposal”) is the main piece of AI-related regulation. The Proposal seeks to establish a European approach and to promote the development and deployment of AI in the public interest, particularly with regard to health, security and fundamental rights and freedoms. It offers a balanced, proportionate and horizontal regulatory approach to AI, addressing the associated risks and challenges whilst minimising constraints on technological development and the cost of introducing AI solutions to the market.

The Finnish government has consistently supported the EU’s advancements in AI-related regulation and EU-level initiatives. In late 2021, the government released its first memorandum on the Proposal, expressing its strong support for the responsible use of AI in Finland and Europe. The memorandum specifically highlighted the implications of AI systems for fundamental rights, noting that when used correctly, AI solutions can help to enhance and contribute to the realisation of these rights. However, the government acknowledged that there are still some unanswered questions in this area, deeming it crucial to thoroughly assess the scope of applicability from the perspective of fundamental rights. To this end, the Finnish government is advocating for a more comprehensive approach to the regulation of AI; one that takes into account the potential implications for fundamental rights.

In October 2022, the Finnish government presented its opinion on the proposed amendments to the European AI regulation in a second memorandum. The memorandum paid particular attention to the definition of AI, about which concerns were raised. Finland advocates excluding from the scope of the regulation any systems that simply follow pre-defined rules and instructions without any discretion or alteration of their operational logic. Moreover, Finland considers it essential that the rules on the experimentation and testing of AI systems in real-world settings are truly enabling, encourage innovation and do not unnecessarily hinder the entry of AI systems to the market.

In September 2023, the Finnish government gave its third opinion on the Proposal as amended by the European Parliament. The government advocated adherence to the Council’s original general approach and was especially critical of the proposed amendments to the classification of high-risk systems and the use of biometric identification systems.

Lack of national legislation

Despite the Finnish government’s active involvement in AI at the EU level, its implementation of programmes supporting businesses in the twin transition and adoption of AI, and its generally positive and forward-thinking attitude towards AI, national AI legislation remains rather sparse. The rules on AI use are mainly derived from the GDPR, focusing less on AI specifically and more on the use of personal data in AI-driven technologies. For instance, there is no specific legislation on general AI-related liability, restrictions on AI use, or processing personal or other data with advanced AI technologies in Finland.

This is most likely due to the above-mentioned pending EU legislation and the government’s express goal of minimising national legislation in the area of AI. The AI Act, together with the proposed Directive on adapting non-contractual civil liability rules to artificial intelligence (COM/2022/496, the “AI Liability Directive”) and the proposed Directive on liability for defective products (COM/2022/495, the “Product Liability Directive”), will have a significant impact on the AI-related legislation of all EU member states. As a result, the Finnish government has opted for an active role in the EU arena, choosing not to enact new national legislation while awaiting the AI Act’s entry into force. Both directives must be transposed into national legislation; it remains to be seen whether completely new national laws will be drafted or whether existing laws, such as the Tort Liability Act, will be amended.

However, Finland does have special legislation concerning automated decision-making by public authorities. Despite the legislative “pause”, new general legislation on automated decision-making in public administration was enacted in 2023. Previously, automated decisions were regulated separately for each type of authority. The new legislation permits automated decisions in general, provided that the authority acts in accordance with the provisions of the law. Moreover, the legislation addresses liability issues relating to automated decisions: under the new provisions of the Act on Information Management in Public Administration, liability for the use of automated decision-making lies with the authority using it. Although the regulation does not yet extend to civil liability or to advanced AI technologies, it is still seen as progressive because it diverges from the current EU regulation.

Another intriguing aspect of the Finnish legal system in relation to AI is the complete absence of case law and judicial decisions. This can be attributed to several factors. Firstly, given the anticipated EU legislation and the scarcity of specific national legislation, no questions of interpretation have arisen that would warrant resolution through the courts. Secondly, there has simply been a lack of litigation: without AI-related disputes, there are no legal proceedings to generate case law. Finally, any potential disputes that have been settled or resolved through alternative dispute resolution mechanisms would not have generated a court ruling.

Concerns Regarding AI

Despite the broad recognition of AI’s potential in Finland, concerns regarding its safety have also been nationally acknowledged. To address this, the 2021 government plan included the Avoiding AI Biases project (2021–2022), which identified risks related to fundamental rights and non-discrimination in existing and planned AI systems. The project revealed that algorithmic discrimination had been given due consideration, particularly in the public sector. To ensure that AI applications are non-discriminatory, the researchers developed an assessment framework to identify and manage discrimination risks and promote equality in AI use. This framework helps to ensure that AI is used responsibly and ethically, and that the rights of individuals are respected.

Cybersecurity threats associated with AI have also been taken seriously in Finland. The Finnish Transport and Communications Agency, together with the National Cyber Security Centre, has conducted research on the potential impact of artificial intelligence on cyberattacks. While the risk of AI-enabled cyberattacks is currently considered low, the study acknowledges that AI systems can enhance traditional cyberattacks by increasing their speed, scale, coverage and personalisation, thus making them more successful. If an advanced AI attack is not promptly addressed, it can allow the attacker to penetrate deeper into systems or networks.

To combat AI-enabled cyberattacks, the study emphasises the need for cybersecurity to become more automated. Automated defence systems will be the only ones capable of matching the speed of AI-enabled attacks. These systems will require AI-based decision-making capabilities to effectively detect and respond to such attacks.

Data Protection and Privacy Considerations in the Face of Emerging AI Technologies

Many provisions of the GDPR are relevant to AI, and some are challenged by the new ways of processing personal data that are enabled by AI. There is indeed a tension between the traditional data protection principles and the full deployment of the power of AI and big data.

Some of the issues raised in the Finnish privacy discussion include:

  • observation and prevention of implicit biases and algorithmic discrimination in automated processing of personal data;
  • the principle of data minimisation from a machine learning perspective;
  • the increasing use of biometric technologies, including facial recognition applications; and
  • chatbots and conversational AI technologies from a transparency perspective.

On the other hand, AI technology can be used as a form of privacy-enhancing technology to help organisations comply with data protection by design obligations. It can also minimise the risk of privacy breaches and increase efficiency and accuracy in detecting and responding to potential database breaches.

Recent Intellectual Property Considerations

Intellectual property rights are essential in harnessing the economic potential of inventions, including AI technology. AI-related inventions pose several challenges to the current Finnish intellectual property laws.

Firstly, there has been an ongoing discussion on how to sufficiently protect AI algorithms if patenting is not a desired option – eg, for cost or confidentiality reasons. In Finland, copyright has recently been regarded as a somewhat ineffective means of protecting software, as it only applies to the source code. The Finnish Trade Secrets Act provides protection for trade secrets, which can include AI algorithms that, inter alia, are confidential and have commercial value. Another form of protection under Finnish law is the protection for technical instructions, which can be considered applicable to an AI algorithm, even if it does not qualify as a trade secret. However, the most common form of protection for emerging AI technologies in practical terms remains uncertain.

Secondly, AI systems are becoming increasingly autonomous and can seemingly create works of art and works of authorship in a creative manner. This is particularly true of generative AI tools, such as ChatGPT, which have exploded in popularity. This has sparked a debate in Finland on whether the copyright to such works of art or authorship could belong to the AI. The same line of reasoning applies to patent applications – could AI be an inventor or co-inventor of a patentable invention?

According to the prevailing Finnish perspective, the answer to both questions is no. The current legal structures do not recognise AI as an entity to which rights could be granted. Despite hopes for changes in the comprehensive reform of Finnish copyright law in 2023, AI-generated works were not recognised. It remains to be seen how AI-generated works and inventions will shape intellectual property rights through case law.

Small Data

In addition to big data aspects, Finland could potentially become a leader in the “small data” field, where AI can be utilised even with a limited amount of data. There is a current trend of creating more efficient and understandable technologies that require less data, energy and computation. In the future, Finland could capitalise on the B2B market, which is twice as large as the B2C market, by investing in the development of small data AI solutions that could be used on the global stage.

The AI sector is striving to expand the interactive ecosystem in Finland. However, funding remains a major challenge. For example, the Finnish Centre for Artificial Intelligence (FCAI) receives flagship funding intended for basic research, but this is not sufficient to run the ecosystem itself.

Summary

Finland launched a national Artificial Intelligence Programme in 2017, with the goal of making the country a global leader in the application of AI. By 2025, Finland aims to become a trusted and secure pioneer in the digital economy. To achieve this goal, the country is building a strong and distinctive digital economy through close collaboration between the public and private sectors and strengthening the technological capacity of the public sector while also developing public-private partnerships. The Finnish government has also launched funding programmes for AI development and has shown strong support for EU-level initiatives in AI-related regulation.

The government has emphasised the importance of fundamental rights in the use of AI and urged for a comprehensive approach to regulation. However, the Finnish government has also highlighted the need for a balanced approach to AI regulation that does not impede technological development or raise the cost of introducing AI solutions to the market.

While the Finnish government has been active at the EU level and initiated programmes supporting businesses in the twin transition and adoption of AI technologies, the AI legislation on a national level is still relatively thin on the ground. Pending EU legislation is likely to have a significant impact on AI regulation in Finland.

