Artificial Intelligence 2023

Last Updated May 30, 2023

Finland

Law and Practice

Authors



Borenius employs over 120 lawyers, and as a leading independent Finnish law firm, we work together with other highly regarded law firms across the globe that share our commitment to excellent service and quality. We have invested significantly in our global network of top-tier international law firms. Today, we have offices in Finland as well as representative offices in London and New York to ensure that we can provide the best advice to our domestic and international clients. Our technology and data practice is made up of nearly 20 skilled lawyers, making us one of the biggest teams in the field in the Nordics. The team is well equipped to advise clients on the most demanding technology-related assignments and to provide practical and strategic advice that delivers extra value for clients’ businesses and operations. We have recently advised clients on complex R&D projects, advised a large insurance company on the procurement of a data services platform, and assisted universities and other public sector entities with complex data protection matters (especially relating to cloud transition).

In Finland, there are currently no specific laws that solely govern artificial intelligence (AI) or machine learning (ML). However, there are several laws and regulations that may apply to AI and its applications in various domains, such as privacy, data protection and intellectual property.

  • Data Protection: The General Data Protection Regulation (GDPR) applies to the processing of personal data, including AI and ML models that use personal data. The Data Protection Act (1050/2018) regulates the processing of personal data in Finland.
  • Liability: Finland applies strict liability in product liability matters, which may extend to AI and its applications. If an AI system causes harm or damage, the company or organisation responsible for the AI may be held liable.
  • Intellectual Property: The Finnish Copyright Act (404/1961), the Finnish Patent Act (550/1967) and the Finnish Trade Secrets Act (595/2018) apply to AI and its applications, protecting the rights of the creators of AI technologies.
  • Discrimination: The Finnish Non-discrimination Act (1325/2014) prohibits discrimination based on race, gender, religion and other protected grounds, including discrimination based on AI-driven decision-making.

In addition to the above, the Finnish government has established a national AI strategy to promote the development and use of AI in Finland, and to ensure that AI is used in a responsible and ethical manner. The strategy includes measures to improve the availability and quality of data, to develop AI technologies, and to support the adoption of AI in various industries while ensuring that ethical considerations are taken into account.

It has to be taken into account that the most recent technologies and AI solutions may not have been publicly disclosed yet, because companies wish to gain a competitive advantage from their inventions. As a result, the industry use of AI is discussed here mainly on a general level. However, based on our experience, AI solutions are already in use in various industries, such as retail, banking, energy, entertainment, logistics and manufacturing. These industries commonly employ AI to optimise processes and improve the accuracy of data output.

A good example of the opportunities created by AI and automation is the “Industrial Internet of Things”, which can be used by companies in different industrial sectors to improve and optimise their operations. Today's industrial machinery is constantly generating data which, together with data from customers, can be used to optimise production volumes, for example. When all of this happens automatically, it may affect the position of the employees who perform the same tasks. This also creates a whole new set of opportunities for cloud service providers to offer companies data pools for such uses. Such data can then potentially be used for training AI systems and producing more accurate and relevant results based on the data – eg, in the retail industry. Also, as a consequence of recent developments in the field of generative AI solutions, which are often based on large language models (such as ChatGPT), companies in different industries are starting to explore opportunities to use and integrate AI into their business operations.

Under the Administrative Procedure Act (434/2003), an authority may make an automated decision in a matter that does not involve issues which, in the authority's prior assessment, would require case-by-case consideration. It is essential to note that automated decision-making tools may therefore not exercise discretion in decision-making. As a result, the reform does not allow for the use of more advanced AI. Decision-making can therefore be automatic, but not autonomous.

The entire field of AI regulation in Finland will to a very large extent be defined by the proposed EU Artificial Intelligence Act (2021/0106 (COD)) and the national legislation implementing the proposed Artificial Intelligence Liability Directive (2022/0303 (COD)). Both of these legislative proposals are currently pending in the EU.

Throughout the legislative process for these proposals, Finland has taken a positive view of AI-related regulation in the EU and generally supports responsible AI development. The national AI strategy is aligned with the essential views adopted in the EU proposals.

Under the proposed EU AI Act, an “artificial intelligence system” is defined as a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations or decisions influencing physical or virtual environments. This is to a great extent in line with the definition used by the OECD in its “Recommendation of the Council on Artificial Intelligence” from 2019.

However, there are some minor conflicts between the latest amendments to the AI Act proposal and existing legislation in Finland. Namely, the automated decision-making by authorities enabled and permitted under the Administrative Procedure Act could fall under the scope of the proposed AI Act.

Consequently, the Finnish government aims to exclude systems that automatically apply human-defined rules without discretion or changing logic from the definition of artificial intelligence under the AI Act. This was articulated in the latest government memorandum in 2022 concerning the AI Act. Furthermore, the government deems it important for regulation governing AI system experimentation and testing in real-life situations to truly enable and foster innovation, without unfairly hindering the market entry of AI systems.

There is no applicable information in this jurisdiction.

In Finland, there are as yet no judgments or court decisions relating to AI. This is most likely due to the limited amount of AI-specific legislation.

The only relevant decisions have come from the Deputy Data Protection Ombudsman and the National Non-Discrimination and Equality Tribunal.

Deputy Data Protection Ombudsman

The Deputy Data Protection Ombudsman has handled AI-related issues in two decisions concerning the use of an automated decision-making tool. Both decisions concern a tool designed to identify patients whose treatment should be reviewed and to refer them to the right treatment. In the decisions, the Deputy Data Protection Ombudsman assessed whether the tool was making automated individual decisions within the meaning of Article 22 of the GDPR and whether the data controller was acting in accordance with the GDPR. Because the cases concerned data protection issues rather than AI as such, there is less discussion of AI. However, the Deputy Data Protection Ombudsman raised concerns that the algorithm used in the tool would discriminate against patients who would be excluded from specific proactive healthcare interventions as a result of an assessment based on the profiling performed by the algorithm. Nevertheless, this consideration was not decisive in the decisions.

The National Non-Discrimination and Equality Tribunal

In brief, the National Non-Discrimination and Equality Tribunal’s decision concerned discrimination in access to credit services. The person involved had applied for credit to finance the purchase of household goods from an online shop. However, the credit company did not grant the credit. The decision was based on credit ratings derived from statistical methods used by credit reference agencies. The ratings did not take the applicant's actual ability to pay into account, but were based on general statistics relating to, for example, place of residence, gender, language and age. If the Finnish-speaking male applicant had been a Swedish-speaking female, he would have been granted credit.

The algorithm, which used the above-mentioned data and produced credit ratings based on it, was deemed to be discriminatory. The AI in this case was not advanced either, but a mere algorithm profiling customers based on predefined information.

Reasons for the Lack of Case Law

There are various reasons for the lack of court rulings. For example, the anticipation of EU legislation and the scarcity of specific legislation may have led to a situation in which no questions on the interpretation of the law requiring settlement in court have arisen. Another reason could be a lack of litigation: if there are no disputes on the use of AI, there is also no need for legal proceedings that would produce case law. Also, if potential disputes have been settled or resolved through alternative dispute resolution mechanisms, they would not have generated court rulings.

Technology Definitions in the Decisions

The decisions discussed in 4.1 Judicial Decisions did not set out any definitions of AI. Since the decisions concerned GDPR compliance and discrimination, the GDPR’s definition of an automated individual decision was used. A decision based solely on automated processing refers to a situation where no human is involved in the decision-making process. For example, if an automated process produces a recommendation that a human merely takes into account together with other factors in making the final decision, it is not a decision based solely on automated processing.

In Finland, regulation is prepared mainly in the different ministries. They draft legislative proposals for the government, which then submits them to Parliament. The ministries are authorised to propose any kind of new legislation, or amendments to existing laws, that applies throughout Finland. Agencies and standard-setting bodies issuing decrees and soft law are further discussed in 7. Standard-Setting Bodies.

So far, regarding AI-related regulation, the most active ministries have been the Ministry of Economic Affairs and Employment, the Ministry of Justice and the Ministry of Finance. The Ministry of Justice prepared the legislation on automated decision-making by public authorities. The Information Management Board, which supervises the use of AI by public authorities, operates under the Ministry of Finance. The Ministry of Economic Affairs and Employment, in turn, has set up the Artificial Intelligence 4.0 programme, which aims to accelerate business digitalisation.

Technology Definitions by Regulatory Agencies

The definitions of AI technologies used by different agencies are very similar. They all emphasise three features, namely: autonomy, ability to learn and high performance. AI enables machines, devices, software, systems and services to act in a meaningful way, adapting to the task and the situation, almost like a human.

Nevertheless, the existing legislation does not yet cover the use of AI described in this definition. As stated in 3. Legislation and Directives and 8. Government Use of AI, the current legislation only concerns the use of automated decision-making, which must not involve any discretion of its own. The legislation does not, however, restrict the use of AI by businesses, as the national legislation only applies to public authorities. As a result, companies can use AI much more freely in their operations as long as they acknowledge their responsibilities and liability.

The main regulatory objective of the AI legislation, and of the authorities issuing legislation and supervising compliance with the law, is to increase the use of AI in Finland in a safe and responsible manner. For example, the preparatory works of the above-mentioned legislation state that the objective of the legislation is to enable the use of automated decision-making more widely and thereby increase the use of the technology. At the same time, the Information Management Board ensures that authorities using AI are acting lawfully.

The objective of the Artificial Intelligence 4.0 programme under the Ministry of Economic Affairs and Employment is to strengthen digitalisation and economic growth, as well as encourage co-operation between different sectors, increase investment in digitalisation and improve digital skills. The vision of the programme is “In 2030 Finland will be a sustainable winner in the twin transition”. Twin transition means responding to the challenges of industrial digitalisation and the green transition at the same time.

AI-specific legislation is still mostly lacking in Finland, which is why there are also no law enforcement actions relating specifically to the use of AI technologies. Remedies for breaches of legislation governing the use of AI can be imposed mainly under the provisions of the Tort Liability Act (412/1974), the Constitution of Finland (731/1999) (in the case of public authorities), the Criminal Code (39/1889) and the GDPR. Moreover, if an authority issues an order or a prohibitive decision, it can be enforced with a conditional fine under the Act on Conditional Fines.

In the National Non-Discrimination and Equality Tribunal’s decision mentioned in 4.1 Judicial Decisions, the tribunal imposed a conditional fine of EUR100,000 on the credit company. Currently, there are no other law enforcement actions pending.

Legislation Proposals “On Hold” in Finland

For the moment, there is no AI-related proposed legislation or regulation pending at the national level in Finland. The most important legislation currently pending comprises the EU Artificial Intelligence Act and the proposed Artificial Intelligence Liability Directive, on both of which the government of Finland has given statements. The fact that the pending EU legislation will have a material impact on Finnish AI legislation is most certainly one reason for the lack of national legislative proposals.

Once the proposed EU legislation enters into force, the development of national legislation will presumably be stepped up. The AI Act will be directly applicable, but the directive will have to be transposed into national legislation and is likely to have an impact on several Finnish laws. Moreover, one of the objectives of the Artificial Intelligence 4.0 programme is to increase Finland’s role in the creation and implementation of the EU's AI, data and industrial strategies, namely by being active in regulatory processes and encouraging Finnish companies to strengthen their role and influence in EU-level policymaking and networks.

The Finnish Standards Association (SFS) is the national standardisation organisation in Finland. In 2018, the SFS established a national standardisation group, SFS/SR 315, to develop standards related to AI. SFS/SR 315 currently focuses on Finnish AI concepts and terminology, reference architecture, ethical and societal aspects of AI, and AI management systems. Members of the group are also involved in producing and commenting on the content of both European and international standards.

AI-related Soft Law

In addition to the SFS, there are several public bodies in Finland that provide AI-related guidance and soft law. For example, the Finnish Centre for Artificial Intelligence (FCAI) is a community of AI experts in Finland, initiated by Aalto University, the University of Helsinki, and the VTT Technical Research Centre of Finland. It provides research-based knowledge and guidance on AI and its applications to academia, industry and government organisations.

Finnish supervisory authorities also have a role in the field of AI soft law. The Data Protection Ombudsman supervises compliance with data protection legislation, which naturally applies to processing of personal data by AI systems. The Non-Discrimination Ombudsman supervises compliance with non-discrimination provisions in the use of AI and algorithms. As previously mentioned, the Deputy Data Protection Ombudsman has issued decisions concerning automated decision-making. Furthermore, the Non-Discrimination Ombudsman took a case concerning automated decision-making in lending to the National Non-Discrimination and Equality Tribunal in 2017.

Although the recommendations and guidance issued by these bodies are not legally binding, they provide valuable direction for the development, deployment and use of AI in Finland.

Standardisation in Finland is closely connected to international work. 97 per cent of the standards approved in Finland are of international origin. International standards are sometimes complemented by nationally developed standards. The most important international standard-setting bodies affecting Finland include ISO, CEN, IEC and CENELEC.

Finland is currently at a pivotal juncture regarding government use of AI. In spring 2023, new legislation on automated decision-making by public authorities entered into force. Previously, using AI in public authorities required special legislation separately for practically every type of decision. Despite that, for example, the Finnish Tax Administration and the Social Insurance Institution, which both make millions of administrative decisions a year, have already been using automated decision-making based on special legislation.

The new, general legislation allows public authorities to use automated decision-making without special legislation as long as the use of AI tools complies with the law. That naturally makes the use of AI easier for administrative bodies, as decision-specific legislation is no longer needed; hence AI systems will most likely be used more widely in the near future. Indeed, the preparatory works of the new legislation state that general legislation allowing automated decision-making is needed due to the increased use of and demand for AI.

However, the use of AI enabled by the new legislation is not particularly advanced. It only allows automated decision-making in situations that do not require any consideration. The algorithm must transfer the matter to a human being if it cannot be resolved without deliberation. As a result, AI cannot be used, for example, in hiring government employees, as the hiring process always requires consideration and cannot be carried out solely on the basis of indisputable facts.

As there was no general legislation concerning the use of AI before the legislation mentioned in 8.1 Government Use of AI, and as the Finnish governmental authorities are not yet using any services or applications based on more sophisticated AI, there are no judicial decisions or rulings on government use of AI. Due to the lack of legislation and use, there are not even any pending cases concerning AI use in governmental bodies at the time of writing.

The Finnish Transport and Communications Agency and the National Cyber Security Centre have published a study on cyberattacks enabled by AI. Although the threat of AI-enabled cyberattacks is currently still considered low, it is acknowledged that the intelligent automation provided by AI systems will enhance traditional cyberattacks by increasing their speed, scale, coverage and personalised targeting, thus increasing their overall success. AI can also make attacks more sophisticated, tailored, malleable and harder to detect. AI attacks can include targeted phishing, impersonation and imitation, and better hiding of malware activity. A slow or ineffective response to an advanced AI attack may allow the attacker to penetrate even deeper into systems or networks before being caught.

The study states that cybersecurity must therefore become more automated to respond to AI-enabled cyberattacks. Only automated defence systems will be able to match the speed of AI-enabled attacks. These defence systems will need AI-based decision-making to detect and respond to such attacks.

In Finland, there is currently no legislation specifically addressing the security risks posed by AI. The development of national legislation on AI has been in a kind of wait-and-see mode, as the EU Commission’s proposal for the AI Act has been published and, when the Act enters into force, it will take precedence over national legislation. It remains to be seen whether national legislation on cybersecurity will also start to evolve once the AI Act has entered into force.

An example of national legislation on cybersecurity is the Act on the Operation of the Government Security Network, which requires important governmental bodies to have a security network in place to ensure that their communications and operations remain uninterrupted even in exceptional situations. However, it does not contain any AI-specific provisions.

Generative AI technology has recently attracted a lot of attention in Finland due to its potential to improve businesses’ efficiency and productivity. However, as with any technology, it comes with potential risks that must be considered. These are some of the risks that are currently being discussed in the industry:

  • Confidentiality and intellectual property risks: Since generative AI models often absorb user-inputted data to improve the model over time, they could end up exposing private or proprietary information to the public. The risk of others accessing sensitive information increases the more an organisation uses the technology.
  • Inaccuracies and hallucination: Even when generative AI models are used correctly, there is always a risk that they generate false or malicious content. Generative AI is strongly associated with the so-called problem of hallucination: the AI convincingly presents things that are completely untrue. False claims can be easy to trust when they are presented in a very credible way. For example, the most talked-about generative AI model, ChatGPT, has been documented to hallucinate facts. Use of inaccurate and untrue outputs may also potentially lead to defamation and, consequently, criminal sanctions.
  • Copyright: Who owns content once it is run through generative AI applications? Licences and terms vary between different AI tools. However, it can be extremely complicated to determine copyright as between the original rights holder of the input data, the AI tool operator and another user claiming AI-generated content as their own.
  • Deepfakes: With the widespread use of deepfake content, problems such as manipulation of the public as well as attacks on personal rights and sensitive information are becoming more common. AI-generated images and videos can look extremely realistic, making them difficult for humans or even machines to detect. This material can be used to cause harm to the reputation of a company or its executives. Cybercriminals can also use generative AI to create more sophisticated phishing scams or credentials to hack into systems.
  • Attacks on the dataset: The use of generative AI also poses additional cybersecurity risks such as data poisoning, which involves manipulating the data used to train the models, and adversarial attacks. Adversarial attacks attempt to deceive generative AI models by feeding them malicious inputs, which could lead to incorrect outputs and potentially harm businesses or individuals relying on those outputs.

While solutions to these issues are rapidly being developed, it is crucial for any organisation seeking to utilise generative AI tools to recognise the limitations and risks associated with them.

The need to examine and develop the automation of judicial decision-making has been identified since the mid-20th century. The Finnish Bar Association has recognised the impact of rapid digitalisation and technological developments as one of the main themes in the administration of justice. However, there is no policy or regulation from the government on the use of AI in the practice of law. Under the Constitution of Finland, judicial powers are exercised by independent courts of law, and the Constitutional Law Committee has stated that the threshold for the transfer of jurisdiction is high. Moreover, everyone has a constitutional right to have their case dealt with by a legally competent court of law or other authority. Among other things, these constitutional rights hinder the integration of AI into the practice of law.

Ethical issues are often at the heart of discussions on the use of AI in the practice of law, as AI can have many unethical effects on the administration of justice. In particular, biased judgments as well as issues of transparency and traceability are perceived as problematic. Pursuant to the Finnish procedural rules, judgments shall be accompanied by reasons, and they shall indicate the circumstances and the legal reasoning underlying the decision. At this stage, AI may not be able to justify its decisions at a sufficient level, so it is not possible to use it in the practice of law.

In general, AI or an algorithm cannot be held legally liable in Finland even if damage is directly caused by it. That is because the legal entity doctrine has not been extended beyond natural and legal persons, and only a legal entity recognised by law can be held liable for damages. Figuratively, AI can be compared to any tool. It does not matter whether a construction worker causes damage with a hammer or by their own hand – liability lies with the worker in both cases. Thus, the user of, for example, AI, an algorithm or automated decision-making is always the one liable for any damage caused by the tool. For example, in most cases, a doctor is responsible for any diagnosis and treatment given, so in this respect the involvement of the algorithm in the decision-making itself is disregarded. Also, with regard to the activities of the authorities, even if the algorithm makes an actual administrative decision completely independently, the liability will lie with the official. For the same reasons, it does not matter which participant in a supply chain, for instance, uses AI, because the liability lies with the user.

However, the Finnish Chancellor of Justice has stated that, with the increase in automated decision-making, questions of apportionment of liability are central and that regulation and rules are needed as soon as possible. Liability issues related to AI algorithms have arisen in a number of health technology and autonomous car-related matters in particular, but may also relate to contractual and product liability issues where an AI algorithm is involved in decision-making in one way or another. Still, at the time of writing, there is no extensive legislation on the matter in Finland.

Although the user is liable for damage caused by AI, the risk can be insured. Many insurance companies operating in Finland offer insurance for ICT services, which in most cases covers the direct damage caused when using AI.

Regulation on AI Liability

Although AI-related liability issues have not been fully addressed through legislation, there is some legislation on the matter. The issue was tackled for the first time in the new chapters added to the Act on Information Management in Public Administration (906/2019) in 2023. This act is the first to address liability issues at the level of law. As was the case before the new legislation, pursuant to the new provisions a machine or AI cannot be held legally responsible for its decisions. Automated decision-making must be treated as an instrument or tool for which the user is ultimately responsible.

In September 2022, the European Commission issued a proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), which aims to make it easier to hold the tortfeasor liable by applying a reversed burden of proof in situations where it is difficult for the injured party to prove a causal link to the damage caused by AI. The AI Liability Directive also obliges Member States to ensure that courts are empowered to order the disclosure of relevant evidence when a specific high-risk AI system is suspected of having caused damage. The Directive does not interfere with national laws on who can be held liable for damage caused by AI, but it does seek to prevent AI users from hiding behind the AI they use to avoid liability. At the time of writing, the Directive is not yet in force and no national legislation has been adopted on its basis.

The use of algorithms and machine learning has become increasingly prevalent in decision-making processes across many industries. However, there is growing recognition of the potential for algorithmic bias, which refers to the systematic and discriminatory effects of algorithmic decision-making systems on specific groups or individuals.

In Finland, the Avoiding AI Biases project was implemented as part of the 2021 government plan, which aimed to map the risks to fundamental rights and non-discrimination posed by machine learning-based AI systems that are either currently in use or planned for use in Finland. The project developed an assessment framework for non-discriminatory AI applications, helping to identify and manage risks of discrimination and to promote equality in the use of AI. The framework can also help companies ensure that their AI systems are compliant with non-discrimination laws.

In the public sector, healthcare and state grant decision-making have been identified as areas where bias can create significant risk. In the private sector, credit scoring and hiring practices are considered high-risk areas for algorithmic bias, as further discussed in 5.1 Key Regulatory Agencies and 14.1 Employee Evaluation and Monitoring.

In terms of industry efforts to address bias, several companies in Finland have established their own ethical guidelines for the use of AI. For example, certain companies (such as Nokia) have developed their own AI ethics frameworks that aim to ensure that their AI systems are transparent, trustworthy and free of bias. Similarly, the Finnish Tax Administration has created a set of ethical guidelines for the use of AI.

The principle of data minimisation is a particularly challenging aspect of data protection regulation from the perspective of AI technology, at least as long as the efficiency of machine learning algorithms depends on the availability of large amounts of data. As machine learning requires large datasets, companies utilising personal data in connection with machine learning are at greater risk of using data for purposes other than those for which it was collected, processing information on individuals outside the scope of data collection, and storing data for longer than necessary. Also, as authorities in other countries have pointed out (eg, regarding ChatGPT in Italy), it may be difficult for manufacturers of AI systems and users of AI to identify and apply the legal basis for the processing of personal data.

However, the use of AI technology in the protection of personal data offers several benefits. AI technology can be used as a form of privacy-enhancing technology to help organisations comply with data protection by design obligations. For example, AI can be used to create synthetic data which replicates patterns and statistical properties of personal data. This can be processed in lieu of personal data.
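To illustrate the synthetic data point in concrete terms, the following is a minimal sketch (not drawn from any Finnish guidance or source cited here) of how synthetic records might be generated by fitting a simple statistical model to a small table of personal data and sampling new, artificial records that preserve its statistical properties. The dataset, column choices and the use of a multivariate normal model are assumptions made purely for illustration.

```python
# Illustrative sketch only: generate synthetic records that mimic the
# statistical properties (means and correlations) of a small personal-data
# table, so that analysis can run on synthetic rather than real data.
# The "real" values and columns below are invented for this example.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical personal data: age, annual income (kEUR), monthly spend (EUR)
real = np.array([
    [34, 48.0, 1200.0],
    [41, 55.5, 1450.0],
    [29, 39.0,  980.0],
    [52, 72.0, 2100.0],
    [46, 61.0, 1700.0],
    [37, 50.0, 1300.0],
])

# Fit a simple multivariate normal distribution to the real data ...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ... and sample synthetic records that replicate its patterns and statistics.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:      ", np.round(mean, 1))
print("synthetic means: ", np.round(synthetic.mean(axis=0), 1))
print("real correlations:\n", np.round(np.corrcoef(real, rowvar=False), 2))
print("synthetic correlations:\n", np.round(np.corrcoef(synthetic, rowvar=False), 2))
```

In practice, far more sophisticated generative models are used, and the resulting synthetic data should still be assessed for re-identification risk before it is treated as falling outside the scope of personal data.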

AI can also be used to minimise the risk of privacy breaches, for example by encrypting personal data, reducing human error. The use of AI technology can also increase efficiency and accuracy in detecting and responding to potential data breaches. However, implementing such measures may come at the cost of hindering business operations and access to data, and data breaches can still occur despite stringent security measures.

Another issue arises from the processing of personal data and machine-generated data without direct human supervision. While automated data processing can increase efficiency and speed, it may also perpetuate implicit biases and discrimination that may not be immediately apparent. Without direct human supervision, errors and mistakes made by AI systems may go undetected, leading to adverse outcomes for individuals. The Finnish context of this issue is further discussed in 12.1 Algorithmic Bias.

AI has powered the use of biometric technologies, including facial recognition applications, which are increasingly used for verification, identification and categorisation purposes. Facial recognition technologies are inherently legally problematic, as they directly affect fundamental rights such as the protection of private life, the protection of personal data and the right to personal integrity, which are protected both at the EU and constitutional level. Facial recognition technology is built on the processing of biometric data and therefore entails the processing of special categories of personal data under the GDPR.

The processing of biometric data as data belonging to special categories of personal data is in principle prohibited under the GDPR without consent or direct justification under the GDPR or other legislation. In addition, certain measures or contractual procedures may be required. As a result, the use of facial recognition technology also requires a legal basis under the GDPR, such as explicit consent, a statutory task or a public interest.

Companies using facial recognition technology must ensure that they comply with all applicable laws and regulations, obtain necessary consents and protect the biometric information they collect. Failure to do so may result in fines and reputational harm.

Based on the current wording of the proposed EU AI Act, all remote biometric identification systems will be considered “high-risk AI systems” subject to strict requirements. The Act will also set forth specific transparency obligations for systems that would not be considered high-risk.

AI-related legislation in Finland is still relatively thin on the ground, but the most recent development is the general legislation on automated decision-making by public authorities, which allows the use of automated decision-making more widely than the previous special legislation did. The new legislation allows automated decisions only in cases that do not require any consideration or analysis, which is why the technologies used cannot be particularly sophisticated. Under the legislation, liability for the use of automated decision-making lies with the authority using the technology.

Companies, on the other hand, may use automated decision-making technologies much more freely than authorities, as there is no specific national legislation restricting it. The requirements and responsibilities for companies are laid down in the GDPR. In addition to the basic principles of processing personal data, companies using automated decision-making are bound by a special provision according to which the subject of an automated decision or profiling must have the right not to be subject to such decisions or profiling. In that case, the decision must be made by a human. This does not apply if the subject has, for example, given explicit consent to the automated decisions.

Companies acting as data controllers may be obliged to carry out an assessment of the impact of the envisaged processing operations on the protection of personal data prior to the processing, in connection with automated profiling and decision-making. Pursuant to a statement by the Finnish Data Protection Ombudsman, the same applies when a company is using biometric data. For example, when biometric data is used for the evaluation or scoring of an individual, automated decision-making or systematic monitoring, the company must carry out the impact assessment.

Under general tort law, the person or entity using AI or automated decision-making technologies is responsible for their use and liable for any damage caused when using such technologies. Liabilities and administrative fines for breaching provisions of the GDPR, on the other hand, are laid down in the GDPR. Companies that breach the legislation and cause damage to others through the use of automated decision-making bear the risk of being sanctioned or being obliged to pay damages for their actions.

The use of chatbots and other AI technologies to substitute for services rendered by natural persons is regulated by the GDPR and national privacy laws in Finland. The GDPR requires that individuals must be informed when their personal data is being processed, including when AI is used to process that data.

It is provided under the Act on the Provision of Digital Services (306/2019) that public authorities may provide advice to customers using service automation, such as chatbots, only when, inter alia, the user is:

  • informed that they are exchanging messages with the service automation;
  • offered the possibility to contact a natural person within the authority to continue the service; and
  • offered the possibility to record the exchange of messages with the service automation.

The use of technologies to make undisclosed suggestions or manipulate the behaviour of consumers, such as dark patterns, may be considered unfair commercial practice under the Finnish Consumer Protection Act. Dark patterns refer to various methods used to design the structure of websites, software, mobile applications or other user interfaces to deceive or otherwise cause consumers to do something that they did not originally intend to do. For example, companies may not take automatic measures that incur additional costs for the consumer; the consumer’s explicit consent to the additional costs must be sought under the Consumer Protection Act. Dark patterns are supervised by the Finnish Competition and Consumer Authority (FCCA).

The use of AI algorithms in pricing decisions has led to concerns over cartels and collusion in the Finnish market. The Finnish Competition and Consumer Authority (FCCA) has released a report covering the legal implications of different types of collusion caused by algorithms. The report distinguishes between explicit collusion, where anti-competitive conduct is carried out using an algorithm, and tacit collusion, where algorithms lead to similar pricing without any agreement between competitors. The report highlights the challenges of recognising and intervening in tacit collusion as it does not involve communication between competitors. The report also criticises the suggestion of assessing algorithmic collusion from the perspective of competition law-based price signalling.

The key legal question is how to intervene in tacit collusion caused by pricing algorithms. The report suggests that determining whether intervention is possible through interpretation practices at the EU level or directly through law will be crucial.

The report highlights the challenges of regulating algorithms and the importance of carefully considering the right approach under competition law. As such, the use of AI in pricing decisions requires careful consideration of the risks and benefits and active co-operation among stakeholders to establish effective regulatory frameworks.

Finland has named AI and digitalisation as ways to modernise industry in line with the objectives of the EU’s twin transition. Finland aims to be a sustainable winner in the twin transition and in order to achieve this goal, some key factors have been identified in the Artificial Intelligence 4.0 programme. Firstly, businesses must improve resource efficiency by investing in sustainable digitalisation. The aim is to remove the factors that have so far slowed it down, namely the lack of talent as well as business challenges such as lack of funding and test platforms. Secondly, it is essential that businesses identify the business benefit of the twin transition and public authorities must ensure that businesses have sufficient incentives to implement the transition. Thirdly, Finnish companies must be represented in value chains, and they must develop sustainable and “natural intelligent” products and services for global use. However, while AI and data are key to making industry more sustainable, the downside is high energy consumption, as in the worst case, the heavy use of AI can even accelerate climate change.

AI in Employee Hiring

Digitalisation has been strongly linked to employment in Finland. AI in employment may lead to cost savings, shorter processing times and better recruitment decisions, but it may also pose considerable risks, including with regard to applicants’ rights to privacy and equal treatment. Firstly, data protection legislation must be considered when developing AI-based employment tools and when processing personal data. Secondly, the Non-Discrimination Act and the Act on Equality between Women and Men (609/1986) constrain the integration of AI in the context of employment. Furthermore, the provisions of the Employment Contracts Act (55/2001) require that employees be treated equally and impose certain obligations on the employment relationship that would not necessarily be met.

When it comes to the question of liability, the Non-Discrimination Ombudsman has stated that the parties responsible for AI systems and the parties using them, such as employers, are always responsible for ensuring that their activities are in accordance with the non-discrimination rules.

In addition to general data protection legislation, the Act on the Protection of Privacy in Working Life (759/2004) applies to the processing of employees’ personal data in Finland. For example, it is permissible to locate employees if the employer has a valid reason to do so. In principle, location data cannot be used to monitor compliance with obligations under employment law, such as working time. However, it is possible to monitor and track working time if the employee performs all or most of their work elsewhere than at the employer’s premises. In such cases, the employer must determine the purpose of the technical monitoring of employees, and the matter must be handled in the procedure referred to in the Act on Cooperation within Undertakings (1333/2021).

AI can be used to understand how to make work more efficient and safer. A study carried out by the Finnish Institute of Occupational Health sought to understand the connection between irregular working hours and sickness absences and accidents. AI was able to identify different working time patterns that were linked to the risk of accidents. The results showed that AI helps to understand how to make working conditions safer and thus make work more efficient.

Finland has the potential to be a trendsetter in the use of AI in digital platform companies, as Finland has expertise, capabilities and developers in the field of new technology applications. As previously stated, one of the objectives of the AI 4.0 programme is to increase the use of AI in SMEs. One way to achieve this is to provide systems where companies and organisations can build their own AI applications.

One of Finland’s largest grocery store chains has launched a home delivery service in which a robot delivers food from the store to the customer. The delivery is ordered via a mobile app, after which the groceries are packed and placed on board the robot. The robot uses AI to plan its route and to detect obstacles, people and vehicles. The service operates in a very limited area but shows that different AI-based systems have gained a foothold in Finland.

Financial services have benefited significantly from the use of AI; for example, AI-based customer service has been used in the financial sector for a long time. AI can also be used in lending decisions, as it can quickly assess the conditions for granting a loan or credit and then make the decision. Such automated decision-making must comply with the requirements of the Finnish Data Protection Act and the GDPR. In such cases, financial services companies must inform the credit applicant of the existence of automated decision-making and profiling and provide relevant information on the logic behind the processing, as well as on the significance of this processing and its possible consequences for the applicant.

One solution to the growing shortage of resources in the Finnish healthcare system is the extensive use of AI in healthcare. However, a very complex regulatory framework (ie, the legislation on medical devices, data protection, public authorities’ automated decision-making, and information management in social welfare and healthcare) must be taken into account when using AI in healthcare. Even though the purpose of the framework is to improve the safety of use, it slows down the integration of AI into healthcare.

A significant amount of social and health data, for example on patient treatment, examinations and prescriptions, is recorded in the national information system services in Finland. With the help of AI, this existing data could be used for preventive care, to improve the quality of care or to achieve cost-effectiveness and efficiency. However, the use of AI for the above-mentioned activities is constrained by national legislation, such as the Act on the Status and Rights of Patients (785/1992), the Act on the Electronic Processing of Client Data in Healthcare and Social Welfare (784/2021) and the Act on the Secondary Use of Health and Social Data (552/2019).

The Deputy Data Protection Ombudsman has stated that AI can be used in healthcare. However, in order to use AI-enabled products and services in healthcare, they must pass extensive examinations testing their algorithms and demonstrating product safety as well as clinical and analytical validity. In the testing phase, the EU regulation on medical devices and Finnish data processing laws, such as the Act on the Secondary Use of Health and Social Data and the Biobank Act (688/2012), must be taken into account.

For instance, chatbots are widely used as the first point of contact between patients and healthcare providers. With AI, this service could be made more personalised based on natural interaction between a patient and the AI. While interacting with the patient, the AI could identify potential problems, offer advice on them and generate suggestions for referrals or prescriptions. However, pursuant to the Act on the Provision of Digital Services, an authority must ensure in advance the appropriateness of the information and advice generated by AI. Thus, such a provision hinders the full utilisation of AI in healthcare in the public sector.

For example, in relation to the use of the electronic patient data management system (Apotti) in the largest hospital district in Finland, the aim is to integrate AI into the patient data management system in the near future. AI could be used to identify work tasks that could be partially or fully automated. For example, it could be used to generate patient records based on a conversation between a doctor and a patient. In addition, AI can make use of available datasets, for example by comparing medical publications with patient records, to generate different treatment recommendations to help doctors in planning patient care.

Can AI Technology be an Inventor for Patent Purposes?

In early 2020, the European Patent Office (EPO) took a position on whether AI can be an inventor or patentee. The EPO’s decision concerned an AI system called DABUS. The decision was unequivocally negative – DABUS could not be the inventor. The EPO argued that the European Patent Convention (EPC) requires the inventor named in the application to be a natural person and not a machine, and that the designation of the inventor is a necessary formal condition for the grant of a patent, without which the substantive conditions for patentability cannot be examined. Another main reason for rejecting the applications was that, in the EPO’s view, the patent confers rights on the inventor of the patentable invention, which cannot be granted to “non-persons” such as computer programs. Although an inventor’s rights can be transferred, an AI cannot transfer the rights granted to it, because it is not a legal entity to which rights could arise in the first place.

Copyright v Trade Secret as Protection for AI Algorithms

Copyright has traditionally been viewed as the primary means of safeguarding software, such as computer programs, in Finland. However, copyright only protects the underlying source code of, for example, an AI tool. Proving in court that the code has been copied is very difficult, and the source code may not even have been copied as such, but rather “imitated”. As a result, copyright does not effectively prevent competitors from developing their own versions of such an AI tool, potentially eroding any competitive advantage that may have been gained. According to the Finnish Trade Secrets Act, which is based on the 2016 Trade Secrets Directive, a trade secret is information that is confidential, has commercial value due to its confidential nature and is subject to reasonable measures to ensure its confidentiality. In many cases, information related to AI technologies possessed by a company will meet these conditions, making the algorithm a protected trade secret under the Act regardless of its implementation or expression, unlike copyright.

Technical Instruction

The Act further provides for the concept of a technical instruction, which applies under Finnish law and stands independent of the Directive. A technical instruction is a technical guideline or operations model that can be used in the course of business. An AI algorithm can also be considered a technical instruction. The protection of technical instructions is activated when such instructions are disclosed confidentially in certain circumstances. If a party has received an algorithm confidentially under these circumstances, they are not allowed to use or disclose it without authorisation. As a result, even in situations where an AI algorithm cannot be protected as a trade secret, it may still qualify for protection as a technical instruction.

In Finland, there has been considerable debate about the extent to which AI-generated works of art and works of authorship should be protected under copyright law. Copyright protection for AI-generated works requires that the work meets the threshold of originality and independence, like any other work of authorship. However, since AI often operates in a limited capacity and relies on human input for creativity, it is uncertain whether works produced by AI applications can be considered original and independent enough to warrant copyright protection.

As AI technology develops towards higher levels of autonomy, this threshold could be met at least theoretically, giving rise to the complex issue of who would hold the copyright for such work. If the AI works autonomously in its creative work without any human guidance, could the copyright belong to the AI itself?

The current answer in the Finnish legal system is no. Copyright can only be held by a natural person, as can be inferred from Finnish copyright law and as is confirmed in case law. Granting copyright to a machine would go against the fundamental understanding of our legal system regarding which entities can have legal rights or obligations. Therefore, although the concept of machine copyright is intriguing, it is currently not permissible.

Instead, the copyright holder for AI-generated works should be the individuals who contributed to the AI application, with the AI considered a tool used by the artist in their work. However, the more autonomous the AI’s creative function becomes, the more complex this determination becomes.

One of the main issues related to creating works and products using OpenAI is the ownership of intellectual property rights. The language models developed by OpenAI are trained on vast amounts of text and other data from the internet, which may include copyrighted materials or other protected works. In principle, the users of OpenAI technology who create works or products are responsible for ensuring that they have the necessary rights to use any third-party materials that may be incorporated into their creations.

The use of AI in companies is likely to make the work of in-house lawyers more efficient, as it will help in the management of large amounts of data, automate many routine tasks and allow in-house lawyers to focus on more strategic and demanding tasks. The development of AI is very rapid, and its potential is constantly increasing, while the relevant legislation is evolving slowly. The aim of the Artificial Intelligence 4.0 Programme is to increase the use of AI in SMEs. Therefore, companies are recommended to establish internal rules for the use of AI, so in-house lawyers must be familiar with how AI-based tools fit into the existing legal framework and how AI will be regulated in the future. Adopting and adhering to internal rules for the use of AI is a recommended risk management tool and helps to build trust in the use of AI.

Pursuant to the Finnish Limited Liability Companies Act (624/2006), the management of a company shall act with due care and promote the interests of the company. If a company wishes to use AI-based tools, the management must familiarise itself with the use of AI in detail and take the necessary steps to ensure that the use of such tools does not cause damage to the company.

The board must be aware of existing and upcoming AI legislation and how sector-specific regulation and data protection requirements apply to the use of AI-based tools in the company. The board is responsible for deciding whether AI should be utilised in the company and for ensuring that it is used safely. Therefore, the board should consider, among other things, the following issues to fulfil their due diligence obligations.

  • The board should set goals for the use of AI in the company, such as making work more efficient through faster decision-making. This should be followed by an impact assessment on how to take into account potential negative impacts and consequences for employees and stakeholders.
  • The board should ensure that employees have sufficient awareness of the use of AI-based systems and how they should act if they perceive any biased decisions made by AI.
  • The board itself should regularly review the use of AI and take appropriate action if it finds that AI-based decisions are contrary to the goals for the use of AI or the company’s policies.
Borenius Attorneys Ltd

Eteläesplanadi 2
00130 Helsinki
Finland

+358 20 713 3136

erkko.korhonen@borenius.com
www.borenius.com

Trends and Developments


Authors



Borenius employs over 120 lawyers, and as a leading independent Finnish law firm, we work together with other highly regarded law firms across the globe that share our commitment to excellent service and quality. We have invested significantly in our global network of top-tier international law firms. Today, we have offices in Finland as well as representative offices in London and New York to ensure that we can provide the best advice to our domestic and international clients. Our technology and data practice is made up of nearly 20 skilled lawyers, making us one of the biggest teams in the field in the Nordics. The team is well equipped to advise clients on the most demanding technology-related assignments and to provide practical and strategic advice that delivers extra value for clients’ businesses and operations. We have recently advised clients on complex R&D projects, advised a large insurance company on the procurement of a data services platform, and assisted universities and other public sector entities with complex data protection matters (especially relating to cloud transition).

General Framework

Finland is making significant strides towards becoming a frontrunner in the digital economy, with a particular emphasis on the development and use of artificial intelligence (AI). The government and public authorities have introduced various programmes and guidelines to foster the growth and adoption of AI, recognising its potential to transform both the economy and society. Below, we detail some of the primary objectives set forth by the Finnish government:

  • Finland aims to become a trusted and secure pioneer in the digital economy by 2025.
  • The Artificial Intelligence 4.0 programme, initiated by the Ministry of Economic Affairs and Employment of Finland, aims to make Finland a leader in the twin transition by 2030.
  • The Ministry of Economic Affairs and Employment has also set a goal to retain and attract the best talent and professionals in the field.
  • The Finnish government aims to establish effective information exchange and interoperability between different services and platforms, providing a powerful tool for businesses and organisations.
  • The Finnish government aims to ensure that Finland fully leverages digitalisation and technological development, eliminating barriers between the public and private sectors.
  • At the same time, it is committed to maintaining a balance between the interests of individuals, companies and society as a whole in the use of new technologies and AI, while also ensuring ethical sustainability through innovation.
  • Broadly speaking, Finland aims to become a global leader in the application and use of AI in both the public and private sectors.
  • Additionally, it aims to set the trend within the EU by establishing fair, consumer-oriented principles for AI use.

To achieve these objectives, the Finnish government and public authorities have implemented several programmes and guidelines, including:

  • the national Artificial Intelligence Programme, launched in 2017 (Finland was one of the first countries to launch such a programme);
  • the Artificial Intelligence 4.0 programme, designed to promote the development and adoption of AI and other digital technologies, with a particular focus on small and medium-sized enterprises in the manufacturing sector (2020-2022);
  • the National Artificial Intelligence Programme AuroraAI, launched to develop an AI-powered technical solution, which resulted in the AuroraAI network in 2022;
  • building a strong and distinctive digital economy, where close collaboration between the public and private sectors is essential;
  • leveraging Finland’s strengths, such as its highly motivated research groups that specialise in emerging sectors, such as unsupervised learning, its vibrant start-up field and the close co-operation between research institutions and companies;
  • working to strengthen the technological capacity of the public sector and to further develop public-private partnerships; and
  • the key role played by the Finnish Centre for Artificial Intelligence (FCAI) in promoting Finland’s strengths on the global stage.

Business Finland, a Finnish governmental funding agency, supports the development of AI technology through several key funding programmes:

  • the AI Business Programme, boosting the development, growth and internationalisation of Finnish AI companies with a total budget of over EUR 200 million (2018-2021); and
  • the joint Research, Development and Innovation Programme ICT 2023: Frontier AI Technologies of the Academy of Finland and Business Finland, which aims to fund high-quality scientific research that is also expected to have a scientific and social impact.

AI Legislation

EU legislation

The European Commission’s Proposal of 21 April 2021 on harmonised rules for artificial intelligence and amending certain EU legislative acts (COM/2021/206 final, the “Proposal”) is the main legislative initiative in the field of AI-specific regulation. The Proposal seeks to establish a European approach and to promote the development and deployment of AI while protecting the public interest, particularly with regard to health, security and fundamental rights and freedoms. It offers a balanced, proportionate and horizontal regulatory approach to AI, addressing the associated risks and challenges whilst minimising constraints on technological development and the cost of introducing AI solutions to the market.

The Finnish government has consistently supported the EU’s advancements in AI-related regulation and EU-level initiatives. In late 2021, the government released its first memorandum on the Proposal, expressing its strong support for the responsible use of AI in Finland and Europe. The memorandum specifically highlighted the implications of AI systems for fundamental rights, noting that when used correctly, AI solutions can help to enhance and contribute to the realisation of these rights. However, the government acknowledged that there are still some unanswered questions in this area, deeming it crucial to thoroughly assess the scope of applicability from the perspective of fundamental rights. To this end, the Finnish government is advocating for a more comprehensive approach to the regulation of AI; one that takes into account the potential implications for fundamental rights.

In October 2022, the Finnish government presented its opinion on the latest proposed amendments to the European AI regulation in a second memorandum. The definition of AI was of particular importance in the memorandum, and concerns were raised about it. Finland advocates excluding from the scope of the regulation any systems that simply follow pre-defined rules and instructions without any discretion or alteration of their operational logic. Moreover, Finland believes it is essential that the regulation on the experimentation and testing of AI systems in real-world settings is truly enabling, encourages innovation and does not put up unnecessary hurdles to the market entry of AI systems.

Lack of national legislation

Despite the Finnish government’s active involvement in AI at the EU level, its implementation of programmes supporting businesses in the twin transition and adoption of AI, and its generally positive and forward-thinking attitude towards AI, national AI legislation remains rather sparse. The rules on AI use are mainly derived from the GDPR, focusing less on AI specifically and more on the use of personal data in AI-driven technologies. For instance, there is no specific legislation on general AI-related liability, restrictions on AI use, or processing personal or other data with advanced AI technologies in Finland.

This is most likely due to the above-mentioned pending EU legislation. The AI Act, the proposed Directive on adapting non-contractual civil liability rules to artificial intelligence (COM/2022/496, the “AI Liability Directive”) and the proposed Directive on liability for defective products (COM/2022/495, the “Product Liability Directive”) will have a significant impact on AI-related legislation in all EU member states. As a result, the Finnish government has opted for an active role in the EU arena, choosing not to enact new national legislation while awaiting the AI Act’s entry into force. The two directives must be transposed into national legislation, and it remains to be seen whether completely new national laws will be drafted or whether existing laws, such as the Tort Liability Act, will be amended.

However, Finland has enacted special legislation concerning automated decision-making by public authorities. Despite the legislative “pause”, new general legislation on automated decision-making in public administration was enacted in 2023. Before the new legislation, decisions made by different authorities were regulated separately. The new legislation allows automated decisions in general, as long as the authority acts in accordance with the provisions of the law, and it also addresses liability issues relating to automated decisions. Under the new provisions of the Act on Information Management in Public Administration, liability for the use of automated decision-making lies with the authority using it. Although the regulation does not yet extend to civil liability or advanced AI technologies, it is still seen as progressive because it diverges from the current EU regulation.

Another intriguing aspect of the Finnish legal system in relation to AI is the complete absence of case law and judicial decisions. This can be attributed to several factors. Firstly, the anticipation of EU legislation and the scarcity of specific national legislation mean that there have not been any issues regarding the interpretation of the law warranting resolution through the courts. Another reason could be the lack of litigation: without AI-related disputes, there is no need for legal proceedings that would generate case law. Also, if potential disputes have been settled or resolved through alternative dispute resolution mechanisms, they will not have generated a court ruling.

Concerns Regarding AI

Despite the broad recognition of AI’s potential in Finland, concerns regarding its safety have also been nationally acknowledged. To address this, the 2021 government plan included the Avoiding AI Biases project, which identified risks related to fundamental rights and non-discrimination in existing and planned AI systems. The project revealed that algorithmic discrimination had been given due consideration, particularly in the public sector. To ensure that AI applications are non-discriminatory, the researchers developed an assessment framework to identify and manage discrimination risks and promote equality in AI use. This framework helps to ensure that AI is used responsibly and ethically, and that the rights of individuals are respected.

Also, cybersecurity threats associated with AI have been taken seriously in Finland. The Finnish Transport and Communications Agency, together with the National Cybersecurity Centre, has conducted research on the potential impact of artificial intelligence on cyberattacks. While the risk of AI-enabled cyberattacks is currently considered low, the study acknowledges that AI systems can enhance traditional cyberattacks by increasing their speed, scale, coverage and personalisation, thus making them more successful. If an advanced AI attack is not promptly addressed, it can lead to deeper penetration into systems or networks by the attacker.

To combat AI-enabled cyberattacks, the study emphasises the need for cybersecurity to become more automated. Automated defence systems will be the only ones capable of matching the speed of AI-enabled attacks. These systems will require AI-based decision-making capabilities to effectively detect and respond to such attacks.

Data Protection and Privacy Considerations in the Face of Emerging AI Technologies

Many provisions of the GDPR are relevant to AI, and some are challenged by the new ways of processing personal data that are enabled by AI. There is indeed a tension between the traditional data protection principles and the full deployment of the power of AI and big data.

Some of the issues raised in the Finnish privacy discussion include:

  • observation and prevention of implicit biases and algorithmic discrimination in automated processing of personal data;
  • the principle of data minimisation from a machine learning perspective;
  • the increasing use of biometric technologies, including facial recognition applications; and
  • chatbots and conversational AI technologies from a transparency perspective.

On the other hand, AI technology can be used as a form of privacy-enhancing technology to help organisations comply with data protection by design obligations. It can also minimise the risk of privacy breaches and increase efficiency and accuracy in detecting and responding to potential data breaches.

Recent Intellectual Property Considerations

Intellectual property rights are essential in harnessing the economic potential of inventions, including AI technology. AI-related inventions pose several challenges to current Finnish intellectual property laws.

Firstly, there has been an ongoing discussion on how to sufficiently protect AI algorithms where patenting is not a desired option – eg, for cost or confidentiality reasons. In Finland, copyright has recently been regarded as a somewhat ineffective means of protecting software, as it only applies to the source code. The Finnish Trade Secrets Act provides protection for trade secrets, which can include AI algorithms that, inter alia, are confidential and have commercial value. Another form of protection under Finnish law is the protection for technical instructions, which can be considered applicable to an AI algorithm even if it does not qualify as a trade secret. However, it remains unclear which form of protection will prove the most common in practice for emerging AI technologies.

Secondly, AI systems are becoming increasingly autonomous and can seemingly create works of art and works of authorship in a creative manner. This is particularly true of generative AI tools, such as ChatGPT, which has exploded in popularity since the autumn of 2022. This has sparked a debate in Finland on whether the copyright to such works of art or authorship could belong to the AI. The same line of reasoning applies to patent applications – could AI be an inventor or co-inventor of a patentable invention?

According to the prevailing Finnish perspective, the answer to both questions is no. The current legal structures do not recognise AI as an entity to which rights could be granted. Despite hopes for changes, AI-generated works were not recognised in the comprehensive reform of Finnish copyright law in 2023. It remains to be seen how AI-generated works and inventions will shape intellectual property rights through case law.

Small Data

In addition to big data aspects, Finland could potentially become a leader in the “small data” field, where AI can be utilised even with a limited amount of data. There is a current trend of creating more efficient and understandable technologies that require less data, energy and computation. In the future, Finland could capitalise on the B2B market, which is twice as large as the B2C market, by investing in the development of small data AI solutions that could be used on the global stage.

The AI sector is striving to expand the interactive ecosystem in Finland. However, funding remains a major challenge. For example, the Finnish Centre for Artificial Intelligence (FCAI) receives flagship funding intended for carrying out basic research, but this is not sufficient to run the ecosystem itself.

Summary

Finland launched a national Artificial Intelligence Programme in 2017, with the goal of making the country a global leader in the application of AI. By 2025, Finland aims to become a trusted and secure pioneer in the digital economy. To achieve this goal, the country is building a strong and distinctive digital economy through close collaboration between the public and private sectors and strengthening the technological capacity of the public sector while also developing public-private partnerships. The Finnish government has also launched funding programmes for AI development and has shown strong support for EU-level initiatives in AI-related regulation.

The government has emphasised the importance of fundamental rights in the use of AI and urged for a comprehensive approach to regulation. However, the Finnish government has also highlighted the need for a balanced approach to AI regulation that does not impede technological development or raise the cost of introducing AI solutions to the market.

While the Finnish government has been active at the EU level and has initiated programmes supporting businesses in the twin transition and the adoption of AI technologies, national AI legislation is still relatively thin on the ground. Pending EU legislation is likely to have a significant impact on AI regulation in Finland.

Borenius Attorneys Ltd

Eteläesplanadi 2
00130 Helsinki
Finland

+358 20 713 3136

erkko.korhonen@borenius.com
www.borenius.com
