Technology-neutral Background Law
Norwegian law is often technology-neutral and therefore applicable to AI technologies without being drafted specifically for the purpose of addressing these technologies. This is the case, for example, as regards Norwegian public administration law, employment law, contract law, non-discrimination law, and copyright law.
Employment
As an example of technology-neutral laws that apply to the use of AI systems, Norwegian employment law particularly restricts the control measures that an employer may use in respect of its employees, see Chapter 9 of the 2005 Norwegian Working Environment Act (Lov 17. juni 2005 nr. 62 om arbeidsmiljø, arbeidstid og stillingsvern mv.). Employers considering the use of AI as part of their monitoring and control of employees must adhere to these provisions. Moreover, Chapter 13 of the Act contains provisions detailing the non-discrimination principle in the employment context. These provisions are applicable to the use of AI systems for hiring purposes.
Tort Law
Norwegian tort law consists of a combination of broadly applicable, technology-neutral liability regimes and more specific regimes such as the Product Liability Act. While the Product Liability Act does not currently apply to standalone software systems (including standalone AI systems), it does apply to products that incorporate AI systems, see 10.1 Theories of Liability. Moreover, AI developers, providers, and users could be liable for damages occurring as a result of using AI systems, on the basis of strict liability, statutory vicarious liability for employers or principals (arbeidsgiveransvar), or negligence.
Product Safety and Consumer Protection
Product safety laws and consumer protection laws in Norway are largely harmonised with EU regulations that apply in these areas. Currently, these laws are applicable to AI technologies without being designed specifically to address concerns associated with AI. However, the forthcoming EU AI Act, which will become applicable in Norway through the EEA Agreement, will introduce product safety requirements aiming to address AI risks.
Data Protection
The Norwegian Personal Data Act (PDA) (Lov 15. juni 2018 nr. 38 om behandling av personopplysninger) aims to protect individuals from having their privacy violated through the processing of personal data. The Act implements the EU General Data Protection Regulation (GDPR) in Norwegian law. The GDPR is thus part of the Personal Data Act and applies as Norwegian law.
The PDA and the GDPR are applicable to AI technologies where the processing of personal data is involved. The PDA and the GDPR apply to processing of personal data that is carried out in connection with the activities of a data controller or a data processor in Norway, regardless of whether the processing takes place in the EEA or not. The PDA and the GDPR also apply to the processing of personal data about data subjects located in Norway, carried out by a data controller or data processor that is not established in the EEA, if the processing is related to the offering of goods or services to such data subjects in Norway. Hence, companies that apply AI technologies in services provided to persons in Norway need to comply with the PDA and the GDPR.
Key industry applications of AI in Norway include various forms of decision support, including data analysis and risk prediction. In significant industries such as agriculture, aquaculture, and energy, AI systems based on machine learning and computer vision are used to optimise the utilisation and distribution of resources such as electricity, fish feed or fertilisers. In many industries, AI is used in predictive analytics and monitoring of various infrastructure components, enabling preventive maintenance and safety-critical interventions. In maritime industries, Norwegian businesses are pioneering the development and testing of autonomous ships, including container vessels and passenger ferries.
Generally, Norwegian businesses are increasingly implementing generative AI solutions for internal purposes and to assist in the provision of services where feasible.
The AI Research Billion
In 2023, the Norwegian government announced that NOK1 billion would be allocated to research and development efforts related to AI technologies. The government envisages that these funds will be used to establish four to six dedicated AI research centres that will contribute to greater insight into the societal impact of AI technologies, as well as insight into technological aspects and the potential for innovation in commercial businesses and the public sector.
Regulatory Sandbox Projects
Since 2020, the Norwegian Data Protection Authority has provided a regulatory sandbox and conducted several sandbox projects to help facilitate testing and implementation of AI technologies in areas covered by the data protection law framework.
Guidance Services
Other sector-specific initiatives from state authorities also exist, such as the Directorate of Health's regulatory guidance service for AI projects. AI development projects that receive such guidance benefit from iterative meetings with experts from the Directorate and other public agencies.
Implementation of the AI Act
There is no overarching legal framework for AI in place in Norway. The EU AI Act has been adopted at the EU level and will be implemented in Norwegian law. Generally, Norwegian politicians have been reluctant to initiate AI regulation efforts in recent years, and Norway's regulatory efforts in this area are expected to centre on the implementation of the AI Act in the near future.
Piecemeal AI Legislation in Norway
Currently, AI-specific legislative provisions are spread across various parts of the Norwegian regulatory framework, and they are mostly limited to laws governing the public sector. The provisions that specifically address AI technologies have typically emerged in response to specific needs arising in practice, to which legislators have responded in a piecemeal manner. For example, following allegations from certain stakeholders that a lack of access to health data hindered AI development in the health sector, a specific provision was implemented in Section 29 of the Health Personnel Act (Lov 2. juli 1999 nr. 64 om helsepersonell), which facilitates access to health data, upon application, for the purposes of developing and deploying clinical decision support systems.
Data Sharing
Provisions facilitating access to data from public entities and the sharing of data between public entities are typical of legislative measures to promote digitalisation in Norway. Generally, there has been a high level of awareness about the need for data utilisation as a prerequisite for successful digitalisation. Specific legislation to this effect in the private sector has not been adopted in Norway, but the forthcoming EU Data Act will provide new rules concerning data sharing from and between commercial businesses, particularly as regards products in the Internet of Things. Industrial data to be shared under the Data Act may to some extent promote development of AI technologies.
As mentioned in 3.1 General Approach to AI-Specific Legislation, piecemeal provisions within the legal framework for the public sector have been implemented to address AI technologies. These provisions typically address the need for utilisation of data.
In addition to provisions facilitating the utilisation of data, Norwegian statutory law contains several provisions that specify the extent to which fully automated decision-making may be relied on in the public sector. The general approach found in current statutes is to facilitate fully automated decision-making only in respect of decisions that are of little importance to the individuals that are impacted by the decision. Similar provisions facilitating fully automated decision-making are currently not found in statutes governing private businesses, which means that the primary provision governing fully automated decision-making in the private sector is Article 22 of the EU GDPR.
Norwegian authorities have issued several guidance documents and reports on the use of AI. The Norwegian Maritime Authority has commissioned guidelines pertaining to the construction or installation of fully or partially autonomous ships. Most guidance documents are informative rather than normative: they provide general overviews of relevant laws and considerations that AI developers or users should take into account, rather than providing recommendations or interpretations.
Most significantly, the Norwegian Directorate of Digitalisation has issued a beta version of a guidance document on responsible development and use of AI. The Norwegian Equality and Anti-Discrimination Ombud has been active in relation to AI, issuing two relevant reports in 2023-2024 on "non-discrimination by design" (innebygd diskrimineringsvern) and algorithmic discrimination. High-level guidance on AI and data protection is also available from the Norwegian Data Protection Authority. More sector-specific guidance is available from the Norwegian Directorate of Health, aiming to help AI researchers and developers navigate the fragmented legal framework for AI development projects in Norwegian law.
As mentioned in 3.1 General Approach to AI-Specific Legislation, Norway has been reluctant to implement AI-specific regulations at the national level, and has instead awaited the forthcoming EU law pertaining to AI technologies, including the AI Act and the AI Liability Directive. The benefit of this approach is that Norwegian law has very few areas where national AI-specific legislation is likely to conflict with AI laws from the EU. On the flip side, there is limited maturity in Norway when it comes to understanding the impact of the forthcoming EU laws on the existing legal framework.
Regulations on Automated Decision-making
Certain provisions of particular relevance to AI technologies are found in different parts of the Norwegian legal framework pertaining to the use of data for innovation purposes and the permissibility of relying on fully automated decisions in the public sector. Currently, fully automated decision-making in the public sector is only permissible in connection with decisions that are of little importance to the individual (eg, Section 11 of the Patient Records Act) or where the decision relies on non-discretionary criteria.
Tension with the AI Act
While these provisions are relevant in the context of AI systems, the policy that underpins them is from an era when AI technologies were less prominent than they are at present. The current rules for fully automated decision-making have been created with traditional, hard-coded software solutions in mind. The idea is that automated decision-making based on such systems enhances equality of treatment. There is a need to reconsider their appropriateness in an era when AI systems are capable of handling even discretionary criteria in a decision-making process. The forthcoming AI Act deals with the most significant risks of AI systems involved in decision-making, and one may question whether current Norwegian restrictions on fully automated decision-making should be maintained when the AI Act becomes applicable.
This is not applicable.
Data Protection
As discussed in 3.1 General Approach to AI-Specific Legislation, certain amendments have been made to Norwegian health sector legislation to accommodate the use of health data in AI projects. AI-specific changes to general data protection laws have not been made.
Data Mining Exceptions
Data mining exceptions are relevant in the context of AI technologies, because the training of machine learning algorithms sometimes involves data mining activities that could otherwise infringe copyright. The Norwegian Copyright Regulation (Forskrift 26. august 2021 nr. 2608) facilitates data mining in certain specified situations. However, further data mining exceptions have recently been proposed.
The proposed changes would in particular align Norwegian law with the EU Digital Single Market (DSM) Directive, which establishes exceptions for text and data mining from lawfully accessible works. The Norwegian preparatory works to the proposed data mining exceptions highlight the importance of data mining in relation to artificial intelligence technologies.
It is expected that the proposed data mining exceptions will be adopted by the end of 2024. The new provisions will distinguish between data mining for non-commercial purposes in research, educational institutions and cultural heritage organisations on the one hand, and commercial data mining on the other hand.
EU-driven Developments
Key developments in the near future will centre around the implementation of the EU AI Act, the EU AI Liability Directive, and the EU's proposed changes to the Product Liability Directive. These EU laws constitute pathbreaking changes in the legal systems of EU/EEA member states, including the Norwegian legal system.
Impact of the AI Act
The AI Act sets out several requirements that high-risk AI systems and their providers must comply with, which supplement existing product safety and fundamental rights laws in Norway. In addition to imposing certain new requirements that are designed specifically to address risks associated with AI systems, the AI Act requires preventive compliance measures, such as risk and impact assessments, to ensure compliance with AI-specific safety requirements as well as existing fundamental rights principles.
To date, there are no judicial decisions in Norway concerning AI systems.
There are no judicial decisions in Norway that define the notion of artificial intelligence.
Undecided Supervisory Role
The designation of a supervisory authority, as required by Article 70 of the AI Act, is yet to be decided in Norway.
Ongoing Activities
Currently, there are several regulatory/supervisory agencies that play a role in relation to AI systems. The Directorate of Digitalisation (DigDir) is active in terms of providing guidance for AI development and usage. The Norwegian Data Protection Authority has assumed a central role, particularly due to its provision of a regulatory sandbox scheme for AI since 2020. The Norwegian Consumer Council has been particularly concerned about generative AI, issuing a report on the consumer harms of generative AI in June 2023. The Office of the Auditor General of Norway has engaged with the issue of auditing AI systems. Moreover, within the remit of its mandate to promote equality and non-discrimination, the Equality and Anti-Discrimination Ombud plays an active role. Other central regulatory agencies include sector-specific agencies such as the Directorate of Health and the Norwegian Maritime Authority (see 3.3 Jurisdictional Directives).
Conformity Assessment Bodies
Conformity assessment or certification bodies can also be seen as regulatory agencies, acting on the basis of delegated powers within specific areas such as medical devices or ship construction. In Norway, DNV has been active in the AI space, issuing several practical guidance documents.
Definitions in the AI Act
Given that Norway is awaiting the EU AI Act before initiating further legislative efforts related to AI systems, the definitions of an "AI system" and a "general-purpose AI model" will become applicable within the Norwegian legal framework once the AI Act has been implemented in Norwegian law. Under the AI Act, an "AI system" is understood as a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Article 3(1) AIA).
Even though AI-specific legislation may be enacted at the national level in respect of areas outside the scope of EU law (eg, AI for military and national security purposes), it seems likely that such legislation will nonetheless apply the definitions found in the AI Act.
The Directorate of Digitalisation (DigDir) has been established to promote a coordinated uptake of digital technologies in Norway. Thus, it serves a wide purpose related to digitalisation and is not dedicated to AI technologies. The nurturing of digital and AI literacy, and the promotion of safe and ethical use of AI, lie firmly within the objectives of the DigDir. Other existing agencies mentioned in 5.1 Regulatory Agencies serve other specific objectives related to data protection, consumer rights, and sector-specific interests. The Norwegian Consumer Council has highlighted a broad set of concerns associated with generative AI systems, including the risk of false information, "deepfakes", environmental impact, and the exploitation of workers in low-income countries to carry out data labelling tasks ("ghost workers").
To date, there have been no enforcement actions related to AI systems in Norway. However, following Meta's announcement that it would start using data published by its users to train AI algorithms, the Norwegian Consumer Council has, in collaboration with NOYB, filed a complaint with the Norwegian Data Protection Authority. The complaint urges the DPA to investigate Meta's practices and prohibit the use of personal data for undefined purposes related to the development of AI technologies. The complainants allege that Meta does not have a sufficient legitimate interest that overrides the interests of the data subjects and that the purposes of the intended processing are not defined as required by the GDPR. Thus, according to the complaint, Meta would have to rely on proper consent rather than the "opt-out" mechanism that Meta has proposed, where users can choose to object to the use of their personal data for AI development purposes.
The Norwegian DPA has signalled that it is prioritising the complaint and intends to coordinate its efforts in the Meta case with other European data protection authorities.
The Norwegian standard-setting body Standard Norge and other Norwegian stakeholders are involved in standard-setting initiatives related to AI technologies at the international level. Standard Norge has established a committee for participation in standardisation efforts by the ISO and CEN-CENELEC. It is recognised in Norway that standard-setting activities are an important arena for influencing regulatory developments related to AI technologies.
Standards from international standard-setting bodies generally impact companies doing business in the technology space in Norway. Norwegian legislation tends not to refer to international standards, but technology purchasers in Norway will often expect vendors to comply with widely recognised standards, for example in relation to information and cybersecurity, risk management, data formatting, and application programming interfaces.
A Booming Public Sector
Norwegian authorities are testing and implementing several AI-based solutions to aid in governmental operations and for administrative law purposes, including a regulatory sandbox project in the Norwegian Labour and Welfare Administration (see below).
In addition, several public agencies in Norway have implemented or are planning to implement generative AI systems to handle questions from citizens and to provide them with information and guidance.
Legal Uncertainty
The development of AI for public administration purposes often requires that personal data held by public agencies are used for purposes other than those for which they were initially collected and stored. The use of personal data for AI development purposes may contradict the legitimate expectations of citizens as data subjects. In the absence of national provisions specifically providing a legal basis for AI development, there is uncertainty about the extent to which public agencies have a legal basis for such repurposing. Due to the lack of a clear legal basis for AI development based on the personal data of citizens, the above-mentioned sandbox project in the Norwegian Labour and Welfare Administration concluded that legislative changes are needed for such a project to be completed.
There are no judicial decisions or pending cases related to government use of AI in Norway.
The Norwegian National Security Authority (NSM) does not have specific laws dedicated exclusively to AI, but it follows broader national and international guidelines and principles for the development and use of AI. NSM has contributed to the national digital risk assessment reports, highlighting the security implications of AI and the importance of maintaining robust cybersecurity measures to prevent the misuse of AI technologies. NSM has provided guidelines for secure AI system development as part of a collaborative effort with international partners. These guidelines emphasise the importance of incorporating cybersecurity throughout the AI system development lifecycle from design to deployment and maintenance, but build on general cybersecurity principles that are not unique to AI.
Generative AI typically raises issues related to the lawful use of material/data as input and as a basis for training, as well as issues related to the output received by the user. Typically, the scraping of data from the internet as part of creating generative AI models could result in unlawful use of information, in breach of intellectual property and data protection laws and the rights of third parties. The lack of transparency makes it difficult for users of AI models to assess the validity of results provided as output by the model, and for individuals to enforce their rights.
General Starting Points for Data
Under Norwegian law, the general rule is that no one can hold property rights to data per se.
As a starting point, the individual or entity that gathers and controls data may therefore freely exploit the data for any purpose, except for personal data (see 8.3 Data Protection and Generative AI). This may include exploitation for commercial profit, improving products and services, and sharing or licensing data to third parties.
Restrictions That May Apply
Although one cannot obtain property rights to data per se, legal restrictions on the use of data may nonetheless apply. Such restrictions may follow from intellectual property rights (typically copyrights, patents and/or database protection) in the original material from which data is collected, trade secret protection, contractual restrictions such as confidentiality obligations or limited rights to use (licences), or restrictions in national legislation such as security acts.
Consequently, the provider of an AI system may influence the protection of input and output of generative AI systems through the terms and conditions, which may include licence terms.
Data Protection Principles
Generative AI models raise concerns both in relation to basic data protection principles and the rights of individuals. Where the PDA and the GDPR apply, the collection and use of personal data as part of generative AI models must be in line with the principles of purpose limitation, data minimisation, accuracy and storage limitation. It is also required that all use of personal data must have a sufficient legal basis and take place in a fair and transparent manner towards the individual. These requirements apply to all use of personal data at all stages, including collection, training and the generation of outputs.
Rights of Data Subjects
Individuals whose personal data are being processed have certain rights, such as the right of access to personal data and the rights to rectification and erasure. The exercise of these rights is challenged by the use of personal data to train generative AI models. The scope and amount of personal data involved are often unclear in relation to the training data, input data, and output data of such models. Once personal data has been used to train a model, there is uncertainty as to the feasibility of effectively accessing, rectifying, and erasing personal data.
Purpose Limitation
The principle of purpose limitation requires that AI models should be designed and used based on a defined purpose and that personal data must not be used for new purposes that are incompatible with the purposes for which the data was initially collected. Data minimisation entails that the use of personal data should generally be limited to what is strictly required. Any superfluous personal data should be deleted or anonymised.
Responsibility of Controllers
Data controllers are responsible for ensuring and documenting compliance with the law, and must take compliance measures such as risk assessments, ensuring privacy by design and default, and providing sufficient information to individuals. For example, developers should strive to use algorithms that minimise the acquisition and processing of personal data while ensuring robust data security and confidentiality measures.
AI in Legal Tech
Generative AI assistants are commonplace in large law firms. Various other applications are also being tested and gradually implemented, such as AI-based tools for contract drafting and revision.
Professional Codes of Conduct
Legal professionals using AI in their work need to ensure that they maintain adherence to general codes of conduct and ethical principles. In Norway, a new act regulating lawyers and others who provide legal assistance has recently been passed (Lov 12. mai 2022 nr. 28 om advokater og andre som yter rettslig bistand).
Chapter 8 of the above-mentioned act sets out the fundamental principles that lawyers must always abide by in the course of their work. How the use of AI technologies might impact those principles is a subject of some debate in Norway. Many companies have created their own codes of conduct to ensure that AI is used responsibly. Further guidance from the Norwegian Bar Association or supervisory authorities does not currently exist for the legal profession.
No Legal Personhood
Norwegian theories of liability do not consider AI as a legal person. The allocation of liability for injuries or damages where AI systems are involved currently depends on the general doctrines of liability that apply regardless of the technology involved. These doctrines may lead to liability for various actors in an AI supply chain, including AI providers and users.
Negligence
As a generally applicable theory of liability in Norwegian law, negligence may entail liability for any actor involved in the development, commercialisation or use of AI systems. Developers that deviate from good AI development practices or widely recognised codes of conduct may be held liable on the basis of negligence, provided that the damages in question would have been avoided by the application of such practices (causation).
Users may particularly be exposed to liability based on negligence if they do not comply with safety instructions and documentation accompanying an AI system, or if they use AI systems for other purposes than those intended by the provider of the system.
Vicarious Liability for Employers
According to Section 2-1 of the Norwegian Damages Compensation Act (Lov 13. juni 1969 nr. 26 om skadeserstatning), employers may be held liable for damages caused intentionally or negligently by employees in the performance of work. In the assessment of whether an employee has acted negligently in the development or use of an AI system, the considerations mentioned above in relation to liability based on negligence will be relevant.
Strict Liability
In Norwegian tort law, there is a longstanding non-statutory strict liability doctrine (ulovfestet objektivt ansvar), which could be relevant in the context of damages that occur where AI systems are involved. This doctrine was initially developed in case law related to damages caused by dangerous industrial businesses, such as nitroglycerin factories.
Initially, the purpose of the strict liability doctrine in Norwegian law was specifically to impose liability in an appropriate manner in cases where technological developments created new risks to bystanders. Particularly, this doctrine targeted businesses whose commercial and industrial endeavours exposed others to extraordinary risk on a continuous basis. Those businesses were to be held liable regardless of negligence.
While the strict liability doctrine in Norwegian law has since come to be applied quite broadly to businesses that expose others to extraordinary risks on a continuous basis, the original intention of addressing new technological developments is revitalised in relation to the emergence of new risks posed by AI systems.
In principle, strict liability can be applied to providers and users of AI systems. In practice, however, this liability doctrine would appear to be particularly relevant in respect of businesses that are users of AI systems, ie, businesses that rely on AI-driven processes or machinery as part of their operations. For example, if a building contractor relies on autonomous vehicles, lifts, etc, and those machines make mistakes that cause injury to bystanders, the non-statutory strict liability doctrine could lead to liability for the contractor.
Product Liability
The Norwegian Product Liability Act (Lov 23. desember 1988 nr. 104 om produktansvar) governs the liability of a manufacturer for damages caused by a product that the manufacturer has placed on the market. While the Product Liability Act is relevant in relation to some damages caused by AI systems, it has important limitations in terms of its scope: it does not currently apply to standalone software systems (including standalone AI systems), and it only covers damages caused to persons or to property used for consumer purposes.
Contractual Liability
As noted above, damages to property not used for consumer purposes are not covered by the Product Liability Act. Contractual allocation of liability is therefore particularly important in respect of such damages caused by AI systems that have been sold or made available to a buyer or licensee. According to the Norwegian Sale of Goods Act (Lov 13. mai 1988 nr. 27 om kjøp), a buyer is entitled to compensation for damages due to deficiencies in the sales item, unless the seller proves that the deficiencies were caused by an obstacle beyond the seller's control which the seller could not reasonably be expected to have taken into account at the time of the contract, or to have avoided or overcome the consequences of.
As regards the scope of damages for which a seller or service provider may be held liable, the general rule in Norwegian background law pertaining to contractual relationships is that a seller is liable for damages that the sales item causes to property or assets that have a close physical and functional connection with the sales item, as these damages are considered to be direct damages, see the Supreme Court's decision in RT-2004-675. Thus, such damages will be recoverable as direct damages in contracts for AI-based products governed by Norwegian law unless the contract specifies otherwise.
Insurance
To date, specific insurance arrangements for damages caused by AI systems have not been subject to much discussion in Norway. There is uncertainty about how liability for such damages would be allocated according to the different liability doctrines that exist in Norwegian law, which likely makes it challenging for insurers to assess risk and articulate policies for AI-related damages.
EU AI Liability Directive
At the EU level, changes have been proposed that are likely to impact Norwegian tort law and the allocation of liability in cases where AI systems are involved. The European Commission has proposed an AI Liability Directive concerning civil liability in non-contractual matters. In its proposal the Commission underscores that current liability laws at the national level may not be suited to handle liability claims in cases where AI systems are involved. Particularly, the Commission voices concern that victims may not be able to identify a liable person or entity and prove that the conditions for liability are met.
If the AI Liability Directive is adopted as proposed by the Commission, it will introduce a rebuttable presumption of causation in cases where AI systems are involved, to mitigate the challenges that injured parties might otherwise have to overcome to prove causation.
Revision of the Product Liability Act
In addition to the AI Liability Directive, the European Commission has proposed a revision of the Product Liability Directive, which is implemented in Norway through the Product Liability Act. The proposed changes include adaptations that would bring standalone AI systems within the scope of EU product liability law and, as a consequence, the Norwegian Product Liability Act.
Bias is a Widespread Concern
The issue of algorithmic bias has received increasing attention in the Norwegian discourse around AI technologies in recent years. In 2024, the Norwegian Equality and Anti-Discrimination Ombud commissioned a report discussing algorithmic bias and discrimination. Algorithmic bias is also receiving academic attention in Norway, being the main topic of the first doctoral thesis on contemporary AI law authored by Dr Mathias Hauglid.
Risks Associated with Bias in AI Systems
Algorithmic bias is a problem that causes different types of risk, including safety risks, performance risks, and the risk of negative impacts on fundamental rights, primarily the right to non-discrimination.
Bias may create significant risk to individuals in areas where AI systems are used as decision support or in fully automated decision processes. For example, the risk of unfair or discriminatory treatment due to bias in algorithms is present when AI is used in risk assessments for insurance purposes, in clinical decision-making in the health sector, in connection with hiring decisions or assessments of employee performance, and in web advertising.
Companies or public agencies that use biased AI systems may become liable for damages related to wrongful decisions and for violation of the right to non-discrimination.
Bias in the AI Act
The risk of algorithmic bias is addressed specifically by the EU's AI Act, which is expected to become fully applicable in Norway in 2025-2026. According to the AI Act, AI providers and, in some cases, deployers, must take certain preventive measures to mitigate biases that may cause harm from a safety or fundamental rights perspective.
The Need for Big Data May Collide With Data Protection Principles
In relation to the development of AI technology, one may argue that there is an inherent conflict between compliance with the basic data protection principles and the development of powerful AI models, which typically requires large amounts of data. In particular, compliance with the principles of purpose limitation and data minimisation, as well as transparency requirements, may be challenging. Respecting data protection laws is nonetheless essential in the field of artificial intelligence. To comply, companies should build algorithms that minimise the collection and processing of personal data while ensuring data security and transparency.
Privacy-enhancing Technologies
At the same time, AI technologies can enhance the right to privacy and data protection by, for instance, automating data protection processes, detecting potential privacy breaches, strengthening cybersecurity measures, or minimising the need for training data, for example through federated learning approaches.
Heightened Risks
The use of facial recognition and other biometric information for the purpose of uniquely identifying a natural person is categorised as processing of special categories of personal data under the GDPR. The use of biometric data, and facial recognition in particular, poses heightened risks to the rights of data subjects. Businesses contemplating the use of these technologies should carefully consider the principles of lawfulness, necessity, proportionality and data minimisation as set out in the GDPR. While the use of such technologies may be perceived as particularly effective, the data controller must assess the impact on fundamental rights and freedoms and consider less intrusive measures.
Jurisdictional Law on Unique Identifiers
In addition to the general requirements in the GDPR, Section 12 of the Norwegian Personal Data Act states that unique identifiers, which include biometric information, may only be processed when there is a need for secure identification and the method is necessary to achieve such identification.
Regulatory Attention
The use of facial recognition together with AI technology has received attention from the Norwegian Data Protection Authority and was assessed during 2023 as part of its sandbox scheme, in a project relating to an advanced security measure to prevent ID theft and account takeover. Moreover, in its 2022 report "Your privacy – our shared responsibility", the Privacy Commission identified facial recognition technology as a technology that will have a major impact on privacy in the future. The Commission supports a general ban on the use of biometric remote identification in public spaces.
Jurisdictional Provisions Facilitating Automated Decision-making
As described in 3.2 Jurisdictional Law and 7.1 Government Use of AI, Norwegian law facilitates the use of fully automated decision-making in some specific areas of the public administration.
Automated Decisions with Legal or Other Significant Effects
In areas where there are no specific provisions facilitating automated decision-making, Article 22 GDPR sets out the conditions under which automated decision-making can be used to make decisions that produce legal effects or otherwise significantly affect an individual.
Outside of the cases where Norwegian law specifically accommodates the use of automated decision-making that significantly affects a person, a decision-maker may rely on automated decision-making when it is necessary for entering into, or performing, a contract with the individual affected by the decision, or on the basis of explicit consent from that individual, see Article 22 GDPR.
Other Automated Decisions
As regards automated decisions outside the scope of Article 22 GDPR, ie, decisions that do not significantly affect an individual, decision-makers still need to comply with their general data protection obligations. This includes being transparent about the purposes for which personal data are collected and used. However, when a decision does not have significant effects, the decision-maker may rely on legitimate interests or other legal bases for the processing of personal data in accordance with the GDPR.
Because the regulatory requirements in relation to decisions with significant effects are stricter than for other decisions, the question of how to distinguish between decisions with significant effects and other decisions will likely be subject to some debate going forward.
Attention from Supervisory Authorities
Due to the significant attention to risks associated with profiling and automated decision-making from the Norwegian Data Protection Authority and the Norwegian Equality and Anti-Discrimination Ombud, companies relying on automated decision-making can expect to attract interest and become subject to scrutiny from these and other supervisory authorities in Norway.
Chatbots and Substitutes for Human Communication
The EU AI Act will, when it becomes applicable, require that the use of chatbots and other AI technologies that serve as substitutes for services or communications rendered by natural persons be disclosed, see Article 50(1) of the AI Act. Moreover, providers of generative AI systems will be obliged to ensure that AI-generated outputs are identifiable as such.
Jurisdictional Law Relating to the Use of AI to Covertly Influence Consumer Behaviour ("Nudging")
The use of data-driven technologies to influence consumer behaviour – often referred to as "nudging" – has been a concern for the Norwegian Consumer Council for some time. The Council addressed this issue in a report published in 2018, titled "Deceived by Design". As noted in the report, nudging can be used to covertly steer consumers towards actions that benefit a service provider while not necessarily being aligned with the consumer's best interests.
Jurisdictional Norwegian law does not specifically address the use of AI technologies to nudge and influence consumer behaviour. However, depending on the circumstances, action may be taken against such practices based on Section 6 of the Norwegian Marketing Act (Lov 9. januar 2009 nr. 2 om kontroll med markedsføring og avtalevilkår – markedsføringsloven). Section 6 prohibits practices that conflict with good business practice towards consumers and are likely to materially "distort the economic behaviour of consumers, causing them to make decisions they would not otherwise have made". To what extent the use of AI to nudge consumers conflicts with good business practices in Norway would have to be assessed on a case-by-case basis.
Prohibited Practices Under the EU AI Act
Certain AI systems aimed at influencing a person's decision-making will be prohibited by the EU AI Act. Notably, the prohibition in Article 5(1)(a) of the AI Act will apply to AI systems that deploy subliminal techniques of which a person or group of persons is not aware, and that purposefully manipulate or deceive in order to materially distort behaviour.
The Norwegian Competition Authority is concerned with the use of algorithms to monitor, predict, and automatically set prices for goods and services. In 2021, the Authority issued a report discussing this issue. The report particularly voices concerns about collusive outcomes due to the increased observability of prices made possible by algorithmic price monitoring.
Specific Considerations for AI
Contracts will need to consider the specific risks and complexities associated with different types of AI technologies, the complex value chain that may be involved, and the business models of AI-related companies which may differ from traditional business models for technology and software companies.
Importantly, contractual arrangements for AI technologies must be based on a high level of technical understanding, in order to accommodate the specific technology involved in each transaction.
Forthcoming Legislation
Due to the abundance of forthcoming legislation of relevance to AI systems, contracts for the procurement of AI systems should be drafted with sufficient flexibility to accommodate new regulatory requirements as they become applicable.
Definition of Acceptable Performance
Another important topic for contractual negotiation and drafting will relate to defining the level of performance that an AI system should have for the contract to be fulfilled by the vendor, as well as the processes through which that level of performance should be verified.
Liability Clauses
As in other contractual negotiations, the drafting of liability mechanisms will be key to the predictability and comfort of the contractual parties. Liability mechanisms in contracts governed by Norwegian law should be drafted with due attention to the Norwegian liability doctrines discussed in 10.1 Theories of Liability.
Early Days
HR departments in Norwegian businesses generally perceive AI technologies as potentially useful tools that might enhance hiring and other HR processes, such as in relation to termination processes where there is a large number of individuals in scope. However, the implementation of AI for HR purposes in Norway is in its nascent stages.
Bias and Discrimination
There is increasing awareness in Norway of the risk of biases in AI algorithms, which may lead to discriminatory decision-making. Such concerns might have a chilling effect on the implementation of AI for hiring and termination purposes. Indeed, the risk of bias and discrimination seems inevitable in the employment context, for example when AI systems are used to assess job applications or predict the performance of employees.
Preventive Measures and Regulatory Requirements
To avoid discriminatory outcomes, employers considering the use of AI in hiring decisions need to implement risk management and quality assurance practices that address the risk of discrimination in AI systems. When the EU AI Act's requirements for high-risk AI systems become applicable, such preventive measures will be mandatory, as AI systems for hiring, promotion, termination, etc, are classified as high-risk under the AI Act.
In addition to the AI Act requirements, companies considering the use of AI in decisions on hiring or termination should be mindful of the restrictions on fully automated decision-making with significant effects for natural persons according to Article 22 GDPR.
In the same way as for employee hiring and termination, AI technologies could provide efficient means for evaluating the performance and monitoring the work of employees. As such use of AI technologies would result in the processing of personal data, it would have to comply with the Norwegian Personal Data Act and the GDPR. Furthermore, the Norwegian Working Environment Act sets out additional legal requirements regarding control measures in the workplace that must be complied with before evaluation and/or monitoring based on AI technology is initiated (see 1.1 General Legal Background).
Use of AI by Digital Platform Companies in Norway
AI technologies are used by national and international digital platform companies operating in Norway. For example, such companies use AI to predict demand for products and services and to match service providers with customers.
Employment and Regulatory Concerns
While there are employment and regulatory concerns associated with digital platform companies in general, including concerns about the effects on competition, those concerns are not particularly reinforced by their use of AI technologies. However, the increased use of AI by digital platform companies could potentially decrease the need for these companies to rely on human work. For example, it has been reported in the media that food delivery companies have initiated testing of drone-based food deliveries. If the use of drones and robots becomes the standard way of delivering food in the future, this could practically eliminate these platforms' need for human resources to execute deliveries.
Use of AI in Financial Services
Financial services companies including insurance companies and credit scoring companies in Norway probably rely on AI technologies in their operations, to some extent. However, there is little publicly available information about these companies' current use of AI systems.
Under the regulatory sandbox scheme provided by the Norwegian Data Protection Authority, a project has been conducted regarding the use of AI based on federated machine learning to increase the precision of electronic monitoring systems intended to flag transactions that are suspicious from an anti-money laundering perspective. The project addressed the problem of training algorithms on data concerning as many transactions as possible, across several banks, without sharing personal data between the banks.
Regulatory Concerns
Concerns related to privacy and non-discrimination are salient in relation to the use of AI by financial service providers. The flagging of transactions made by individuals could lead to increased scrutiny of the personal data of some persons, and there is also the risk of biases that cause persons from certain groups to be more likely than others to be flagged as "risky", eg, in relation to AML concerns or the risk of defaulting on a loan.
The Norwegian Financial Supervisory Authority has communicated that it will prioritise supervisory activities related to the use of AI technologies in the financial services sector in the future. In particular, the Authority has expressed concern about insurance companies' increasing use of AI in their operations. The Authority has stressed the importance of carrying out robust risk assessments before using AI and of ensuring transparency of pricing towards customers.
AI in Healthcare
Healthcare is probably the most advanced sector in Norway in terms of development and testing of AI technologies based on machine learning, deep neural networks and natural language processing. AI systems are also used in practice for clinical purposes, including radiological applications to diagnose patients based on images.
Regulatory Adaptations
In the Norwegian societal and professional discourse around innovation in the health sector, the issue of gaining access to relevant data for AI training purposes has often been highlighted as a barrier to innovation. The Norwegian Health Personnel Act was amended in 2021 to ensure that health data can be used for AI development and deployment purposes, see 3.1 General Approach to AI-Specific Legislation. It is expected that a more comprehensive revision of jurisdictional Norwegian health law will be initiated to facilitate the development and use of AI in the future.
AI Systems as Medical Devices
AI systems intended to support or automate clinical decision-making will generally constitute standalone medical devices under the EU Medical Device Regulation. When the AI Act becomes applicable to high-risk AI systems, AI systems that constitute medical devices will be subject to a joint certification and conformity assessment procedure covering the requirements that follow from both regulations. Once deployed or placed on the market, the use of AI medical devices is regulated by jurisdictional law, see the Norwegian Regulation on the Handling of Medical Devices (Forskrift 29. november 2013 nr. 1373 om håndtering av medisinsk utstyr).
A Norwegian act facilitating the testing of autonomous vehicles was enacted in 2017 (Lov 15. desember 2017 nr. 112 om utprøving av selvkjørende kjøretøy). The purpose of the act is to facilitate the testing of autonomous vehicles within a framework that takes particular account of road safety and privacy considerations. Testing is to take place gradually, particularly based on the maturity of the technology and with the aim of uncovering the impacts of autonomous vehicles on road safety, efficiency in traffic management, mobility and the environment.
The manufacturing of products is often a data-intensive activity and is therefore well-suited for the use of AI to analyse, improve, and automate processes. The use of AI in the manufacturing of products is not specifically addressed by Norwegian law, nor is it subject to particular requirements under the EU AI Act. The general requirements for manufacturers of different types of products are nonetheless applicable. In accordance with product safety legislation at both the EU and national level, manufacturers must ensure that the use of AI in the manufacturing process is reconcilable with their obligations pertaining to risk management, quality assurance, and product testing.
In areas where scientific rigour is required in the manufacturing process, such as in the development of drugs and medical devices, manufacturers should consider how the scientific reliability of AI-based processes can be ensured.
Professional service providers such as lawyers, health personnel, financial advisors, and real estate brokers have to ensure that their use of AI technologies does not contradict their professional obligations, including relevant codes of conduct or ethical frameworks. Currently, Norwegian law does not specifically outlaw the use of AI by any professional service providers. It is expected that professional codes of conduct will be developed for different categories of professional service providers, to ensure responsible use of AI systems.
AI Systems as Copyright Holders
Copyright protection is obtained under Norwegian law when works are created through an individual creative intellectual effort. The prevailing view among lawyers in Norway is that human involvement is probably required for such a creative intellectual effort to be deemed present, which suggests that AI systems may not be copyright holders under Norwegian law. The question has not yet come before Norwegian courts.
AI Systems as Inventors
The prevailing view in Norway is that AI systems cannot be inventors under Norwegian patent law.
Trade secrets have statutory protection according to the Norwegian Trade Secrets Act (Lov 27. mars 2020 nr. 15 om vern av forretningshemmeligheter), provided that the trade secrets constitute information which is secret, has commercial value because it is secret, and has been subject to reasonable measures by the holder to keep it secret.
The various components of an AI system, such as datasets, algorithms, and models, can be kept secret and obtain the status of trade secrets under the Trade Secrets Act.
For contractual purposes, the contracting parties may define what shall be deemed as trade secrets for the purposes of the contractual relationship. To this effect, contracts should be drafted carefully and based on sound knowledge of the different aspects of AI technologies that stakeholders might want to protect as trade secrets.
Users of generative AI models may worry about infringing the rights of third parties when using outputs generated by such models. There may also be concern about potential infringements of third-party rights during the training of such models. The use of copyrighted materials for training purposes may in principle constitute an interference with the rights of the copyright holder. However, issues concerning the use of copyright-protected works for AI training purposes have not been authoritatively decided by Norwegian courts. Stakeholders should monitor legislative developments, case law, and guidance in this space, as it seems likely that the scope of intellectual property protection in the context of AI-generated works will be clarified in the future.
When AI models are created and used based on OpenAI technology, this often entails that outputs generated by the models will be the result of several contributions, including the pre-trained model from OpenAI, any local data used for retraining purposes, system configurations set by the business's own personnel, and the inputs provided by users. If valuable works are generated, questions may arise regarding the appropriate allocation of rights to such works. Terms and conditions from the vendor should be carefully reviewed to ensure that they are aligned with user expectations.
When advising corporate boards of directors in identifying and mitigating risks in the adoption of AI, the following key issues should at least be addressed:
Compliance with Guidelines from Supervisory Authorities
Guidance on the lawful and responsible use of AI in various sectors has been issued by supervisory authorities in Norway, see 3.3 Jurisdictional Directives and 5 AI Regulatory Oversight.
Compliance with Technical Standards and Harmonised Standards
Compliance with requirements applicable to AI systems, including those found in the forthcoming EU AI Act, should in many cases be achieved through adherence to technical standards and harmonised standards. Harmonised standards are standards that have been endorsed by the European Commission as a means of complying with legal requirements set out in EU law. Harmonised standards will play a significant role in facilitating compliance with the AI Act.
Preventive Risk and Impact Assessments
To ensure compliance, providers and users of AI systems should conduct preventive risk and impact assessments before initiating the development or deployment of AI systems. Such assessments should include health and safety aspects of the technology, as well as potential fundamental rights impacts.
Artificial Intelligence Discourse in Norway
In Norway, as in many other countries, AI technologies leapt to the forefront of public discourse following the release of ChatGPT in November 2022. The Norwegian Language Council named "AI-generated" (KI-generert) the new word of the year in 2023. In 2024, various AI technologies are highly present in the general societal discourse.
AI technologies receive considerable media attention and are a popular subject for workshops, conferences, and the like. While headlines are occasionally devoted to concerns about Artificial General Intelligence (AGI) as a potential threat to humanity, ethical and legal concerns related to more near-term applications of AI in various sectors take centre stage. In particular, concerns related to risks of interference with privacy, cybersecurity, copyright infringements, bias and discrimination, and the utilisation of AI by criminals (eg, through the use of deepfakes) are at the forefront of the public discourse.
Controversies Over the Use of Copyrighted Material and Personal Data in AI Training
The use of data for training AI algorithms without the consent of copyright holders or data subjects has sparked some public outcry in Norway. Norwegian authors have voiced concerns over the use of their copyrighted texts for the training of large language models. Moreover, Meta's announcement that the company would start using images and other data posted by users for training purposes has stirred up a considerable debate, with several influential voices urging people to stop using Meta's services altogether. The Norwegian Consumer Council and NOYB have submitted a joint complaint to the Data Protection Authority, which has stated that it intends to prioritise the matter.
National AI Strategy
The Norwegian government issued a national AI Strategy in 2020, taking a generally optimistic and encouraging approach to the development and implementation of AI systems in Norway. The Strategy emphasises that Norway should be well positioned to succeed with the development of AI technologies, pointing to a number of national strengths.
The AI Strategy also declares that the use of AI technologies in Norway should adhere to seven ethical principles.
The points raised in the National AI Strategy remain relevant. It is nonetheless worth noting that Norway's current National AI Strategy is already something of a relic, given the rapid technological developments of the last few years. Illustratively, the term "generative AI" is not mentioned in the strategy at all, although language technologies are referred to in a more general sense.
It has not been announced whether or when Norway will adopt a new AI strategy. However, the government is preparing a broader digitalisation strategy, which is expected to devote considerable attention to AI technologies.
In 2023, the government announced that at least NOK 1 billion would be dedicated to the establishment of four to six AI research centres, which will contribute to technology development as well as knowledge about the societal, legal, and ethical impacts of AI technologies. These research centres are expected to include collaborations between commercial corporations and the research environments at Norwegian universities.
AI in Norwegian Businesses
According to a report from the Confederation of Norwegian Enterprise (NHO) released in January 2024, one in four Norwegian businesses is using AI technologies in one way or another.
A significant trend in Norway in 2024 is the increasing implementation of generative AI for internal purposes within Norwegian companies across all industries. Various solutions based on large language models from international technology companies are being used with differing levels of local adjustment. While Norwegian companies clearly see opportunities to become more productive with these tools, many have profound concerns about the risks associated with generative AI systems. Salient concerns include data governance and confidentiality, protection of personal data, copyright infringement, and the general reliability of outputs from generative AI.
The aforementioned NHO report suggests that the use of AI technologies will enable many businesses to maintain current production levels with less labour. However, it also notes that businesses can use AI technologies to produce new and better products or services, thereby increasing their revenue. Businesses that increase production in this way may in fact need more labour when AI is adopted.
Pioneering Sectors and Industries
The health sector is in many ways spearheading the development of AI systems in Norway, with advanced research and innovation often taking place in collaboration between public hospitals, universities, and private technology companies. In addition to being active in AI development and research, Norwegian hospitals are using AI systems in the course of providing healthcare services to patients.
Other sectors where AI systems are being tested and used as part of business operations include renewable and fossil energy, financial services, transportation, and public administration, among many other areas. Notably, Norway is at the forefront of the testing of autonomous ships, which is currently taking place in Norwegian fjords.
AI Literacy and Cross-Disciplinarity
Although the National AI Strategy praises the overall digital literacy of the Norwegian population, the early stages of adopting AI technologies have made it obvious that there is nonetheless a need to educate the population about the benefits and risks of these technologies. As AI systems are implemented to conduct an increasingly wide set of tasks within private and public organisations, basic knowledge of AI technologies is a prerequisite for maintaining any meaningful level of human oversight of automated processes.
In many industries, the use of AI systems is changing the way people work and imposing new requirements on their skillsets. Cross-disciplinarity is becoming more valuable, and it is widely recognised that the development, deployment, and assessment of AI systems are best conducted by multidisciplinary teams.
Waiting for the EU AI Act
Norway has been quite reluctant to initiate AI-specific legislative efforts at the national level. Rather, Norwegian legislators have awaited the EU AI Act, which will become applicable in the EU over the next two years. The Norwegian Minister of Digitalisation has stated that Norway aims to follow the EU's timeline and implement the AI Act so that it becomes applicable in Norway at the same time.
When the AI Act becomes applicable in Norway, it will constitute the first cross-sectoral legal framework for AI in Norway. Existing laws only contain piecemeal provisions addressing the use of AI for certain limited purposes within specific sectors, such as provisions on the testing of autonomous vehicles, access to health data for AI development purposes, and the use of automated decision-making in some parts of the public administration.
Impact of the AI Act
The AI Act aims to promote innovation and trade while protecting health, safety, and fundamental rights in relation to AI systems. Taking a risk-based approach, the regulation distinguishes between high-risk AI systems and other AI systems. Most of the requirements imposed by the AI Act apply only to high-risk AI systems, which makes it crucial for AI providers to consider which risk class their AI-based products or services fall into.
Businesses considering the provision of high-risk AI systems for the Norwegian market will have to comply with a comprehensive set of AI Act requirements pertaining to risk management, data governance and data quality, technical documentation, logging of events, transparency and user information, human oversight, cybersecurity, accuracy, and robustness. The requirements are extensive, but because they will be harmonised across the European Economic Area, compliant companies will have access to the entire EEA marketplace, including Norway. This is exactly in line with the innovation-friendly ambitions of the AI Act.
While most of the requirements in the AI Act apply to providers of high-risk AI systems, companies that deploy high-risk AI systems also need to consider the obligations that fall upon them. Some deployers are required to conduct a "fundamental rights impact assessment" before deployment of a high-risk AI system.
Moreover, any deployer of high-risk AI systems should generally be mindful of the relationship between the mandatory requirements imposed on the providers of such systems and the duties of deployers to utilise the information and functions made available to them by providers. For example, while it falls upon the provider of a high-risk AI system to equip the system with functions that enable an appropriate degree of human oversight, it is the responsibility of the deployer to ensure that its personnel are informed and capable of using such human oversight measures in a meaningful way in practice.
What Room is There for AI Legislation at the National Level?
The AI Act could trigger legislative developments in Norway beyond the implementation of the AI Act itself. First of all, a few areas are excluded from the scope of the AI Act, such as the use of AI for military, defence, or national security purposes. In respect of these areas, Norwegian legislators will probably need to take action at some point. In particular, if AI technologies are to be used for national security purposes, this might create a need for certain legislative ground rules to ensure the effective protection of fundamental rights.
Because the AI Act primarily stipulates requirements that must be fulfilled by AI systems and their providers before the systems can be placed on the market, Norway could probably also adopt jurisdictional laws to govern the use of AI systems in Norwegian businesses. Laws regulating the use of AI technologies in Norway would have to be carefully drafted, as such laws cannot constitute de facto restrictions on market entry for products.
Other legislative efforts that might be foreseen in Norway could relate to the promotion of good data governance practices and data sharing, to facilitate responsible development and use of AI systems. The introduction of jurisdictional laws to this effect could particularly be foreseen in relation to non-personal data. For Internet of Things (IoT) services and products, data sharing obligations will to some extent be introduced in Norwegian law through the EU Data Act, which will become applicable in the EU in September 2025. In theory, similar regulations concerning data sharing could be implemented in jurisdictional law for other stakeholders beyond the IoT space.
Dronning Mauds gate 11
0250 Oslo
Norway
+47 228 275 00
oslo@wr.no
www.wr.no