Contributed By Wikborg Rein Advokatfirma AS
Technology-Neutral Background Law
Norwegian law is often technology-neutral and therefore applicable to AI technologies without being drafted specifically for the purpose of addressing these technologies. This is the case, for example, as regards Norwegian public administration law, employment law, contract law, non-discrimination law and copyright law.
Employment
As an example of technology-neutral laws that apply to the use of AI systems, Norwegian employment law particularly restricts the control measures that an employer may use in respect of its employees, see Chapter 9 of the 2005 Norwegian Working Environment Act (Lov 17. juni 2005 nr. 62 om arbeidsmiljø, arbeidstid og stillingsvern mv.). Employers considering the use of AI as part of their employee monitoring and control systems must adhere to these provisions. Moreover, Chapter 13 of the Working Environment Act contains provisions detailing the non-discrimination principle in the employment context. These provisions are applicable to the use of AI systems for hiring purposes.
Tort Law
Norwegian tort law consists of a combination of broadly applicable, technology-neutral liability regimes and more specific regimes such as the Product Liability Act. While the Product Liability Act does not currently apply to standalone software systems (including standalone AI systems), it does apply to products that incorporate AI systems, see 10.1 Theories of Liability. Moreover, AI developers, providers and users could be liable for damages occurring as a result of using AI systems, on the basis of strict liability, statutory vicarious liability for employers or principals (arbeidsgiveransvar) or negligence.
Product Safety and Consumer Protection
Product safety laws and consumer protection laws in Norway are largely harmonised with the EU regulations that apply in these areas. Currently, these laws are applicable to AI technologies without being designed specifically to address concerns associated with AI. However, the EU AI Act, which will become applicable in Norway once implemented through the EEA Agreement, will introduce product safety requirements aimed specifically at addressing AI risks.
Data Protection
The Norwegian Personal Data Act (PDA) (Lov 15. juni 2018 nr. 38 om behandling av personopplysninger) aims to protect individuals from having their privacy violated through the processing of personal data. The Act implements the EU General Data Protection Regulation (GDPR) in Norwegian law. The GDPR is thus part of the Personal Data Act and applies as Norwegian law.
The PDA and the GDPR are applicable to AI technologies where the processing of personal data is involved. The PDA and the GDPR apply to processing of personal data that is carried out in connection with the activities of a data controller or a data processor in Norway, regardless of whether the processing takes place in the EEA or not. The PDA and the GDPR also apply to the processing of personal data about data subjects located in Norway, and which is carried out by a data controller or data processor that is not established in the EEA, if the processing is related to the offer of goods or services to those data subjects in Norway. Hence, companies that apply AI technologies in services provided to persons in Norway need to comply with the PDA and the GDPR.
Key industry applications of AI in Norway include various forms of decision support, including data analysis and risk prediction. In significant industries such as agriculture, aquaculture and energy, AI systems based on machine learning and computer vision are used to optimise the utilisation and distribution of resources such as electricity, fish feed or fertilisers. In many industries, AI is used in predictive analytics and the monitoring of various infrastructure components, enabling preventive maintenance and safety-critical interventions. In the maritime industries, Norwegian businesses are pioneering the development and testing of autonomous ships, including container vessels and passenger ferries. Norwegian employers are increasingly experimenting with using AI in connection with hiring decisions.
Many Norwegian businesses have implemented generative AI solutions for internal purposes and to assist in the provision of services. Various chatbot solutions, contract drafting tools and document management solutions are commonplace.
National Digitalisation Strategy
In 2024, the Norwegian government published the National Digitalisation Strategy for 2024-2030. According to the new strategy, the government will establish a national infrastructure for AI by 2030 to ensure that Norway will be at the forefront of the ethical and safe use of AI. The government aims to provide Norwegian businesses with favourable conditions for developing and using AI. As part of the strategy, the public sector should also use AI to develop better services and solve tasks more efficiently.
The AI Research Billion
In 2023, the Norwegian government announced that NOK1 billion will be allocated to research and development efforts related to AI technologies. The government envisages that these funds will be used to establish four to six dedicated AI research centres that will contribute to greater insight into the societal impact of AI technologies, as well as insight into technological aspects, and the potential for innovation in commercial businesses and the public sector. The deadline for funding applications passed at the end of 2024, but no funding decisions have been made as of 6 April 2025.
Regulatory Sandbox Projects
Since 2020, the Norwegian Data Protection Authority has provided a regulatory sandbox and conducted several sandbox projects to help facilitate testing and implementation of AI technologies in areas covered by the data protection law framework.
Guidance Services
Other sector-specific initiatives from state authorities also exist, such as the Directorate of Health’s regulatory guidance service for AI projects. AI development projects that receive such guidance benefit from iterative meetings with experts from the Directorate and other public agencies.
Implementation of the AI Act
There is no comprehensive legal framework for AI in place in Norway. Although the EU AI Act has entered into force at the EU level, to become applicable in Norway it must be implemented through a legislative procedure. The Norwegian Minister of Digitalisation has stated an intention to present a legislative proposal in the first half of 2025. However, as of 6 April 2025, no formal decisions or legislative initiatives regarding the AI Act have been introduced in Norway. Over the last few years, Norwegian politicians have been reluctant to initiate AI regulation efforts. Looking ahead, Norway’s regulatory efforts in this area are expected to centre around the implementation of the AI Act.
Piecemeal AI Legislation in Norway
Currently, AI-specific legislative provisions are scattered across various parts of the Norwegian regulatory framework, and they are mostly limited to laws governing the public sector.
Data Sharing
Provisions facilitating access to data from public entities and the sharing of data between public entities are typical of legislative measures to promote digitalisation in Norway. Specific legislation to this effect in the private sector has not been adopted in Norway, but the EU Data Act, which becomes applicable in the EU on 12 September 2025, will provide new rules concerning data sharing from and between commercial businesses, particularly as regards products in the internet of things category. These products often rely on AI technologies. Industrial data shared under the Data Act may also, to some extent, promote the development of AI technologies.
In addition to provisions facilitating the utilisation of data, Norwegian statutory law contains several provisions that specify the extent to which fully automated decision-making may be relied on in the public sector. The general approach found in current statutes is to facilitate fully automated decision-making only in respect of decisions that are of little importance to the individuals affected by them. Similar provisions facilitating fully automated decision-making are currently not found in statutes governing private businesses, which means that the primary provision governing fully automated decision-making in the private sector is Article 22 of the EU GDPR.
Norwegian authorities have issued several guidance documents and reports on the use of AI. The Norwegian Maritime Authority has commissioned guidelines pertaining to the construction or installation of fully or partially autonomous ships. Most guidance documents are informative rather than normative.
Most significantly, the Norwegian Directorate of Digitalisation has issued a beta version of a guidance document on responsible development and use of AI. The Norwegian Equality and Anti-Discrimination Ombud has been active in relation to AI, issuing two relevant reports in 2023-2024 on “non-discrimination by design” (innebygd diskrimineringsvern) and algorithmic discrimination. High-level guidance on AI and data protection is also available from the Norwegian Data Protection Authority. Sector-specific guidance is available, eg, from the Norwegian Directorate of Health, aiming to help AI researchers and developers navigate the fragmented legal framework for AI development projects in Norwegian law.
As mentioned in 3.1 General Approach to AI-Specific Legislation, Norway has been reluctant to implement AI-specific regulations at the national level and has instead awaited the EU AI Act. The benefit of this approach is that there are very few areas in which national AI-specific legislation is likely to conflict with AI laws from the EU. On the flip side, there is limited maturity in Norway when it comes to understanding the impact of the AI Act on existing Norwegian legislation.
Regulations on Automated Decision-Making
Automated decision-making that involves the processing of personal data is restricted by Article 22 of the GDPR; see also 7.1 Government Use of AI. Certain provisions of particular relevance to AI technologies are found in different parts of the Norwegian legal framework pertaining to the use of data for innovation purposes and the permissibility of relying on fully automated decisions in the public sector. As an example, fully automated decision-making in the provision of healthcare services is only permissible in connection with decisions that are of little importance to the individual (see Section 11 of the Patient Records Act), unless otherwise determined by further government regulations. Other examples can be found in 7.1 Government Use of AI.
Tension with the AI Act
While these provisions are relevant in the context of AI systems, the policy that underpins them is from an era when AI technologies were less prominent. The current rules for fully automated decision-making have been created with traditional, hard-coded software solutions in mind. The idea is that automated decision-making based on such systems enhances equality of treatment. There is a need to reconsider their appropriateness in an era when AI systems are capable of handling even discretionary criteria in a decision-making process. The EU AI Act deals with the most significant risks of AI systems involved in decision-making. If AI systems intended for fully automated decision-making comply with the AI Act, one may reasonably question whether restrictions on the use of AI in fully automated decision-making processes should continue to apply in Norway.
This is not applicable.
Data Protection
As discussed in 3.1 General Approach to AI-Specific Legislation, certain amendments have been made to Norwegian health sector legislation to accommodate the use of health data in AI projects. AI-specific changes to general data protection laws have not been made.
Data Mining Exceptions
Data mining exceptions are relevant in the context of AI technologies, because the training of machine learning algorithms often involves data mining activities that could otherwise infringe copyright. The Norwegian Copyright Regulation (Forskrift 26. august 2021 nr. 2608) facilitates data mining in certain specified situations. However, further data mining exceptions have recently been proposed.
The proposed data mining exceptions will implement the EU Digital Single Market (DSM) Directive, which establishes an exception for text and data mining of lawfully accessible works. The Norwegian preparatory works accompanying the proposed exceptions highlight the importance of data mining in relation to AI technologies.
If enacted, the new provisions will distinguish between data mining for non-commercial purposes in research, educational institutions and cultural heritage organisations on the one hand, and commercial data mining on the other hand. As of 6 April 2025, the proposals have not been enacted.
EU-Driven Developments
Key developments in the near future will centre around the implementation of the EU AI Act, the EU’s revised Product Liability Directive, and related legislation such as the EU Data Act and cybersecurity regulations. These EU laws constitute landmark changes in the legal frameworks of EU/EEA member states, including Norway. However, Norway is currently lagging behind in implementing several EU laws adopted in the technology sector in recent years.
Impact of the AI Act
The AI Act sets out several requirements that high-risk AI systems and their providers must comply with, which supplement existing product safety and fundamental rights laws in Norway. In addition to imposing new requirements that are designed specifically to address risks associated with AI systems, the AI Act requires preventive compliance measures, such as risk and impact assessments, to ensure compliance with AI-specific safety requirements as well as existing fundamental rights principles. The AI Act sets out specific provisions governing the category of AI systems referred to as “general-purpose AI systems”. These provisions will play an important role in practice, as the notion of general-purpose AI systems covers conversational agents and other generative AI systems that are widely used in Norway.
To date, there are no judicial decisions in Norway concerning AI systems.
The Norwegian government has decided that the Norwegian Communications Authority (Nasjonal kommunikasjonsmyndighet) shall undertake the role of supervisory authority for AI, as required by the EU AI Act. The Norwegian Communications Authority is thus tasked with ensuring that the AI Act is followed up in a uniform manner in Norway. It will also serve as the national point of contact for EU bodies and the supervisory authorities of other EU/EEA member states.
Ongoing Activities
Currently, there are several regulatory/supervisory agencies that oversee AI systems in Norway. The Directorate of Digitalisation (DigDir) plays an active role in providing guidance on AI development and usage. The Norwegian Data Protection Authority has assumed a central role, particularly through its regulatory sandbox scheme for AI, which has been in place since 2020. The Norwegian Consumer Council has been particularly concerned about generative AI, issuing a report on the consumer harms of generative AI in June 2023. The Office of the Auditor General of Norway has engaged in the issue of auditing AI systems. Moreover, within the remits of its mandate to promote equality and non-discrimination, the Equality and Anti-Discrimination Ombud plays an active role. Other central regulatory agencies include sector-specific agencies such as the Directorate of Health and the Norwegian Maritime Authority (see 3.3 Jurisdictional Directives).
See 3.3 Jurisdictional Directives.
To date, there have been no enforcement actions related to AI systems in Norway. However, following Meta’s announcement that it would start using data published by its users to train AI algorithms, the Norwegian Consumer Council has, in collaboration with NOYB, filed a complaint with the Norwegian Data Protection Authority.
Following the complaint, Meta announced that it would postpone its decision regarding the use of user data for AI development purposes.
The Norwegian standard-setting body, Standard Norge, and other Norwegian stakeholders are involved in standard-setting initiatives related to AI technologies at the international level. Standard Norge has established a committee to participate in standardisation efforts by the ISO and CEN-CENELEC. It is recognised in Norway that standard-setting activities are an important arena for influencing regulatory developments related to AI technologies.
Standards from international standard-setting bodies generally impact companies doing business in the technology sector in Norway. Norwegian legislation tends not to refer to international standards, but technology purchasers in Norway will often expect vendors to comply with widely recognised standards, for example in relation to information and cybersecurity, risk management, data formatting and application programming interfaces.
A Booming Public Sector
Norwegian authorities are testing and implementing several AI-based solutions to aid governmental operations and for administrative law purposes. Some examples include the following.
In addition to the above-mentioned use cases, several public agencies in Norway have implemented or are planning to implement generative AI systems to handle questions from and provide information and guidance to citizens.
Legal Uncertainty
The development of AI for public administration purposes often requires that personal data held by public agencies are used for purposes other than those for which they were initially collected and stored. The use of personal data for AI development purposes may contradict the legitimate expectations of citizens as data subjects. In the absence of national provisions specifically providing a legal basis for AI development, there is uncertainty about the extent to which public agencies have a legal basis for this repurposing. Due to the lack of a clear legal basis for AI development based on the personal data of citizens, the above-mentioned sandbox project in the Norwegian Labour and Welfare Administration concluded that legislative changes are needed in order to progress with this project.
There are no judicial decisions or pending cases related to government use of AI in Norway.
The Norwegian National Security Authority (NSM) has contributed to national digital risk assessment reports, highlighting the security implications of AI and the importance of maintaining robust cybersecurity measures to prevent the misuse of AI technologies. The NSM has provided guidelines to secure AI system development as part of a collaborative effort with international partners. These guidelines emphasise the importance of incorporating cybersecurity throughout the AI system development lifecycle, from design to deployment and maintenance, while building on general cybersecurity principles that are not unique to AI.
Generative AI typically raises issues related to the lawful use of material/data for training purposes and in relation to the outputs received by users. In particular, the scraping of data from the internet as part of creating generative AI models could result in the unlawful use of information, in breach of IP and data protection laws and the rights of third parties. The lack of transparency makes it difficult for users of AI models to assess the validity of results provided as output by the model, and for individuals to enforce their rights.
Data Protection Principles
Generative AI models raise concerns both in relation to basic data protection principles and the rights of individuals. Where the PDA and the GDPR apply, the collection and use of personal data as part of generative AI models must be in line with the principles of purpose limitation, data minimisation, accuracy and storage limitation. All use of personal data must also have a sufficient legal basis and take place in a fair and transparent manner towards the individual. These requirements apply at all stages, including collection, training and the generation of outputs.
Rights of Data Subjects
Individuals whose personal data are being processed have certain rights, such as the right of access to personal data and the rights to rectification and erasure. The exercise of these rights is challenged by the use of personal data to train generative AI models. The scope and amount of personal data are often not clear in relation to the training data, input data and output data of such models. Once personal data has been used to train a model, there is uncertainty as to the feasibility of effectively accessing, rectifying and erasing that data.
Purpose Limitation
The principle of purpose limitation requires that AI models be designed and used for a defined purpose and that personal data not be used for new purposes that are incompatible with the purposes for which the data was initially collected. This applies to the training of AI models as well as to the use of personal data as input or context when using an AI model. Data minimisation entails that the use of personal data should generally be limited to what is strictly required. Any superfluous personal data should be deleted or anonymised.
Responsibility of Controllers
Data controllers are responsible for ensuring and documenting compliance with the law, and must take compliance measures such as risk assessments, ensuring privacy by design and default, and providing sufficient information to individuals. For example, developers should strive to use algorithms that minimise the acquisition and processing of personal data while ensuring robust data security and confidentiality measures.
AI in Legal Tech
Generative AI assistants are commonplace in large law firms. Various other applications are also being tested and gradually implemented, such as AI-based tools for contract drafting and revision.
Professional Codes of Conduct
Legal professionals using AI in their work need to ensure that they maintain adherence to general codes of conduct and ethical principles. In Norway, a new act regulating lawyers and others who provide legal assistance entered into force on 1 January 2025 (Lov 12. mai 2022 nr. 28 om advokater og andre som yter rettslig bistand).
Chapter 8 of the above-mentioned Act sets out the fundamental principles that lawyers must always abide by in the course of their work, including requirements with regard to confidentiality, information security and loyalty to the client. How the use of AI technologies might impact those principles is a subject of some debate in Norway. Many companies have created their own codes of conduct to ensure that AI is used responsibly. However, there is no formal guidance from the Norwegian Bar Association or supervisory authorities on the use of AI in the legal profession.
No Legal Personhood
Norwegian theories of liability do not consider AI as a legal person. The allocation of liability for injuries or damages where AI systems are involved currently depends on the general doctrines of liability that apply regardless of the technology involved. These doctrines may lead to liability for various actors in an AI supply chain, including AI providers and users.
Negligence
As a generally applicable theory of liability in Norwegian law, negligence may entail liability for any actor involved in the development, commercialisation or use of AI systems. Developers that deviate from good AI development practices or widely recognised codes of conduct may risk being held liable on the grounds of negligence.
Users may be particularly exposed to liability based on negligence if they do not comply with safety instructions and documentation accompanying an AI system, or if they use AI systems for purposes other than those intended by the provider of the system.
Vicarious Liability for Employers
According to Section 2-1 of the Norwegian Damages Compensation Act (Lov 13. juni 1969 nr. 26 om skadeserstatning), employers may be held liable for damage caused intentionally or negligently by employees in the performance of their work. In the assessment of whether an employee has acted negligently in the development or use of an AI system, the considerations mentioned above in relation to liability based on negligence will be relevant.
Strict Liability
In Norwegian tort law, there is a longstanding non-statutory strict liability doctrine (ulovfestet objektivt ansvar), which could be relevant in the context of damage that occurs where AI systems are involved. This doctrine was initially developed in case law related to damage caused by dangerous industrial businesses, such as nitroglycerin factories.
The doctrine was introduced to allocate liability appropriately in cases where technological developments created new risks for bystanders. In particular, it targeted businesses whose commercial and industrial endeavours exposed others to extraordinary risk on a continuous basis. Those businesses were to be held liable regardless of negligence.
While the strict liability doctrine in Norwegian law has come to be applied broadly to businesses that expose others to extraordinary risks on a continuous basis, the original intention of addressing new technological developments is being revitalised in relation to the emergence of new risks posed by AI systems.
This liability doctrine would appear to be particularly relevant in respect of businesses that are users of AI systems, ie, businesses that rely on AI-driven processes or machinery as part of their operations. For example, if a building contractor relies on autonomous vehicles, lifts, etc, and those machines make mistakes that cause injury to bystanders, the non-statutory strict liability doctrine could lead to liability for the contractor.
Product Liability
The Norwegian Product Liability Act (Lov 23. desember 1988 nr. 104 om produktansvar) governs the liability of a manufacturer for damage caused by a product that the manufacturer has placed on the market. While the Product Liability Act is relevant in relation to some damage caused by AI systems, it has important limitations in terms of its scope.
Contractual Liability
As noted above, damage to property not intended for consumer purposes is not covered by the Product Liability Act. Therefore, the contractual allocation of liability is particularly important in relation to such damage caused by AI systems that have been sold or made available to a buyer or licensee.
As regards the scope of damages for which a seller or service provider may be held liable, the general rule in Norwegian contract law is that a seller is liable for damage that the sales item causes to property or assets that have a close physical and functional connection with the sales item, as these damages are considered to be direct damages, as affirmed by the Supreme Court’s decision in RT-2004-675. Thus, such damages will be recoverable as direct damages in contracts for AI-based products governed by Norwegian law unless the contract provides otherwise.
Insurance
To date, specific insurance arrangements for damages caused by AI systems have not been subject to much discussion in Norway. There is uncertainty about how liability for such damages would be allocated according to the different liability doctrines that exist in Norwegian law, which likely makes it challenging for insurers to assess risk and articulate policies for AI-related damages.
EU Liability Directive Proposal Withdrawn
In February 2025, the European Commission decided to withdraw its proposal for an AI Liability Directive. The original proposal aimed to regulate civil liability in non-contractual matters. Now that the proposal for an AI Liability Directive has been withdrawn, victims of damages caused by AI systems will have to rely on the liability regimes that exist at the national level.
Product Liability Legislation May Be Amended to Cover Standalone AI
In late 2024, the EU adopted a revision of the Product Liability Directive, which is implemented in Norway through the Product Liability Act. The changes include adaptations that bring standalone AI systems within the scope of EU product liability law and, once implemented, of the Norwegian Product Liability Act.
Bias is a Widespread Concern
The issue of algorithmic bias has received increasing attention in the Norwegian discourse around AI technologies in recent years. In 2024, the Norwegian Equality and Anti-Discrimination Ombud commissioned a report discussing algorithmic bias and discrimination.
Risks Associated With Bias in AI Systems
Algorithmic bias is a problem that causes different types of risk, including safety risks, performance risks and the risk of negative impacts on fundamental rights, such as the right to non-discrimination.
Bias may pose significant risks to individuals in areas where AI systems are used as decision support or in fully automated decision-making processes. For example, the risk of unfair or discriminatory treatment due to bias in algorithms is present when AI is used in risk assessments for insurance purposes, in clinical decision-making in the health sector, in connection with hiring decisions or assessments of employee performance, and in web advertising.
Companies or public agencies that use biased AI systems may become liable for damages related to wrongful decisions and for violation of the right to non-discrimination.
Bias in the AI Act
The risk of algorithmic bias is addressed specifically by the EU’s AI Act. According to the AI Act, AI providers and, in some cases, deployers, must take certain preventive measures to mitigate biases that may cause harm from a safety or fundamental rights perspective. Providers of high-risk AI systems must conduct documented bias examinations and take the risk of bias and algorithmic discrimination into account when determining the appropriate risk mitigation measures that are applied to their AI systems before deployment.
Heightened Risks
The use of facial recognition and other biometric information for the purpose of uniquely identifying a natural person is categorised as processing of special categories of personal data in the GDPR. The use of biometric data, and facial recognition in particular, poses heightened risks to the rights of data subjects. Businesses considering applying these technologies should carefully consider the principles of lawfulness, necessity, proportionality and data minimisation as set out in the GDPR. While the use of such technologies may be perceived as particularly effective, the data controller must assess the impact on fundamental rights and freedoms and consider less intrusive measures.
Jurisdictional Law on Unique Identifiers
In addition to the general requirements in the GDPR, Section 12 of the Norwegian Personal Data Act states that unique identifiers, which include biometric information, may only be processed when there is a need for secure identification and the method is necessary to achieve such identification.
Regulatory Attention
The use of facial recognition together with AI technology has attracted attention from the Norwegian Data Protection Authority and was assessed in 2023 as part of its sandbox scheme, in relation to a project on advanced security measures to prevent ID theft and account takeover. Moreover, in its 2022 report “Your privacy – our shared responsibility”, the Privacy Commission identified facial recognition technology as a technology that will have a major impact on privacy in the future. The Commission supports a general ban on the use of biometric remote identification in public spaces.
Jurisdictional Provisions Facilitating Automated Decision-Making
As described in 3.2 Jurisdictional Law and 7.1 Government Use of AI, Norwegian law facilitates the use of fully automated decision-making in some specific areas of public administration.
Automated Decisions with Legal or Other Significant Effects
In areas where there are no specific provisions facilitating automated decision-making, Article 22 of the GDPR sets out the conditions under which automated decision-making can be used to make decisions that produce legal effects or otherwise significantly affect an individual.
Apart from cases where Norwegian law specifically accommodates the use of automated decision-making that significantly affects a person, a decision-maker may only rely on automated decision-making when it is necessary for entering into, or performing, a contract with the individual affected by the decision, or if the individual has given explicit consent, in accordance with Article 22 of the GDPR.
Other Automated Decisions
As regards automated decisions outside the scope of Article 22 of the GDPR, ie, decisions that do not significantly affect an individual, decision-makers still need to comply with their general data protection obligations. This includes being transparent about the purposes for which personal data are collected and used. However, when a decision does not have significant effects, the decision-maker may rely on legitimate interests or other legal bases for the processing of personal data in accordance with the GDPR.
Because the regulatory requirements in relation to decisions with significant effects are stricter than for other decisions, the question of how to distinguish between decisions with significant effects and other decisions will likely be subject to some debate going forward.
Attention from Supervisory Authorities
Because the Norwegian Data Protection Authority and the Norwegian Equality and Anti-Discrimination Ombud pay significant attention to the risks associated with profiling and automated decision-making, companies relying on automated decision-making can expect to attract attention and become subject to scrutiny from these and other supervisory authorities in Norway.
Chatbots and Substitutes for Human Communication
When the EU AI Act becomes applicable, it will require that the use of chatbots and other AI technologies that serve as substitutes for services or communications rendered by natural persons be disclosed, as per Article 50(1) of the AI Act. Moreover, providers of generative AI systems will be obliged to ensure that AI-generated outputs are identifiable as such.
Jurisdictional Law Relating to the Use of AI to Covertly Influence Consumer Behaviour (“Nudging”)
The use of data-driven technologies to influence consumer behaviour – often referred to as “nudging” – has been a concern for the Norwegian Consumer Council for some time. The Council addressed this issue in a report published in 2018, titled “Deceived by Design”. As noted in the report, nudging can be used to covertly steer consumers towards actions that benefit a service provider while not necessarily being aligned with the consumer’s best interests.
Jurisdictional Norwegian law does not specifically address the use of AI technologies to nudge and influence consumer behaviour. However, depending on the circumstances, action may be taken against such practices based on Section 6 of the Norwegian Marketing Act (Lov 9. januar 2009 nr. 2 om kontroll med markedsføring og avtalevilkår – markedsføringsloven). Section 6 prohibits practices that conflict with good business practice towards consumers and are likely to materially “distort the economic behaviour of consumers, causing them to make decisions they would not otherwise have made.” The extent to which the use of AI to nudge consumers conflicts with good business practices in Norway would have to be assessed on a case-by-case basis. However, as nudging will usually involve the processing of personal data, the requirements of the PDA and the GDPR must be complied with.
Prohibited Practices Under the EU AI Act
Certain AI systems aimed at influencing a person’s decision-making will be prohibited by the EU AI Act. Notably, the prohibition in Article 5(1)(a) of the AI Act will apply to AI systems deploying subliminal techniques that a person or group of persons is not aware of and that purposefully manipulate or deceive in order to materially distort behaviour.
Specific Considerations for AI
Contracts will need to consider the specific risks and complexities associated with different types of AI technologies, the complex value chains that may be involved, and the business models of AI-related companies, which may differ from the traditional business models of technology and software companies.
Importantly, contractual arrangements for AI technologies must be based on a high level of technical understanding, in order to accommodate the specific technology involved in each transaction.
Forthcoming Legislation
Due to the abundance of forthcoming legislation relevant to AI systems, contracts for the procurement of AI systems should proactively address how the implementation of future regulatory requirements is to be managed under the contract.
Definition of Acceptable Performance
Another important topic for contractual negotiation and drafting relates to defining the level of performance that an AI system should have for the contract to be fulfilled by the vendor, as well as the processes through which that level of performance should be verified.
Liability Clauses
As in other contractual negotiations, the drafting of liability mechanisms will be key to the predictability and comfort of the contractual parties. Liability mechanisms in contracts governed by Norwegian law should be drafted with due attention to the Norwegian liability doctrines discussed in 10.1 Theories of Liability.
Early Days
HR departments in Norwegian businesses generally perceive AI technologies as potentially useful tools that might enhance hiring and other HR processes, such as in termination processes where there is a large number of individuals in scope. However, the implementation of AI for HR purposes in Norway is in its nascent stages.
Bias and Discrimination
There is increasing awareness in Norway of the risk of biases in AI algorithms, which may lead to discriminatory decision-making. Such concerns might have a chilling effect on the implementation of AI for hiring and termination purposes.
Preventive Measures and Regulatory Requirements
To avoid discriminatory outcomes, employers considering the use of AI in hiring decisions need to implement risk management and quality assurance practices that address the risk of discrimination in AI systems. When the EU AI Act’s requirements for high-risk AI systems become applicable, such preventive measures will be mandatory, as AI systems for hiring, promotion, termination, etc, are classified as high-risk under the AI Act.
In addition to the AI Act requirements, companies considering the use of AI in decisions on hiring or termination should be mindful of the restrictions on fully automated decision-making with significant effects for natural persons according to Article 22 GDPR.
In the same way as for employee hiring and termination, AI technologies could provide efficient means of evaluating the performance of employees and monitoring their work. As such use of AI technologies would involve the processing of personal data, it would have to comply with the Norwegian Personal Data Act and the GDPR. Furthermore, there are additional legal requirements in the Norwegian Working Environment Act regarding control measures in the workplace that must be complied with before evaluation and/or monitoring based on AI technology can be implemented (see 1.1 General Legal Background).
Use of AI by Digital Platform Companies in Norway
AI technologies are used by national and international digital platform companies operating in Norway. For example, such companies use AI to predict demand for products and services and to match service providers with customers.
Employment and Regulatory Concerns
While there are employment and regulatory concerns associated with digital platform companies in general, including concerns about the effects on competition, those concerns are not particularly reinforced by their use of AI technologies.
Use of AI in Financial Services
Financial services companies in Norway, including insurance companies and credit scoring companies, probably rely on AI technologies to some extent in their operations. However, there is limited publicly available information regarding their current use of AI systems.
Under the regulatory sandbox scheme provided by the Norwegian Data Protection Authority, a project was conducted regarding the use of AI based on federated machine learning to increase the precision of electronic monitoring systems designed to flag transactions that are suspicious from an anti-money laundering perspective. The project addressed the problem of training algorithms on data concerning as many transactions as possible without sharing personal data across the participating banks.
Regulatory Concerns
Concerns related to privacy and non-discrimination are very important in relation to the use of AI by financial service providers. The flagging of transactions made by individuals could lead to increased scrutiny of the personal data of certain persons. There is also the risk of biases that cause persons from certain groups to be more likely to be flagged as “risky” than other persons, eg, in relation to AML concerns or in relation to the risk of defaulting on a loan.
The Norwegian Financial Supervisory Authority (Finanstilsynet) has communicated that it will prioritise supervisory activities related to the use of AI technologies in the financial services sector in the future. In particular, the Authority has expressed concern about insurance companies’ increasing use of AI in their operations. The Authority has stressed the importance of carrying out robust risk assessments before using AI and of ensuring transparent pricing for customers.
AI in Healthcare
Healthcare is probably the most advanced sector in Norway in terms of developing and testing AI technologies based on machine learning, deep neural networks and natural language processing. AI systems are also used in practice for clinical purposes, including radiological applications to diagnose patients based on images.
Regulatory Adaptations
In the Norwegian societal and professional discourse around innovation in the health sector, the issue of gaining access to relevant data for AI training purposes has often been highlighted as a barrier to innovation. The Norwegian Health Personnel Act was amended in 2021 to ensure that health data can be used for AI development and deployment purposes (see 3.1 General Approach to AI-Specific Legislation). It is expected that a more comprehensive revision of jurisdictional Norwegian health law will be initiated to facilitate the development and use of AI in the future.
AI Systems as Medical Devices
AI systems intended to support or automate clinical decision-making will generally constitute standalone medical devices under the EU Medical Device Regulation. When the AI Act becomes applicable to high-risk AI systems, AI systems that constitute medical devices will be subject to a joint certification and conformity assessment procedure covering the requirements that follow from both regulations. Once deployed or placed on the market, the use of AI medical devices is regulated by jurisdictional law, including the Norwegian Regulation on the Handling of Medical Devices (Forskrift 29. november 2013 nr. 1373 om håndtering av medisinsk utstyr).
A Norwegian act facilitating the testing of autonomous vehicles was enacted in 2017 (Lov 15. desember 2017 nr. 112 om utprøving av selvkjørende kjøretøy). The purpose of the act is to facilitate the testing of autonomous vehicles within a framework that takes particular account of road safety and privacy considerations. Testing is to take place on a step-by-step basis, considering the maturity of the technology and aiming to uncover the impact of autonomous vehicles on road safety, efficiency in traffic management, mobility and the environment.
Manufacturing of products is often a data-intensive activity and therefore well-suited for the use of AI to analyse, improve and automate processes. The use of AI in the manufacturing of products is not specifically addressed by Norwegian law, nor is the use of AI for such purposes subject to particular requirements under the EU AI Act. The general requirements for manufacturers of different types of products are nonetheless applicable. In accordance with product safety legislation, both at the EU and national levels, manufacturers must ensure that the use of AI in the manufacturing process aligns with their obligations pertaining to risk management, quality assurance and product testing.
In areas where scientific rigour is required in the manufacturing process, such as in the development of drugs and medical devices, manufacturers should consider how the scientific reliability of AI-based processes can be ensured.
Professional service providers such as lawyers, health personnel, financial advisors, real estate brokers, etc, must ensure that their use of AI technologies does not conflict with their professional obligations, including relevant codes of conduct or ethical frameworks. Currently, Norwegian law does not specifically prohibit the use of AI by professional service providers. It is expected that professional codes of conduct will be developed for different categories of professional service providers, to ensure responsible use of AI systems.
General Starting Points for Data
Under Norwegian law, the general rule is that no one can hold property rights to data per se.
As a starting point, the individual or entity that gathers and controls data may therefore freely exploit the data for any purpose, except for personal data (see 8.2 Data Protection and Generative AI). This may include exploitation for commercial profit, improving products and services, and sharing or licensing data to third parties.
Restrictions That May Apply
Although one cannot obtain property rights to data per se, legal restrictions on the use of data may nonetheless apply. Such restrictions may follow from intellectual property rights (typically copyrights, patents and/or database protection) in the original material from which data is collected, trade secret protection, contractual restrictions such as confidentiality obligations or limited rights to use (licences), or restrictions in national legislation such as security acts.
Certain restrictions will apply to entities that gather or control data in their capacity as providers of connected products (ie, internet of things) or data processing service providers pursuant to the EU Data Act.
The provider of an AI system may also influence the protection of the input and output of generative AI systems through its terms and conditions, which may include licence terms.
Trade secrets have statutory protection under the Norwegian Trade Secrets Act (Lov 27. mars 2020 nr. 15 om vern av forretningshemmeligheter), provided that the trade secrets constitute information which:
- is secret, in the sense that it is not generally known among, or readily accessible to, persons who normally deal with the kind of information in question;
- has commercial value because it is secret; and
- has been subject to reasonable steps by the holder to keep it secret.
The various components of an AI system, such as datasets, algorithms and models, can be kept secret and obtain the status of trade secrets under the Trade Secrets Act.
For contractual purposes, the contracting parties may define what shall be deemed as trade secrets for the purposes of the contractual relationship. To this effect, contracts should be drafted carefully and based on sound knowledge of the different aspects of AI technologies that stakeholders might want to protect as trade secrets.
Users of generative AI models may worry about infringing the rights of third parties when using outputs generated by generative AI models. There may also be concern about potential infringements of third-party rights during the training of such models. The use of copyrighted materials for training purposes may in principle constitute an interference with the rights of the copyright holder. However, issues concerning the use of copyright-protected works for AI training purposes have not been authoritatively decided by Norwegian courts. Stakeholders should monitor legislative developments, case law and guidance in this area, as it seems likely that the scope of intellectual property protection in the context of AI-generated works will be clarified in the future.
When AI models are created and used based on OpenAI technology, the outputs generated by the models will often be the result of several factors, including the pre-trained model from OpenAI, any local data used for retraining purposes, system configurations set by the business’s own personnel, and the inputs provided by users. If valuable works are generated, questions may arise regarding the appropriate allocation of rights to such works. Terms and conditions from the vendor should be carefully reviewed to ensure that they are aligned with user expectations.
Although the Norwegian Competition Authority (Konkurransetilsynet) has not issued any formal strategies or white papers on AI, the Authority stated its key concerns in an opinion piece published in 2024. The Authority is concerned that new opportunities for the rapid sharing of massive amounts of market-sensitive information could make it possible to co-ordinate prices in new ways. In particular, the use of self-learning pricing algorithms can lead to harmful price co-ordination. The Authority is also concerned that operators with access to large amounts of consumer data can use such data to develop AI systems capable of creating highly individualised products and services, which could in turn give the companies with access to the relevant datasets a significant competitive advantage within their market.
Existing and upcoming cybersecurity legislation in Norway (including EU legislation) will in many cases be applicable to AI technologies, users and providers. Cybersecurity measures are required of entities that are subject to the Norwegian Digital Security Act (Digitalsikkerhetsloven) (not yet in force). The Digital Security Act implements the NIS1 Directive relating to network and information systems in businesses that serve important functions in Norwegian society or infrastructure.
The EU AI Act requires that high-risk AI systems must have an appropriate level of robustness and cybersecurity (Article 15). Moreover, the forthcoming EU Cyber Resilience Act requires that hardware and software providers implement certain cybersecurity measures as a matter of product design more generally.
The environmental impact of AI technologies is part of the general AI discourse in Norway, but there are no particular directives or documents guiding companies in how they should deal with environmental aspects of AI technologies. Public authorities are generally required to consider the environmental impact of their decisions, including procurements, which means that they have to consider, eg, the necessity of having an AI system developed or the difference in impact between different AI technologies. ESG reporting requirements exist pursuant to EU law applicable in Norway, and AI systems may, in practice, be used to assist with the reporting process.
Compliance with Guidelines from Supervisory Authorities
Guidance on the lawful and responsible use of AI in various sectors has been issued by supervisory authorities in Norway, see 3.3 Jurisdictional Directives and 5. AI Regulatory Oversight.
Compliance with Technical Standards and Harmonised Standards
Compliance with requirements applicable to AI systems, including those found in the forthcoming EU AI Act, should in many cases be achieved through adherence to technical standards and harmonised standards. Harmonised standards are standards that have been endorsed by the European Commission as a means of complying with legal requirements set out in EU law. Harmonised standards will play a significant role in facilitating compliance with the AI Act.
Preventive Risk and Impact Assessments
To ensure compliance, providers and users of AI systems should conduct preventive risk and impact assessments before initiating the development or deployment of AI systems. Such assessments should include the health and safety aspects of the technology, as well as the potential impact on fundamental rights.
Dronning Mauds gate 11
0250 Oslo
Norway
+47 228 275 00
oslo@wr.no
www.wr.no