Artificial Intelligence 2024

Last Updated May 28, 2024

Denmark

Law and Practice

Authors



Bird & Bird is Denmark’s leading international law firm in the areas of technology and digitalisation. The firm’s technology and data practice in Denmark is made up of 20 skilled lawyers, making it one of the largest teams in the field in Denmark and the Nordics. Bird & Bird has a reputation for providing sophisticated, pragmatic advice to companies that are carving the world’s digital future. Bird & Bird helps with all aspects of deploying and developing new technologies, such as generative AI, the latest developments with software and data, and key regulatory considerations impacting businesses that create and harness technology. With more than 1,700 lawyers and legal practitioners across a worldwide network of 32 offices, Bird & Bird delivers expertise across a full range of legal services, operating as one truly international partnership with shared goals, accounting and profit pool. The firm’s commitment is to provide clients with advice from the right lawyers, in the right locations.

Denmark has not at this time implemented any specific regulation governing artificial intelligence (AI). However, where applicable, AI will naturally be subject to Danish law, which means that AI must in many cases be developed and used in accordance with Danish legal principles and enacted legislation. This applies across a number of legal areas; only a few of these are highlighted below. In a Danish context, both authorities and the private sector have focused primarily on the following.

Data Protection

The Danish Data Protection Act (Lovbekendtgørelse 2024-03-08 nr. 289) supplements the General Data Protection Regulation (GDPR). The Danish Data Protection Agency has issued a guide on public authorities’ use of personal data throughout the life cycle of an AI system (see 3.3 Jurisdictional Directives).

Intellectual Property Law and Trade Secrets

The use of AI is also subject to the Danish regulation of intellectual property rights, including but not limited to the Danish Copyright Act (Lovbekendtgørelse 2023-08-20 nr. 1093) and the Danish Trade Secrets Act (Lov 2018-04-25 nr. 309 om forretningshemmeligheder). For example, AI systems may generate data constituting a trade secret under the Danish Trade Secrets Act, which requires that reasonable protective measures be in place around the AI system to maintain the necessary level of confidentiality.

Employment Law

Employers must ensure that any use of AI is in accordance with Danish employment legislation and applicable collective bargaining agreements; the latter, in particular, is characteristic of Danish law. This is relevant if a company intends to use AI tools as part of its recruitment process – eg, CV sorting tools. Any AI tool used in recruitment must not discriminate on the basis of criteria prohibited by the Danish Employment Non-discrimination Act (Lovbekendtgørelse 2011-06-08 nr. 645).

Although Denmark has not (yet) adopted any specific AI regulation, various non-binding initiatives have been launched concerning the regulation and use of AI, inter alia focusing on ensuring responsible and ethical development and use of AI. This is further detailed in 2.2 Involvement of Governments in AI Innovation.

Generative AI

Following the widespread adoption of chatbots powered by large language models (LLMs), particularly ChatGPT, Danish businesses across various industries are increasingly deploying this technology. For established organisations, the focus is on secure, private deployments. These companies leverage private cloud tenants to whitelist and utilise LLMs (such as ChatGPT and M365 Copilot) within their own secure environments, thereby ensuring data confidentiality.

Initially, some organisations expressed scepticism and concerns about integrating AI solutions into daily operations. However, this hesitancy has generally been replaced by a willingness to embrace AI’s potential, recognising it as a crucial competitive advantage. It is important to note that the adoption rate varies significantly across industries, and even among individual companies within the same sector.

Predictive AI

In recent years, while generative AI has gained more public awareness and significant traction in various industries, predictive AI has been playing an increasingly important role. Companies are less public about their use of predictive AI systems due to competitive considerations, immature governance setups, uncertainties regarding liability and the need to protect their commercial interests. 

For example, in the Danish medical industry and healthcare system, predictive AI is already being utilised or at least actively explored. One application is in medical image analysis, where it helps radiologists prioritise critical cases.

In February 2024, all political parties in the Danish Parliament agreed on a new national digitalisation strategy for 2024-2027. The strategy consists of 29 initiatives, several of which focus on AI. These include ensuring a “[r]esponsible and strong foundation for utilising artificial intelligence” and potentially investing in and training a Danish language model.

Regulatory Sandbox

Additionally, funds have been allocated to establish a regulatory sandbox aimed at providing companies and public authorities with guidance on the GDPR when developing or using AI solutions – eg, by providing free access to relevant expertise. The regulatory sandbox is a collaboration between the Danish Data Protection Agency and the Agency for Digital Government.

The regulatory sandbox aims to support innovation and the use of AI solutions and to ensure a swifter process from development to operation of AI systems, including reducing any uncertainties surrounding the regulatory framework of such AI systems.

A project under the regulatory sandbox is expected to last between three and six months.

The Danish Approach

As detailed in 5.2 Technology Definitions, the Danish approach tends to align closely with EU legislative texts, making the specific Danish legislative stance best described as agnostic, even if the Danish debate does not always reflect this.

EU Artificial Intelligence Act

It is unclear how the Danish opt-out on EU justice and home affairs will affect the AI Act, given it is a regulation directly applicable across EU member states. Unless Danish legislators decide to implement the parts of the regulation covered by the Danish opt-out, the implementation of specific AI regulations related to the AI Act is not expected. This issue is briefly discussed in 3.4.2 Jurisdictional Conflicts.

EU AI Directives

The pending EU AI Liability Directive and the EU Product Liability Directive will require implementation in Denmark. As the directives are pending finalisation, it is too early to say anything specific about Denmark’s implementation.

However, it will be interesting to see how the EU AI directives will influence the Danish legal landscape, particularly concerning fault-based liability, such as the new rules on the burden of proof, including a presumption of a causal link between the fault and the AI system’s output. In Denmark, fault-based liability and damages are in many cases determined on a non-statutory basis, which may have to change in relation to AI.

In the period leading up to the application of the EU AI Act and the finalisation of the aforementioned directives, Denmark’s approach to regulating AI is expected to be clarified further.

As described in 3.1 General Approach to AI-Specific Legislation, no AI-specific legislation has been enacted in Denmark. However, various public authorities have issued non-binding White Papers or guidelines with the aim of providing non-binding guidance to companies within their sector or domain (see 3.3 Jurisdictional Directives).

Guidelines Issued by Public Authorities

The Danish Financial Supervisory Authority (DFSA) and the Danish Data Protection Agency (DDPA) have published guidance in relation to the use of AI. The White Paper issued by the DFSA focuses on providing tools and inspiration for companies within the financial sector regarding data ethics when applying AI. The White Paper should merely be seen as guidance and does not impose new requirements on companies.

Danish Data Protection Agency Guidelines

The DDPA has published guidelines for other public authorities, specifically geared towards municipalities and the Regions (administrative units), on handling AI technology in accordance with applicable data protection legislation.

The guidance focuses on ensuring compliance with the data protection regulation throughout the life cycle of an AI system, meaning from the development phase until the operation phase.

Different Phases

The guidelines distinguish between the public authorities’ use or development of an AI system in the following three phases:

  • the design/clarification phase;
  • the development/training phase; and
  • the operation phase.

The guidelines outline that it is essential to consider which phase you are in and how personal data is incorporated into the AI system to ensure compliance with the data protection rules, as – in particular – the purpose, lawfulness and legal basis can change depending on the phase.

Supporting the Development of AI While Ensuring Compliance

As outlined in previous sections, a key focus is to prevent the guidelines from becoming an obstacle to the development of relevant AI systems, while continuously ensuring that such systems comply with the data protection regulations.

As briefly touched on in 3.1 General Approach to AI-Specific Legislation, Denmark is awaiting the entry into application of the AI Act, as well as the EU AI Liability Directive and the EU Directive on liability for defective products.

Generally, Denmark welcomes the implementation of the AI Act and has not taken local steps that might duplicate or conflict with it. The Danish digitalisation strategy and the establishment of the regulatory sandbox are signs that Danish legislators are eagerly awaiting the harmonisation brought by the AI Act.

Due to its opt-out on justice and home affairs, Denmark has reservations regarding EU law in areas such as criminal justice and police operations. This means that the parts of the AI Act regulating law enforcement authorities’ use of facial recognition – including biometric categorisation systems, predictive policing and remote biometric identification systems – will not apply in Denmark.

The use of AI-based facial recognition by public authorities, including the police, is becoming increasingly debated, as also discussed in 11.3 Facial Recognition and Biometrics.

There is no applicable information in this jurisdiction.

Implementation of DSM Directive Article 4

Denmark has made amendments to its copyright law in recent years to accommodate developments in AI technology, particularly in data mining. One notable development is the implementation of Article 4 of the DSM Directive on exceptions and limitations for text and data mining into Sections 11b and 11c of the Danish Copyright Act.

Exceptions for Text and Data Mining

Previously, data mining could potentially infringe copyright, as it involves reproducing and analysing copyrighted material without permission from the creator. However, with the introduction of Sections 11b and 11c, Denmark has recognised the importance of data mining and now enables such activities through exceptions to copyright law, provided lawful access to the copyrighted material has been obtained in the first place. As a general rule under the new sections, authors cannot oppose the use of their works for text and data mining.

Reservation for Text and Data Mining

While text and data mining can be used for research and AI development purposes without the need for prior permission, right holders have the option to prohibit commercial text and data mining by stating so in a machine-readable manner, including in metadata or in the terms and conditions for the use of a website or service. In such cases, text and data mining may only lawfully take place under an agreement with – and possibly against payment to – the right holders.
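
Neither the Danish Copyright Act nor the DSM Directive prescribes a single technical format for such a reservation. One machine-readable signal that crawlers commonly honour in practice is a robots.txt exclusion. The following Python sketch is illustrative only – the crawler name “ExampleTDMBot” and the URLs are hypothetical – and shows how a text and data mining crawler might check for such a reservation before collecting content.

    # Illustrative only: checking a robots.txt-style machine-readable
    # reservation before text and data mining. The crawler name
    # "ExampleTDMBot" and the URLs are hypothetical.
    from urllib.robotparser import RobotFileParser

    def tdm_allowed(page_url: str, robots_url: str,
                    crawler: str = "ExampleTDMBot") -> bool:
        # True if robots.txt does not reserve the page against this crawler.
        parser = RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetch and parse the site's robots.txt
        return parser.can_fetch(crawler, page_url)

    if tdm_allowed("https://example.com/articles/1",
                   "https://example.com/robots.txt"):
        print("No machine-readable reservation found for this crawler.")
    else:
        print("Reservation found - an agreement with the right holder is needed.")

A robots.txt check is only one possible signal; as noted above, a reservation may equally be expressed in metadata or in the terms and conditions of a website or service, so a compliant miner would need to check those sources as well.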

Proactive Approach

The amendments to the Danish Copyright Act demonstrate a proactive approach to fostering AI technology through data mining while still upholding the principles of copyright protection and the rights of authors.

See 3.1 General Approach to AI-Specific Legislation.

Currently, the courts in Denmark have not issued any judicial decisions with respect to generative AI and intellectual property rights. However, the DDPA has issued opinions and decisions regarding public authorities' use of AI systems (see 7.2 Judicial Decisions).

Reasons for Lack of Judicial Decisions

It is difficult to provide any definitive answer as to the lack of judicial rulings; however, the following elements may be relevant (although not exhaustive):

  • Missing AI-specific legislation. Denmark is still awaiting AI-specific regulation; there have therefore been no AI-specific provisions giving rise to conflicting interpretations or understandings.
  • Early days. The lack of case law could be due to the relatively early stage of widespread use of well-developed AI solutions or systems in the private sector. Industries are still finding their footing in the use of AI and have not yet engaged in downstream commercial disputes or legal actions.

No relevant judicial decisions have yet been issued in Denmark; however, see 5.2 Technology Definitions for more information on the Danish approach to judicial matters.

Implementation and oversight in Denmark of other areas of law related to the EU digitisation agenda (eg, NIS I) have been carried out in accordance with a well-established sector principle. This means that responsibilities are divided among authorities or agencies based on sector, rather than having the entire legislation overseen by one or two agencies.

However, for AI, the Danish Agency for Digitisation has, as of 11 April 2024, been appointed to act as the national co-ordinating supervisory authority for the EU AI Act. The Danish Agency for Digitisation has historically been responsible for ensuring stability in Danish public IT projects and systems and, more recently, for digital infrastructure directed at citizens.

Danish authorities and agencies do not use a single definition of AI technology. Where their guidance has needed to define AI, it has often pointed to definitions from the AI Act or to earlier discussions in the EU legislative process.

Danish Tendency to Align with EU Law

There is a growing tendency in Danish lawmaking to adopt technical definitions from EU directives directly or with minimal amendments. Overall, Denmark seeks to implement regulations closely aligned with the original EU texts, often looking to the EU for clarity on definitions rather than expanding or interpreting them nationally. Therefore, businesses operating under Danish jurisdiction would benefit from aligning themselves with EU definitions, whether dealing with a single technology or multiple technologies.

As stated in 5.1 Technology Definitions, agencies in Denmark focus narrowly on their area of responsibility.

The Danish Agency for Digitisation has historically focused on implementing digital infrastructure. Co-ordinating uniform messaging and enforcement of AI regulations across other authorities and agencies will be a key challenge. However, the Danish Agency for Digitisation is generally expected to have a constructive attitude towards AI, viewing it as a benefit to Danish society.

For more details, see 3.2 Jurisdictional Law and 3.3 Jurisdictional Directives.

No notable enforcement actions directly aimed at AI have yet been taken in Denmark. As outlined in 5.3 Regulatory Objectives and 3. AI-Specific Legislation and Directives, Danish agencies and authorities are keenly interested in the use of AI technology but are currently focused on providing guidance.

The lack of notable enforcement action is also tied to the status of the AI Act, with Danish authorities seemingly preferring not to introduce substantial local legislation ahead of the Act, as discussed in 5.2 Technology Definitions on the Danish legislator’s approach.

Certain Danish standards are set for IT security and responsible use of data, such as the D-mark (D-mærket), which is an industry initiative from, among others, the Danish Industry Foundation in collaboration with the Confederation of Danish Industry and the Danish Chamber of Commerce.

This standard does not yet include specifics related to AI, and no other significant Danish standards have been established as yet. However, the expectation is that the industry will update the D-mark or set a similar standard in due course.

While government authorities have provided guidance (see 3. AI-Specific Legislation and Directives and 5. AI Regulatory Oversight), they have not set any standards, nor are there apparent plans to do so.

International standards such as ISO and IEC will most likely make an important contribution to shaping local Danish standards in an operational sense, where legislative measures from the EU or from government authorities and agencies do not set out norms in detail.

Denmark’s cautious approach to lawmaking in cybersecurity, technology and related fields where the EU has legislated (see 5.2 Technology Definitions) has the implicit effect that international standards have become indirectly significant for many Danish industry actors seeking alignment with other commercial entities.

Currently, AI in Denmark primarily serves as a supportive measure for expert assessments rather than producing administrative decisions. Denmark has for years aimed to automate and increase the efficiency of public administration, particularly within the taxation area.

Property Valuation System

Most recently, the Danish Tax Agency’s roll-out of the new property valuation system aimed to automate the calculation of property valuations and property value tax, a move which has been much debated.

Issues when Utilising AI in Public Administration

In addition to other applicable legislation, such as the GDPR, the public authorities must adhere to the Danish Public Administration Act (Lovbekendtgørelse 2014-04-22 nr. 433), including good administration practices and legal doctrines, when using AI in their administrative decisions, for example, as part of their expert assessments.

The leading opinion is that the principles of administrative law are technology-neutral, which in some cases imposes high requirements on the use of AI in public administration. This includes compliance with the principles described below:

  • The misuse of power doctrine (magtfordrejningslæren): This principle relates to objective administration, meaning that an authority may not take irrelevant considerations into account when making discretionary decisions and, conversely, is obliged to consider all relevant, objective considerations when exercising its discretion.
  • The inquisitorial procedure (officialprincippet): This principle entails that a public authority is obligated to obtain all relevant information in a case before making a decision.

According to these principles, it must be possible to document that an AI solution has included all relevant and necessary information and has only considered fair and objective factors in its assessment.

There is no case law regarding the use of AI. However, the DDPA has issued an opinion of particular relevance regarding public authorities’ use of AI.

Use of AI-based Profiling Tool

After a request from the Danish Agency for Labour Market and Recruitment (Styrelsen for Arbejdsmarked og Rekruttering – STAR), the DDPA issued an opinion regarding the municipalities’ legal basis to use an AI profiling tool (ASTA) designed to predict the likelihood of a citizen becoming long-term unemployed.

ASTA was not developed to issue automated decisions but merely to support decision-making, providing recommendations for relevant initiatives to social officers.

Legal Basis for Processing Activities When Using AI

In its opinion, the DDPA outlined that the requirements for the clarity of the necessary legal basis for processing personal data depend on how intrusive the processing activity in question is for the data subject. If the processing activities are deemed intrusive, the requirements for the clarity of the legal basis are correspondingly stricter, and vice versa. In the specific case, it was assessed that the use of ASTA constituted an intrusive processing activity, necessitating a high level of clarity regarding the legal basis for the processing activities.

In general, and as elaborated in its guidance, the DDPA highlighted that the mere use of AI solutions by public authorities should not be deemed intrusive. However, citizen-focused use of such AI solutions often impacts citizens’ life situations, meaning the AI solution’s processing of personal data will typically be considered intrusive.

The Danish Centre for Cyber Security (Center for Cybersikkerhed – CFCS) published a new threat assessment in March 2024 that described how hackers may misuse generative AI.

The updated assessment focuses on how hackers use generative AI to create phishing emails or to develop parts of malicious code. Currently, it is unclear to what extent the technology is being misused, but the CFCS highlights its significant negative potential.

Despite the new threats emerging since the widespread availability of generative AI, the CFCS has not changed its overall assessment of the cyber threat towards Denmark.

Generative AI and Issues Raised

One of the main issues is the lack of transparency in the decision-making process of the AI, making it difficult to identify and correct errors. Additionally, the use of generative AI to create realistic deepfakes raises questions about privacy and cybersecurity.

Addressing the Issues

In addition to those mentioned above, Danish policymakers are taking various steps to address these issues, including:

  • The Danish Agency for Digitisation has published guides for public authorities and businesses on the responsible use of generative AI.
  • A citizen-focused guide is expected to be released in spring 2024.
  • A guide on data ethics for algorithm development and use has been published.

IP Protection for AI Assets

In the AI business, it is crucial to understand how to achieve IP protection across all processes, as know-how and costs accumulate not only in the final product but also in the creation process. Assets in the AI process that can be IP protected include AI models, training data, input (prompts) and output. However, at present, works that AI has created on its own fall outside the protection of Danish patent and copyright law. Furthermore, the terms and conditions of the AI tool provider can influence the protection of assets with respect to the input and output of the generative AI tool.

Potential IP Infringements

There is a risk of IP infringements under Danish copyright law with respect to the models or the training, input or output data. If AI-generated works are granted copyright protection, there are concerns that the number of protected works may explode due to AI’s high productivity. Conversely, if no rights are given to AI-generated works, a situation may arise where free-riding becomes frequent and third parties can freely use AI outputs, even though developing the AI required work and cost. This could result in a loss of motivation for AI research.

For privacy see 8.3 Data Protection and Generative AI.

Protecting IP in the Development of Generative AI Models

The development of AI models involves the collection and use of large sets of data used for training the AI model. This data is often protected by copyright law, which means that the collection of data for training AI must comply with Danish copyright laws. Input and output data generated by AI tools can also be protected by IP rights, depending on their nature and originality. Trademark infringements can also occur if AI tools use brand names and logos, notably in image-generating models.

Influence of AI Tool Providers’ Terms and Conditions on Asset Protection

The terms and conditions of AI tool providers can have an impact on the protection of IP assets. For example, some AI tool providers may require users to grant them a licence to use their data for various purposes, which could potentially infringe on the users’ IP rights. Users should – as with other technologies – always carefully review the terms and conditions of the relevant AI tool.

IP Infringements and Applicable Law

IP infringements can occur in the AI process, particularly in the collection of training data. While Danish copyright law generally protects the exclusive right of the author to dispose of their works, Denmark has recently recognised the importance of data mining and allows for exceptions to copyright law for text and data mining activities. As discussed in 3.6 Data, Information or Content Laws, while there are limitations on commercial text and data mining, the law generally allows for the use of works for text and data mining purposes without the need for prior permission from the author. This is a significant change that reflects the impact AI has on the legal framework.

The intersection of data protection and generative AI raises concerns about individuals’ rights and the appropriate use of personal data. Denmark’s focus has been on the right to rectification and erasure of personal data. Purpose limitation and data minimisation are crucial in complying with applicable Danish laws, as they strike a balance between AI development and protecting individuals’ privacy.

The right to rectification may involve correcting inaccuracies in the output or ensuring that future iterations of the AI model no longer produce false claims, which, in practical terms, is extremely difficult.

The right to erasure, also known as the “right to be forgotten”, enables data subjects to request the deletion of their personal data. However, applying this right in the context of generative AI can be complex. Deleting the entire AI model may not be necessary, especially if the model generates outputs unrelated to individuals. Instead, a more targeted approach may be required, such as deleting or anonymising specific personal data within the AI model.

With regards to purpose limitation, generative AI models should be designed with clear purposes, ensuring data subjects are aware of how their personal data will be used. Transparency is essential to maintain trust and protect individuals’ rights, as also emphasised by the DDPA in its October 2023 guidelines on the development and use of AI by public authorities (see 3.3 Jurisdictional Directives). In particular, the guidelines advise public authorities to consider several factors before starting to develop AI models, such as the legal basis for processing, the duty to inform data subjects about the processing of their personal data, and the need to conduct risk assessments.

In the context of generative AI, data minimisation is especially important to prevent excessive collection and retention of personal data. Techniques such as data anonymisation and aggregation can be employed to minimise reliance on identifiable personal data while achieving desired functionalities in AI models.
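
As a minimal sketch of the minimisation techniques mentioned above – written in Python for illustration with hypothetical field names, and not a compliance tool – direct identifiers can be replaced with keyed pseudonyms and exact values aggregated into coarser bands before records are used for training.

    # Illustrative sketch of data minimisation before AI training:
    # keyed pseudonymisation of a direct identifier plus aggregation
    # of an exact age into a band. Field names are hypothetical.
    import hashlib
    import hmac

    SECRET_KEY = b"rotate-and-store-separately"  # held apart from the dataset

    def pseudonymise(identifier: str) -> str:
        # Replace a direct identifier with a keyed, non-reversible token.
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

    def age_band(age: int) -> str:
        # Aggregate an exact age into a ten-year band.
        lower = (age // 10) * 10
        return f"{lower}-{lower + 9}"

    record = {"name": "Jens Jensen", "age": 47, "outcome": "approved"}
    minimised = {
        "subject_token": pseudonymise(record["name"]),
        "age_band": age_band(record["age"]),  # "40-49"
        "outcome": record["outcome"],
    }
    print(minimised)  # no direct identifier or exact age retained

It should be noted that pseudonymised data remains personal data under the GDPR, so techniques of this kind reduce, but do not remove, the controller’s obligations.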

Given the rapid development of AI technologies, the DDPA has prioritised AI as a focus area for its supervisory activities in 2024. Consequently, further guidelines and initiatives from the agency can be anticipated in the near future.

Regulation of AI in Law by Local Organisations

The use of AI in law is currently subject to regulation by local organisations such as the Danish Bar and Law Society, which are tasked with ensuring that AI and its use in the legal field adhere to ethical and professional standards.

Establishment of a Working Group

In 2023, the Association of Danish Law Firms established a working group to identify and describe the use of AI in the legal profession. The aim of this group is to highlight the potential challenges that may arise from the use of AI in law and to propose strategies to address them.

AI in Litigation

Danish legal practitioners are increasingly relying on AI-driven tools for tasks such as document review and legal searches, which offer automated support services, facilitating more efficient and cost-effective case preparation. However, given the novelty of AI in litigation, no specific rules or regulations currently exist.

Ethical Concerns

The use of AI in law raises significant ethical concerns, particularly with regards to the potential reduction in human judgment and accountability. This could threaten core values of the legal profession, including fairness and justice. To address this, organisations such as the Danish Bar and Law Society must continue to monitor and regulate the use of AI in law to maintain ethical and professional standards.

Liability for Personal Injury and Commercial Harm Resulting from AI-Enabled Technologies

AI-enabled technologies have the potential to cause personal injury or commercial harm, raising questions about liability and responsibility. In Denmark, there is currently no specific regulation; however, see 3.1 General Approach to AI-Specific Legislation for future legislation on liability for AI-related injuries.

Theories of Liability and Requirements for Imposition of Liability

Theories of liability for personal injury or commercial harm resulting from AI-enabled technologies include product liability, negligence and strict liability. To impose liability, it must be shown that the AI technology caused the harm, that the operator owed a duty of care and that this duty was breached.

Role of Human Guidance and Allocation of Liability

The role of human guidance is also important in determining liability resulting from AI. An operator who is merely assisted by AI-enabled technology retains greater influence than an operator whose function has been completely replaced by an AI system. Hence, there is a stronger presumption of liability for the operator who is merely assisted by AI-enabled technology.

Insurance

Insurance plays a critical role in managing the risks associated with AI-enabled technologies. It is essential to determine the scope of coverage and exclusions in insurance policies for AI-related claims. At this moment, the discussion of insurance coverage for AI-enabled technologies remains purely theoretical, as there is no publicly available information or practical industry discussion in Denmark.

The European Commission proposed an AI Liability Directive in September 2022 (see 3.2 Jurisdictional Law).

Algorithmic Bias in the Public and Private Sector

In the public sector, biased algorithms could lead to unequal access to healthcare services, misdiagnosis or inappropriate treatment recommendations. Similarly, in the private sector, algorithmic bias can have severe consequences; for example, biased loan algorithms may disproportionately deny credit to certain groups. Legislators and authorities have historically been assertive in safeguarding citizen and consumer rights, and it is to be expected that this approach will remain the same in respect of any use of AI.

Liability Risks

Although Denmark has not yet implemented specific regulations targeting algorithmic bias, the DDPA actively monitors developments in AI and provides guidance to organisations on complying with existing laws. Furthermore, the Danish Agency for Digitisation has taken a significant step by creating a strategy for AI, setting a roadmap for Denmark, as well as publishing guidelines on responsible, non-biased use of generative AI.

Risks

AI models typically rely on large data sets to train and improve their algorithms, thus the principle of data minimisation poses significant challenges in the context of AI technology. Companies may face increased risks of using data for unintended purposes, processing information beyond the scope of the data collection, and retaining data for longer than necessary. Striking a balance between data minimisation and the effectiveness of AI algorithms is a complex challenge. To address this, companies need to ensure that they have a legitimate basis for collecting and processing personal data. However, identifying and applying a legal basis is complex, as has also been highlighted by authorities in other countries, for example in relation to ChatGPT in Italy.

Possibilities

On the other hand, AI technology also offers several advantages in terms of personal data protection. It can serve as a privacy-enhancing technology that helps companies meet their data protection obligations. For example, AI can generate synthetic data that mimics real-world data, helping to train machine learning algorithms without exposing actual personal data. Synthetic data can also help mitigate algorithmic bias by using fair synthetic data sets that are manipulated to avoid, for example, gender or racial discrimination. AI can also provide a more robust defence against cyber threats and mitigate data breaches by layering security measures with advanced threat detection, pattern analysis and faster response times.
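
As a simple illustration of the synthetic data idea – a naive sketch assuming a purely numerical dataset, not a production-grade generator – per-column distributions can be fitted to real data and new records sampled from them, so that a model can be trained without exposing the original personal data.

    # Naive synthetic data sketch: fit independent per-column Gaussians
    # to a numerical dataset and sample new rows from them. Real-world
    # synthesis needs stronger methods plus a re-identification risk
    # assessment; the columns here are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Stand-in for real personal data: columns = (age, income in DKK, risk score)
    real = np.column_stack([
        rng.normal(45, 12, size=500),
        rng.normal(420_000, 90_000, size=500),
        rng.normal(0.3, 0.1, size=500),
    ])

    means = real.mean(axis=0)
    stds = real.std(axis=0)

    # Sample synthetic rows from the fitted marginal distributions.
    synthetic = rng.normal(means, stds, size=(500, real.shape[1]))

    print("real means:     ", np.round(means, 2))
    print("synthetic means:", np.round(synthetic.mean(axis=0), 2))

Because this naive approach ignores correlations between columns, dedicated synthetic data tools – and a re-identification risk assessment – would be needed in practice.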

Under the GDPR, facial recognition relies on the processing of biometric data, which constitutes a special category of personal data. The GDPR generally prohibits the processing of biometric data unless explicit consent is obtained or another legitimate justification under the GDPR or other legislation applies. Companies must therefore identify a legal basis under the GDPR and implement strong security measures to protect the biometric data they collect. To mitigate the associated risks, companies should conduct thorough risk assessments, carry out regular audits, document their data processing activities and provide clear information to the individuals whose biometric data is collected.

In Denmark, the DDPA has the authority to authorise the processing of biometric data by private organisations if it is necessary for reasons of substantial public interest. For instance, in June 2023, the Danish football club Brøndby IF obtained authorisation to use facial recognition technology during matches, including those held in other stadiums, after the club applied for extended use of its system.

There is an ongoing debate about the use of facial recognition in Denmark. For example, the use of facial recognition by the Danish police in public places has been the subject of recent debate, also in light of the AI Act, which prohibits real-time facial recognition in public places for law enforcement purposes (see 3.4.2 Jurisdictional Conflicts).

Companies using automated decision-making, including profiling, must comply with the GDPR. According to Article 22 of the GDPR, individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. However, there are exceptions where such processing is necessary for the conclusion or performance of a contract, is authorised by other legislation, or where explicit consent has been obtained from the individual subject to the automated decision-making. If no exception applies, the decision must be made with human intervention. In addition, companies may be required to conduct an assessment of the impact of their automated decision-making processes on the protection of personal data. When automated decision-making involves special categories of data, including biometric data, explicit consent must be obtained or the processing must be necessary for reasons of substantial public interest, as further discussed in 11.3 Facial Recognition and Biometrics.

However, as also highlighted by the DDPA in its guidelines on the development and use of AI by public authorities, and as further discussed in 3.3 Jurisdictional Directives and 8.3 Data Protection and Generative AI, it may be difficult to obtain valid consent for the processing of personal data in the context of complex AI models. Often, there will be a clear imbalance in the relationship between the data subject and the data controller. For example, if the processing of personal data, such as benefit claims, has an impact on the individual's life situation – whether real or perceived – the individual’s consent cannot be considered freely given. In addition, consent must be specific and informed, and it must be possible to manage the withdrawal of consent and stop the processing of personal data. This can be challenging in complex AI models where data is processed in many different ways, as it is crucial for the validity of consent that the individual understands what their data is being used for and can opt out of those purposes. The more data is to be used, including for automated decision-making, the more difficult it will be to meet the conditions for valid consent.

In Denmark, the use of AI technologies, including chatbots, as a replacement for services rendered by natural persons is subject to the GDPR. Articles 13, 14 and 15 of the GDPR set out the transparency rules and require data controllers to inform individuals about the processing of their personal data, including when AI is involved. The specific information to be provided depends on how personal data is collected; ie, data may be collected directly from the data subject, for example through a job application, or the data may be collected through a third party. In both cases, the individual must be informed of the purpose and use of his or her data, as well as of any proposed new use of that data.

Dark Patterns

However, the use of technology to manipulate consumer behaviour or make undisclosed suggestions, commonly known as “dark patterns”, raises concerns, as it makes it difficult for individuals to make informed choices about their personal data. Dark patterns may lead individuals to share personal information without clear intent or to make purchases by mistake. These practices are often considered unfair under the Danish Marketing Practices Act (Markedsføringsloven).

The Digital Services Act also addresses the issue of dark patterns through its provisions prohibiting online platforms from using dark patterns that manipulate users into making decisions that are not in their best interests.

Price algorithms can be an effective tool for companies to set prices, and the Danish Competition and Consumer Authority (Konkurrence- og Forbrugerstyrelsen) has also recognised the need to address issues related to the use of AI in price-setting.

Generally, price algorithms can be categorised into three types:

  • algorithms that compare and provide a price overview of products/product categories and markets or countries;
  • algorithms that monitor products for non-compliance with established pricing rules and recommend prices that comply with such rules; and
  • algorithms that automatically update prices based on internal rules or techniques such as machine learning.

What these algorithms all have in common is that customers usually encounter the same price. In contrast, price discrimination occurs when companies charge different prices to different customers based on factors such as their buying history, willingness to pay, or other characteristics.
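
As an illustration of the third category, the Python sketch below shows a toy rule-based repricer; the floor, cap and thresholds are hypothetical and the example is not drawn from any Danish guidance.

    # Toy rule-based automatic repricing (the third category above).
    # All values and thresholds are hypothetical.
    FLOOR = 80.0     # minimum price covering cost plus margin (DKK)
    CAP = 150.0      # maximum price the business will charge (DKK)
    UNDERCUT = 0.98  # factor placing the price slightly below a competitor

    def reprice(current: float, competitor: float) -> float:
        # Apply a simple internal rule set and keep the result in bounds.
        if competitor < current:
            proposal = competitor * UNDERCUT  # follow the market down
        else:
            proposal = current * 1.02  # drift upwards when there is room
        return round(min(max(proposal, FLOOR), CAP), 2)

    print(reprice(current=120.0, competitor=110.0))  # -> 107.8

Even a simple follow-the-market rule of this kind can, when adopted at scale, contribute to the co-ordinated pricing effects discussed below.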

Moreover, there are concerns with the use of AI in pricing, such as co-ordinated behaviour and price agreements, which can weaken competition and harm consumers. To address these issues, the Danish Competition and Consumer Authority has established the “Center for TECH” to strengthen the enforcement of competition rules and analyse and monitor companies’ use of big data, machine learning, AI, and algorithms.

In a Danish context, contracts between AI customers and suppliers will be key to resolving a number of the issues facing the use of AI technology in a B2B context. Several of these issues are outlined below.

Intellectual Property Rights and Trade Secrets

Ensuring intellectual property rights is crucial in AI contracts; see 15.1 Applicability of Patent and Copyright Law and 15.2 Applicability of Trade Secrecy and Similar Protection for details.

Liability and Accountability

Addressing liability in the context of AI’s autonomous decisions is also key. Contracts should specify the supplier’s liability scope, detailing the due diligence required and the mechanisms for accountability. See 10.1 Theories of Liability for more.

Regulatory Adaptability

Given the dynamic nature of AI regulation, contracts should incorporate terms allowing for periodic revisions. This ensures that the agreement remains compliant with evolving legal and ethical standards, enabling businesses to navigate the fast-changing AI landscape effectively.

Drafting

Contracts need to address the areas mentioned above in more detail, as well as other areas commonly the focus of drafting and negotiations, such as performance and service levels. Within these areas, adequate attention is required for all aspects of AI procurement, including questions such as how the AI trains on data, which data is used, who uses it and what baseline can be established for performance. These are just some of the critical questions that need to be answered and negotiated.

Using advanced algorithms, AI can quickly sift through thousands of applications and identify relevant candidates based on their skills and experience. AI can also help eliminate human bias, ensuring that the focus is solely on the candidate’s qualifications and competencies. AI offers potential benefits such as cost savings, streamlined processes and improved hiring decisions. But AI also poses significant risks, including privacy, non-discrimination and equal treatment concerns.

Therefore, when developing AI-based recruitment and employment tools, employers must ensure that the technology complies with the GDPR, as well as the Danish Act on Prohibition of Discrimination in Employment (Forskelsbehandlingsloven) and the Danish Act on Equal Treatment between Men and Women (Ligebehandlingsloven). Regular audits, transparency in the use of AI in the selection process and corrective action when bias is identified are crucial steps to mitigate potential liability risks.

In Danish workplaces, various technological tools and solutions have emerged to facilitate the evaluation and monitoring of employees.

Such tools can potentially harm employees. Data accuracy and reliability are crucial, as further discussed in 11. Legal Issues With Predictive and Generative AI, and certain systems for emotional monitoring directed at employees are directly prohibited under the AI Act.

In a Danish context, the GDPR imposes strict requirements on the collection, processing and storage of personal data, including for evaluation and monitoring purposes. Employers must be transparent about the purpose and extent of monitoring, as further discussed in 11.5 Transparency, and implement measures to safeguard employee privacy. Failure to comply with these requirements can expose employers to potential liability. It is also important for employers to establish clear policies and guidelines regarding the use of technology for evaluating and monitoring employees.

Companies like GoMore, a Danish mobility operator, have harnessed the power of digital platforms to facilitate private car hire, leasing, and carpooling options. By utilising keyless access and real-time location tracking of available vehicles, GoMore makes it easier for platform users to plan their trips efficiently.

The food delivery sector in Denmark has also witnessed advancements due to digital platforms. Platforms like Wolt employ algorithms to optimise the delivery experience, for example by estimating the time required for restaurants to prepare customers’ food orders and calculating the time it will take for a courier partner to deliver it to the customer.

However, the rise of platform work has also posed regulatory challenges. The EU has taken the lead in addressing these concerns by proposing specific rules for digital labour platforms in a new directive. The directive will require platform workers to be informed about the use of automated monitoring and decision-making systems. It also prohibits the processing of certain types of personal data, such as emotional or psychological states, racial or ethnic origin, migration status, political opinions, religious beliefs, health status, and biometric data, except for data used for authentication purposes.

Denmark’s financial services sector is undergoing a significant digital transformation. Banks and insurance companies are embracing AI technology in relation to, inter alia, credit scoring, customer interfaces and standard AML practices. However, as they delve into the realm of AI, financial institutions are also recognising the need for caution when dealing with vast amounts of customer data. One significant concern is the potential for AI algorithms to make erroneous decisions or predictions due to biases inherent in the data. Biased data can result in discriminatory practices, such as biased loan approvals or pricing as further described in 11.1 Algorithmic Bias.

The DFSA has been proactive in addressing the challenges and risks associated with AI implementation. For example, the DFSA has published recommendations for financial institutions on the use of supervised machine learning, emphasising the importance of using AI responsibly. Furthermore, financial institutions must adhere to the GDPR and the Danish Data Protection Act.

The use of AI in healthcare has been increasing rapidly in recent years in Denmark, providing more efficient and effective care to patients. However, the World Health Organization (WHO) urges caution and has released guidelines on the use of AI in healthcare. These guidelines emphasise the importance of ensuring that AI is used in a responsible and ethical manner to safeguard patient safety and privacy.

One of the potential risks associated with AI in healthcare is algorithmic bias; see 11.1 Algorithmic Bias.

AI is also being increasingly used in software as a medical device and related technologies such as wearables and mobile health apps. While these technologies have the potential to provide more personalised care, they also raise concerns about data privacy and the accuracy and reliability of the AI algorithms used. To mitigate these risks, it is essential to ensure that health data is collected and used in a responsible and ethical manner in compliance with the GDPR and the Danish Data Protection Act.

Robotic surgery is another area where AI is being used in healthcare. In Denmark, robotic (assisted) surgery has been widely used in gynaecology and other areas, and is subject to applicable Danish healthcare law (eg, the Danish Health Act (Sundhedsloven)) and patient rights legislation concerning liability and damages.

A “self-driving” vehicle is a vehicle that can drive completely or partially without the assistance of a driver. In Denmark, experiments with small autonomous vehicles in public spaces have been permitted (upon prior authorisation) under the Danish Road Traffic Act (Færdselsloven) since 2017.

One of the major challenges in autonomous vehicle navigation is the AI’s ability to understand the social codes of traffic that enable human drivers to decide whether to take evasive action or keep driving. This has been emphasised in 2023 research from the Department of Computer Science at the University of Copenhagen (Datalogisk Institut). Danish liability law for road accidents operates on a no-fault basis. However, if an accident involves autonomous vehicles, liability might shift to the holder of the authorisation to experiment with autonomous vehicles. Denmark might look to other legal frameworks, such as Britain’s approach, which aims to shift liability involving self-driving vehicles away from the passengers and onto regulated, licensed operators.

The use of autonomous vehicles also involves a number of data protection concerns, as such use may involve the collection of personal data about drivers and passengers, which the European Data Protection Board (EDPB) highlighted in its 2021 guidelines on connected vehicles. Cars equipped with cameras in and around the vehicle may also process personal data. Both the recording and the subsequent processing of personal data by the car’s cameras are rarely known to anyone other than the driver, partly because no information about the recordings is provided outside the car. Even if the main purpose of the cameras is not to process personal data, the GDPR still applies as long as individuals are identified or identifiable. Car manufacturers working with connected cars must therefore ensure that the collection and processing of personal data in the use of connected cars comply with the GDPR.

AI is increasingly being integrated into manufacturing processes in Danish companies. Manufacturers are implementing intelligent automation, predictive analytics and machine learning algorithms to achieve reduced downtime and optimised use of materials, etc.

While there have not yet been any major regulatory developments in this area in Denmark, the requirements of the Danish Product Liability Act (Produktansvarsloven) are relevant for manufacturers using AI in manufacturing. The European Parliament endorsed the text of an updated Product Liability Directive at its March 2024 plenary. The revised Product Liability Directive addresses the complexity of digital products and extends liability for defects to software. However, the final interpretation of the Directive will also depend on how it is transposed into Danish law.

Concerning the proposed AI Liability Directive, see 3.1 General Approach to AI-Specific Legislation.

When AI technology is used in professional services, the general view in Denmark remains that it is the responsibility of the professional to ensure that the case is handled correctly, and that the advice is fact-checked. Use of AI will in this respect most likely not be subject to separate legislation; rather, existing rules and legislation for professional services will be understood to encompass AI as they do other technologies used in service provision.

In addition, the use of AI can raise questions about ownership and protection of intellectual property rights. The challenge is to determine the owner of creations made through AI technology, which is further discussed in 15.3 AI-Generated Works of Art and Works of Authorship.

Moreover, professionals must comply with data protection laws, such as the GDPR and the Danish Data Protection Act, to protect client privacy and prevent unlawful processing of personal data when using AI in professional services.

Lack of Judicial or Agency Decisions on AI Technology and Inventorship

As of now, there have been no judicial or agency decisions in Denmark regarding whether AI technology can be considered an inventor or co-inventor for patent purposes, or an author or co-author for copyright and moral right purposes. However, under Danish patent law, the inventor must be a physical person. Therefore, it is doubtful that AI technology would qualify as an inventor or co-inventor under current legislation.

Similarly, current Danish copyright law stipulates that authors must be physical persons, which excludes the possibility of AI technology being an author. Consequently, AI-generated works cannot be protected, leaving them without a designated copyright holder, which means that no one has the exclusive right to dispose of and produce copies of AI-generated works.

Human Input

While AI technology cannot be considered an inventor or author under current Danish law, it might be worth considering whether there are situations where human input is significant enough to justify copyright protection for AI-generated works. This is particularly relevant where the number of prompts used in the generative AI process is very significant and the human input involved in creating and selecting those prompts is extensive.

In such cases, it may be argued that the human contribution to the AI-generated work is significant enough to meet the threshold for copyright protection under Danish copyright law; however, this issue needs to be explored further.

EU Case Law

The Danish legal perspective aligns with EU case law. In 2020, the European Patent Office (EPO) decided that the AI system DABUS could not be considered an inventor, as the European Patent Convention (EPC) requires the inventor to be a physical person. This decision reflects the prevailing view that AI technology does not qualify for inventorship or copyright authorship under current laws.

Applicability and Contractual Aspects of Trade Secret and Similar IP Rights for Protecting AI Technologies and Data

Trade secrets and similar intellectual property rights can cover different aspects of AI, such as algorithms, training data, and implementation details. To safeguard confidential information and trade secrets related to AI, companies may sign non-disclosure agreements (NDAs) with their employees.

Contractual Arrangements for Compliance with Danish IP Regulations

In addition, contractual arrangements play a significant role in ensuring compliance. Companies can protect their AI technologies and data by using contractual clauses that address specific IP issues, such as ownership, licensing, and infringement. These contractual provisions should be tailored to address the unique aspects of AI technologies and data, such as the ownership of AI-generated works, protection of proprietary algorithms, and use of data for training AI models.

Tailoring Contractual Provisions for AI Technologies and Data

The most important thing is to regularly review and update contractual arrangements to ensure that they remain relevant and reflect developments regarding trade secrets in AI technologies, as well as the forthcoming Data Act.

Originality Requirements under Danish Copyright Law

Under the Danish Copyright Act, for a work to be eligible for protection, it must be original, meaning that it must be an expression of the author’s creative effort. Therefore, works that result from purely routine activities, such as automatic translations or simple text messages, are not original and are not eligible for protection.

Originality of AI-Generated Works

The question of whether AI-generated works meet the required level of originality has been a topic of discussion. It has been debated whether a machine can exercise a “creative effort” when it relies mainly on human input. AI-generated works are often created through algorithms and machine learning models, which raises the question of whether the machine or the human input should be considered the author.

Authorship Requirements Under Danish Copyright Law

Another obstacle to granting intellectual property protection to AI-generated works is the authorship requirement under the Danish Copyright Act. The law currently prescribes that the author must be a physical person, excluding the possibility of an AI system being considered the author of its works. This means that AI-generated works cannot be protected, leaving them without a designated copyright holder and no exclusive right to dispose of or produce copies of the work.

Ownership of Works and Products Created Using OpenAI

One of the main issues related to OpenAI – also in a Danish context – is the lack of protection for works and products created using the tool. This means that the ownership of the output is not clear, leaving it vulnerable to being used by anyone. The lack of protection raises questions about who has the right to use, distribute or modify the output generated by OpenAI.

Infringement of IP Rights

Another significant issue related to OpenAI is the potential risk of infringing other individuals’ IP rights. This risk is particularly high when feeding copyrighted content into the system without proper permission.

Confidentiality and Trade Secret Concerns

Additionally, concerns regarding confidentiality and trade secrets may arise when providing input to OpenAI. Users must ensure that they have the rights to any data or information fed to the system and that this information is not confidential. Failure to do so could result in legal action, including breach of contract claims, trade secret misappropriation, and other related claims.

Addressing IP Issues When Using OpenAI

To mitigate the IP risks associated with OpenAI’s tools, users must take steps to ensure that they have the rights to use any input data and that they do not infringe third parties’ IP rights. Users should also consider entering into agreements with third-party content owners to obtain proper permission before using copyright-protected content.

Advising Corporate Boards: A Multi-disciplinary Approach

Advising corporate boards in Denmark currently requires a multi-disciplinary approach. In the current environment, commercial decision-makers are focused largely on realising commercial benefits; advice therefore needs to be balanced, with adequate consideration given to technical and legal aspects in addition to the commercial possibilities.

Risk Assessment and Management

To put the necessary processes in place, boards need to establish comprehensive risk assessment frameworks that identify potential legal, operational, technical, and reputational risks associated with AI deployment. This includes evaluating the reliability, transparency, and accountability of AI systems, as well as their alignment with the company’s strategic objectives and ethical standards.

Training and Awareness

Boards should also prioritise training and awareness programmes to understand the capabilities, limitations, and risks of AI; many Danish organisations are still at an early stage of understanding the technology. This includes keeping abreast of technological advancements and regulatory changes, enabling informed decision-making and fostering a culture of AI literacy within the organisation.

Alignment with National and International Guidelines

Now that clear regulatory guidance is available, organisations and companies in Denmark need to align their AI practices with the EU’s ethical and legal frameworks and with the Danish government’s national strategies.

Practical Implementation of AI Ethics

For Danish industries using AI in relation to sensitive data or where the business is subject to specific risks, it is essential to stay ahead of potential reputational harm by translating abstract ethical principles into actionable practices. This involves integrating ethics into the AI development life cycle, from design to deployment, ensuring that AI systems are transparent, explainable, and aligned with societal values.

Data Governance and Privacy

Another key area is ensuring data quality, securing data storage and transmission, and respecting user privacy (see 11.2 Data Protection and Privacy).

Capacity Building and Stakeholder Engagement

To effectively implement AI best practices, Danish organisations need to invest in building internal expertise and fostering a culture of continuous learning. Such capacity building should cover commercial, operational, technical, and legal competencies.

Bird & Bird

Advokatpartnerselskab
Kalkbrænderiløbskaj 8
2100 Copenhagen Ø
Denmark

+45 72 24 12 12

denmark@twobirds.com
www.twobirds.com/da/reach/nordic-region/denmark

Trends and Developments



AI and Liability in an Increasingly Digitalised Danish Market

It is clear that AI is poised to become a widely used tool within Danish authorities and corporations. Although AI has been met with varying degrees of enthusiasm and scepticism in Denmark, the potential applications of AI are undeniable. 

The high level of digitalisation in the public sector and society, coupled with Denmark’s ambitious digital infrastructure, creates a perfect breeding ground for AI implementation. As the Danish Data Protection Authority’s 2023 report highlights, public authorities are actively seeking ways to integrate AI into their work processes. Additionally, Statistics Denmark’s 2021 survey indicates that a significant portion (21%) of corporations with at least ten employees were already utilising AI by that time.

As AI continues to integrate into the Danish digital market, it operates against a backdrop of evolving legislation. The AI Act, adopted by the European Parliament on 13 March 2024, and the forthcoming AI Liability Directive, are set to shape the legal framework within which AI operates in Denmark. While AI’s integration is advancing, numerous unresolved legal questions persist.

This article discusses two critical legal aspects that will shape the future legal landscape for AI under Danish jurisdiction. The first aspect concerns liability issues specific to AI. For instance, claimants in AI-related lawsuits may face challenges in meeting the burden of proof. This is particularly relevant in the Danish legal context, where fault-based liability is developed largely through case law and combines elements of both civil law and common law traditions. The second aspect concerns the administrative realities that public authorities must navigate as they implement AI and comply with the AI Act locally.

The Structure of the AI Act and Its Integration into Danish Law

The general structure of the AI Act is built upon a distinction between AI systems and general-purpose AI (GPAI) models. AI systems are further categorised based on the level of risk they pose: the higher the risk, the more onerous the compliance requirements (and, at the extreme, certain AI practices posing unacceptable risk are prohibited outright).
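Purely as an illustration of this tiered logic – the tier names paraphrase the Act, while the summaries below are our own simplification and not a legal checklist – the structure can be sketched as follows:

```python
# Illustrative simplification of the AI Act's risk-based structure.
# Tier names paraphrase the Act; the summaries are assumptions, not legal advice.

RISK_TIERS = {
    "unacceptable": "prohibited outright (eg, certain social scoring practices)",
    "high": "heaviest compliance duties (eg, risk management, logging, conformity assessment)",
    "limited": "transparency duties (eg, telling users they are interacting with AI)",
    "minimal": "no specific obligations under the Act",
}

def compliance_burden(tier: str) -> str:
    """Return the (simplified) compliance consequence for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(compliance_burden("high"))
```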

In addition to categorising AI systems, the AI Act also distinguishes between multiple actors involved in the AI systems’ supply chain. Key among these are providers, the developers of AI systems, and deployers, the users of AI systems. The Act imposes the heaviest compliance requirements on providers of high-risk AI systems. These requirements include implementing various safeguards, maintaining transparency, and documenting automatically generated logs.

Danish law has not previously regulated AI specifically; until now, AI has been subject to the same general rules as any other technology, such as the GDPR and copyright law. The AI Act will therefore be the first AI-specific regulation in Danish law, and the obligations therein will drive the Danish understanding of appropriate behaviours, standards, and duties when developing or deploying AI.

The AI Liability Directive’s Possible Impact on Danish Law

The purpose and key content of the AI Liability Directive

Although compliance requirements are heavily detailed in the AI Act, the matter of liability is not regulated in the Act itself. The AI Act is, however, complemented by the forthcoming AI Liability Directive.

The current proposal for the directive intends to establish new rules for claimants in cases of fault-based liability – ie, non-contractual claims for damages caused by AI. Currently, the directive remains to be adopted by the European Parliament and Council.

The directive is relevant for specific matters – ie, where damage was either (i) caused by an AI system or (ii) caused by the failure of an AI system to produce a specific output – and, as already mentioned, only for non-contractual, fault-based claims.

The directive introduces two significant new provisions. First, it allows claimants to request that national courts order providers and deployers of AI systems to disclose relevant evidence. Second, it sets out conditions under which a rebuttable presumption of causation exists between a defendant’s fault and the damage caused by the AI system.

This is intended to ease the burden of proof for claimants, as discharging the burden of proof for damage caused by AI systems may be a tall order. For example, if an AI system has been unlawfully trained on copyright-protected material, it would be difficult to raise any claim without further insight into the AI system’s operations and logs.

Rules on discovery

The AI Act imposes a number of documentation compliance requirements for providers of high-risk AI systems (and deployers, although these requirements are less demanding). These requirements include documenting logs generated by high-risk AI systems.
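The AI Act does not prescribe any particular technical format for such logs. Purely as a hedged illustration – the JSON-lines format and all field names below are our own assumptions, not anything mandated by the Act – automatically generated, timestamped event records of the kind that could later be disclosed in discovery might look like this:

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of automatically generated, timestamped event logging for a
# high-risk AI system. The AI Act does not mandate this or any particular
# format; all field names here are illustrative assumptions.

logger = logging.getLogger("ai_system.audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_event(event_type: str, input_reference: str, output_summary: str) -> None:
    """Append one machine-readable event record to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,            # eg "inference" or "human_override"
        "input_reference": input_reference,  # pointer to the input data used
        "output_summary": output_summary,    # what the system produced
    }
    logger.info(json.dumps(record))

# Example: record a single automated decision so that it can later be traced
# (and, if need be, disclosed) in its original form.
log_event("inference", "application-2024-0042", "risk_score=0.87")
```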

These requirements, coupled with national court rules on discovery, present interesting opportunities for both claimants and defendants in lawsuits concerning high-risk AI. On the one hand, clearer documentation – particularly logs generated by the AI system itself – can provide defendants with a stronger defence: such logs can help establish the facts and demonstrate that the system functioned as intended, potentially mitigating liability claims. On the other hand, discovery rules that allow access to this documented information create new avenues for claimants to build their case: by reviewing logs and other AI-related documentation, claimants can gather evidence to support a causal link between the AI system and any alleged damage.

The future enactment of the AI Liability Directive under Danish law presents a fascinating development, especially considering that current Danish rules on discovery and disclosure are relatively limited compared to those of common law jurisdictions such as the United Kingdom.

Presumption of liability under the AI Act and AI Liability Directive and the implications for Danish liability law

As the AI Act entails comprehensive compliance requirements, how Danish courts will weigh those requirements when establishing liability involving AI is going to be an essential question for a number of actors, chiefly providers and deployers.

Fault-based liability under Danish law may be established in a number of ways; one, however, is of particular relevance here: a negligent action or omission is culpable where it breaches a recognised norm of conduct. Such norms are typically assumed to converge with applicable Danish law. Consequently, it is reasonable to expect that Danish courts would be inclined to establish liability where a provider of a high-risk AI system or generative AI model breaches its obligations under the AI Act.

However, it remains to be seen exactly how and when liability will be determined in specific cases. Currently, there is a lack of Danish case law concerning liability and compensation in non-contractual claims related to AI systems. How far providers of high-risk AI must deviate from their obligations under the AI Act or other applicable legislation before being held liable for their actions or omissions is not yet clear.

What we can anticipate, though, is that the AI Liability Directive aims to make it easier for various parties – such as customers, clients, and citizens – who have suffered damage caused by an AI system, or by the failure of an AI system to produce a specific output, to seek redress. By setting up rebuttable presumptions and thereby reducing the burden of proof, the directive is designed to facilitate the process for claimants. It will be fascinating to observe the directive’s impact once transposed into Danish law and how it will influence the legal landscape for AI in Denmark.

This point, together with the point on Danish liability law, opens up a number of interesting considerations, which – again depending on implementation – could be novel in a Danish context.

For instance, consider a scenario where a court, at the request of an injured party, orders the disclosure of certain documents or logs relating to an AI system. If the required documentation is not produced or does not exist, the court may then need to decide whether this inability to produce material itself constitutes non-compliance.

Another consideration is how courts will handle the various compliance requirements outlined in the AI Act. For example, failing to conduct a conformity assessment and failing to maintain specific logs might both be violations of legal requirements. However, it remains an open question how judges will interpret and weigh these different types of non-compliance.

In conclusion, the AI Act and the AI Liability Directive may end up significantly affecting the Danish understanding of two key areas of liability law: establishing fault and proving causation.

Supervisory Authorities – The Agency for Digitisation or the Data Protection Authority

Beyond liability, another key area where AI is expected to have a significant influence in the Danish jurisdiction is the administration of the new AI Act and other forthcoming legislation with a similarly wide scope, such as the Data Act. As noted, the Danish debate on AI includes both sceptics and proponents, and there is a belief that the administration of law will vary depending on which authority legislators appoint to take the lead.

Various industry organisations, such as the Association of Danish Law Firms (Danske Advokater), the Confederation of Danish Industry (DI) and Local Government Denmark (KL), have expressed different preferences or asked for clarity as to which authority or agency will be the supervisory authority for the AI Act in Denmark. Currently, the expectation is that either the Data Protection Authority (Datatilsynet) or the Agency for Digitisation (Digitaliseringsstyrelsen) will be put in charge.

However, a significant development occurred on 11 April 2024, as this article was being finalised: the Agency for Digitisation was appointed as the national co-ordinating supervisory authority for the EU AI Act. The Agency for Digitisation has traditionally been the authority that develops and oversees Denmark’s digital infrastructure and has spearheaded essential digital solutions, such as MitID and Digital Post, which form part of Danish citizens’ everyday digital lives.

The role as supervisory authority means that the Danish Agency for Digitisation will, among other things, perform a co-ordinating role across the competent supervisory authorities in Denmark and co-operate with other member states in the Union.

This is a new area of responsibility for the Danish Agency for Digitisation, as it will now be involved in enforcing legislation with which it has not previously been familiar. Building the know-how to enforce legislation – especially an EU act as novel as this – through administrative decisions could involve a significant learning curve for the agency, meaning that private sector actors will need to exercise patience as it adapts to its new role.

The Danish Agency for Digitisation thus has a dual role, as it will both implement and supervise the regulation. The agency is placed under the Ministry of Digitalisation, which deals with AI politically, and it itself operates major public IT solutions that incorporate AI.

The Danish Agency for Digitisation is now initiating work on implementing the regulation, focusing on communicating the requirements and obligations that the regulation contains and clarifying who is covered by it. As part of the Danish government’s digitalisation strategy, the Danish Agency for Digitisation and the Danish Data Protection Authority are establishing a regulatory sandbox for AI, where companies and authorities can access relevant expertise and guidance on the GDPR when developing or using AI solutions.

Danish Public Authorities’ Implementation of AI Systems: A Potential Challenge for Both Providers of AI Systems and Public Authorities

Another relevant area to cover for Denmark in relation to the administration of law is the series of decisions that the Danish Data Protection Authority (Datatilsynet) has handed down in recent years specifically targeting Helsingør Municipality regarding its schools’ use of Chromebooks and their compliance with privacy legislation.

The most recent decision found that the use of Chromebooks in the schools was in breach of the GDPR, as the municipalities lacked a legal basis for some of the processing activities involving disclosure of the children’s personal data to Google (which delivered a variety of services through the Chromebooks themselves, entailing data processing).

Although the decision concerns data protection law, it also raises further issues that may affect how public authorities in particular can utilise AI systems. Under Danish administrative law and data protection law, including of course the GDPR, Danish public authorities, municipalities, and other agencies must, prior to deploying an AI system, make sure that they have a legal basis for using the system in their exercise of authority. In the Chromebook case specifically, this would include an examination of whether data subjects’ data is disclosed to the provider of the system, how the provider processes the data and, most importantly, whether the public body can identify a sufficient legal basis for deploying the system.

In the context of this article, it is clear that public authorities as deployers of AI systems must pay close attention to how the AI Act is understood and administered in a Danish context, similar to the GDPR’s administration by the Danish Data Protection Authority.

The question arises whether the development of public authorities’ use of AI systems will lead to restrictions in their use or if public authorities will need to negotiate terms and conditions with AI system providers to ensure compliance with AI-related regulations, administrative law, and data protection law.

This also means that providers of AI systems working with Danish authorities or municipalities must adopt a precautionary stance.

Summary

AI will undoubtedly be vital in optimising the Danish digital market and economy in this digitalised age. However, its implementation raises certain legal questions that need to be resolved.

The AI Liability Directive will ease the burden of proof for claimants bringing claims before national courts. While its provisions offer possibilities on paper, their practical application will only become apparent once implemented, necessitating significant choices by lawmakers when working them into Danish law.

Danish government agencies will need to adapt to new roles and responsibilities, both as enforcers of AI regulations and as users of the technology. How the Agency for Digitisation will fulfil its role and ensure the enforcement of the AI Act will be crucial. This process is likely to be closely observed, especially in light of the recent developments in the Chromebook case, which highlight the need for Danish authorities to seek guidance and maintain a precautionary stance regarding AI use.

Public authorities, alongside providers and deployers of AI systems, will face uncertain liability law. This uncertainty underscores the importance of collaboration and compliance with the evolving legal landscape to ensure the responsible and effective integration of AI in Denmark.

Bird & Bird

Advokatpartnerselskab
Kalkbrænderiløbskaj 8
2100 Copenhagen Ø
Denmark

+45 72 24 12 12

denmark@twobirds.com
www.twobirds.com/da/reach/nordic-region/denmark