Denmark has not at this time implemented any specific regulation governing artificial intelligence (AI). However, on 26 February 2025, the Danish government introduced a bill concerning the first Danish AI Law (Forslag til Lov om supplerende bestemmelser til forordningen om kunstig intelligens). If enacted, this law will enter into force on 2 August 2025, supplementing the AI Act. The primary focus of the Danish AI Law is the appointment of national competent authorities, along with sanctions and enforcement in respect of Chapter II of the AI Act, which concerns prohibited AI practices.
Notwithstanding the absence of specific AI regulation at this point, AI will still be subject to Danish law. This means that AI must in many cases be developed and used in accordance with Danish legal principles and enacted legislation. This applies across numerous legal areas, so only a few are highlighted below. In a Danish context, both the authorities and the private sector have focused primarily on the following.
Data Protection
The General Data Protection Regulation (GDPR) is supplemented by the Danish Data Protection Act (Lovbekendtgørelse 2024-03-08 nr. 289). The Danish Data Protection Agency has issued a guide on public authorities’ use of personal data throughout the life cycle of an AI system (see 3.3 Jurisdictional Directives).
Intellectual Property Law and Trade Secrets
The use of AI is also subject to Danish regulation of intellectual property rights, including but not limited to the Danish Copyright Act (Lovbekendtgørelse 2023-08-20 nr. 1093) and the Danish Trade Secrets Act (Lov 2018-04-25 nr. 309 om forretningshemmeligheder). For example, AI systems may generate data constituting a trade secret under the Danish Trade Secrets Act, in which case reasonable protective measures must be in place to maintain the necessary level of confidentiality.
Employment Law
Employers must ensure that any use of AI is in accordance with Danish employment legislation and applicable collective bargaining agreements; the latter are particularly significant in Danish law. This is relevant if a company intends to use AI tools as part of its recruitment process – eg, CV sorting tools. If AI tools are utilised in the recruitment process, they must not discriminate on the basis of criteria prohibited by the Danish Employment Non-discrimination Act (Lovbekendtgørelse 2011-06-08 nr. 645).
Although Denmark has not yet adopted any specific AI regulation, various non-binding initiatives have been launched concerning the regulation and use of AI, inter alia focusing on ensuring the responsible and ethical development and use of AI. This is further detailed in 2.2 Involvement of Governments in AI Innovation.
Generative AI
Following the widespread adoption of chatbots powered by large language models (LLMs), such as ChatGPT, Danish businesses across various industries are increasingly deploying this technology. Established organisations prioritise secure and private deployments, utilising private cloud tenants to whitelist and deploy LLMs (such as ChatGPT and M365 Copilot), thereby ensuring data confidentiality.
Predictive AI
While generative AI has gained more public attention and traction in various industries, predictive AI has also become increasingly important. Companies tend to be more discreet about their use of predictive AI systems due to competitive considerations, immature governance setups, uncertainties regarding liability and the need to protect their commercial interests.
For example, in the Danish medical industry and healthcare system, predictive AI is already being utilised or at least actively explored.
In February 2024, all political parties in the Danish Parliament agreed on a new national digitalisation strategy for 2024-2027. The strategy consists of 29 initiatives, several of which focus on AI. These include ensuring a “responsible and strong foundation for utilising artificial intelligence” and potentially investing in and training a Danish language model.
Regulatory Sandbox
Additionally, funds have been allocated to establish a regulatory sandbox aimed at providing companies and public authorities with guidance on the GDPR when developing or using AI solutions – eg, by providing free access to relevant expertise. The regulatory sandbox is a collaboration between the Danish Data Protection Agency and the Agency for Digital Government, and the first programme has already been finalised.
As part of the first programme of the regulatory sandbox, two AI projects were selected. Tryg Forsikring, a Danish insurance company, aimed to develop an AI assistant to streamline the documentation of injury cases. Meanwhile, Systematic, a software solutions provider, collaborated with several municipalities to simplify healthcare documentation. Insights from these projects will be shared with the public to benefit others.
The second round of applications has just opened, with a deadline of 4 April 2025, allowing new companies and/or authorities to apply for enrolment in the programme.
The Danish Approach
As detailed in 5.2 Technology Definitions, the Danish approach tends to align closely with EU legislative texts, making the specific Danish legislative stance best described as agnostic, even if the Danish debate does not always reflect this.
EU Artificial Intelligence Act
It is unclear how the Danish opt-out on EU justice and home affairs will affect the AI Act, given that it is a regulation directly applicable across EU member states. Unless Danish legislators decide to implement the parts of the regulation covered by the Danish opt-out, the implementation of specific AI regulation related to the AI Act is not expected. This issue is briefly discussed in 3.4.2 Jurisdictional Conflicts.
EU AI Directives
As Denmark prepares to implement the EU Product Liability Directive by 9 December 2026, it is too early to provide specifics about Denmark’s implementation. Further, recent legislative developments and statements from the European Commission have emphasised the importance of improving, inter alia, competitiveness, growth and economic prosperity, which means that the future of the EU AI Liability Directive remains uncertain. This uncertainty is reflected in the European Commission’s work programme 2025, which specifies in relation to the EU AI Liability Directive:
“No foreseeable agreement – the Commission will assess whether another proposal should be tabled or another type of approach should be chosen.”
Until the European Commission issues further guidance, liability for AI products will continue to be governed by national legislation implementing the EU Product Liability Directive and other relevant legislation.
Regardless of these developments, it will be interesting to see how the EU legislation will be implemented and how it will influence the Danish legal landscape, particularly concerning fault-based liability, such as the new rules on the burden of proof, including a presumption of a causal link between defects and the AI system’s output. In Denmark, damages and fault-based liability are in many cases determined on a non-statutory basis, which might have to change in relation to AI.
As described in 1.1 General Legal Background, Denmark has introduced only one narrow bill concerning specific AI legislation. However, various public authorities have issued non-binding White Papers or guidelines with the aim of providing companies within their sector or domain with relevant guidance (see 3.3 Jurisdictional Directives).
Guidelines Issued by Public Authorities
The Danish Financial Supervisory Authority (DFSA) and the Danish Data Protection Agency (DDPA) have published guidance in relation to the use of AI. The White Paper issued by the DFSA focuses on providing tools and inspiration for companies within the financial sector regarding data ethics when applying AI. The White Paper should merely be seen as guidance and does not impose new requirements on companies.
Danish Data Protection Agency Guidelines
The DDPA has published guidelines for public authorities, specifically geared towards municipalities and the Regions (administrative units), on handling AI technology in accordance with applicable data protection legislation.
The guidelines focus on ensuring compliance with data protection rules throughout the life cycle of an AI system, meaning from the development phase until the operation phase.
Different Phases
The guidelines distinguish between three phases of a public authority’s use or development of an AI system. It is essential to consider which phase applies and how the personal data is incorporated into the AI system, as – in particular – the purpose, lawfulness and legal basis can change depending on the phase.
Supporting the Development of AI While Ensuring Compliance
As outlined in previous sections, a key focus is to avoid the guidelines becoming an obstacle to the development of relevant AI systems, while continuously ensuring that such systems comply with applicable data protection legislation.
As briefly touched on in 3.1 General Approach to AI-Specific Legislation, Denmark is awaiting the full entry into application of the AI Act (and subsequent developments concerning the EU AI Liability Directive and the EU Product Liability Directive).
Generally, Denmark welcomes the AI Act and has not taken local steps that might duplicate or conflict with it.
Due to its opt-out on EU justice and home affairs, Denmark has certain reservations regarding EU law in areas such as criminal justice and police operations. This means that the parts of the AI Act regulating law enforcement authorities’ use of facial recognition – including biometric categorisation systems, predictive policing and remote biometric identification systems – will not apply in Denmark.
The use of AI-based facial recognition by public authorities, including the police, is becoming increasingly debated, as also discussed in 11.2 Facial Recognition and Biometrics.
There is no applicable information in this jurisdiction.
Implementation of DSM Directive Article 4
Denmark has amended its copyright law in recent years to accommodate developments in AI technology, particularly in data mining. One notable development is the implementation of Article 4 of the DSM Directive on exceptions and limitations for text and data mining into Sections 11b and 11c of the Danish Copyright Act.
Exceptions for Text and Data Mining
Previously, data mining could potentially infringe copyright, as it involves reproducing and analysing copyrighted material without permission from the creator. However, with the introduction of Sections 11b and 11c, Denmark has recognised the importance of data mining and now enables such activities through exceptions to copyright law, provided lawful access to the copyrighted material has been obtained in the first place. As a general rule, under the new sections, authors cannot oppose the use of their works for text and data mining.
Reservation for Text and Data Mining
While text and data mining can be used for research and AI development purposes without prior permission, right holders may prohibit commercial text and data mining by stating so in a machine-readable manner, including in metadata or in the terms and conditions for the use of a website or service. In such cases, text and data mining may only lawfully take place following an agreement with – and possibly payment to – the right holders.
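The Danish Copyright Act does not prescribe a specific technical format for such machine-readable reservations. As a purely illustrative sketch, the following Python snippet (standard library only) shows how a crawler might honour one widely used signal, the W3C TDM Reservation Protocol (TDMRep) meta tag; the tag name and semantics are an assumption borrowed from that protocol, and publishers may equally use other signals, such as robots.txt directives or website terms.

```python
# Illustrative sketch only: honouring a machine-readable TDM reservation.
# Assumes the publisher signals the reservation via the W3C TDMRep meta tag
# <meta name="tdm-reservation" content="1">; other machine-readable signals
# (robots.txt directives, contractual terms) would require separate checks.
from html.parser import HTMLParser


class TDMReservationParser(HTMLParser):
    """Detects a <meta name="tdm-reservation" content="1"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.reserved = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if attr.get("name") == "tdm-reservation" and attr.get("content") == "1":
                self.reserved = True


def may_mine(html: str) -> bool:
    """Return False if the right holder has reserved text and data mining."""
    parser = TDMReservationParser()
    parser.feed(html)
    return not parser.reserved


page = '<html><head><meta name="tdm-reservation" content="1"></head></html>'
print(may_mine(page))  # False – commercial TDM would require an agreement first
```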
Proactive Approach
The amendments to the Danish Copyright Act demonstrate a proactive approach to fostering AI technology through data mining while still upholding the principles of copyright protection and the rights of authors.
See 3.1 General Approach to AI-Specific Legislation.
Currently, the only judicial decision issued in Denmark (SS-3195/2024-RAN) pertains to a criminal case involving a man who created and distributed AI-generated child abuse images.
However, the DDPA has issued opinions and decisions regarding public authorities' use of AI systems (see 7.2 Judicial Decisions).
Reasons for Lack of Judicial Decisions
It is difficult to provide a definitive answer as to the lack of judicial rulings; however, several (non-exhaustive) factors may be relevant.
Implementation and oversight in Denmark of other areas of law related to the EU digitisation agenda (eg, NIS I) have been carried out in accordance with a well-established sector principle, whereby responsibilities are divided among authorities or agencies based on sector, rather than the entire legislation being overseen by one or two agencies.
However, in accordance with Article 70 of the AI Act, the proposed Danish AI Law designates the Agency for Digital Government, the Danish Data Protection Agency and the Danish Court Administration as the national competent authorities and market surveillance authorities.
In this bill, the Danish Agency for Digital Government will serve as the single point of contact and national co-ordinating supervisory authority for the EU’s AI Regulation.
The recommendations and guidelines set out by Danish authorities, including their scope and objectives, are discussed under the specific sections; see, for example, 3.3 Jurisdictional Directives and 8.3 Data Protection and Generative AI.
The approach is currently one of guiding companies, citizens and other public authorities acting as users rather than setting out binding requirements.
No notable enforcement actions directly aimed at AI have yet been taken in Denmark. As outlined in 5.2 Regulatory Objectives and 3. AI-Specific Legislation and Directives, Danish agencies and authorities are keenly interested in the use of AI technology but are currently focused on providing guidance.
While government authorities have provided guidance (see 3. AI-Specific Legislation and Directives and 5. AI Regulatory Oversight), they have not yet set any standards. However, the Danish Agency for Digital Government is currently participating in a process under Danish Standards, which acts as secretariat for CEN-CENELEC JTC 21, the joint technical committee aiming to develop European-wide standards related to AI. The Danish project currently touches upon standards within the areas of transparency, decision support in the public sector, and bias.
In addition to the standards from CEN-CENELEC, other relevant international standards such as ISO and IEC will most likely provide an important contribution in shaping local Danish standards in an operational sense, where legislative measures from the EU or government authorities and agencies do not set out norms in detail.
In cybersecurity, technology and related fields where the EU has set out legislation, Denmark’s cautious approach to national lawmaking has had the implicit effect that international standards have become indirectly significant for many Danish industry actors seeking alignment with other commercial entities.
Denmark has for years aimed to automate and increase the efficiency of its public administration, and authorities are increasingly experimenting with AI-supported decision-making. For example, the project “aEye” aims to improve the treatment of a particular age-related eye disease using AI; the tool is intended merely to support decision-making. Another example is the STAR project (see 7.2 Judicial Decisions), which focused on developing a profiling tool for newly unemployed benefit recipients.
Property Valuation System
Recently, the Danish Tax Agency’s roll-out of a new property valuation system, aimed at automating the calculation of property valuations and property value tax, has been much debated.
Issues When Utilising AI in Public Administration
In addition to other applicable legislation, such as the GDPR, public authorities must adhere to the Danish Public Administration Act (Lovbekendtgørelse 2014-04-22 nr. 433), including good administrative practice and related legal doctrines, when using AI in their administrative decisions – for example, as part of expert assessments.
The leading opinion is that the principles of administrative law are technology-neutral, which in some cases imposes high requirements on the use of AI in public administration. According to these principles, public authorities must be able to document that an AI solution included all relevant and necessary information and considered only fair and objective factors in its assessment.
Except for the criminal case mentioned in 4.1 Judicial Decisions, there is no case law regarding the use of AI. However, the DDPA has issued an opinion of particular relevance regarding public authorities’ use of AI.
Use of AI-based Profiling Tool
After a request from the Danish Agency for Labour Market and Recruitment (Styrelsen for Arbejdsmarked og Rekruttering – STAR), the DDPA issued an opinion regarding the municipalities’ legal basis to use an AI profiling tool (ASTA) designed to predict the likelihood of a citizen becoming long-term unemployed.
ASTA was not developed to issue automated decisions but merely to support decision-making, providing social officers with recommendations for relevant initiatives.
Legal Basis for Processing Activities When Using AI
In its opinion, the DDPA outlined that the requirements for clarity of the necessary legal basis for processing personal data depend on how intrusive the processing activity in question is for the data subject. If the processing activities are deemed intrusive, the requirements for the clarity of the legal basis are correspondingly stricter and vice versa. In the specific case, it was assessed that the use of ASTA constituted an intrusive processing activity, necessitating a high level of clarity regarding the legal basis for the processing activities.
In general, and as elaborated in its guidance, the DDPA highlighted that the mere use of AI solutions by public authorities should not be deemed intrusive. However, citizen-focused use of such AI solutions often impacts citizens’ life situations, meaning the AI solution’s processing of personal data will typically be considered intrusive.
The Danish Centre for Cyber Security (Center for Cybersikkerhed – CFCS) published the second version of its threat assessment in April 2024, describing how hackers may misuse generative AI.
The updated assessment again focused on how hackers may use generative AI to create phishing emails or to develop parts of harmful code. It is still unclear to what extent the technology is being misused, but the CFCS highlights its significant negative potential.
Despite the new threats emerging since the widespread availability of generative AI, the CFCS has not changed its overall assessment of the cyber threat towards Denmark.
Generative AI and Issues Raised
One of the main issues is the lack of transparency in the decision-making process of AI systems, making it difficult to identify and correct errors. Additionally, the use of generative AI to create realistic deepfakes, for example through widely available generative tools, raises questions about privacy and cybersecurity.
Addressing the Issues
In addition to those mentioned above, Danish policymakers are taking various steps to address these issues.
IP Protection for AI Assets
In the AI business, it is crucial to understand how to achieve IP protection across all processes, as know-how and costs accumulate not only in the final product but also in the creation process. Assets in the AI process that can be IP protected include AI models, training data, input (prompts) and output. At present, however, works that AI has learned and created on its own fall outside the protection of Danish patent and copyright law. Furthermore, the terms and conditions of the AI tool provider can influence the protection of assets with respect to the input and output of the generative AI tool.
Potential IP Infringements
There is a risk of IP infringement under Danish copyright law with respect to the models, or the training, input or output data. If AI-generated works were granted copyright protection, there are concerns that the number of protected works may explode due to AI’s high productivity. Conversely, if no rights are granted to AI-generated works, free-riding may become frequent, and third parties could freely use AI output even though its development required work and costs. This could result in a loss of motivation for AI research.
For privacy, see 8.3 Data Protection and Generative AI.
The intersection of data protection and generative AI raises concerns about individuals’ rights and the appropriate use of personal data. Denmark’s focus has been on the right to rectification and erasure of personal data. Purpose limitation and data minimisation are crucial in complying with applicable Danish laws, as they strike a balance between AI development and protecting individuals’ privacy.
The right to rectification may involve correcting inaccuracies in the output or ensuring that future iterations of the AI model no longer produce false claims, which, in practical terms, is extremely difficult.
The right to erasure, also known as the “right to be forgotten”, enables data subjects to request the deletion of their personal data. However, applying this right in the context of generative AI can be complex. Deleting the entire AI model may not be necessary, especially if the model generates outputs unrelated to individuals. Instead, a more targeted approach may be required, such as deleting or anonymising specific personal data within the AI model.
With regards to purpose limitation, generative AI models should be designed with clear purposes, ensuring data subjects are aware of how their personal data will be used. Transparency is essential to maintain trust and protect individuals’ rights, as also emphasised by the DDPA in its October 2023 guidelines on the development and use of AI by public authorities (see 3.3 Jurisdictional Directives). In particular, the guidelines advise public authorities to consider several factors before starting to develop AI models, such as the legal basis for processing, the duty to inform data subjects about the processing of their personal data, and the need to conduct risk assessments.
In the context of generative AI, data minimisation is especially important to prevent excessive collection and retention of personal data. Techniques such as data anonymisation and aggregation can be employed to minimise reliance on identifiable personal data while achieving desired functionalities in AI models.
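As a purely illustrative sketch (aggregation is only one of several possible minimisation techniques, and neither the GDPR nor the DDPA prescribes any particular implementation), the following Python snippet shows how individual-level records might be reduced to group statistics, with small groups suppressed, before being used in an AI pipeline; the field names and threshold are hypothetical.

```python
# Illustrative sketch only: data minimisation through aggregation.
# Individual-level records are reduced to per-group averages, and groups
# smaller than a simple k-anonymity-style threshold are suppressed to
# reduce re-identification risk. Field names and thresholds are
# hypothetical examples, not requirements from the GDPR or the DDPA.
from collections import defaultdict
from statistics import mean


def aggregate_by_municipality(records, min_group_size=5):
    """records: iterable of (municipality, amount) pairs.
    Returns per-municipality averages for groups of at least min_group_size."""
    groups = defaultdict(list)
    for municipality, amount in records:
        groups[municipality].append(amount)
    return {
        m: round(mean(amounts), 2)
        for m, amounts in groups.items()
        if len(amounts) >= min_group_size
    }


records = [("Aarhus", 100), ("Aarhus", 120), ("Aarhus", 110),
           ("Aarhus", 90), ("Aarhus", 130), ("Odense", 80)]
# Aarhus (five records) is aggregated; Odense (one record) is suppressed.
print(aggregate_by_municipality(records))  # e.g. {'Aarhus': 110.0}
```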
Given the rapid development of AI technologies, the DDPA has prioritised AI as a focus area for its supervisory activities in 2025. Consequently, further guidelines and initiatives from the agency can be anticipated in the near future.
Regulation of AI in Law by Local Organisations
The use of AI in law is currently subject to regulation by local organisations such as the Danish Bar and Law Society, which are tasked with ensuring that AI and its use in the legal field adhere to ethical and professional standards.
Establishment of a Working Group
In December 2024, the Association of Danish Lawyers’ working group published two new guides on AI, aiming to highlight the potential challenges that may arise from the use of AI in law and strategies to address them. The guides cover the impact of the AI Act on law firms and points to consider when purchasing and using AI.
AI in Litigation
Danish legal practitioners are increasingly relying on AI-driven tools for tasks such as document reviewing and legal searches, which offer automated support services, facilitating more efficient and cost-effective case preparation. However, given the novelty of AI in litigation, no specific rules or regulations currently exist.
Ethical Concerns
The use of AI in law raises significant ethical concerns, particularly with regards to the potential reduction in human judgment and accountability. This could threaten core values of the legal profession, including fairness and justice. To address this, organisations such as the Danish Bar and Law Society must continue to monitor and regulate the use of AI in law to maintain ethical and professional standards.
Liability for Personal Injury and Commercial Harm Resulting from AI-Enabled Technologies
AI-enabled technologies have the potential to cause personal injury or commercial harm, raising questions about liability and responsibility. In Denmark, there is currently no specific regulation; however, see 3.1 General Approach to AI-Specific Legislation for future legislation on liability for AI-related injuries.
Theories of Liability and Requirements for Imposition of Liability
Theories of liability for personal injury or commercial harm resulting from AI-enabled technologies include product liability, negligence, and strict liability. To impose liability, it must be shown that the AI technology caused the harm, that there was a duty of care owed by the operator, and that there was a breach of that duty.
Role of Human Guidance and Allocation of Liability
The role of human guidance is also important in determining liability resulting from AI. An operator who is merely assisted by AI-enabled technology retains greater influence than an operator whose function has been completely replaced by an AI system, and is therefore subject to a stronger presumption of liability.
Insurance
Insurance plays a critical role in managing the risks associated with AI-enabled technologies, and it is essential to determine the scope of coverage and exclusions in insurance policies for AI-related claims. At present, the discussion of insurance coverage for AI-enabled technologies remains purely theoretical, as no publicly available information or practical industry discussion exists in Denmark.
See 3.2 Jurisdictional Law.
Algorithmic Bias in the Public and Private Sector
In the public sector, biased algorithms could lead to unequal access to healthcare services, misdiagnosis or inappropriate treatment recommendations. Similarly, in the private sector, algorithmic bias can have severe consequences; for example, biased loan algorithms may disproportionately deny credit to certain groups. Legislators and authorities have historically been aggressive in ensuring citizen and consumer rights, and it is to be expected that this approach will remain the same in respect of any use of AI.
Liability Risks
Although Denmark has not yet implemented specific regulation targeting algorithmic bias, the DDPA actively monitors developments in AI and provides guidance to organisations on complying with existing laws. Furthermore, the Danish Agency for Digital Government has taken a significant step by creating a strategy for AI, setting a roadmap for Denmark, and publishing guidelines on the responsible, non-biased use of generative AI.
Under the GDPR, facial recognition relies on the processing of biometric data, which constitutes a special category of personal data. The GDPR generally prohibits the processing of biometric data unless explicit consent is obtained or another legitimate justification exists under the GDPR or other legislation. Companies must therefore identify a legal basis under the GDPR and, to mitigate the associated risks, conduct thorough risk assessments and implement strong security measures to protect the biometric data they collect. This includes conducting regular audits, documenting data processing activities, and providing clear information to the individuals whose biometric data is collected.
In Denmark, the DDPA has the authority to authorise the processing of biometric data by private organisations if it is necessary for reasons of substantial public interest. For instance, in December 2024, the Danish football club F.C. København obtained authorisation to use facial recognition technology during matches.
There is an ongoing debate about the use of facial recognition in Denmark. For example, the use of facial recognition by the Danish police in public places has been the subject of recent debate, also in light of the AI Act, which prohibits real-time facial recognition in public places for law enforcement purposes (see 3.4.2 Jurisdictional Conflicts).
Companies using automated decision-making, including profiling, must comply with the GDPR. According to Article 22 of the GDPR, individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. However, there are exceptions where such processing is necessary for the conclusion or performance of a contract, is authorised by other legislation, or is based on the explicit consent of the individual subject to the automated decision-making. If no exception applies, the decision must be made with human intervention. In addition, companies may be required to conduct an assessment of the impact of their automated decision-making processes on the protection of personal data. However, when automated decision-making involves the processing of special categories of data, including biometric data, explicit consent must be obtained or the processing must be necessary for reasons of substantial public interest, as further discussed in 11.2 Facial Recognition and Biometrics.
However, as also highlighted by the DDPA in its guidelines on the development and use of AI by public authorities, and as further discussed in 3.3 Jurisdictional Directives and 8.3 Data Protection and Generative AI, it may be difficult to obtain valid consent for the processing of personal data in the context of complex AI models. Often, there will be a clear imbalance in the relationship between the data subject and the data controller. For example, if the processing of personal data, such as benefit claims, has an impact on the individual's life situation – whether real or perceived – the individual’s consent cannot be considered freely given. In addition, consent must be specific and informed, and it must be possible to manage the withdrawal of consent and stop the processing of personal data. This can be challenging in complex AI models where data is processed in many different ways, as it is crucial for the validity of consent that individuals understand what their data is being used for and can opt out of those purposes. The more data is to be used, including for automated decision-making, the more difficult it will be to meet the conditions for valid consent.
In Denmark, the use of AI technologies, including chatbots, as a replacement for services rendered by natural persons is subject to the GDPR. Articles 13, 14 and 15 of the GDPR set out the transparency rules and require data controllers to inform individuals about the processing of their personal data, including when AI is involved. The specific information to be provided depends on how personal data is collected; ie, data may be collected directly from the data subject, for example through a job application, or the data may be collected through a third party. In both cases, the individual must be informed of the purpose and use of his or her data, as well as of any proposed new use of that data.
For instance, in June 2024, the DDPA found that IDA Forsikring’s use of AI to analyse recorded customer service phone calls was permissible, but that its consent process was non-compliant with the GDPR: the consent was not sufficiently granular, as it bundled multiple purposes without allowing callers to choose which purposes they consented to.
Dark Patterns
The use of technology to manipulate consumer behaviour or to make undisclosed suggestions – commonly known as “dark patterns” – raises concerns, as it makes it difficult for individuals to make informed choices about their personal data. Dark patterns may lead individuals to share personal information without clear intent or to make purchases by mistake. These practices are often considered unfair under the Danish Marketing Practices Act (Markedsføringsloven).
The Digital Services Act also addresses this issue, prohibiting online platforms from using dark patterns to manipulate users into making decisions that are not in their best interests.
In a Danish context, contracts between AI customers and suppliers will be key to resolving a number of the issues facing the use of AI technology in a B2B context. Several of these issues are outlined below.
Intellectual Property Rights and Trade Secrets
Ensuring intellectual property rights is crucial in AI contracts. See 15.2 Applicability of Patent and Copyright Law and 15.3 Applicability of Trade Secrecy and Similar Protection for details.
Liability and Accountability
Addressing liability in the context of AI’s autonomous decisions is also key. Contracts should specify the supplier’s liability scope, detailing the due diligence required and the mechanisms for accountability. See 10.1 Theories of Liability for more. Given the unclear status of the AI Liability Directive, there is seemingly a continued need to pay specific attention to liability where the provider, customer and other relevant parties to the contract reside in different jurisdictions.
Regulatory Adaptability
Given the dynamic nature of AI regulation, contracts should incorporate terms allowing for periodic revisions. This ensures that the agreement remains compliant with evolving legal and ethical standards, enabling businesses to navigate the fast-changing AI landscape effectively.
Drafting
Contracts need to address the areas mentioned above in greater detail, along with other areas that feature heavily in drafting and negotiations, such as performance and service levels. Within these areas, adequate attention is required for all aspects of AI procurement, including how the AI trains on data, which data is used, who uses it, and what baseline can be established for performance. These are just some of the critical questions that need to be answered and negotiated.
Using advanced algorithms, AI can quickly sift through thousands of applications and identify relevant candidates based on their skills and experience. AI can also help eliminate human bias, ensuring that the focus is solely on candidates’ qualifications and competencies. AI offers potential benefits such as cost savings, streamlined processes and improved hiring decisions. However, AI also poses significant risks, including privacy, non-discrimination and equal treatment concerns.
Therefore, when developing AI-based recruitment and employment tools, employers must ensure that the technology complies with the GDPR, as well as the Danish Act on Prohibition of Discrimination in Employment (Forskelsbehandlingsloven) and the Danish Act on Equal Treatment between Men and Women (Ligebehandlingsloven). Regular audits, transparency in the use of AI in the selection process and corrective action when bias is identified are crucial steps to mitigate potential liability risks.
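As a purely illustrative sketch of what such an audit might involve (the cited Acts do not prescribe any particular method, and the groups, data and threshold below are hypothetical), the following Python snippet compares shortlisting rates across candidate groups and flags large disparities for human review.

```python
# Illustrative sketch only: a simple selection-rate audit for an AI-based
# CV screening tool. Groups, data and the 0.8 threshold (a common
# "four-fifths" heuristic) are hypothetical; Danish law does not mandate
# this specific method.
from collections import defaultdict


def selection_rates(candidates):
    """candidates: iterable of (group, shortlisted) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in candidates:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {group: selected[group] / totals[group] for group in totals}


def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold x the highest rate."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)


sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(sample)
print(rates)                    # A ≈ 0.67, B = 0.25
print(flag_disparities(rates))  # ['B'] – flagged for human review
```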
In Danish workplaces, various technological tools and solutions have emerged to facilitate the evaluation and monitoring of employees.
Such tools can potentially harm employees. Data accuracy and reliability are crucial, as further discussed in 11. Specific Legal Issues With Predictive and Generative AI, and certain systems for emotional monitoring directed at employees are directly prohibited under the AI Act.
In a Danish context, the GDPR imposes strict requirements on the collection, processing and storage of personal data, including for evaluation and monitoring purposes. Employers must be transparent about the purpose and extent of monitoring, as further discussed in 11.4 Transparency, and implement measures to safeguard employee privacy. Failure to comply with these requirements can expose employers to liability. It is also important for employers to establish clear policies and guidelines regarding the use of technology for evaluating and monitoring employees.
Companies like GoMore, a Danish mobility operator, have harnessed the power of digital platforms to facilitate private car hire, leasing, and carpooling options. By utilising keyless access and real-time location tracking of available vehicles, GoMore makes it easier for platform users to plan their trips efficiently.
The food delivery sector in Denmark has also witnessed advancements due to digital platforms. Platforms like Wolt employ algorithms to optimise the delivery experience, for example by estimating the time required for restaurants to prepare customers’ food orders and calculating the time it will take for a courier partner to deliver it to the customer.
However, the rise of platform work has also posed regulatory challenges. The EU has taken the lead in addressing these concerns by adopting specific rules for digital labour platforms in a new directive. The directive will require platform workers to be informed about the use of automated monitoring and decision-making systems. It also prohibits the processing of certain types of personal data, such as emotional or psychological states, racial or ethnic origin, migration status, political opinions, religious beliefs, health status, and biometric data, except for data used for authentication purposes.
Denmark’s financial services sector is undergoing a significant digital transformation. Banks and insurance companies are embracing AI technology in relation to, inter alia, credit scoring, customer interfaces and standard AML practices. However, as they delve into the realm of AI, financial institutions are also recognising the need for caution when dealing with vast amounts of customer data. One significant concern is the potential for AI algorithms to make erroneous decisions or predictions due to biases inherent in the data. Biased data can result in discriminatory practices, such as biased loan approvals or pricing as further described in 11.1 Algorithmic Bias.
The DFSA has been proactive in addressing the challenges and risks associated with AI implementation. For example, the DFSA has published recommendations for financial institutions on the use of supervised machine learning, emphasising the importance of using AI responsibly. Furthermore, financial institutions must adhere to applicable data protection legislation.
The use of AI in healthcare has been rapidly increasing in recent years in Denmark, providing more efficient and effective care to patients. However, the World Health Organization (WHO) urges caution in the use of AI and has released guidelines on the use of AI in healthcare. These guidelines emphasise the importance of ensuring that AI is used in a responsible and ethical manner to ensure patient safety and privacy.
One of the potential risks associated with AI in healthcare is algorithmic bias; see 11.1 Algorithmic Bias.
AI is also being increasingly used in software as a medical device and related technologies such as wearables and mobile health apps. While these technologies have the potential to provide more personalised care, they also raise concerns about data privacy and the accuracy and reliability of the AI algorithms used. To mitigate these risks, it is essential to ensure that health data is collected and used in a responsible and ethical manner in compliance with applicable data protection legislation.
Robotic surgery is another area where AI is being used in healthcare. In Denmark, robotic (assisted) surgery has been widely used in gynaecology and other areas and is subject to applicable Danish law within the area of healthcare (eg, the Danish Health Act (Sundhedsloven)) and patient rights legislation concerning liability and damages.
A “self-driving” vehicle is a vehicle that can drive completely or partially without the assistance of a driver. In Denmark, experiments with small autonomous vehicles in public spaces have been permitted (upon prior authorisation) under the Danish Road Traffic Act (Færdselsloven) since 2017.
One of the major challenges in autonomous vehicle navigation is the AI’s ability to understand the social codes of traffic that enable human drivers to decide whether to take evasive action or keep driving, as emphasised in 2023 research from the Department of Computer Science at the University of Copenhagen (Datalogisk Institut). Danish liability law for road accidents operates on a no-fault basis; however, if an accident involves autonomous vehicles, liability might shift to the holder of the authorisation to experiment with such vehicles. Denmark might look to other legal frameworks, such as Britain’s approach, which aims to shift liability for self-driving vehicles away from the passengers and onto regulated licensed operators.
The use of autonomous vehicles also raises a number of data protection concerns, as it may involve the collection of personal data about drivers and passengers, which the EDPB highlighted in its 2021 guidelines on connected cars. Cars equipped with cameras in and around the vehicle may also process personal data. Both the recording and the subsequent processing of personal data by the car’s cameras are rarely known to anyone other than the driver, partly because no information about the recordings is provided outside the car. Even if the main purpose of the cameras is not to process personal data, the GDPR still applies as long as individuals are identified or identifiable. Car manufacturers working with connected cars must therefore ensure that the collection and processing of personal data comply with applicable data protection legislation.
AI is increasingly being integrated into manufacturing processes in Danish companies. Manufacturers are implementing intelligent automation, predictive analytics and machine learning algorithms to achieve reduced downtime and optimised use of materials, etc.
While there have not yet been any major developments in this area of regulation in Denmark, the requirements of the Danish Product Liability Act (Produktansvarsloven) are relevant for manufacturers using AI in manufacturing. Additionally, the updated Product Liability Directive (Directive 2024/2853) replaces the nearly 40-year-old existing directive, aiming to modernise product liability rules in line with advancing technologies. This revised directive addresses the complexities of digital products and extends liability for defects to software. However, its final interpretation will depend on how it is transposed into Danish law.
Concerning the proposed AI Liability Directive, see 3.1 General Approach to AI-Specific Legislation.
When AI technology is used in professional services, the general view in Denmark remains that it is the responsibility of the professional to ensure that the case is handled correctly, and that the advice is fact-checked. Use of AI will in this respect most likely not be subject to separate legislation; rather, existing rules and legislation for professional services will be understood to encompass AI as they do other technologies used in service provision.
In addition, the use of AI can raise questions about ownership and protection of intellectual property rights. The challenge is to determine the owner of creations made through AI technology, which is further discussed in 15.4 AI-Generated Works of Art and Works of Authorship.
Moreover, professionals must comply with data protection laws, such as the GDPR and the Danish Data Protection Act, to protect client privacy and prevent unlawful processing of personal data when using AI in professional services.
Protecting IP in the Development of Generative AI Models
The development of AI models involves the collection and use of large datasets for training the AI model. This data is often protected by copyright, which means that the collection of training data must comply with Danish copyright law. Input and output data generated by AI tools can also be protected by IP rights, depending on their nature and originality. Trademark infringement can also occur if AI tools use brand names and logos, particularly in image-generating models.
Influence of AI Tool Providers’ Terms and Conditions on Asset Protection
The terms and conditions of AI tool providers can have an impact on the protection of IP assets. For example, some AI tool providers may require users to grant them a licence to use their data for various purposes, which could potentially infringe the users’ IP rights. As with other technologies, users should always carefully review the terms and conditions of the relevant AI tool.
IP Infringements and Applicable Law
IP infringements can occur in the AI process, particularly in the collection of training data. While Danish copyright law generally protects the exclusive right of the author to dispose of their works, Denmark has recently recognised the importance of data mining and allows for exceptions to copyright law for text and data mining activities. As discussed in 3.6 Data, Information or Content Laws, while there are limitations on commercial text and data mining, the law generally allows for the use of works for text and data mining purposes without the need for prior permission from the author. This is a significant change that reflects the impact AI has on the legal framework.
Lack of Judicial or Agency Decisions on AI Technology and Inventorship
As of now, there have been no judicial or agency decisions in Denmark regarding whether AI technology can be considered an inventor or co-inventor for patent purposes, or an author or co-author for copyright and moral right purposes. However, under Danish patent law, the inventor must be a physical person. Therefore, it is doubtful that AI technology would qualify as an inventor or co-inventor under current legislation.
Similarly, current Danish copyright law stipulates that right holders must be physical persons, which excludes the possibility of AI technology being an author. Consequently, AI-generated works cannot be protected, leaving them without a designated copyright holder, which means that no one has the exclusive right to dispose of and produce copies of AI-generated works.
Human Input
While AI technology cannot be considered an inventor or author under current Danish law, it might be worth considering whether there are situations where human input is significant enough to justify copyright protection for AI-generated works. This is particularly relevant where the number of prompts used in the generative AI process is very significant, and the human input involved in creating and selecting those prompts is extensive.
In such cases, it may be argued that the human contribution to the AI-generated work is significant enough to meet the threshold for copyright protection under Danish copyright law; however, this issue needs to be explored further.
EU Case Law
The Danish legal perspective aligns with EU case law. In 2020, the European Patent Office (EPO) decided that the AI system DABUS could not be considered an inventor, as the European Patent Convention (EPC) requires the inventor to be a physical person. This decision reflects the prevailing view that AI technology does not qualify for inventorship or copyright authorship under current laws.
Applicability and Contractual Aspects of Trade Secret and Similar IP Rights for Protecting AI Technologies and Data
Trade secrets and similar intellectual property rights can cover different aspects of AI, such as algorithms, training data, and implementation details. To safeguard confidential information and trade secrets related to AI, companies may sign non-disclosure agreements (NDAs) with their employees.
Contractual Arrangements for Compliance with Danish IP Regulations
In addition, contractual arrangements play a significant role in ensuring compliance. Companies can protect their AI technologies and data by using contractual clauses that address specific IP issues, such as ownership, licensing, and infringement. These contractual provisions should be tailored to address the unique aspects of AI technologies and data, such as the ownership of AI-generated works, protection of proprietary algorithms, and use of data for training AI models.
Tailoring Contractual Provisions for AI Technologies and Data
Most importantly, contractual arrangements should be regularly reviewed and updated to ensure they remain relevant and aligned with developments regarding trade secrets in AI technologies and with the forthcoming Data Act.
Originality Requirements under Danish Copyright Law
Under the Danish Copyright Act, for a work to be eligible for protection, it must be original, meaning that it must be an expression of the author’s creative effort. Therefore, works that result from purely routine activities, such as automatic translations or simple text messages, are not original and are not eligible for protection.
Originality of AI-Generated Works
The question of whether AI-generated works meet the required level of originality has been a topic of discussion. It has been debated whether a machine can exercise a “creative effort” when it relies mainly on human input. AI-generated works are often created through algorithms and machine learning models, which raises the question of whether the machine or the human providing the input should be considered the author.
Authorship Requirements Under Danish Copyright Law
Another obstacle to granting intellectual property protection to AI-generated works is the authorship requirement under the Danish Copyright Act. The law currently prescribes that the author must be a physical person, excluding the possibility of an AI system being considered the author of its works. This means that AI-generated works cannot be protected, leaving them without a designated copyright holder and no exclusive right to dispose of or produce copies of the work.
Ownership of Works and Products Created Using OpenAI
One of the main issues – also in a Danish context – related to OpenAI’s tools is the lack of protection for works and products created using them. This means that ownership of the output is unclear, leaving it vulnerable to use by anyone. The lack of protection raises questions about who has the right to use, distribute or modify the output generated by the tools.
Infringement of IP Rights
Another significant issue related to OpenAI’s tools is the potential risk of infringing other parties’ IP rights. This risk is particularly high when copyrighted content is fed into the system without proper permission.
Confidentiality and Trade Secret Concerns
Additionally, concerns regarding confidentiality and trade secrets may arise when providing input to OpenAI’s tools. Users must ensure that they have the rights to any data or information fed into the system and that this information is not confidential. Failure to do so could result in legal action, including breach of contract claims, trade secret misappropriation claims and other related claims.
Addressing IP Issues When Using OpenAI
To mitigate the IP risks associated with OpenAI’s tools, users must take steps to ensure that they have the rights to use any input data and that they do not infringe other parties’ IP rights. Users should also consider entering into agreements with third-party content owners to obtain proper permission before using copyrighted content.
The Danish Competition and Consumer Authority (KFST) is monitoring developments closely, focusing on three main areas.
Denmark currently has no cybersecurity legislation directly and specifically addressing AI technology. Existing laws, such as the Danish Penal Code, address cybercrimes like hacking and malware distribution and are drafted in a manner whereby such actions can be prosecuted even when they leverage AI.
The Danish Centre for Cyber Security highlights risks of AI misuse, such as phishing and harmful code development.
Under the Danish Financial Statements Act (Årsregnskabsloven, LBKG 2024-09-23 nr. 1057), large companies are required to include a statement detailing their efforts and performance in relation to environmental, social and governance (ESG) criteria. Therefore, if a company implements AI in its daily operations and this integration affects its ESG performance, it should be covered in the company’s report – for example, where such initiatives make business processes more efficient and reduce energy consumption.
Alignment with National and International Guidelines
With regulatory guidance available and a clear timetable visible, organisations and companies in Denmark need to align their AI practices with the EU’s ethical and legal frameworks and the Danish government’s national strategies.
Practical Implementation of AI Ethics
For Danish industries using AI in relation to sensitive data or where the business is subject to specific risks, it is essential to stay ahead of potential reputational harm by translating abstract ethical principles into actionable practices. This involves integrating ethics into the AI development life cycle, from design to deployment, ensuring that AI systems are transparent, explainable, and aligned with societal values.
Data Governance and Privacy
Another key area is ensuring data quality, securing data storage and transmission, and respecting user privacy.
Capacity Building and Stakeholder Engagement
To effectively implement AI best practices, Danish organisations need to invest in building internal expertise and fostering a culture of continuous learning. Such capacity building should ensure capabilities across commercial, operational, technical and legal areas.
Enforcement and Oversight
Introduction
Currently, Danish legislation does not encompass specific provisions supplementing or implementing the regulation on artificial intelligence (the AI Act). Furthermore, Danish law lacks comprehensive general provisions pertaining to artificial intelligence (AI).
Nevertheless, initial steps towards national legislation governing AI are emerging, either through amendments to existing laws or the introduction of new statutes designed to supplement or implement the requirements established in the AI Act.
Although the AI Act constitutes a regulation, and is hence directly applicable across EU member states, several of its provisions necessitate that member states implement specific requirements and obligations nationally.
Consequently, Denmark, like all other EU member states, is obliged to supplement its national legislation in accordance with the regulation, effective from 2 August 2025, including matters pertaining to supervisory authorities and enforcement provisions. This has prompted a new proposal for a Danish AI Law.
Proposed Danish AI Law
Various authorities and organisations criticised the preliminary bill during the consultation process – for example, regarding whether its provisions ensure adequately harmonised and aligned regulation within Denmark, the enforcement provisions, and the mechanism for administrative appeal.
These concerns have resulted in several amendments to the bill, and it is reasonable to anticipate further modifications after the bill has been examined by a parliamentary committee.
Against this backdrop, this article examines the proposed Danish AI Law: its purpose, its relationship to the sectoral principle, and the designation of market surveillance and national competent authorities.
Examining the Danish AI Law
On 26 February 2025, the Danish Minister for Digital Government introduced a bill concerning the first Danish AI Law (Forslag til Lov om supplerende bestemmelser til forordningen om kunstig intelligens). Should it be enacted, the law is scheduled to come into effect on 2 August 2025.
The bill has passed its first reading and has been shared with relevant authorities, organisations and associations, which have returned consultation responses. The bill is currently under review by a parliamentary committee, with the second reading anticipated in mid-2025.
Purpose of the Danish AI Law
The purpose of the Danish AI Law is to establish the national framework for the enforcement, administration and oversight of prohibited AI systems as delineated in the AI Act. This bill proposes supplementary provisions to align Danish legislation with the parts of the Regulation that apply from 2 February 2025 and 2 August 2025.
Furthermore, certain provisions designate national competent authorities in accordance with the Regulation and provide such authorities with the power to, among other things, demand all information necessary for their supervisory activities.
The explanatory notes to the bill emphasise that additional legislative proposals will be introduced to address the remaining chapters of the AI Act, once such chapters apply. At this point, it is unknown how the content of such bills will develop.
The Danish AI Law and the sectoral principle
In Denmark, the implementation and supervision of legislation pertaining to the European Union’s digitisation agenda are carried out in accordance with a well-established sectoral principle. The same principle has been applied to the implementation of other legislation – for example, the NIS and NIS II Directives.
Under this approach, responsibilities are divided between various authorities and agencies based on their respective sectors, rather than centralising oversight of the entire legislative framework within one or two agencies. Until now, considerable speculation and uncertainty have surrounded the discussions on the structuring of the administration and oversight of the AI Act.
The proposed Danish AI Law provides clarity and insights concerning the governance and administration of the AI Act by designating specific authorities as the market surveillance bodies for each category of prohibited AI systems detailed in Article 5 of the AI Act. This approach aligns with the sectoral principle.
Market Surveillance Authority and National Competent Authorities
The sectoral principle is, however, not straightforward to apply in light of the AI Act’s requirements concerning national competent authorities.
Pursuant to Article 70(1) of the AI Act, each member state must establish or designate at least one notifying authority and one market surveillance authority as national competent authorities.
Moreover, in accordance with Article 28(1) of the AI Act, each member state must designate or establish at least one notifying authority, and in accordance with Article 70(2), third sentence, member states are required to designate a market surveillance authority to serve as a single point of contact for the Regulation.
As part of the consultation process, several of the involved stakeholders requested a clear division of responsibilities among the appointed market surveillance authorities. In response, the Ministry of Digital Affairs has allocated the categories of Article 5(1) of the AI Act among the authorities (see below).
Therefore, in accordance with Article 70(1) of the AI Act, and taking into account the relevant input from the involved stakeholders, the bill suggests the appointment of the following three authorities as market surveillance authorities concerning the oversight of prohibited AI systems: the Danish Data Protection Agency, the Agency for Digital Government and the Danish Court Administration (individually the “Danish Market Surveillance Authority” and collectively the “Danish Market Surveillance Authorities”).
Further, the Agency for Digital Government is proposed as the notifying authority in accordance with Article 28(1) and Article 70(1), as well as the single point of contact, cf. Article 70(2), third sentence, of the AI Act.
Finally, it is suggested that the Danish Market Surveillance Authorities be granted the necessary supervisory powers as part of their appointment. These powers should include, inter alia, the ability to request any information necessary for their supervisory activities, to conduct inspections and to issue injunctions or temporary prohibitions.
According to the proposed Danish AI Law, decisions made by the Danish Market Surveillance Authorities – such as issuing injunctions and temporary prohibitions – will not be subject to administrative appeal.
Decisions made by competent authorities, including their powers and remedies
The proposal to exempt decisions made by the Danish Market Surveillance Authorities from administrative appeal has faced substantial criticism during the consultation process.
However, according to the explanatory notes of the Danish AI Law, this measure is intended to ensure compliance with Article 70(1), second sentence, of the AI Act, which emphasises the need for the independence of national competent authorities.
Furthermore, pursuant to Clause 5 of the proposed Danish AI Law, the Ministry of Digital Affairs proposes provisions enabling market surveillance authorities to request all information deemed necessary for the authority’s supervisory functions from natural and legal persons covered by Article 2(1), subparagraphs a-f of the AI Act (in Danish: “… alle oplysninger, som er nødvendige for myndighedernes tilsynsvirksomhed…”).
It is emphasised that this information will assist the authorities in their supervisory activities, including determining whether specific matters fall within their jurisdiction.
Lastly, in Clause 6 of the proposed Danish AI Law, it is suggested that the Danish Market Surveillance Authorities have the right to access all (business) premises of the natural and legal persons referenced in Article 2(1), subparagraphs a-f of the AI Act.
Such access would be granted upon appropriate identification and would not require a court order, insofar as it is necessary to ensure compliance with the AI Act, the proposed Danish AI Law or any rules issued under the Danish AI Law.
According to the explanatory notes to the bill, the proposed provisions are intended to ensure efficient and expedient surveillance of compliance with the AI Act. Additionally, the Ministry of Digital Affairs assesses that the possibility of conducting inspections without a court order will have a deterrent effect on the spread of prohibited AI systems.
In its reply to the consultation responses, the Ministry of Digital Affairs stated that the remedies highlighted in Clauses 5 and 6 may only be used where proportionate and necessary for the activities of the market surveillance authorities. Further, an inspection without a court order would typically require that the market surveillance authority has first attempted to obtain the necessary information in accordance with Clause 5.
The supervisory powers and remedies of the Danish Market Surveillance Authorities are far-reaching and have predictably received some criticism during the consultation process.
Consultation process and comments on the proposed Danish AI Law
Prior to the first reading and introduction of the bill, sixteen authorities, organisations and other stakeholders – including professional bodies such as the industry association for Danish lawyers (Danske Advokater) and the Confederation of Danish Industry (DI) – submitted responses to the bill.
Such initial assessments and reviews led to several updates and amendments to the bill, and further amendments may be expected following its second and third readings, including relevant Q&A sessions in Parliament.
Positive feedback to the Danish AI Law
The appointment of the Agency for Digital Government and the Danish Data Protection Agency as market surveillance authorities has been positively received, notwithstanding the Agency for Digital Government’s limited experience with supervisory activities.
This choice appears to be the most suitable, primarily because the Danish Data Protection Agency is well-experienced in conducting oversight, and the subject matter falls within the remit of the Agency for Digital Government. As the area governed by the Danish Court Administration is quite limited, this article will not examine its appointment further.
Notwithstanding the initial enthusiasm and satisfaction regarding the appointment, the consultation responses highlighted several additional concerns.
Concerns regarding the allocation of responsibilities among the market surveillance authorities
A recurring concern raised by several stakeholders pertains to whether the “national competent authorities are provided with adequate technical, financial and human resources, and with infrastructure to fulfil their tasks effectively under this Regulation”, cf. Article 70(3) of the AI Act.
Within the area of artificial intelligence, effective oversight and enforcement require not only legal expertise but also strong technical understanding, which may place excessive pressure on the appointed authorities’ capacity and capabilities.
In recent years, as is often the case for public supervisory bodies, neither the capacity nor the resources of the Danish Data Protection Agency have fully matched the scope of its obligations. Several stakeholders therefore stressed the importance of ensuring that the necessary capacity and competencies are in place and of continuously assessing whether adequate and skilled resources are allocated – and, not least, whether the authorities can ensure a uniform approach.
Ensuring a uniform and aligned approach
Maintaining the sectoral principle may complicate efforts to ensure harmonisation and alignment among authorities, particularly once additional market surveillance authorities are appointed with responsibility for the oversight of high-risk AI systems (for instance, the Financial Supervisory Authority for the financial sector).
The Ministry of Digital Affairs has noted that the requirements relating to co-ordination between the supervisory authorities may be addressed through close collaboration and guidance, thereby not requiring supplementary legislative measures concerning this issue.
Previously, such co-ordination has proven difficult, underscoring the importance of establishing a framework driven by extensive co-operation among all stakeholders – potentially under the overarching responsibility of the Agency for Digital Government as a co-ordinating authority.
In the following, we provide a more comprehensive examination of the challenges arising from the Agency for Digital Government’s organisational framework and its direct subordination to the Ministry of Digital Affairs.
Independence of the Agency for Digital Government and the absence of administrative appeal
Several stakeholders have questioned the independence of the Agency for Digital Government, as it is organised differently from the Danish Data Protection Agency.
This issue arises from the Agency for Digital Government’s direct subordination to the Ministry of Digital Affairs and, ultimately, to the responsible minister. The Data Ethics Council (in Danish: “Dataetisk Råd”) and Danske Advokater accordingly noted that, unlike the Danish Data Protection Agency, which operates independently under the Danish Ministry of Justice, the Agency for Digital Government answers directly to the ministry, raising questions about its ability to act independently and without undue influence.
Several comments pointed out that Article 70(1) of the AI Act requires national competent authorities to operate independently, impartially and without bias. Yet, as highlighted in the consultation responses, the structural placement of the Agency for Digital Government may not sufficiently safeguard these principles in practice, especially considering its political subordination.
Although the Ministry of Digital Affairs subsequently clarified that the required independence would be functional – meaning that the agency cannot be instructed in specific cases – doubts remain as to whether this meets the higher threshold of independence envisioned under EU law.
Besides the concerns surrounding the Agency for Digital Government’s ability to operate independently, concerns have also been raised in relation to the exemption from the right of administrative appeal.
Exemption of administrative appeal
The removal of administrative appeal (Clause 3 of the proposed Danish AI Law) increases the burden on the judiciary and may result in a lack of effective redress mechanisms for affected businesses.
While the Ministry emphasises the continued availability of judicial review under Clause 63 of the Danish Constitutional Act, stakeholders such as the Danish ICT Industry Association (in Danish: “IT-Branchen”) and the Danish Trade Association for Insurance Companies and Pension Funds (in Danish: “F&P”) maintain that denying administrative review in a novel and highly technical regulatory area could impair legal certainty and compliance.
Additionally, the exemption of administrative appeal may create uncertainty regarding the regulation of AI, as the only remaining option would be to appeal through the courts – which may lack technical knowledge and be subject to long processing times.
This leads us to the final significant concern raised as part of the consultation process: the ability to conduct inspections without a court order.
Inspections without a court order and retrieval of data
The powers set out in Clauses 5 and 6 of the proposed Danish AI Law have already attracted criticism from several parties.
In particular, the power to conduct inspections without a court order is considered extensive and disproportionate, and potentially at odds with the rule of law.
Concerns regarding Clause 6 of the proposed Danish AI Law – which allows market surveillance authorities to conduct inspections without a court order – were raised by multiple stakeholders during the consultation process, including the Confederation of Danish Industry (in Danish: “Dansk Industri”), Danske Advokater and the Data Ethics Council.
While the Ministry has clarified that the exercise of these powers is subject to a necessity and proportionality assessment, and typically preceded by information requests under Clause 5, many stakeholders have highlighted that the bar for what constitutes necessity remains vague, leaving room for wide interpretation.
As Dansk Industri noted, the lack of judicial pre-approval for such inspections could run counter to Clause 72 of the Danish Constitutional Act, which protects the inviolability of the home and the secrecy of communications.
Moreover, IT-Branchen and others warned that Clause 5’s formulation – allowing the authorities to demand “all information necessary” – may in practice be interpreted broadly, leading to overreach and disproportionate demands on businesses.
These provisions, when viewed together, have been criticised for potentially creating a “chilling effect” on innovation and AI system development in Denmark, particularly if businesses are uncertain about the thresholds for intrusive state actions.
Still, no binding limitations or procedural safeguards – such as a requirement of judicial pre-approval or the involvement of an independent ombudsman – have been introduced. Consequently, concerns persist that these powers may be exercised with insufficient oversight or external checks.
Our assessment of the next steps and where this leaves Danish enforcement and oversight of AI technology
In general, the proposal includes several well-founded considerations, such as adherence to the sectoral principle through the delegation of supervisory authority across various authorities.
Conversely, it appears both risky and overly optimistic to expect the authorities to engage in effective internal co-ordination, including discussions of principal issues and interconnected concerns, to achieve a harmonised approach and consistent practice within a short timeframe.
Issues relating to ensuring a harmonised approach
We see a substantial risk that, firstly, companies, authorities and organisations will be forced to seek guidance from multiple market surveillance authorities in order to obtain a comprehensive perspective and, secondly, that such guidance – and even actual oversight and enforcement – will be unaligned. There is a definite risk that a lack of transparency will be the norm for some time.
Bearing this lack of alignment in mind, and with regard to the remedial measures and authorities of the Danish Market Surveillance Authorities, it is of some concern that the Danish Market Surveillance Authorities may be allowed to conduct inspections without court orders.
Extensive enforcement actions
Although the Danish Ministry of Digital Affairs has underscored that all necessary information must be requested before conducting such inspections, it further specified that a request for information is only “generally” a prerequisite, thereby creating a pathway for inspections without a court order in circumstances where the authorities have not first attempted to obtain the information through formal requests.
Overall, this leaves businesses, authorities and organisations in a state of uncertainty. Not only are expectations unclear, but entities subject to actual oversight may also face significant inspection measures.
Increasing the caseload
Further, this should be viewed in light of the Danish lawmakers’ decision to exclude the possibility of administrative appeal, meaning companies, authorities and organisations are forced to consider bringing matters to trial. In recent years, the Danish courts have struggled with significant caseloads and subsequent lengthy processing times.
Adding disputes arising in the evolving field of AI, many of which involve complex technical issues, is likely to exacerbate these delays. This could result in a backlog of AI-related cases, leaving businesses in an uncertain legal position for an extended period.
Conclusion
As highlighted throughout this article, the bill has only passed the first reading, and the final Danish AI Law could therefore diverge significantly in respect of several provisions once enacted. Even so, it will be interesting to follow the legislative process, including the Q&A sessions, which may shed light on the considerations of the legislative authorities during this period.
Kalkbrænderiløbskaj 8
2100 Copenhagen Ø
Denmark
+45 72 24 12 12
denmark@twobirds.com www.twobirds.com/da/reach/nordic-region/denmark