Contributed By Bird & Bird
Denmark has not at this time implemented any specific regulation governing artificial intelligence (AI). However, where applicable, AI is naturally subject to Danish law, meaning that in many cases AI must be developed and used in accordance with existing Danish legal principles and enacted legislation. This applies across numerous legal areas; only a few are highlighted below. In a Danish context, both the authorities and the private sector have focused primarily on the following.
Data Protection
The General Data Protection Regulation (GDPR) is supplemented by the Danish Data Protection Act (Lovbekendtgørelse 2024-03-08 nr. 289). The Danish Data Protection Agency has issued a guide on public authorities’ use of personal data throughout the life cycle of an AI system (see 3.3 Jurisdictional Directives).
Intellectual Property Law and Trade Secrets
The use of AI is also subject to the Danish regulation of intellectual property rights, including but not limited to the Danish Copyright Act (Lovbekendtgørelse 2023-08-20 nr. 1093) and the Danish Trade Secrets Act (Lov 2018-04-25 nr. 309 om forretningshemmeligheder). For example, AI systems may generate data constituting a trade secret under the Danish Trade Secrets Act, which requires the AI system to have reasonable protective measures in place to secure the necessary level of confidentiality.
Employment Law
Employers must ensure that any use of AI complies with Danish employment legislation and applicable collective bargaining agreements; the latter, in particular, is characteristic of Danish law. This is relevant, for example, where a company intends to use AI tools as part of its recruitment process – eg, CV-sorting tools. If AI tools are used in the recruiting process, they must not discriminate on the basis of criteria prohibited by the Danish Employment Non-discrimination Act (Lovbekendtgørelse 2011-06-08 nr. 645).
Although Denmark has not (yet) adopted any specific AI regulation, various non-binding initiatives concerning the regulation and use of AI have been launched, inter alia focusing on ensuring the responsible and ethical development and use of AI. This is further detailed in 2.2 Involvement of Governments in AI Innovation.
Generative AI
Following the widespread adoption of chatbots powered by large language models (LLMs), particularly ChatGPT, Danish businesses across various industries are increasingly deploying this technology. For established organisations, the focus is on secure, private deployments. These companies leverage private cloud tenants to whitelist and utilise LLMs (such as ChatGPT and M365 Copilot) within their own secure environments, thereby ensuring data confidentiality.
Initially, some organisations expressed scepticism and concerns about integrating AI solutions into daily operations. However, this hesitancy has generally been replaced by a willingness to embrace AI’s potential, recognising it as a crucial competitive advantage. It is important to note that the adoption rate varies significantly across industries, and even among individual companies within the same sector.
Predictive AI
In recent years, while generative AI has gained more public awareness and significant traction in various industries, predictive AI has been playing an increasingly important role. Companies are less public about their use of predictive AI systems due to competitive considerations, immature governance setups, uncertainties regarding liability and the need to protect their commercial interests.
For example, in the Danish medical industry and healthcare system, predictive AI is already being utilised or at least actively explored. One application is in medical image analysis, where it helps radiologists prioritise critical cases.
In February 2024, all political parties in the Danish Parliament agreed on a new national digitalisation strategy for 2024-2027. The strategy consists of 29 initiatives, several of which focus on AI. These include ensuring a “[r]esponsible and strong foundation for utilising artificial intelligence” and potentially investing in and training a Danish language model.
Regulatory Sandbox
Additionally, funds have been allocated to establish a regulatory sandbox aimed at providing companies and public authorities with guidance on the GDPR when developing or using AI solutions – eg, by providing relevant expertise free of charge. The regulatory sandbox is a collaboration between the Danish Data Protection Agency and the Agency for Digital Government.
The regulatory sandbox aims to support innovation and the use of AI solutions and to ensure a swifter process from development to operation of AI systems, including reducing any uncertainties surrounding the regulatory framework of such AI systems.
A project forming part of the regulatory sandbox is expected to last between three and six months.
The Danish Approach
As detailed in 5.2 Technology Definitions, the Danish approach tends to align closely with EU legislative texts, making the specific Danish legislative stance best described as agnostic, even if the Danish debate does not always reflect this.
EU Artificial Intelligence Act
It is unclear how the Danish opt-out on EU justice and home affairs will affect the AI Act, given that it is a regulation directly applicable across EU member states. Unless Danish legislators decide to implement the parts of the regulation covered by the Danish opt-out, no specific AI legislation related to the AI Act is expected. This issue is briefly discussed in 3.4.2 Jurisdictional Conflicts.
EU AI Directives
The pending EU AI Liability Directive and the revised EU Product Liability Directive will require implementation in Denmark. As the directives are pending finalisation, it is too early to say anything specific about their Danish implementation.
However, it will be interesting to see how the EU AI directives influence the Danish legal landscape, particularly concerning fault-based liability, such as the new rules on the burden of proof, including a presumption of a causal link between a defect and the AI system’s output. In Denmark, fault-based liability and damages are in many cases determined on a non-statutory basis, which may have to change in relation to AI.
In the coming period leading up to the enforcement of the EU AI Act and the finalisation of the aforementioned directives, Denmark’s approach to regulating AI is expected to be clarified further.
As described in 3.1 General Approach to AI-Specific Legislation, no AI-specific legislation has been enacted in Denmark. However, various public authorities have issued non-binding White Papers or guidelines with the aim of providing companies within their sector or domain with relevant non-binding guidance (see 3.3 Jurisdictional Directives).
Guidelines Issued by Public Authorities
The Danish Financial Supervisory Authority (DFSA) and the Danish Data Protection Agency (DDPA) have published guidance in relation to the use of AI. The White Paper issued by the DFSA focuses on providing tools and inspiration for companies within the financial sector regarding data ethics when applying AI. The White Paper should merely be seen as guidance and does not impose new requirements on companies.
Danish Data Protection Agency Guidelines
The DDPA has published guidelines for other public authorities, specifically geared towards municipalities and the Regions (administrative units), on handling AI technology in accordance with applicable data protection legislation.
The guidance focuses on ensuring compliance with data protection regulation throughout the life cycle of an AI system, ie, from the development phase through to the operation phase.
Different Phases
The guidelines distinguish between three phases in a public authority’s use or development of an AI system.
The guidelines outline that it is essential to consider which phase you are in and how personal data is incorporated into the AI system in order to ensure compliance with the data protection rules, as – in particular – the purpose, lawfulness and legal basis can change depending on the phase.
Supporting the Development of AI While Ensuring Compliance
As outlined in previous sections, a key focus is to prevent the guidelines from becoming onerous for the development of relevant AI systems, while continuously ensuring that such systems comply with data protection regulations.
As briefly touched on in 3.1 General Approach to AI-Specific Legislation, Denmark is awaiting the entry into application of the AI Act, as well as the EU AI Liability Directive and the EU directive on liability for defective products.
Generally, Denmark welcomes the AI Act and has not taken local steps that might duplicate or conflict with it. The Danish digitalisation strategy and the establishment of the regulatory sandbox are signs that Danish legislators are eagerly awaiting the harmonisation brought by the AI Act.
Due to its opt-out on justice and home affairs, Denmark has reservations regarding EU law in areas such as criminal justice and police operations. This means that the parts of the AI Act regulating law enforcement authorities’ use of facial recognition (including biometric categorisation systems, predictive policing and remote biometric identification systems) will not apply in Denmark.
The use of AI-based facial recognition by public authorities, including the police, is becoming increasingly debated, as also discussed in 11.3 Facial Recognition and Biometrics.
There is no applicable information in this jurisdiction.
Implementation of DSM Directive Article 4
Denmark has amended its copyright law in recent years to accommodate developments in AI technology, particularly in data mining. One notable development is the implementation of Article 4 of the DSM Directive, on exceptions and limitations for text and data mining, into Sections 11b and 11c of the Danish Copyright Act.
Exceptions for Text and Data Mining
Previously, data mining could potentially infringe copyright, as it involves reproducing and analysing copyrighted material without permission from the creator. However, with the introduction of Sections 11b and 11c, Denmark has recognised the importance of data mining and now enables such activities through exceptions to copyright law, provided lawful access to the copyrighted material has been obtained in the first place. As a general rule, under the new sections, authors cannot oppose the use of their works for text and data mining.
Reservation for Text and Data Mining
While text and data mining may be carried out for research and AI development purposes without prior permission, right holders can prohibit commercial text and data mining by stating so in a machine-readable manner, including in metadata or in the terms and conditions for the use of a website or service. In such cases, text and data mining may only lawfully take place under an agreement with – and possibly against payment to – the right holders.
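By way of illustration only, the sketch below shows how a data miner might check for such machine-readable reservations before processing a web page. It is a minimal sketch under stated assumptions: it assumes two common (but not legally mandated) signalling mechanisms – robots.txt and the W3C TDM Reservation Protocol (TDMRep) “tdm-reservation” meta tag – and uses hypothetical URLs; the Danish Copyright Act itself does not prescribe any particular technical format.

```python
# Minimal, illustrative check for machine-readable TDM reservations.
# Assumptions: the site signals opt-outs via robots.txt and/or the W3C
# TDM Reservation Protocol (TDMRep) meta tag; URLs are hypothetical.
from urllib import robotparser
from urllib.request import urlopen
from html.parser import HTMLParser


class TDMMetaParser(HTMLParser):
    """Collects the value of <meta name="tdm-reservation" content="...">."""

    def __init__(self):
        super().__init__()
        self.reservation = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "tdm-reservation":
            self.reservation = attrs.get("content")


def tdm_mining_permitted(page_url: str, robots_url: str, agent: str = "*") -> bool:
    # 1. Respect robots.txt, one common machine-readable signal.
    robots = robotparser.RobotFileParser()
    robots.set_url(robots_url)
    robots.read()
    if not robots.can_fetch(agent, page_url):
        return False
    # 2. Respect a TDMRep reservation declared in the page metadata;
    #    content="1" means rights are reserved and mining requires an agreement.
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    parser = TDMMetaParser()
    parser.feed(html)
    return parser.reservation != "1"


# Hypothetical usage:
# if tdm_mining_permitted("https://example.dk/article", "https://example.dk/robots.txt"):
#     ...proceed with commercial text and data mining...
```

Whether a given signal satisfies the “machine-readable” requirement will ultimately depend on the circumstances; the point of the sketch is simply that the reservation must be detectable by automated means.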
Proactive Approach
The amendments to the Danish Copyright Act demonstrate a proactive approach to fostering AI technology through data mining while still upholding the principles of copyright protection and the rights of authors.
See 3.1 General Approach to AI-Specific Legislation.
Currently, the courts in Denmark have not issued any judicial decisions with respect to generative AI and intellectual property rights. However, the DDPA has issued opinions and decisions regarding public authorities' use of AI systems (see 7.2 Judicial Decisions).
Reasons for Lack of Judicial Decisions
It is difficult to provide a definitive answer as to the lack of judicial rulings; however, a number of (non-exhaustive) factors may be relevant.
No relevant judicial decisions have yet been issued in Denmark; however, see 5.2 Technology Definitions for more information on the Danish approach to judicial matters.
Implementation and oversight in Denmark for other areas of law related to the EU digitisation agenda (eg, NIS I) have been carried out in accordance with a well-established sector principle. This means that responsibilities are divided among authorities or agencies based on the sector, rather than having the entire legislation overseen by one or two agencies.
However, for AI, the Danish Agency for Digitisation was on 11 April 2024 appointed as the national co-ordinating supervisory authority for the EU AI Act. The Danish Agency for Digitisation has historically been responsible for ensuring stability in Danish public IT projects and systems and, more recently, for digital infrastructure directed at citizens.
Danish authorities and agencies do not use a single definition of AI technology. Where their guidance addresses the definition of AI, it typically points to the definitions in the AI Act or to earlier discussions in the EU legislative process.
Danish Tendency to Align with EU Law
There is a growing tendency in Danish lawmaking to adopt technical definitions from EU directives directly or with minimal amendments. Overall, Denmark seeks to implement regulations closely aligned with the original EU texts, often looking to the EU for clarity on definitions rather than expanding or interpreting them nationally. Therefore, businesses operating under Danish jurisdiction would benefit from aligning themselves with EU definitions, whether dealing with a single technology or multiple.
As stated in 5.1 Technology Definitions, agencies in Denmark focus narrowly on their area of responsibility.
The Danish Agency for Digitisation has historically focused on implementing digital infrastructure. Co-ordinating uniform messaging and enforcement of AI regulations across other authorities and agencies will be a key challenge. However, the Danish Agency for Digitisation is generally expected to have a constructive attitude towards AI, viewing it as a benefit to Danish society.
For more details, see 3.2 Jurisdictional Law and 3.3 Jurisdictional Directives.
No notable enforcement actions directly aimed at AI have yet been taken in Denmark. As outlined in 5.3 Regulatory Objectives and 3. AI-Specific Legislation and Directives, Danish agencies and authorities are keenly interested in the use of AI technology but are currently focused on providing guidance.
The lack of notable enforcement action is also tied to the status of the AI Act, with Danish authorities seemingly preferring not to introduce substantial local legislation ahead of the Act, as discussed in 5.2 Technology Definitions on the Danish legislator’s approach.
Certain Danish standards are set for IT security and responsible use of data, such as the D-mark (D-mærket), which is an industry initiative from, among others, the Danish Industry Foundation in collaboration with the Confederation of Danish Industry and the Danish Chamber of Commerce.
This standard does not yet include specifics related to AI, and no other significant Danish standards have been established as of yet. However, the expectation is that the industry will update the D-mark or set a similar standard in due course.
While government authorities have provided guidance (see 3. AI-Specific Legislation and Directives and 5. AI Regulatory Oversight), they have not set any standards, nor are there apparent plans to do so.
International standards such as ISO and IEC will most likely make an important contribution to shaping local Danish standards in an operational sense, where legislative measures from the EU or from government authorities and agencies do not set out detailed norms.
Denmark’s cautious approach to national lawmaking in cybersecurity, technology and related fields where the EU has set out legislation (see 5.2 Technology Definitions) has the implicit effect that international standards have become indirectly significant for many Danish industry actors seeking alignment with other commercial entities.
Currently, AI in Denmark primarily serves as a supportive measure for expert assessments rather than making administrative decisions. Denmark has for years aimed to automate and increase the efficiency of public administration, particularly in the taxation area.
Property Valuation System
Most recently, the Danish Tax Agency’s roll-out of the new property valuation system aimed to automate the calculation of property valuations and property value tax, a move which has been much debated.
Issues when Utilising AI in Public Administration
In addition to other applicable legislation, such as the GDPR, the public authorities must adhere to the Danish Public Administration Act (Lovbekendtgørelse 2014-04-22 nr. 433), including good administration practices and legal doctrines, when using AI in their administrative decisions, for example, as part of their expert assessments.
The leading opinion is that the principles of administrative law are technology-neutral, which in some cases imposes high requirements on the use of AI in public administration. This includes compliance with general administrative law principles.
According to these principles, it must be possible to document that an AI solution has included all relevant and necessary information and has only considered fair and objective factors in its assessment.
There is no case law regarding the use of AI. However, the DDPA has issued an opinion of particular relevance regarding public authorities’ use of AI.
Use of AI-based Profiling Tool
After a request from the Danish Agency for Labour Market and Recruitment (Styrelsen for Arbejdsmarked og Rekruttering – STAR), the DDPA issued an opinion regarding the municipalities’ legal basis to use an AI profiling tool (ASTA) designed to predict the likelihood of a citizen becoming long-term unemployed.
ASTA was not developed to issue automated decisions but merely to support decision-making, providing recommendations for relevant initiatives to social officers.
Legal Basis for Processing Activities When Using AI
In its opinion, the DDPA outlined that the requirements for the clarity of the necessary legal basis for processing personal data depend on how intrusive the processing activity in question is for the data subject: the more intrusive the processing, the stricter the requirements for the clarity of the legal basis, and vice versa. In the specific case, the use of ASTA was assessed to constitute an intrusive processing activity, necessitating a high level of clarity regarding the legal basis for the processing.
In general, and as elaborated in its guidance, the DDPA highlighted that the mere use of AI solutions by public authorities should not be deemed intrusive. However, citizen-focused use of such AI solutions often impacts citizens’ life situations, meaning the AI solution’s processing of personal data will typically be considered intrusive.
The Danish Centre for Cyber Security (Center for Cybersikkerhed – CFCS) published a new threat assessment in March 2024 describing how hackers may misuse generative AI.
The updated assessment focuses on how hackers use generative AI to create phishing emails or to develop components of malicious code. It is currently unclear to what extent the technology is being misused, but the CFCS highlights its significant negative potential.
Despite the new threats emerging since the widespread availability of generative AI, the CFCS has not changed its overall assessment of the cyber threat towards Denmark.
Generative AI and Issues Raised
One of the main issues is the lack of transparency in the decision-making process of the AI, making it difficult to identify and correct errors. Additionally, the use of generative AI to create realistic deepfakes raises questions about privacy and cybersecurity.
Addressing the Issues
In addition to those mentioned above, Danish policymakers are taking various steps to address these issues.
IP Protection for AI Assets
In the AI business, it is crucial to understand how IP protection can be achieved throughout the process, as know-how and costs accumulate not only in the final product but also in the creation process. Assets in the AI process that can be IP-protected include AI models, training data, input (prompts) and output. However, at present, works created autonomously by AI fall outside the protection of Danish patent and copyright law. Furthermore, the terms and conditions of the AI tool provider can influence the protection of assets with respect to the input and output of the generative AI tool.
Potential IP Infringements
There is a risk of IP infringements under Danish copyright law with respect to the models or the training, input or output data. If AI-generated works were granted copyright protection, there are concerns that the number of protected works might explode due to AI’s high productivity. Conversely, if no rights are granted to AI-generated works, free-riding may become frequent, and third parties could freely use AI output even though its development required work and cost. This could result in a loss of motivation for AI research.
For privacy, see 8.3 Data Protection and Generative AI.
Protecting IP in the Development of Generative AI Models
The development of AI models involves the collection and use of large data sets for training the AI model. This data is often protected by copyright, which means that the collection of training data must comply with Danish copyright law. Input and output data generated by AI tools can also be protected by IP rights, depending on their nature and originality. Trademark infringements can also occur if AI tools use brand names and logos, notably in image-generating models.
Influence of AI Tool Providers’ Terms and Conditions on Asset Protection
The terms and conditions of AI tool providers can have an impact on the protection of IP assets. For example, some AI tool providers may require users to grant them a licence to use their data for various purposes, which could potentially undermine users’ IP rights. Users should – as with other technologies – always carefully review the terms and conditions of the relevant AI tool.
IP Infringements and Applicable Law
IP infringements can occur in the AI process, particularly in the collection of training data. While Danish copyright law generally protects the exclusive right of the author to dispose of their works, Denmark has recently recognised the importance of data mining and allows for exceptions to copyright law for text and data mining activities. As discussed in 3.6 Data, Information or Content Laws, while there are limitations on commercial text and data mining, the law generally allows for the use of works for text and data mining purposes without the need for prior permission from the author. This is a significant change that reflects the impact AI has on the legal framework.
The intersection of data protection and generative AI raises concerns about individuals’ rights and the appropriate use of personal data. Denmark’s focus has been on the right to rectification and erasure of personal data. Purpose limitation and data minimisation are crucial in complying with applicable Danish laws, as they strike a balance between AI development and protecting individuals’ privacy.
The right to rectification may involve correcting inaccuracies in the output or ensuring that future iterations of the AI model no longer produce false claims, which, in practical terms, is extremely difficult.
The right to erasure, also known as the “right to be forgotten”, enables data subjects to request the deletion of their personal data. However, applying this right in the context of generative AI can be complex. Deleting the entire AI model may not be necessary, especially if the model generates outputs unrelated to individuals. Instead, a more targeted approach may be required, such as deleting or anonymising specific personal data within the AI model.
With regards to purpose limitation, generative AI models should be designed with clear purposes, ensuring data subjects are aware of how their personal data will be used. Transparency is essential to maintain trust and protect individuals’ rights, as also emphasised by the DDPA in its October 2023 guidelines on the development and use of AI by public authorities (see 3.3 Jurisdictional Directives). In particular, the guidelines advise public authorities to consider several factors before starting to develop AI models, such as the legal basis for processing, the duty to inform data subjects about the processing of their personal data, and the need to conduct risk assessments.
In the context of generative AI, data minimisation is especially important to prevent excessive collection and retention of personal data. Techniques such as data anonymisation and aggregation can be employed to minimise reliance on identifiable personal data while achieving desired functionalities in AI models.
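As a purely illustrative sketch of what such techniques can look like in practice, the following Python snippet (using the pandas library and hypothetical column names and values) removes direct identifiers, generalises quasi-identifiers and aggregates records. It illustrates the principle only; whether data is truly anonymised under the GDPR requires a case-by-case re-identification risk assessment.

```python
# Illustrative data-minimisation sketch: remove direct identifiers,
# generalise quasi-identifiers, and aggregate before analysis or training.
# Column names and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "name":       ["Anna", "Bo", "Carl", "Dorthe"],
    "cpr_number": ["010190-1234", "020285-5678", "030380-9012", "040475-3456"],
    "age":        [34, 39, 44, 49],
    "postcode":   ["2100", "2200", "2100", "8000"],
    "outcome":    [1, 0, 1, 0],
})

# 1. Drop direct identifiers the model does not need.
minimised = records.drop(columns=["name", "cpr_number"])

# 2. Generalise quasi-identifiers (coarse age bands instead of exact ages).
minimised["age_band"] = pd.cut(
    minimised["age"], bins=[0, 40, 60, 120], labels=["<40", "40-59", "60+"]
)
minimised = minimised.drop(columns=["age"])

# 3. Aggregate where individual-level records are unnecessary for the purpose.
aggregated = minimised.groupby(["age_band", "postcode"], observed=True)["outcome"].mean()
print(aggregated)
```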
Given the rapid development of AI technologies, the DDPA has prioritised AI as a focus area for its supervisory activities in 2024. Consequently, further guidelines and initiatives from the agency can be anticipated in the near future.
Regulation of AI in Law by Local Organisations
The use of AI in law is currently subject to regulation by local organisations such as the Danish Bar and Law Society, which are tasked with ensuring that AI and its use in the legal field adhere to ethical and professional standards.
Establishment of a Working Group
In 2023, the Association of Danish Law Firms established a working group to identify and describe the use of AI in the legal profession. The aim of this group is to highlight the potential challenges that may arise from the use of AI in law and to propose strategies to address them.
AI in Litigation
Danish legal practitioners are increasingly relying on AI-driven tools for tasks such as document review and legal searches, which offer automated support and facilitate more efficient and cost-effective case preparation. However, given the novelty of AI in litigation, no specific rules or regulations currently exist.
Ethical Concerns
The use of AI in law raises significant ethical concerns, particularly with regards to the potential reduction in human judgment and accountability. This could threaten core values of the legal profession, including fairness and justice. To address this, organisations such as the Danish Bar and Law Society must continue to monitor and regulate the use of AI in law to maintain ethical and professional standards.
Liability for Personal Injury and Commercial Harm Resulting from AI-Enabled Technologies
AI-enabled technologies have the potential to cause personal injury or commercial harm, raising questions about liability and responsibility. In Denmark, there is currently no specific regulation; however, see 3.1 General Approach to AI-Specific Legislation regarding forthcoming legislation on liability for AI-related injuries.
Theories of Liability and Requirements for Imposition of Liability
Theories of liability for personal injury or commercial harm resulting from AI-enabled technologies include product liability, negligence and strict liability. To impose liability, it must be shown that the AI technology caused the harm, that the operator owed a duty of care, and that this duty was breached.
Role of Human Guidance and Allocation of Liability
The role of human guidance is also important in determining liability resulting from AI. An operator who is merely assisted by AI-enabled technology has greater influence over the outcome than an operator whose function has been completely replaced by an AI system; hence, the former is subject to a stronger presumption of liability.
Insurance
Insurance plays a critical role in managing the risks associated with AI-enabled technologies, and it is essential to determine the scope of coverage and exclusions in insurance policies for AI-related claims. At present, however, the discussion of insurance coverage for AI-enabled technologies remains purely theoretical, as no publicly available information or practical industry discussion exists in Denmark.
The European Commission proposed an AI Liability Directive in September 2022 (see 3.2 Jurisdictional Law).
Algorithmic Bias in the Public and Private Sector
In the public sector, biased algorithms could lead to unequal access to healthcare services, misdiagnosis or inappropriate treatment recommendations. Similarly, in the private sector, algorithmic bias can have severe consequences; for example, biased loan algorithms may disproportionately deny credit to certain groups. Danish legislators and authorities have historically been assertive in safeguarding citizens’ and consumers’ rights, and this approach is expected to continue with respect to any use of AI.
Liability Risks
Although Denmark has not yet implemented specific regulations targeting algorithmic bias, the DDPA actively monitors developments in AI and provides guidance to organisations on complying with existing laws. Furthermore, the Danish Agency for Digitisation has taken a significant step by creating a strategy for AI, setting a roadmap for Denmark, as well as publishing guidelines on the responsible, non-biased use of generative AI.
Risks
AI models typically rely on large data sets to train and improve their algorithms, thus the principle of data minimisation poses significant challenges in the context of AI technology. Companies may face increased risks of using data for unintended purposes, processing information beyond the scope of the data collection, and retaining data for longer than necessary. Striking a balance between data minimisation and the effectiveness of AI algorithms is a complex challenge. To address this, companies need to ensure that they have a legitimate basis for collecting and processing personal data. However, identifying and applying a legal basis is complex, as has also been highlighted by authorities in other countries, for example in relation to ChatGPT in Italy.
Possibilities
On the other hand, AI technology also offers several advantages in terms of personal data protection. It can serve as a privacy-enhancing technology that helps companies meet their data protection obligations. For example, AI can generate synthetic data that mimics real-world data, helping to train machine learning algorithms without exposing actual personal data. Synthetic data can also help mitigate algorithmic bias by using fair synthetic data sets that are manipulated to avoid, for example, gender or racial discrimination. AI can also provide a more robust defence against cyber threats and mitigate data breaches by layering security measures with advanced threat detection, pattern analysis and faster response times.
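A toy sketch of the synthetic-data idea is shown below. It fits a simple multivariate Gaussian to (here, simulated) “real” numeric data and samples new records that mimic its overall statistics without copying any individual row. This is a stand-in for the more sophisticated generators used in practice, and all figures and column meanings are hypothetical; synthetic output should be tested for both utility and re-identification risk before being treated as non-personal data.

```python
# Toy synthetic-data sketch: fit a simple distribution to real data and
# sample artificial records with similar overall statistics. All figures
# and column meanings are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for real numeric features: age and annual income (DKK).
real = np.column_stack([
    rng.normal(45, 12, size=1_000),
    rng.normal(420_000, 90_000, size=1_000),
])

# Fit a multivariate Gaussian to the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and draw synthetic records that mimic its statistics without
# reproducing any individual row.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)
print(synthetic[:3].round(0))
```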
Under the GDPR, facial recognition relies on the processing of biometric data, which is a special category of personal data. The GDPR generally prohibits the processing of biometric data unless explicit consent or another legitimate justification under the GDPR or other legislation is obtained. Companies must therefore identify a legal basis under the GDPR and implement strong security measures to protect the biometric data they collect. To mitigate the associated risks, companies should conduct thorough risk assessments, carry out regular audits, document their data processing activities and provide clear information to the individuals whose biometric data is collected.
In Denmark, the DDPA has the authority to authorise the processing of biometric data by private organisations if it is necessary for reasons of substantial public interest. For instance, in June 2023, the Danish football club Brøndby IF obtained authorisation to use facial recognition technology during matches, including those held in other stadiums, after applying for extended use of its system.
There is an ongoing debate about the use of facial recognition in Denmark. For example, the use of facial recognition by the Danish police in public places has been the subject of recent debate, also in light of the AI Act, which prohibits real-time facial recognition in public places for law enforcement purposes (see 3.4.2 Jurisdictional Conflicts).
Companies using automated decision-making, including profiling, must comply with the GDPR. According to Article 22 of the GDPR, individuals have the right not to be subject to a decision based solely on automated decision-making, including profiling, if it would have a significant adverse effect on the individual. However, there are exceptions where such processing is necessary for the conclusion or performance of a contract, is authorised by other legislation, or explicit consent has been obtained from the individual subject to the automated decision-making. If no exception applies, the decision must be made with human intervention. In addition, companies may be required to conduct an assessment of the impact of their automated decision-making processes on the protection of personal data. However, when automated decision-making is used to process special categories of data, including biometric data, explicit consent must be obtained or the processing must be necessary for reasons of substantial public interest, as further discussed in 11.3 Facial Recognition and Biometrics.
However, as also highlighted by the DDPA in its guidelines on the development and use of AI by public authorities, and as further discussed in 3.3 Jurisdictional Directives and 8.3 Data Protection and Generative AI, it may be difficult to obtain valid consent for the processing of personal data in the context of complex AI models. Often, there will be a clear imbalance in the relationship between the data subject and the data controller. For example, if the processing of personal data, such as benefit claims, has an impact on the individual's life situation – whether real or perceived – the individual’s consent cannot be considered freely given. In addition, consent must be specific and informed, and it must be possible to manage the withdrawal of consent and stop the processing of personal data. This can be challenging in complex AI models where data is processed in many different ways, as it is crucial for the validity of consent that the individual understands what their data is being used for and can opt out of those purposes. The more data is to be used, including for automated decision-making, the more difficult it will be to meet the conditions for valid consent.
In Denmark, the use of AI technologies, including chatbots, as a replacement for services rendered by natural persons is subject to the GDPR. Articles 13, 14 and 15 of the GDPR set out the transparency rules and require data controllers to inform individuals about the processing of their personal data, including when AI is involved. The specific information to be provided depends on how personal data is collected; ie, data may be collected directly from the data subject, for example through a job application, or the data may be collected through a third party. In both cases, the individual must be informed of the purpose and use of his or her data, as well as of any proposed new use of that data.
Dark Patterns
However, the use of technology to manipulate consumer behaviour or make undisclosed suggestions, also commonly known as “dark patterns”, raises concerns as it makes it difficult for individuals to make informed choices about their personal data. Dark patterns could entail sharing personal information without clear intent or making purchases by mistake. These practices are often considered unfair under the Danish Marketing Practices Act (Markedsføringsloven).
The Digital Services Act also addresses the issue of dark patterns through its provisions prohibiting online platforms from using dark patterns that manipulate users into making decisions that are not in their best interests.
Price algorithms can be an effective tool for companies to set prices, and the Danish Competition and Consumer Authority (Konkurrence- og Forbrugerstyrelsen) has recognised the need to address issues related to the use of AI in price-setting.
Generally, price algorithms can be categorised into three types.
What these algorithms all have in common is that customers usually encounter the same price. In contrast, price discrimination occurs when companies charge different prices to different customers based on factors such as their buying history, willingness to pay, or other characteristics.
Moreover, there are concerns with the use of AI in pricing, such as co-ordinated behaviour and price agreements, which can weaken competition and harm consumers. To address these issues, the Danish Competition and Consumer Authority has established the “Center for TECH” to strengthen the enforcement of competition rules and analyse and monitor companies’ use of big data, machine learning, AI, and algorithms.
In a Danish context, contracts between AI customers and suppliers will be key to resolving a number of the issues facing the use of AI technology in a B2B context. Several of these issues are outlined below.
Intellectual Property Rights and Trade Secrets
Ensuring intellectual property rights is crucial in AI contracts; see 15.1 Applicability of Patent and Copyright Law and 15.2 Applicability of Trade Secrecy and Similar Protection for details.
Liability and Accountability
Addressing liability in the context of AI’s autonomous decisions is also key. Contracts should specify the supplier’s liability scope, detailing the due diligence required and the mechanisms for accountability. See 10.1 Theories of Liability for more.
Regulatory Adaptability
Given the dynamic nature of AI regulation, contracts should incorporate terms allowing for periodic revisions. This ensures that the agreement remains compliant with evolving legal and ethical standards, enabling businesses to navigate the fast-changing AI landscape effectively.
Drafting
Contracts need to address the above areas in greater detail, as well as other areas heavily dealt with in drafting and negotiations, such as performance and service levels. Within these areas, adequate attention is required for all aspects of AI procurement, including questions such as how the AI trains on data, which data is used, who uses it, and what baseline can be established for performance. These are just some of the critical questions that need to be answered and negotiated.
Using advanced algorithms, AI can quickly sift through thousands of applications and identify relevant candidates based on their skills and experience. AI can also help eliminate human bias, ensuring that the focus is solely on the candidate’s qualifications and competencies. AI offers potential benefits such as cost savings, streamlined processes and improved hiring decisions. But AI also poses significant risks, including privacy, non-discrimination and equal treatment concerns.
Therefore, when developing AI-based recruitment and employment tools, employers must ensure that the technology complies with the GDPR, as well as the Danish Act on Prohibition of Discrimination in Employment (Forskelsbehandlingsloven) and the Danish Act on Equal Treatment between Men and Women (Ligebehandlingsloven). Regular audits, transparency in the use of AI in the selection process and corrective action when bias is identified are crucial steps to mitigate potential liability risks.
In Danish workplaces, various technological tools and solutions have emerged to facilitate the evaluation and monitoring of employees.
Such tools can, however, cause potential harm to employees. Data accuracy and reliability are crucial, as further discussed in 11. Legal Issues With Predictive and Generative AI, and certain systems for emotion recognition are directly prohibited under the AI Act if directed at employees.
In a Danish context, the GDPR imposes strict requirements on the collection, processing and storage of personal data, including for evaluation and monitoring purposes. Employers must be transparent about the purpose and extent of monitoring, as further discussed in 11.5 Transparency, and implement measures to safeguard employee privacy. Failure to comply with these requirements can expose employers to liability. It is also important for employers to establish clear policies and guidelines regarding the use of technology for evaluating and monitoring employees.
Companies like GoMore, a Danish mobility operator, have harnessed the power of digital platforms to facilitate private car hire, leasing, and carpooling options. By utilising keyless access and real-time location tracking of available vehicles, GoMore makes it easier for platform users to plan their trips efficiently.
The food delivery sector in Denmark has also witnessed advancements due to digital platforms. Platforms like Wolt employ algorithms to optimise the delivery experience, for example by estimating the time required for restaurants to prepare customers’ food orders and calculating the time it will take for a courier partner to deliver it to the customer.
However, the rise of platform work has also posed regulatory challenges. The EU has taken the lead in addressing these concerns by proposing specific rules for digital labour platforms in a new directive. The directive will require platform workers to be informed about the use of automated monitoring and decision-making systems. It also prohibits the processing of certain types of personal data, such as emotional or psychological states, racial or ethnic origin, migration status, political opinions, religious beliefs, health status, and biometric data, except for data used for authentication purposes.
Denmark’s financial services sector is undergoing a significant digital transformation. Banks and insurance companies are embracing AI technology in relation to, inter alia, credit scoring, customer interfaces and standard AML practices. However, as they delve into the realm of AI, financial institutions are also recognising the need for caution when dealing with vast amounts of customer data. One significant concern is the potential for AI algorithms to make erroneous decisions or predictions due to biases inherent in the data. Biased data can result in discriminatory practices, such as biased loan approvals or pricing, as further described in 11.1 Algorithmic Bias.
The DFSA has been proactive in addressing the challenges and risks associated with AI implementation. For example, the DFSA has published recommendations for financial institutions on the use of supervised machine learning, emphasising the importance of using AI responsibly. Furthermore, financial institutions must adhere to the GDPR and the Danish Data Protection Act.
The use of AI in healthcare has been increasing rapidly in recent years in Denmark, providing more efficient and effective care to patients. However, the World Health Organization (WHO) urges caution and has released guidelines on the use of AI in healthcare. These guidelines emphasise the importance of ensuring that AI is used in a responsible and ethical manner to safeguard patient safety and privacy.
One of the potential risks associated with AI in healthcare is algorithmic bias; see 11.1 Algorithmic Bias.
AI is also increasingly used in software as a medical device and related technologies such as wearables and mobile health apps. While these technologies have the potential to provide more personalised care, they also raise concerns about data privacy and about the accuracy and reliability of the AI algorithms used. To mitigate these risks, it is essential that health data is collected and used in a responsible and ethical manner, in compliance with the GDPR and the Danish Data Protection Act.
Robotic surgery is another area where AI is being used in healthcare. In Denmark, robot-assisted surgery has been widely used in gynaecology and other areas and is subject to applicable Danish healthcare law (eg, the Danish Health Act (Sundhedsloven)) and patient rights legislation concerning liability and damages.
A “self-driving” vehicle is a vehicle that can drive completely or partially without the assistance of a driver. In Denmark, experiments with small autonomous vehicles in public spaces have been permitted (upon prior authorisation) under the Danish Road Traffic Act (Færdselsloven) since 2017.
One of the major challenges in autonomous vehicle navigation is the AI’s ability to understand the social codes of traffic that enable human drivers to decide whether to take evasive action or keep driving, as emphasised in 2023 research from the Department of Computer Science at the University of Copenhagen (Datalogisk Institut). Danish liability law for road accidents operates on a no-fault basis; however, if an accident involves an autonomous vehicle, liability might shift to the holder of the authorisation to experiment with autonomous vehicles. Denmark might look to other legal frameworks, such as Britain’s approach, which aims to shift liability involving self-driving vehicles away from the passengers and onto regulated, licensed operators.
The use of autonomous vehicles also raises a number of data protection concerns, as it may involve the collection of personal data about drivers and passengers, which the EDPB highlighted in its 2021 guidelines on connected cars. Cars equipped with cameras in and around the vehicle may also process personal data. Both the recording and the subsequent processing of personal data by the car’s cameras are rarely known to anyone other than the driver, partly because no information about the recordings is provided outside the car. Even if the main purpose of the cameras is not to process personal data, the GDPR still applies as long as individuals are identified or identifiable. Car manufacturers working with connected cars must therefore ensure that the collection and processing of personal data in the use of connected cars comply with the GDPR.
AI is increasingly being integrated into manufacturing processes in Danish companies. Manufacturers are implementing intelligent automation, predictive analytics and machine learning algorithms to achieve reduced downtime and optimised use of materials, etc.
While there have not yet been any major regulatory developments in this area in Denmark, the requirements of the Danish Product Liability Act (Produktansvarsloven) are relevant for manufacturers using AI in manufacturing. The European Parliament endorsed the text of the updated Product Liability Directive at its March 2024 plenary. The revised directive addresses the complexity of digital products and extends liability for defects to software. However, its final effect will also depend on how it is transposed into Danish law.
Concerning the proposed AI Liability Directive, see 3.1 General Approach to AI-Specific Legislation.
When AI technology is used in professional services, the general view in Denmark remains that it is the responsibility of the professional to ensure that the case is handled correctly, and that the advice is fact-checked. Use of AI will in this respect most likely not be subject to separate legislation; rather, existing rules and legislation for professional services will be understood to encompass AI as they do other technologies used in service provision.
In addition, the use of AI can raise questions about ownership and protection of intellectual property rights. The challenge is to determine the owner of creations made through AI technology, which is further discussed in 15.3 AI-Generated Works of Art and Works of Authorship.
Moreover, professionals must comply with data protection laws, such as the GDPR and the DPA, to protect client privacy and prevent unlawful processing of personal data when using AI in professional services.
Lack of Judicial or Agency Decisions on AI Technology and Inventorship
As of now, there have been no judicial or agency decisions in Denmark regarding whether AI technology can be considered an inventor or co-inventor for patent purposes, or an author or co-author for copyright and moral right purposes. However, under Danish patent law, the inventor must be a physical person. Therefore, it is doubtful that AI technology would qualify as an inventor or co-inventor under current legislation.
Similarly, current Danish copyright law stipulates that right holders must be physical persons, which excludes the possibility of AI technology being an author. Consequently, AI-generated works cannot be protected, leaving them without a designated copyright holder, which means that no one has the exclusive right to dispose of and produce copies of AI-generated works.
Human Input
While AI technology cannot be considered an inventor or author under current Danish law, it is worth considering whether there are situations where human input is significant enough to justify copyright protection for AI-generated works. This is particularly relevant where the number of prompts used in the generative AI process is very significant and the human input involved in creating and selecting those prompts is extensive.
In such cases, it may be argued that the human contribution to the AI-generated work is significant enough to meet the threshold for copyright protection under Danish copyright law; however, this issue needs to be explored further.
EU Case Law
The Danish legal perspective aligns with EU case law. In 2020, the European Patent Office (EPO) decided that the AI system DABUS could not be considered an inventor, as the European Patent Convention (EPC) requires the inventor to be a physical person. This decision reflects the prevailing view that AI technology does not qualify for inventorship or copyright authorship under current laws.
Applicability and Contractual Aspects of Trade Secret and Similar IP Rights for Protecting AI Technologies and Data
Trade secrets and similar intellectual property rights can cover different aspects of AI, such as algorithms, training data, and implementation details. To safeguard confidential information and trade secrets related to AI, companies may sign non-disclosure agreements (NDAs) with their employees.
Contractual Arrangements for Compliance with Danish IP Regulations
In addition, contractual arrangements play a significant role in ensuring compliance. Companies can protect their AI technologies and data by using contractual clauses that address specific IP issues, such as ownership, licensing, and infringement. These contractual provisions should be tailored to address the unique aspects of AI technologies and data, such as the ownership of AI-generated works, protection of proprietary algorithms, and use of data for training AI models.
Tailoring Contractual Provisions for AI Technologies and Data
The most important consideration is to regularly review and update contractual arrangements to ensure they remain relevant and up to date with developments regarding trade secrets in AI technologies and with the forthcoming Data Act.
Originality Requirements under Danish Copyright Law
Under the Danish Copyright Act, for a work to be eligible for protection, it must be original, meaning that it must be an expression of the author’s creative effort. Therefore, works that result from purely routine activities, such as automatic translations or simple text messages, are not original and are not eligible for protection.
Originality of AI-Generated Works
The question of whether AI-generated works meet the required level of originality has been a topic of discussion. It has been debated whether a machine can exercise a “creative effort” when it relies mainly on human input. AI-generated works are often created through algorithms and machine learning models, which raises the question of whether the machine or the human input should be considered the author.
Authorship Requirements Under Danish Copyright Law
Another obstacle to granting intellectual property protection to AI-generated works is the authorship requirement under the Danish Copyright Act. The law currently prescribes that the author must be a physical person, excluding the possibility of an AI system being considered the author of its works. This means that AI-generated works cannot be protected, leaving them without a designated copyright holder and no exclusive right to dispose of or produce copies of the work.
Ownership of Works and Products Created Using OpenAI
One of the main issues – also in a Danish context – related to OpenAI’s tools is the lack of protection for works and products created using them. This means that ownership of the output is not clear, leaving it vulnerable to use by anyone. The lack of protection raises questions about who has the right to use, distribute or modify the output generated by OpenAI’s tools.
Infringement of IP Rights
Another significant issue related to OpenAI is the potential risk of infringing other individuals’ IP rights. This risk is particularly high when feeding copyrighted content into the system without proper permission.
Confidentiality and Trade Secret Concerns
Additionally, concerns regarding confidentiality and trade secrets may arise when providing input to OpenAI. Users must ensure that they have the rights to any data or information fed to the system and that this information is not confidential. Failure to do so could result in legal action, including breach of contract claims, trade secret misappropriation, and other related claims.
Addressing IP Issues When Using OpenAI
To mitigate the IP risks associated with OpenAI, users must take steps to ensure that they have the rights to use any input data and that they do not infringe on other people’s IP rights. Users should also consider entering into agreements with third-party content owners to obtain proper permission before using copyrighted content.
Compliance with Danish and EU Regulations
Advising corporate boards in Denmark is currently a matter of ensuring a multi-disciplinary approach. In the current environment, commercial decision-makers focus very much on securing commercial benefits; accordingly, advice needs to be balanced, and adequate consideration must be given to technical and legal aspects in addition to the commercial possibilities.
Risk Assessment and Management
To ensure the necessary processes, boards need to start establishing comprehensive risk assessment frameworks to identify potential legal, operational/technical and reputational risks associated with AI deployment. This includes evaluating the reliability, transparency and accountability of AI systems, as well as their alignment with the company’s strategic objectives and ethical standards.
Training and Awareness
Boards should also prioritise training and awareness programmes to understand the capabilities, limitations, and risks of AI. A number of Danish actors are still in the early stages of understanding AI as such. This includes keeping abreast of technological advancements and regulatory changes, enabling informed decision-making and fostering a culture of AI literacy within the organisation.
Alignment with National and International Guidelines
Now that clearer regulatory guidance is available, organisations and companies in Denmark need to align their AI practices with the EU’s ethical and legal frameworks and the Danish government’s national strategies.
Practical Implementation of AI Ethics
For Danish industries using AI in relation to sensitive data or where the business is subject to specific risks, it is essential to stay ahead of potential reputational harm by translating abstract ethical principles into actionable practices. This involves integrating ethics into the AI development life cycle, from design to deployment, ensuring that AI systems are transparent, explainable, and aligned with societal values.
Data Governance and Privacy
Another key area is ensuring data quality, securing data storage and transmission, and respecting user privacy (see 11.2 Data Protection and Privacy).
Capacity Building and Stakeholder Engagement
To effectively implement AI best practices, Danish organisations need to invest in building internal expertise and fostering a culture of continuous learning. Such capacity-building should ensure capabilities across commercial, operational, technical and legal areas.
Bird & Bird Advokatpartnerselskab
Kalkbrænderiløbskaj 8
2100 Copenhagen Ø
Denmark
+45 72 24 12 12
denmark@twobirds.com
www.twobirds.com/da/reach/nordic-region/denmark