Artificial Intelligence 2024

Last Updated May 28, 2024

Netherlands

Law and Practice

Authors



Greenberg Traurig, LLP is an international law firm with approximately 2,750 attorneys serving clients from 47 offices in the USA, Latin America, Europe, Asia and the Middle East. The firm’s dedicated TMT team consists of more than 100 lawyers, of which seven are in Amsterdam. The Amsterdam team is well-versed in representing clients around the world in domestic, national, and international policy and legislative initiatives, as well as guiding them through the business growth cycle for a variety of technologies. As a result, it provides forward-thinking and innovative legal services to companies producing or using leading-edge technologies to transform and expand their businesses.

Current legislation that touches on AI includes the following.

  • Regulation (EU) 2023/2854 (the Data Act) on data sharing between companies with a focus on Internet-of-Things devices.
  • Directive (EU) 2022/2555 (the NIS2 Directive) on achieving a high common level of cybersecurity across the European Union (EU) which will be implemented into Dutch law within the Network and Information Systems Security Act (Wet beveiliging netwerk- en informatiesystemen).
  • Regulation (EU) 2022/868 (Data Governance Act, or DGA) on data sharing between governments and private companies.
  • Regulation (EU) 2022/2065 (Digital Services Act, or DSA) on regulating platforms and protecting users thereof.
  • Directive (EU) 2019/790 (Digital Single Market Directive, or DSM Directive) on facilitating a legal framework for text and data mining, implemented into Dutch law in the Dutch Copyright Act (Auteurswet, or DCA), the Neighbouring Rights Act (Wet op de naburige rechten), the Databases Act (Databankenwet) and the Act on Supervision and Dispute Resolution of Collective Management Organisations Copyright and Neighbouring Rights (Wet toezicht en geschillenbeslechting collectieve beheersorganisaties auteurs- en naburige rechten).
  • Regulation (EU) 2016/679 (General Data Protection Regulation, or GDPR) on protecting the rights and freedoms of natural persons in relation to the processing of their personal data, and the GDPR Implementation Act (Uitvoeringswet AVG, or UAVG).
  • Directive (EU) 2016/943 (the Trade Secrets Directive) on the protection of trade secrets, implemented into Dutch law in the Trade Secret Protection Law (Wet bescherming bedrijfsgeheimen).
  • Regulation (EU) 1257/2012 (Patent Regulation) on the unitary protection of patents.
  • Directive 2001/95/EC (General Product Safety Directive) on the threshold level of safety of certain products sold within the EU, including toys, aviation, cars, medical devices and lifts, implemented into Dutch law in the Dutch Civil Code (Burgerlijk Wetboek, or DCC).
  • Directive 2001/29/EC (Infosoc Directive) which largely harmonises copyright laws across the EU, implemented into Dutch law in the DCA.
  • Directive 96/9/EC (Database Directive) on the legal protection of databases, implemented into Dutch law in the Databases Act.
  • Rijksoctrooiwet 1995 (Dutch Patent Act 1995) on the protection of patents in the Netherlands.

Certain legislation is pending, but is expected to enter into force soon:

  • the proposal for the EU Artificial Intelligence Act (approved by the EU Council; it enters into force 20 days after publication in the Official Journal of the EU), which provides a broad and extensive legal framework for the use and development of AI systems and algorithms;
  • the proposal for the EU Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive); and
  • the proposal for a Regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020 (the Cyber Resilience Act).

The above legislation is supplemented by case law from the Dutch courts and the Court of Justice of the EU (CJEU).

Applications of AI and machine learning are a hot topic in every industry, including healthcare, retail, finance, manufacturing, transportation and education. The use of generative AI, which creates data such as images and text based on generative models, is increasingly common, particularly in customer-facing situations through the use of chatbots. Predictive AI, on the other hand, is used to make predictions based on input data. It is used in a variety of applications, including weather forecasting, stock-market predictions and disease-outbreak predictions. Predictive AI uses machine-learning algorithms to analyse historical data and predict future outcomes.
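To make the "historical data in, prediction out" pattern concrete, the following Python sketch shows the basic mechanics of predictive AI. It is purely illustrative: the dataset, feature names and figures are hypothetical, and the model choice (a simple linear regression via scikit-learn) stands in for whatever technique a real system would use.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: [average temperature, humidity] per day,
# paired with the demand observed on the following day.
history_features = [[18.0, 0.60], [21.5, 0.55], [15.0, 0.80], [24.0, 0.40]]
observed_demand = [120, 150, 95, 170]

# Fit a model to the historical data ("training").
model = LinearRegression()
model.fit(history_features, observed_demand)

# Predict a future outcome from new input data.
tomorrow = [[20.0, 0.50]]
print(f"Predicted demand: {model.predict(tomorrow)[0]:.0f}")
```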

Industry innovations driven by AI and machine learning include autonomous vehicles, chatbots, personalised marketing, predictive maintenance and precision medicine. These innovations benefit businesses by reducing costs, improving efficiency and creating new revenue streams. Consumers benefit from personalised services, improved healthcare and efficient transportation.

The semiconductor industry in the Netherlands is a global leader in supplying the AI sector, with companies such as ASML, ASM International and NXP developing and producing cutting-edge technology.

Cross-industry cooperative initiatives include the Partnership on AI, a consortium that includes Amazon, Facebook, Google, IBM, and Microsoft. The partnership aims to ensure that AI technologies benefit all of humanity.

Many Dutch government entities actively develop and engage in initiatives that aim to facilitate the adoption and advancement of AI for industry use, as well as the use of AI by government entities themselves. Although the government certainly acknowledges risks relating to AI, the general outlook is positive.

In January 2024, the Dutch Minister of Digital Affairs and Kingdom Relations presented the government’s vision on generative AI, highlighting the opportunities of generative AI and describing it as a promising technology, yet also recognising its challenges, particularly relating to safeguarding human wellbeing, sustainability, justice and security. The focus of the Dutch government in relation to generative AI aligns with the government’s broader ambitions regarding digitalisation (Werkagenda Waardengedreven Digitaliseren), which is to ensure that everyone can participate in the digital age, trust the digital world, and have control over their digital lives.

The Dutch supervisory authorities in the financial sector – the Dutch Central Bank (De Nederlandsche Bank, or DNB) and the Authority for the Financial Markets (Autoriteit Financiële Markten, or AFM) – support digital innovation, which, more often than not, includes some form of AI, through several initiatives. For example, the AFM & DNB InnovationHub and AFM Regulatory Sandbox provide support in manoeuvring the complicated regulatory landscape.

While other jurisdictions may prefer a "wait and see" stance on how AI unfolds and affects various industries and sectors, the EU and the Netherlands have attempted to adopt – as well as regulate – AI right from the start. In so doing, they have taken a risk-based, one-size-fits-all approach: a single horizontal framework that applies across sectors, with obligations scaled to the risk posed by the AI system. The general attitude in the Netherlands towards the use of AI is positive.

There are currently no general laws in the Netherlands specifically regulating AI. However, on 21 May 2024, the EU Council approved the EU AI Act. The EU AI Act will generally apply 24 months after entry into force, with certain provisions taking effect earlier (eg, the prohibitions) and some later (up to 36 months). In addition, there are other regulations that impose (indirect) requirements on the deployment of AI, as indicated in 1.1 General Legal Background, as well as a number of sector-specific laws that address AI for specific market parties (see 3.2 Jurisdictional Law).

No general AI legislation has (yet) been enacted as of the date of this publication. However, the EU AI Act has been approved by the EU Council and enters into force 20 days after being published in the EU's Official Journal.

Specific legislation regarding the use of AI in the financial sector is described in 14.2 Financial Services.

On 17 January 2024, the Dutch government published its view on generative AI, emphasising the importance of continuing to monitor and analyse generative AI trends. The Netherlands intends to be a front-runner within the EU in the field of safe and responsible generative AI, and the government aims to achieve its objectives by collaborating closely with the relevant companies and leveraging its international connections. It intends to take on a prominent role in the rollout of AI in the coming years.

In January 2024, the Dutch Ministry of the Interior and Kingdom Relations published a guide to impact assessments on human rights and algorithms. This guide includes extensive explanations of the assessment that government entities have to make when using algorithms and AI. Although the guide is non-binding, government entities are expected to use the model when performing this type of assessment. The Dutch DPA also recommends the use of the guide for non-government entities.

The recently approved EU AI Act will directly apply to EU Member States as a regulation. This act will be complemented by the AI Liability Directive, which is still pending and will have to be implemented into Member State Law.

The direct application of the EU AI Act provides for harmonisation advantages, meaning that companies can achieve EU-wide compliance by meeting the requirements of one set of rules. Note that EU Member States may still choose to enforce additional or stricter rules.

The EU AI Act is, in many ways, the first of its kind. For this reason, not many issues with existing AI-specific local Dutch laws are expected. Moreover, the EU AI Act will override any existing rules in Member States due to the EU’s legislative system.

There is no applicable information in this jurisdiction.

There have not been any notable amendments or newly introduced jurisdictional data laws in the Netherlands to foster AI technology. The main reason for this is that existing laws (mainly the GDPR) and EU regulations provide an appropriate regulatory framework.

The DSM Directive fosters AI technology through a text and data mining (TDM) exception that permits the reproduction and extraction of lawfully accessible works contained in networks or databases for TDM purposes, which includes activities for AI training, unless the right holder has expressly reserved that use.

There are no concrete plans for legislative change with respect to data and copyright laws as of the date of this writing.

Aside from the EU AI Act and the EU AI Liability Directive, there is no legislation pending as to the development or use of AI.

As of the date of this publication, there have been no notable decisions from the Dutch courts regarding AI or AI in combination with intellectual property rights.

As of the date of this publication (May 2024), there have been no notable decisions from Dutch courts specifically relating to the definition of AI.

In the Netherlands, the Dutch DPA has been designated as the national coordinating authority for risk signalling, advice and collaboration in the supervision of AI and algorithms. The Dutch DPA has instituted a separate division for this purpose, the directie Coördinatie Algoritmes (DCA).

The Dutch DPA will focus on four areas of attention in 2024 – transparent algorithms, auditing, governance, and the prevention of discriminatory algorithms. In addition, it is expected to be the responsible supervisory authority for monitoring compliance and adherence to the EU AI Act.

Financial regulatory authorities DNB and AFM supervise the use of AI in the financial sector. Conduct supervision is carried out by the AFM, which has prioritised supervision of market parties that offer low-threshold products via apps, as well as digital marketing aimed at consumers. One of the AFM’s goals is to prevent consumers being nudged towards products or services that do not primarily serve their interests. DNB carries out prudential supervision, which, in relation to AI, focuses on topics such as soundness, accountability, fairness, ethics and transparency of AI products and services. The authorities have stated that financial supervision specific to AI will be intensified over the coming years.

The Authority for Consumers and Markets (Autoriteit Consument en Markt, or ACM) ensures fair competition between businesses and protects consumer interests. It is the regulatory authority enforcing compliance with the Digital Services Act, the Data Governance Act and the Data Act. One of the goals in the 2024 annual plan of the ACM is to stimulate an open and fair digital economy, for example by taking action against interface designs that interfere with someone’s decision-making process (dark patterns).

Regulatory agencies in the Netherlands have not issued any official definitions of AI. The EU AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. Regulators are not expected to deviate materially from this definition.

Please refer to 5.1 Regulatory Agencies.

In 2021, the Dutch DPA fined the Tax Authorities (Belastingdienst) for discriminatory and unlawful processing of personal data which resulted in serious violations of the GDPR. The Tax Authorities processed the (dual) nationality of applicants for childcare allowance (kinderopvangtoeslag) and used this data for automated fraud detection, for example by automatically designating non-Dutch applications as an increased risk. The practices of the Tax Authorities had disastrous consequences, continuing even to this day. Applicants were wrongfully forced to pay back the allowance received, leading to debts, bankruptcies and the wrongful removal of more than 2,000 children from their parents.

In May 2023, the Dutch DPA announced that it is investigating the use of fraud-detection algorithms by Dutch municipalities. These algorithms have been prohibited by the Dutch court because they were biased, led to discrimination, and also violated the GDPR and the European Convention on Human Rights.

In September 2023, the Dutch DPA announced an investigation into the use of AI aimed at children. In this situation, the Dutch DPA required an unnamed tech company to provide transparency on the operation of a chatbot integrated in an app that is popular with children. The DPA also announced investigations into organisations that use generative AI. For example, it requested that OpenAI provide information on the chatbot ChatGPT.

In November 2023, the Dutch DPA announced that it will supervise the remediation measures of the Employee Insurance Agency (Uitvoeringsinstituut Werknemersverzekeringen, or UWV), which unlawfully used an algorithm to track the online behaviour of benefits recipients.

In the Netherlands, the Royal Netherlands Standardization Institute (NEN) facilitates agreements between stakeholders on standards and guidelines. NEN manages over 31,000 standards, including international (ISO, IEC), European (EN) and national (NEN) standards accepted in the Netherlands. The European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) are the European standard-setting bodies with respect to AI. The standards set out by these bodies are generally non-binding.

CEN and CENELEC standards help in meeting requirements and conducting risk assessments. Compliance with these is usually not mandatory, but it gives the advantage of “presumption of conformity” to manufacturers, economic operators, or conformity assessment bodies. There are no ostensible conflicts with Dutch law.

AI is used across governmental authorities for various Dutch government operations, including administrative inspections and law enforcement. Typical use cases are image recognition, speech and text recognition, machine learning and robotics. The Dutch government’s views on the use of AI are positive, and there are many initiatives to use the technology.

The use of personal data for AI purposes by Dutch government entities is generally subject to the GDPR. However, processing in relation to criminal investigations and proceedings is subject to the Police Data Act (Wet politiegegevens) and the Judicial and Criminal Records Act (Wet justitiële en strafvorderlijke gegevens), and processing by intelligence agencies is subject to the Intelligence and Security Services Act (Wet op de inlichtingen- en veiligheidsdiensten).

Dutch law enforcement has developed a protocol for using facial recognition technology (FRT). The protocol provides a decision framework and governance for approving requests to experiment with FRT in light of investigations. Since FRT typically includes the processing of biometric information (ie, unique identifiers of someone’s face), strict restrictions apply under the Police Data Act. These restrictions are similar to the protection of biometric information under Article 9 of the GDPR.

There are two notable decisions by the Dutch civil and administrative courts involving AI, as follows.

  • System Risk Indication (SyRI) is an AI-powered tool that was used by the Dutch government to detect fraud, including social benefit, allowance and tax fraud. However, the District Court of The Hague banned the use of SyRI in February 2020 for violating Article 8 of the European Convention on Human Rights: owing to its lack of transparency and verifiability, SyRI failed to strike a fair balance between the social interests it served and the intrusion into the private lives of individuals.
  • The Trade and Industry Appeals Tribunal (College van Beroep voor het bedrijfsleven) ruled that the online bank bunq’s use of AI and data analysis for screening customers satisfies regulatory requirements. Earlier, DNB had found that bunq was failing as a “gatekeeper” against money laundering because its screening process did not comply with regulatory requirements.

The possible uses of AI in national security include analysing intelligence information, enhancing weapon systems, providing battlefield recommendations, aiding in decision-making, cybersecurity, military logistics, and command and control. On 15 February 2023, the General Intelligence and Security Service (AIVD) shared principles for defending AI systems that should be considered during the development thereof. These principles include:

  • ensuring dataset quality;
  • considering data validation;
  • taking supply chain security into account;
  • strengthening models against attacks; and
  • ensuring models are auditable.

Generative AI, such as OpenAI’s GPT-3, has raised several issues, including ethical concerns, data privacy and intellectual property (IP) rights.

Ethical concerns revolve around the potential misuse of AI for generating misleading information, deepfakes or promoting hate speech. To address this, AI developers are implementing stricter usage policies and creating more sophisticated content filters.

Data protection is another significant issue. AI models are trained on vast amounts of data, which may include personal data. To protect this data, AI developers seek to implement robust data anonymisation and encryption techniques. They are also working towards creating models that can learn effectively from less data, or even generalise to tasks for which they have seen no task-specific training examples, a concept known as zero-shot learning.
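As a minimal, hypothetical sketch of one such protective technique, the snippet below pseudonymises a direct identifier with a salted hash before a record is handed to a training pipeline. Field names and salt handling are illustrative only, and note that pseudonymised data remains personal data under the GDPR; genuine anonymisation requires a far more thorough analysis.

```python
import hashlib
import secrets

# Secret salt; in practice this must be stored and access-controlled separately.
SALT = secrets.token_bytes(16)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

record = {"name": "J. Jansen", "email": "j.jansen@example.com", "basket_value": 42.50}

training_record = {
    "user_id": pseudonymise(record["email"]),  # stable pseudonym, not reversible without the salt
    "basket_value": record["basket_value"],    # non-identifying feature kept as-is
}
print(training_record)
```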

In terms of IP, the assets in the AI process (such as AI models, training data, input prompts and output) can be protected under various forms of IP law. AI models and training data can be protected as trade secrets or database rights, while input prompts and output can potentially be protected under copyright laws or, possibly, patent laws. The AI tool provider’s terms and conditions significantly influence this protection.

There is also the risk of IP infringement – for example, where AI is trained on copyright-protected material without permission. If the AI’s output closely resembles a copyrighted work, it could also lead to infringement claims.

The GDPR affords data subjects several rights, including the right to rectification and deletion. If an AI outputs false claims about an individual, the individual has the right to have this corrected. If the AI has processed the individual’s personal data without their consent, they have the right to have this data deleted.

Purpose limitation and data minimisation principles require that personal data be collected for specific purposes, and that no more data is collected than is necessary for those purposes.

AI assets such as models, training data, input prompts and output can be protected by various forms of intellectual property rights (IPR), depending on their nature and the jurisdiction in question.

AI Models

The algorithms used in AI models can be protected by patents. In the EU and the Netherlands, a patent can be granted for an invention that is new, involves an inventive step, and is susceptible to industrial application. However, mathematical methods as such (which AI algorithms often are) are not considered patentable.

Training Data

Databases can be protected under copyright and/or sui generis database rights. In the EU, a database is protected by copyright if its structure constitutes the author’s own intellectual creation. If a substantial investment has been made in obtaining, verifying or presenting the contents of a database, it may also be protected by a sui generis database right.

Input (Prompts)

Texts used as input prompts can be protected under copyright law if they are original and constitute the author’s own intellectual creation.

Output

The output of an AI tool can also be protected under copyright law if it is original and is the author’s own intellectual creation. However, this is an area of ongoing debate, as it is uncertain whether an AI-generated work can meet the originality requirement since it does not have a human author.

The terms and conditions of the AI tool provider can significantly influence the protection of assets.

Infringements under Dutch and EU IP laws can occur in various ways. For instance, if someone uses a copyrighted database as training data without permission, it will constitute infringement. Unauthorised use of input prompts or output that are protected by copyright could also lead to infringement.

Under the GDPR, data subjects have several rights that pertain to AI models.

Right to Rectification

If an AI model produces false output related to individuals, the data subjects have the right to have inaccurate personal data rectified under Article 16 of the GDPR. This does not necessarily mean that the entire AI model must be deleted or adjusted. The rectification process can be achieved by integrating a mechanism in the AI model to allow the correction of inaccurate data.

Right to Erasure (“Right to Be Forgotten”)

Under Article 17 of the GDPR, a data subject has the right to have their personal data erased without undue delay under certain circumstances – eg, where the data is no longer necessary for the purpose for which it was collected or processed. However, this does not mean the entire AI model would need to be deleted.

Purpose Limitation

Article 5(1)(b) of the GDPR states that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. This can be achieved in AI models through clear communication about how the data will be used and by limiting the use of the data to those purposes only.

Data Minimisation

The GDPR under Article 5(1)(c) also requires that personal data be adequate, relevant, and limited to what is necessary in relation to the purposes for which it is processed. This principle can be adhered to in AI models by only collecting and processing the minimal amount of personal data necessary for the model to function as intended.
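By way of a minimal illustration of data minimisation in an AI pipeline (field names and the stated purpose are hypothetical), only the attributes the model actually needs are retained before any processing takes place:

```python
# Raw records as collected (hypothetical fields).
raw_records = [
    {"name": "A. de Vries", "email": "a@example.com", "age": 34,
     "postcode": "1083 HK", "tenure_months": 18},
]

# Fields the model actually needs for its stated purpose (eg, churn prediction).
REQUIRED_FIELDS = {"age", "tenure_months"}

minimised = [{k: v for k, v in rec.items() if k in REQUIRED_FIELDS}
             for rec in raw_records]
print(minimised)  # [{'age': 34, 'tenure_months': 18}] - identifiers never reach the model
```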

The application of these rights and principles can be complex within the context of AI, particularly when it comes to rectification and erasure. For example, if an AI system has used personal data to “learn” and adapt its algorithms, simply erasing that data might not fully remove its impact on the model.

AI is increasingly used in the legal profession, and its applications are changing the way legal services are delivered. AI-powered tools have been used for various functions, such as voice dictation, document automation (drafting documents that are up to date and consistent with current law) and document translation.

There are currently no laws, rules or regulations in place on using AI in the legal profession. Nor have there been any Dutch court decisions on the matter.

Ethical considerations of AI in the legal profession include:

  • confidentiality – lawyers must ensure that AI systems safeguard client confidentiality and protect sensitive data;
  • supervision – lawyers are responsible for the work produced by AI and must supervise it appropriately; and
  • competence – lawyers must understand the technology they are using and its limitations.

Given the complexity and evolving nature of AI technologies, issues of liability for personal injury or commercial harm resulting from these technologies are of increasing concern under Dutch and EU laws. While the specifics can vary, liability for AI-enabled technologies generally falls under two main theories – strict liability and negligence.

Strict liability holds that the party responsible for placing the AI technology on the market is liable for any harm caused, regardless of fault or intent. In contrast, negligence requires proof that the party acted unreasonably or failed to take necessary precautions, leading to the harm.

The Dutch Civil Code and the EU Product Liability Directive are the main legal frameworks for these theories. They require the claimant to prove that the damage was caused by a defect in the product, and that there is a causal link between the damage and the defect.

Under Dutch and EU laws, human supervision is often required for AI technologies, particularly those with significant potential for harm. For example, autonomous vehicles must have a human driver ready to take control if necessary, and medical AI applications are typically used as decision-support tools for healthcare professionals, not as autonomous decision-makers.

Insurance plays a key role in managing AI liability risks. Businesses can purchase insurance policies to cover potential liability arising from their AI technologies. The terms of these policies can vary, but they typically cover legal defence costs and any damages awarded.

The allocation of liability among supply chain participants is a complex issue. Under Dutch and EU laws, any party involved in the supply chain could potentially be held liable for harm caused by an AI technology. This includes manufacturers, distributors, retailers, and even users if they modify the technology in a way that contributes to the harm. However, contracts and insurance policies often allocate liability in specific ways to manage these risks.

In practice, the trend under Dutch and EU laws is to allocate liability based on control and benefit. Parties that have more control over the AI technology or derive more benefit from it are generally held to a higher standard of liability.

Liability arising from the acts or omissions of AI technology acting autonomously would generally be attributed to the business selling or providing the AI products or services. This is based on the principle of strict liability, which holds businesses responsible for the products they place on the market. However, the specifics can vary depending on factors such as the nature of the harm, the level of autonomy of the AI technology and the terms of any contracts or insurance policies.

Under the proposed regime, high-risk AI-enabled technologies would be subject to strict liability, while all other AI systems would fall under fault-based liability, with accompanying insurance obligations. However, a back-end operator would only be subject to strict liability if not already covered by the Product Liability Directive. The only defence available to the operator is force majeure, and fault on the part of the operator is presumed.

Furthermore, note that the EU AI Act will impose strict obligations not only on the “provider” of a high-risk AI system but also on the “importer”, “distributor” and “deployer” of such systems. The importer must verify that the high-risk AI system has undergone the relevant conformity assessment procedure and is accompanied by the required documentation, whereas the distributor is required to verify that the system bears the CE marking of conformity.

The proposed AI Liability Directive sets out to create a new liability regime that ensures legal certainty, enhances consumer trust in AI, and supports consumers’ liability claims for damage caused by AI-enabled products and services.

The Directive provides rules that EU Member States must implement into national law, applying to AI systems that are available on or operating within the EU market. The aim is to improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.

An important feature of this Directive is the rebuttable presumption of a causal link between the defendant’s fault (such as non-compliance with a relevant duty of care) and the output produced by the AI system, which shifts part of the burden of proof onto the defendant – often the developer or seller of an AI system. The AI Liability Directive must be read in close conjunction with the EU AI Act, since the directive relies on terms that are ultimately defined within it.

Bias in algorithms technically refers to a systematic error introduced by an algorithm that skews results in a particular direction.

These errors are caused by three types of biases:

  • input bias, when the input data is historically biased, non-representative or of poor quality;
  • training bias, when the input data is mis-categorised and trained; and
  • programming bias, when the algorithms are designed by subjective rules.

Such biases may result in discrimination, inequality, and racism. If an algorithm treats individuals or groups unfairly based on characteristics such as race, gender, age, or religion, it may be in violation of anti-discrimination laws. Areas with a high risk of algorithmic bias include online advertising, credit scoring, hiring and law enforcement. For example, an algorithm used for hiring might be biased against certain groups if it was trained on past hiring decisions that were biased.
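Bias of the kind described above can be surfaced with simple disparity metrics. The sketch below is hypothetical: it compares the selection rates of a screening algorithm across two groups and flags a low ratio, loosely inspired by the "four-fifths" rule of thumb, which is an analytical convention rather than a Dutch or EU legal standard.

```python
# Hypothetical screening outcomes: (group, selected_by_algorithm)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False), ("B", False)]

def selection_rate(group: str) -> float:
    group_outcomes = [sel for g, sel in outcomes if g == group]
    return sum(group_outcomes) / len(group_outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, disparity ratio={ratio:.2f}")
# A ratio well below 1 (commonly, below ~0.8) warrants closer scrutiny of the model.
```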

Companies can face significant legal and reputational risk if their algorithms are found to be biased. They can be sued for discrimination, fined by regulatory bodies, and suffer damage to their reputation.

Regulators are increasingly scrutinising algorithmic bias. For example, in 2020, the Dutch Data Protection Authority launched an investigation into the use of algorithms by the government.

The data protection risks of using AI technology in business practices predominantly relate to the use of AI in facial recognition technologies (see 11.3 Facial Recognition and Biometrics) and automated decision-making (see 11.4 Automated Decision-Making). Benefits of AI in terms of protecting personal data in business practices include increased accuracy and integrity of personal data and enhanced data security. AI technologies require accurate data for optimal performance, which could lead to an overall optimisation of data protection practices to safeguard data integrity and accuracy. In addition, AI technologies will require top-notch data security practices to protect proprietary and sensitive business information, thereby enhancing the level of data security within a company, which is ultimately beneficial to the protection of personal data.

Face recognition technology can be useful, but can also have severely negative effects for data subjects. The systems identify people in photos, videos, or in real time, and are widely used in sectors such as retail, media, the medical sector, entertainment, e-commerce, and so on. The use of facial and biometric data can cause privacy, data security, and bias and discrimination issues, resulting in regulatory and ethical violations.

In a recent judgment of 30 January 2024 (see CJEU press release 20/24), the CJEU held that police authorities may not store the biometric and genetic data of persons who have been convicted by final judgment of an intentional offence with no time limit other than the death of the person concerned.

Since the use of face recognition involves automated processing of personal data, both the GDPR and the Law Enforcement Directive (LED) apply.

The EU AI Act prohibits, under Article 5(1), the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrowly defined exceptions. Companies such as Megvii, Cognitec Systems GmbH, Clarifai Inc, AnyVision and iProov use machine-learning algorithms to search, capture and analyse facial contours and match them with pre-existing data.
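At a technical level, such matching typically reduces to comparing numerical "embeddings" of faces. The following vendor-agnostic sketch is hypothetical: it assumes a face has already been converted into an embedding vector by some encoding model, and matches it against a stored gallery by cosine similarity with an acceptance threshold.

```python
import numpy as np

# Hypothetical embeddings produced by some face-encoding model.
gallery = {
    "person_001": np.array([0.11, 0.80, 0.30]),
    "person_002": np.array([0.95, 0.05, 0.10]),
}
probe = np.array([0.10, 0.78, 0.33])  # embedding of the captured face

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Match the probe against pre-existing data; accept only above a threshold.
best_id, best_score = max(((pid, cosine(probe, emb)) for pid, emb in gallery.items()),
                          key=lambda x: x[1])
print(best_id if best_score > 0.9 else "no match", round(best_score, 3))
```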

When AI is used to automate certain processes, this often includes automated decision-making as regulated under the GDPR. Automated decision-making involves solely automated processing of personal data of individuals (ie, without any human involvement) that leads to decisions with legal effects concerning that person or similarly significant effects. Article 4(4) of the GDPR defines profiling as a specific form of processing by automated means to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict factors concerning the individual. Examples of automated decision-making are:

  • targeted advertising by personalising advertisements based on user behaviour and preferences;
  • credit scoring, by determining creditworthiness based on various factors, such as income, spending habits, and previous financial history; and
  • job-applicant screening, which involves automated tools to screen resumés or conduct online assessments in order to shortlist candidates.

According to Article 22 of the GDPR, individuals should have the right not to be subjected to a decision based solely on automated processing, including profiling.

However, automated decision-making is allowed if:

  • this is necessary for entering into, or performance of, a contract between the individual and the organisation responsible for such automated decision-making;
  • it is authorised by EU Member State law (for example, anti-tax evasion regulation); or
  • the individual has given explicit consent.

If the automated decision-making is based on the performance of a contract or explicit consent, the individual should at least have the right to seek human intervention on the part of the company making these decisions, to express his or her point of view and to contest the decision. In addition, the company must lay down suitable measures to safeguard the individual’s rights and freedoms and legitimate interests.

Furthermore, under the GDPR, a data protection impact assessment (DPIA) is mandatory when the envisaged processing is “likely to result in a high risk” to the rights and freedoms of natural persons. The DPIA is a process designed to describe the processing, assess its necessity and proportionality, and help manage the resulting risks to the rights and freedoms of natural persons by assessing those risks and determining the measures to address them. According to the Article 29 Working Party Guidelines on Data Protection Impact Assessment, such a high risk will generally exist when the processing involves automated decision-making.

Failure to comply with these rules can result in hefty fines. Under the GDPR, a violation of individuals’ rights, including the right not to be subjected to automated decision-making, may result in fines of up to EUR20 million or 4% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher. If a company fails to perform a DPIA where this is required, it risks fines of up to EUR10 million or 2% of the company’s annual turnover, whichever is higher, under the same regulation.
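As a worked example of how the “whichever is higher” mechanism operates (the turnover figure is hypothetical):

```python
def max_gdpr_fine(annual_turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Statutory maximum: the fixed cap or a percentage of turnover, whichever is higher."""
    return max(cap_eur, pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn group turnover

# Higher tier (eg, violating data subjects' rights): EUR 20 million or 4%.
print(max_gdpr_fine(turnover, 20_000_000, 0.04))  # 80,000,000.0
# Lower tier (eg, failing to perform a required DPIA): EUR 10 million or 2%.
print(max_gdpr_fine(turnover, 10_000_000, 0.02))  # 40,000,000.0
```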

There is no specific regulatory scheme under Dutch or EU law that directly deals with the use of chatbots or other technologies to substitute for services rendered by natural persons. Under EU law, the GDPR provides broad protections for personal data. It requires data controllers to be transparent about their use of personal data, which would include the use of AI technologies such as chatbots. The GDPR also gives individuals the right to access, correct, and delete their personal data, and to object to its processing.

As for disclosure of AI use, the GDPR’s transparency principle requires that individuals be informed about the processing of their personal data. If an AI system is used to make decisions about individuals, the individuals should be informed of this, as well as the logic involved and the significance and consequences of such processing.

Technologies used to make undisclosed suggestions or manipulate the behaviour of consumers primarily include recommendation algorithms and targeted advertising technologies. These algorithms analyse a user’s behaviour and use this information to suggest products or services that the user might be interested in. While these technologies can be beneficial, they also raise privacy and ethical concerns, as they can be used to influence consumer behaviour without their knowledge or consent.
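To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of how a recommendation algorithm can turn observed behaviour into suggestions: items that co-occur with the user’s past purchases in other users’ baskets are counted and proposed. Production systems are far more sophisticated, but the underlying point is the same: the suggestions are derived entirely from behavioural data.

```python
from collections import Counter

# Hypothetical purchase histories of other users.
histories = [
    {"laptop", "mouse", "usb_hub"},
    {"laptop", "mouse", "headset"},
    {"phone", "charger"},
]
user_items = {"laptop"}

# Count items that co-occur with the user's past purchases.
co_occurrence = Counter()
for basket in histories:
    if user_items & basket:
        co_occurrence.update(basket - user_items)

print(co_occurrence.most_common(2))  # eg, [('mouse', 2), ('usb_hub', 1)]
```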

Additionally, under Article 22 of the GDPR, chatbots may not serve as the sole decision-maker in processes that produce legal or similarly significant effects for consumers (eg, a chatbot cannot autonomously approve a loan). Chatbots may also manipulate consumer behaviour, depending on the data on which they have been trained. In addition, AI-powered bots known as social media bots can be programmed to mimic human behaviour and spread misleading information to manipulate public opinion.

The use of AI technology in price-setting can potentially raise several competition and antitrust issues under Dutch or EU laws. The following are some of the key concerns.

  • Collusion – AI algorithms can learn to avoid competitive pricing and instead collude to keep prices high. This could potentially violate Article 101 of the Treaty on the Functioning of the European Union (TFEU), which prohibits agreements between companies that prevent, restrict, or distort competition in the EU.
  • Market transparency – AI can increase market transparency, which can be both beneficial and detrimental. While it can lead to more competition, it can also facilitate collusion among companies. If companies can predict their competitors’ pricing strategies, they may be less likely to compete on price.
  • Market dominance – AI can potentially be used to strengthen a company’s market dominance, which could violate Article 102 of the TFEU. This could occur if a company uses AI to engage in predatory pricing or to create barriers to entry for other companies.
  • Data access – AI algorithms rely on data to function effectively. Companies with access to more data may have a competitive advantage, which could potentially lead to antitrust issues. The European Commission has expressed concerns about the control of data by a few large tech companies.
  • Discrimination – AI algorithms could potentially be used to engage in price discrimination, charging different prices to different consumers for the same product. While price discrimination is not inherently illegal under EU law, it could potentially violate EU laws if it is used in a way that is unfair or discriminatory.
  • Lack of transparency – The use of AI in price-setting can make it difficult for regulators to detect anti-competitive behaviour. AI algorithms are often complex and opaque, making it difficult to understand how they are setting prices.

The ACM and the European Commission are actively monitoring these issues, and have indicated that they will take action if they find evidence of anti-competitive behaviour related to the use of AI in price-setting.

AI technology presents a range of new and unique risks that need to be considered in transactional contracts between customers and AI suppliers, particularly in the AI-as-a-service model. The following are some of these risks and the ways businesses can address them.

  • Data privacy and security – contracts should clearly define how data will be used, and what security measures are in place to protect it.
  • Intellectual property – contracts should clearly state who owns the intellectual property created by the AI, whether it is the customer, the supplier, or a third party.
  • Liability – the contract should clearly define who bears the liability in different situations.
  • Quality and performance – contracts should include warranties and service level agreements to ensure the AI performs as expected.
  • Transparency and “explainability” – contracts should require AI suppliers to provide detailed explanations of how their systems work, and to provide ongoing transparency into their operations.
  • Ethical concerns – contracts should include provisions for ethical usage and compliance with ethical guidelines.
  • Future-proofing – contracts should be flexible enough to accommodate technological advancements and changes in the regulatory environment.
  • Termination rights – the contract should outline clear termination rights and procedures.

Numerous software and applications available are specifically designed to streamline hiring and termination processes, including Applicant Tracking Systems (ATS), Human Resource Information Systems (HRIS), and various AI-driven tools.

Technology in Hiring

ATS is software that collects and sorts résumés based on given criteria. It automates the process of shortlisting candidates, making it easier for recruiters to find the right talent. AI-driven tools are also used for pre-employment testing and video interviews. They can assess a candidate’s skills, personality traits, and even emotional intelligence.

This technology reduces the time spent on screening and shortlisting candidates, ensuring a more efficient hiring process. It also minimises human bias, leading to a more diverse workforce. On the downside, qualified candidates might be overlooked if their résumés do not include specific keywords used by the ATS. Moreover, the lack of human interaction can make the hiring process impersonal.
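A minimal, hypothetical sketch of the keyword-matching step described above illustrates how an ATS can exclude a qualified candidate whose résumé phrases a skill differently; the keywords and résumé texts are invented for illustration.

```python
REQUIRED_KEYWORDS = {"python", "machine learning"}

resumes = {
    "candidate_a": "Experienced in Python and machine learning pipelines.",
    "candidate_b": "Built ML models in Python for five years.",  # says "ML", not "machine learning"
}

def passes_screen(text: str) -> bool:
    """Shortlist only resumes containing every required keyword verbatim."""
    text = text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

shortlist = [name for name, text in resumes.items() if passes_screen(text)]
print(shortlist)  # ['candidate_a'] - candidate_b is dropped despite equivalent experience
```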

If not properly configured, ATS can potentially discriminate against certain applicants, leading to legal implications under employment laws. Also, there are privacy concerns related to the collection and storage of applicant data.

Technology in Termination

HRIS is used to manage employee data and automate the termination process. It oversees tasks such as the deactivation of access to company systems, final paycheck calculations, and exit interview scheduling.

HRIS can ensure a smooth and compliant termination process, reducing the risk of errors and legal complications. It also allows for efficient record-keeping, which is crucial in the event of a legal dispute.

The automated termination process might feel impersonal to the employees. There is also a risk of premature information leaks about the termination.

Legal Risks

If the system is not updated or used correctly, it can lead to legal issues, such as non-compliance with labour laws. Additionally, mishandling of employee data can result in privacy breaches, resulting in potential legal action.

The use of AI in the evaluation of employee performance and monitoring employee work has both positive and negative effects. On the positive side, it can create new job opportunities in areas such as data analysis, programming and system maintenance. When working correctly, AI monitoring is also very precise, which reduces the risk of human error. On the negative side, there are fears that AI could replace human jobs. However, as AI technology is not yet sufficiently advanced to completely replace human labour, these jobs are still largely secure.

Most consumers are unaware that they interact with AI on a daily basis – for example, when ordering food online. AI may provide automated recommendations to users based on past orders, preferences and location; chatbots may function as virtual assistants providing personalised assistance; and AI optimises food delivery routes by estimating traffic and weather conditions.

Dutch and EU law has been evolving in response to these changes. The EU has been working on regulations to ensure fair and just working conditions in the platform economy – eg, the proposed Platform Work Directive, which includes providing access to social protection for platform workers and ensuring that they have the right to collective bargaining.

There have been ongoing discussions in the Netherlands about the status of platform workers. For instance, in 2021, a Dutch court ruled that drivers for the food delivery company Deliveroo are employees, not independent contractors, and are therefore entitled to employment benefits.

However, none of these initiatives or precedents has specifically addressed the use of AI on digital platforms.

AI applications in the finance industry – such as algorithmic trading, customer service, fraud detection and prevention, and compliance with anti-money laundering legislation – are used increasingly.

There are some financial regulations that specifically cover the use of AI. The main examples are:

  • the rules on algorithmic trading under the Markets in Financial Instruments Directive II, as implemented into Dutch legislation in the Financial Supervision Act (Wet op het financieel toezicht, or WFT), and the Markets in Financial Instruments Regulation;
  • regulations about the use of AI in consumer lending resulting from the Consumer Credit Directive, implemented into Dutch legislation in the WFT; and
  • regulations regarding automated advice included in the WFT.

In addition, regulations on internal models of banks (pursuant to the Capital Requirements Regulation) and, to some extent, insurers (pursuant to the Solvency II Directive as implemented into the WFT) include specific requirements that AI models must adhere to.

The Medical Devices Regulation (MDR) (EU) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) (EU) 2017/746 are the regulations that govern the use of technology in healthcare in the EU.

The use of AI in healthcare is not without risk. While using health data to train AI systems, there is a potential risk of this sensitive data being unlawfully shared with third parties, resulting in data breaches. Other risks are possible bias and inequality risks, due to biased and imbalanced datasets used for training, structural biases and discrimination, disparities in access to quality equipment, and lack of diversity and interdisciplinarity in development teams.

Health data is considered sensitive data under Article 9 of the GDPR, and sharing a patient’s health data for training an AI model requires explicit consent from the patient under the Helsinki Declaration. Repurposing of data without the patient’s knowledge and consent must be avoided. There is also a potential risk of cyberattacks resulting in data being exposed to the general public.

Under Dutch law, autonomous vehicles are governed by the Dutch Road Traffic Act 1994, which currently requires a driver to be in control of the vehicle at all times. However, the Netherlands is progressive in terms of autonomous-vehicles legislation, and it has been conducting public road tests of self-driving cars since 2015.

At the EU level, autonomous vehicles are subject to a range of regulations, including General Safety Regulation (EU) 2019/2144, which mandates certain safety features for new-vehicle types from July 2022 and for all new vehicles from July 2024. This includes a driver drowsiness and attention warning, an advanced driver distraction warning, an emergency stop signal, reversing detection, and an event data recorder.

Responsibility for accidents or incidents involving autonomous vehicles is a complex issue. According to EU law, the manufacturer of a vehicle could be held liable if a defect in the vehicle causes damage. In the Netherlands, the driver of the vehicle is usually considered responsible for any accidents, even if the vehicle is driving autonomously. However, this may change as autonomous-vehicle technology evolves.

Ethical considerations for AI decision-making in critical situations are also a significant concern. For instance, how should an autonomous vehicle be programmed to act in a situation where an accident is unavoidable? This is a complex ethical question that has yet to be fully answered.

There have been several attempts at international harmonisation to promote global collaboration and consistency in regulations and standards. For instance, the United Nations Economic Commission for Europe (UNECE) has been working on international regulations for autonomous vehicles. However, these efforts are still in the early stages, and there is a long way to go before international consistency is achieved.

AI usage in manufacturing and its implications are governed by several regulations within the Netherlands and the EU, addressing areas such as product safety and liability, workforce impact, and data privacy and security.

Product Safety and Liability

The European Union has a number of directives in place to ensure product safety and manage liability in manufacturing. The General Product Safety Directive ensures that only safe products are sold within the EU. If AI is used in the manufacturing process, the manufacturer must ensure the AI does not compromise the safety of the product.

If a product is faulty and causes damage or injury, the Product Liability Directive is applicable. It makes the producer liable for damage caused by a defect in their product.

Workforce Impact

The Netherlands and the EU have regulations in place to protect workers' rights. The EU’s Charter of Fundamental Rights includes provisions for fair and just working conditions. If AI is being used to replace or augment human workers, it is crucial that worker rights are respected and any transition is handled ethically and responsibly. The Dutch Working Conditions Act also stipulates that employers must ensure a safe and healthy working environment, which would extend to an environment where AI is used.

Data Privacy and Security

The GDPR applies to any business that processes personal data, including those using AI. If AI is used to process personal data in the manufacturing process, it must be done in a manner that respects privacy rights and ensures data security. The EU also has the NIS2 Directive, which provides EU-wide legislation on cybersecurity. It provides legal measures to boost the overall level of cybersecurity in the EU.

The use of AI in professional services in the Netherlands and the European Union is governed by a variety of regulations and guidelines. Here are some of the key areas of focus.

Liability and Professional Responsibility

The proposed AI Liability Directive would include rules on liability. In the Netherlands, the Dutch Civil Code could potentially be applied in cases of AI causing damage. However, determining responsibility in AI-related incidents can be complex due to the nature of machine learning.

Confidentiality

The GDPR applies throughout the EU, including the Netherlands. It mandates that personal data must be handled in a way that ensures its confidentiality and security. Professionals using AI must ensure that the AI systems they use are compliant with GDPR.

IP (Intellectual Property)

In the EU, AI-generated works may not be eligible for copyright protection, as current laws require human authorship. In the Netherlands, the Dutch Copyright Act could potentially apply to AI creations, but this is still a matter of debate.

Client Consent

Under the GDPR, there must be a lawful basis for processing personal data. This could have implications for AI systems used in professional services, especially those that involve data analysis.

Regulatory Compliance

At the EU level, the upcoming EU AI Act aims to ensure that AI is used in a way that respects EU values and regulations. In the Netherlands, the Dutch DPA oversees the use of AI and other data-processing technologies to ensure compliance with GDPR and other regulations.

As of the date of this publication, there have been no decisions from the Dutch courts on inventorship or authorship of an invention or work created by or with AI technology.

AI technologies and data can be protected under the Trade Secrets Act, which protects undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use, and disclosure. The act defines a trade secret as information that meets three criteria – it is secret, it has commercial value because it is secret, and the holder has made reasonable efforts to keep it secret. AI technologies and data often meet these criteria.

The use of trade secrets in AI can be regulated through non-disclosure agreements (NDAs) and confidentiality clauses in contracts.

At the time of this publication, no existing legislation, case law or other guidance establishes whether AI-generated works of art and works of authorship can be protected by copyright in the Netherlands. The general view among scholars is that, if a piece of work is entirely produced by a machine with minimal or no human intervention and no individual can assert ownership over it, the work will enter the public domain. This viewpoint aligns with the guidelines provided in other jurisdictions, such as those from the U.S. Copyright Office.

However, current AI technologies such as ChatGPT, where users input prompts, do not operate entirely autonomously. This raises the question of whether the input prompt can be granted copyright protection. Under EU copyright laws, concepts, ideas and styles cannot receive copyright protection. However, as long as some creative choices are involved, copyright protection will generally apply.

Consequently, it can be argued that protection will often be extended to a specific prompt, as long as it is not just a “mundane or banal expression”. However, the unresolved issue is the extent of protection that can be claimed when one single prompt can yield multiple generative results.

Using OpenAI involves several intellectual property considerations:

  • users should maintain IP rights to the content they input into OpenAI’s tools;
  • OpenAI will retain the rights to its AI models and code;
  • users cannot use the AI to infringe on others’ IP rights;
  • IP rights to AI-generated content are legally complex and not entirely clear;
  • commercial users must ensure they are not violating OpenAI’s policies or IP laws;
  • some of OpenAI’s models require a license for use; and
  • users are generally liable for any IP violations in the content they generate with the AI.

Advising corporate boards of directors on the adoption of AI involves addressing several key issues to identify and mitigate risks, as follows.

  • Ethical considerations – AI systems can inadvertently perpetuate biases if not properly managed; boards should ensure that AI systems are designed and used in a way that respects ethical norms and values.
  • Data protection – AI systems often require large amounts of data, which can include sensitive personal information; companies must ensure that data is collected, stored, and used in a way that complies with all relevant privacy laws and regulations.
  • Security – AI systems can be vulnerable to cyber-attacks, so boards should ensure that adequate security measures are in place to protect against such threats.
  • Transparency – it is important for companies to understand how AI systems make decisions; this can be challenging with more complex AI models, but it is crucial for accountability and trust.
  • Legal compliance – AI systems must comply with all relevant laws and regulations; this can be a complex task, given the rapidly evolving legal landscape around AI.
  • Skills gap – adopting AI may require new skills and expertise, and boards should consider how to address any skill gaps within the organisation.
  • Economic impact – adopting AI can lead to significant changes in business processes, which can have economic effects, so companies should consider these and plan accordingly.
  • Dependability – company boards should consider the reliability of AI systems, including their ability to perform consistently and the potential risks if they fail.
  • Long-term strategy – AI use should align with the company's long-term strategy, and boards should consider how AI can support strategic goals and the resources needed for successful implementation.
  • Public perception – the use of AI can affect a company’s reputation, and management should consider how AI use aligns with the company’s brand and public image.

By addressing these issues, management teams can help ensure that AI is adopted in a way that is ethical, legal, and beneficial for the organisation.

Implementing AI best practices involves addressing the following key issues.

Data Privacy and Security

AI systems often rely on large amounts of data, which can include sensitive information. It is crucial to ensure that data is stored and processed securely, and that privacy rights are respected. This includes complying with regulations such as the GDPR.

Transparency and “Explainability”

AI systems should be designed to be transparent in their operations and decisions. This includes providing clear explanations for AI decisions, especially in critical areas such as healthcare or finance.

Bias and Fairness

AI systems can inadvertently perpetuate or amplify existing biases in data. It is important to monitor and mitigate these biases to ensure fair outcomes.

Robustness and Reliability

AI systems should be robust and reliable, with safeguards in place to prevent or mitigate harmful outcomes.

Accountability

There should be clear lines of accountability for AI decisions, including mechanisms for redress when things go wrong.

Practical advice for implementing these best practices effectively includes:

  • developing a clear AI strategy – this should cover a clear vision for how AI will be used, as well as policies and procedures for addressing the issues above;
  • investing in training and education – ensuring that staff understand the capabilities and limitations of AI, as well as the ethical and legal implications;
  • engaging stakeholders – not only internal stakeholders, but also customers, regulators, and the public; transparency and engagement can help build trust in AI systems;
  • monitoring and review – regularly reviewing and updating AI systems and policies to ensure they remain effective and compliant with evolving regulations and standards;
  • seeking expert advice – it is a good idea to consult with legal and technical experts to ensure that AI systems are compliant with regulations and best practices; and
  • piloting ahead of scaling – testing AI systems on a small scale before deploying them widely, which can help identify and address issues early.

Greenberg Traurig, LLP

Beethovenstraat 545
1083 HK Amsterdam
The Netherlands

+31 651 289 224

+31 20 301 7350

Herald.Jongen@gtlaw.com
www.gtlaw.com

Trends and Developments



Artificial intelligence (AI) is revolutionising our daily lives, reshaping work dynamics, and influencing our interactions with the world. As we embrace this transformative technology, it becomes essential not only to harness its capabilities but also to navigate the intricate challenges that it presents.

This Trends & Developments contribution delves into various facets of AI evolution, regulation and challenges in the Netherlands. First, it discusses the impact and regulatory framework of the EU Artificial Intelligence Act (EU AI Act), as well as other steps taken by the European Union towards regulating AI technologies. Second, it explores the role of the Dutch Data Protection Authority, its publications on AI risks, and its scrutiny of AI applications such as ChatGPT.

Third, it elaborates on the intersection of AI with various topics – eg, healthcare and intellectual property rights – examining existing regulations and possible challenges. We also discuss the potential impact of AI on the job market, including concerns around job loss and the creation of new opportunities.

Fourth, it discusses Dutch collaborations with international organisations such as UNESCO for ethical AI governance, efforts to foster AI start-ups and SMEs, and the establishment of the European AI Office for uniform governance across the EU.

Lastly, the article concludes with an overview of recent AI-related initiatives in the Netherlands, demonstrating a commitment to harnessing the potential of AI while addressing its complexities. It is noted that the Dutch government – as the first EU Member State government – has expressed its positive outlook on the use of AI, paving the way for responsible AI innovation in the Netherlands.

EU AI Act

On 21 May 2024, the EU AI Act was approved by the EU Council. The EU AI Act will enter into force 20 days after being published in the EU’s Official Journal. It aims to ensure the safety of AI systems and their compliance with fundamental rights, while boosting innovation.

Risk-based approach

The EU AI Act follows a risk-based approach, which includes the outright prohibition of certain types of AI use cases, such as:

  • systems for evaluating people based on their social behaviour or personality characteristics;
  • systems that create or expand facial recognition databases through uncontrolled scraping of facial images from the internet or CCTV footage; and
  • biometric categorisation systems and real-time biometric identification systems in publicly accessible spaces for law enforcement purposes.

Systems classified as high-risk must be registered in a database maintained by the European Commission before they are made available. They are also subject to an extensive compliance mechanism that includes legal requirements related to risk management, data and data governance, technical documentation, record keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.

GPAI models

The EU AI Act introduces specific provisions for general purpose AI (GPAI) models. Providers of GPAI models are required to keep technical documentation up to date and make it available to competent authorities on request. They are also required to publicly provide a detailed summary of the content used for training the GPAI model and to implement a policy adhering to EU copyright laws.

Transparency

Additionally, the EU AI Act imposes transparency obligations for AI systems intended to interact with humans. Users must be informed that they are interacting with an AI system unless it is obvious from the circumstances and context of use.

Enforcement

The penalties for non-compliance are significant and designed to be effective, proportionate, and dissuasive. Fines of up to EUR35 million, or 7% of worldwide annual turnover (whichever is higher), may be imposed if the prohibition of certain AI practices is disregarded. Non-compliance with several other obligations under the EU AI Act may result in fines of up to EUR15 million, or 3% of worldwide annual turnover.
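
By way of illustration, the “whichever is higher” mechanism can be expressed as a simple calculation. The following minimal Python sketch is illustrative only and not legal advice: the function name is our own, and it assumes the higher-of rule also applies to the second tier of fines, which the text above states expressly only for the first.

    # Illustrative sketch only (not legal advice): statutory ceilings for
    # fines under the EU AI Act's two-tier regime described above.
    def max_fine_eur(worldwide_annual_turnover_eur: float,
                     prohibited_practice: bool) -> float:
        """Return the maximum possible fine, in euros, for the given turnover."""
        if prohibited_practice:
            # Disregarding a prohibited AI practice: up to EUR35 million or
            # 7% of worldwide annual turnover, whichever is higher.
            return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)
        # Several other obligations: up to EUR15 million or 3% of turnover
        # (the "whichever is higher" rule is assumed to apply here as well).
        return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

    # Example: a company with EUR1 billion in worldwide annual turnover that
    # disregards a prohibition faces a ceiling of EUR70 million, since 7% of
    # turnover exceeds the EUR35 million floor.
    print(max_fine_eur(1_000_000_000, prohibited_practice=True))  # 70000000.0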

Scope of application

The EU AI Act has a broad scope of application, applying not only to providers, importers, distributors, and manufacturers of AI systems, but also to deployers of AI systems. This includes individuals and companies based in the European Union whose services are offered on the EU market, and even parties established outside the EU if the output produced by their AI systems is used in the European Union.

Effective date

The EU AI Act will enter into force 20 days after its publication in the Official Journal and will be fully applicable 36 months thereafter, with most provisions applying after 24 months and certain provisions (such as the prohibitions) taking effect earlier.

EU AI Pact

Some parts of the EU AI Act will apply soon after the Act is adopted, while others, such as certain requirements for high-risk AI systems, will only take effect after a transitional period (ie, the period between the Act’s entry into force and the date it becomes applicable). The European Commission deems it necessary to expedite these timelines given that, in the past few months, technological developments and the mainstream adoption of generative and general-purpose AI systems have been accelerating.

Therefore, the European Commission is launching the AI Pact. This initiative aims to garner voluntary participation from industry to anticipate the EU AI Act and begin implementing its requirements ahead of the legal deadlines.

In November 2023, the European Commission initiated a “call for interest” aimed at organisations eager to participate in the AI Pact proactively. The next phase will involve the European Commission convening with these interested entities in the first half of 2024 to discuss the Pact’s objectives and gather initial thoughts and exemplary practices that could influence future commitments. Once the EU AI Act is formally adopted, the AI Pact will be officially launched, and “frontrunner” organisations will be encouraged to publicly announce their future pledges.

Please see our contribution in the Trends & Developments section of the Chambers Global Practice Guides TMT 2024.

Legal Developments in AI and Data Protection

Role of the Dutch Data Protection Authority

In the Netherlands, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, or Dutch DPA) has been designated as the national coordinating authority for risk-signalling, advice, and collaboration in the supervision of AI and algorithms. As part of its 2020–2023 enforcement agenda, the Dutch DPA set up a new unit in January 2023, the Algorithms Coordination Directorate, to supervise AI and algorithms. This directorate leads cross-sectoral investigations into AI and algorithms, and the Dutch government has committed to providing it with an annual budget.

General guidance on the use of personal data for AI and algorithms

On its website, the Dutch DPA has published guidance on how to use algorithms and AI in a manner that complies with data protection laws, such as the EU General Data Protection Regulation (GDPR). Key points include the following.

  • Legality – it is important to have a legal basis for processing personal data; if there is no valid basis, use of personal data is not permitted.
  • Transparency – when processing personal data, transparency towards customers or employees is essential. If using an algorithmic system for decision-making, information on the underlying logic and the expected consequences of the processing must be provided.
  • Purpose limitation – personal data can only be processed with a pre-established objective, and for no other purpose.
  • Data minimisation – as little personal data as necessary should be processed for a specific purpose; data that is demonstrably unnecessary should not be processed.
  • Accuracy – the personal data processed must be accurate to prevent incorrect or unexpected outcomes.
  • Security – all personal data processed must be properly secured through technical and organisational measures.
  • Privacy by design and default – when developing an algorithmic system, it is important to adhere to the GDPR principles of privacy by design and default.

The Dutch DPA also advises conducting a Data Protection Impact Assessment (DPIA) before using algorithmic systems that process personal data. This helps map out privacy risks and take steps to reduce them.

The Dutch DPA also provides guidelines for prior consultation with the authority where the use of algorithms poses a high risk to the individuals involved. Lastly, it highlights that people have certain rights under the GDPR when their personal data is processed by an algorithmic system, including the right to information, access to their data, and human intervention in decisions.

The Dutch DPA actively enforces compliance with the above rules. A recent example is the demand for remedial action issued to the Dutch employee insurance agency (UWV). Until January 2023, the UWV used an algorithm to unlawfully track the online behaviour of people receiving unemployment benefits. The Dutch DPA will oversee the implementation of the remedial measures, which include notifying the persons whose personal data was unlawfully processed.

ChatGPT

On 7 June 2023, the Dutch DPA sought clarification on the training methods used for ChatGPT. The inquiry focused on how user data is used, the policies and procedures for the treatment of personal data sourced from the internet for training purposes, and the approach to handling generated responses that may be inaccurate, outdated, defamatory, or offensive.

First Algorithmic Risks Report Netherlands

The Dutch DPA publishes a recurring risk report highlighting risks, developments, and recommended strategies in relation to AI, and aims to establish its semi-annual Algorithmic Risks Report Netherlands (ARR) as the primary document for this national risk-monitoring function. Its objective is to raise awareness among stakeholders – private and public organisations, lawmakers, and the general public – about current algorithmic risks and preventive measures.

The first ARR, published on 31 August 2023, provides a comprehensive overview of general developments, risks, and challenges in the Netherlands, presented from an overarching risk perspective. The Dutch DPA plans to publish the report every six months to provide insights into recent developments, current risks, and associated challenges. Since the beginning of 2023, the Dutch DPA has held the role of coordinating authority for risk-signalling, advice, and collaboration in algorithm oversight, positioning the Netherlands at the forefront of international efforts.

In the report, the Dutch DPA concludes that the most significant algorithmic risks in the Netherlands at present stem from intelligent technologies, such as chatbots, and from an incomplete understanding of how the underlying algorithms work.

Second Algorithmic Risks Report Netherlands

The Dutch DPA released its second AI and Algorithmic Risks Report in January 2024, highlighting the increasing risks associated with AI and algorithms, particularly generative AI. The report emphasises the need for improved risk management and incident monitoring due to growing issues such as disinformation, privacy violations, and discrimination.

The Dutch DPA recommends a comprehensive national master plan by 2030 to manage and control AI-related risks and promote responsible AI use. The plan includes clear yearly goals and the implementation of regulations such as the EU AI Act, and emphasises human control, secure applications, and strict organisational rules.

The report also stresses the importance of education about AI and algorithms for all ages and roles, and calls for structural investments in knowledge. It acknowledges the high risks posed by generative AI, including disinformation and discrimination, and underscores the need for proactive efforts from organisations in risk management and internal supervision. According to the report, the EU AI Act should provide oversight of foundation models and their developers.

For further information on the interaction of data protection and AI, see also our contribution in the Trends & Developments section of the Chambers Global Practice Guides Data Protection & Privacy 2024.

Legal Developments in AI and Intellectual Property Rights

As of now, the Netherlands, like many other countries, does not have specific legislation or case law addressing the issue of AI and intellectual property rights. Current local regulations are based on the European Union’s directives, which do not specifically mention AI but do refer to computer programs. The most prominent question that has yet to be answered is whether work generated by AI can receive protection under intellectual property laws, for example by means of a patent or copyright.

According to the current Dutch Copyright Act (Auteurswet), only human authors can claim copyright, thus likely excluding AI-generated works from protection. Arguably, some copyright protection exists for the prompts used to generate the work. See our contribution in the Trends & Developments section of the Chambers Global Practice Guides Trade Marks and Copyright 2024 for a more detailed analysis of this topic.

The Dutch Patent Act (Rijksoctrooiwet) and the European Patent Convention (EPC) set out the requirements for patent protection. Under these laws, patent protection can be obtained for technical products or processes that are new, inventive and susceptible of industrial application. Dutch patent law does not directly protect AI systems. However, elements of an AI system such as inference models, network structures and training methodologies can potentially fall within its purview.

The European Patent Office’s Examination Guidelines indicate that algorithms and models are inherently seen as abstract mathematical entities. As such, mathematical methods are not patentable when claimed on their own. This exclusion does not apply, however, where they are incorporated within, for instance, computer software or a computer-implemented invention.

The EU AI Act attempts to establish further guidelines for navigating the complexities of AI-generated content and authorship. The legislation recognises the importance of human creativity and addresses concerns surrounding ownership and attribution in instances where AI autonomously produces content. Furthermore, the EU AI Act integrates provisions for digital rights management tailored to AI-generated media, safeguarding against unauthorised use and potential misuse of copyrighted material.

The EU AI Act also requires AI providers to enhance transparency by informing users that they are interacting with an AI system, unless this is “obvious from the circumstances and the context of use”.

Legal Developments in AI and Healthcare

The market for AI in healthcare in the EU is forecast to expand at an annual growth rate of 49.3%, reaching EUR46.69 billion by 2028, making it the second-largest AI healthcare market globally. However, placing an AI-enabled medical device on the market is complex: while medical technology may be classified as medium-risk under the Medical Devices Regulation (MDR), under which it must undergo a conformity assessment, it is considered high-risk under the EU AI Act.

The EU AI Act may also conflict with the existing MDR framework. For example, elements such as conformity assessments, quality management systems and notified bodies are already regulated by the MDR. Furthermore, while both regulations require technical documentation from organisations, a comparative analysis of the two legislative texts reveals overlapping or conflicting requirements, which may necessitate the submission of two distinct sets of documentation.

“We commend EU legislators’ progress in aligning the requirements and processes of the AI Act with MDR/IVDR and other sectoral legislation”, said Alexander Olbrechts, director of digital health at MedTech Europe. He added: “We welcome the approach taken by legislators favouring a single conformity assessment and a single, integrated, technical documentation, which we believe is crucial to facilitate investment and innovation in AI in the EU’s digital economy while at the same time ensuring legal certainty for all actors in the AI ecosystem”.

A balanced regulatory approach to privacy and healthcare innovation is essential when it comes to data selection and omission in AI. It is important that all parties involved in the healthcare sector can contribute to and enhance the completeness of datasets for any AI application aimed at improving healthcare and its delivery.

Using health data to train AI systems carries risks, including the risk of this sensitive data being shared with third parties or exposed in data breaches. Bias, inequality (such as discrimination) and disparities in access to quality equipment can result from biased and imbalanced training datasets, as well as from a lack of diversity and interdisciplinarity in development teams.

Between 2019 and 2021, the Dutch Ministry of Health, Welfare and Sport and several partners carried out extensive research aimed at implementing AI in the healthcare sector. The results of this programme have been published, including a roadmap for using AI in the healthcare sector. The government also provides opportunities for financing and subsidies.

Legal Developments in AI and the Future Job Market

While the future of AI is bright and evolving, the fear of job loss is ever-present. Yet there is potential for the EU AI Act to lead to the creation of job opportunities. As AI-enabled systems enter the market, responsibilities need to be shared among the relevant parties, and fines for non-compliance under the Act will be hefty. Legal and compliance experts will therefore be an asset for AI organisations.

Furthermore, AI systems that pose a high risk will need to meet certain criteria to be allowed onto the EU market. If their risk level is considered too high, their entry can be blocked.

The EU AI Act bans applications that threaten citizens’ rights, including biometric categorisation systems that reveal sensitive data, the uncontrolled scraping of facial images from the internet or CCTV footage, facial recognition in workplaces and schools, social scoring and predictive policing. AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

It could be argued that such stringent regulations hinder innovation. To help balance regulation and innovation, the European AI Office will be guided by a scientific panel of independent experts and an AI Board composed of Member States’ representatives. Additionally, an advisory forum will be established to provide technical expertise to the AI Board; it will include industry representatives, SMEs, start-ups, and members of civil society and academia.

The European Commission and the Netherlands Partner With UNESCO for Ethical AI Governance

UNESCO, the European Commission and the Dutch Authority for Digital Infrastructure have launched a project called “Supervising AI by Competent Authorities”. It aims to create frameworks for ethical AI governance in the Netherlands, balancing regulation and innovation. It will analyse the best institutional design for AI supervision, in line with the EU AI Act and UNESCO’s Recommendation on the Ethics of AI.

The project will produce a comprehensive report on AI supervision, develop case studies and organise training sessions. The goal is to enhance the capacity of authorities across the EU to ensure AI system compliance with ethical standards and regulations.

Fostering Start-Ups and SMEs

The European Commission has introduced the “AI Innovation Package” to support AI start-ups and SMEs and assist in the development of trustworthy AI that respects EU values and rules. The package’s “Communication on boosting startups and innovation in trustworthy AI” outlines strategic investments to strengthen the Union’s supercomputing infrastructure.

Central European Governance Through the European AI Office

On 24 January 2024, the European Commission established the European AI Office, dedicated to advancing trustworthy AI and mitigating associated risks. Serving as the central hub for AI expertise, it lays the groundwork for a uniform European AI governance system.

The European AI Office plays a key role in implementing the EU AI Act by supporting the governance bodies in Member States in their tasks. It will enforce the rules for general-purpose AI models and collaborate with expert teams and Member States to make well-informed decisions.

Both the AI Office and the GenAI4EU initiative form part of the AI Innovation Package. GenAI4EU aims to promote the adoption of generative AI within key industrial sectors, fostering an open-innovation ecosystem that facilitates collaboration between AI start-ups and industry players, including the public sector. Together, the two will contribute to developing applications in areas including robotics, health, biotech, manufacturing, mobility, climate, and virtual worlds.

Recent AI-Related Initiatives in the Netherlands

Finally, there have been several initiatives involving AI over the last year.

  • The Dutch government published its vision on generative AI (“Rijksbrede visie op generatieve AI”) on 18 January 2024. In short, the government has a positive outlook on the use of AI and aims to foster a strong AI ecosystem in the Netherlands, with room for innovation using responsible generative AI. The publication is available on the Dutch government’s website.
  • The Dutch government has taken steps to determine the efficiency of AI in the labour market by analysing labour productivity, quantity and quality of work with the help of the Social and Economic Council (SER).
  • The government plans to conduct campaigns to educate people on how to prevent their data from being used to train generative AI models.
  • Efforts are being made to establish a secure and accessible national AI testing facility. AINEd InnovatieLabs is set to initiate public-private partnerships to implement responsible generative AI applications in specific government services. The National Growth Fund will contribute EUR204.5 million to the AINEd programme aimed at knowledge, innovation and the application of Dutch AI systems.
  • GPT-NL, a recently launched Dutch open language model, will promote open Dutch and European language models consistent with public values; it will be granted EUR13.5 million by the Ministry of Economic Affairs and Climate Policy.
Greenberg Traurig, LLP

Beethovenstraat 545
1083 HK Amsterdam
The Netherlands

+31 651 289 224

+31 20 301 7350

Herald.Jongen@gtlaw.com
www.gtlaw.com
