Current legislation that touches on artificial intelligence (AI) includes the following:
The foregoing legislation is supplemented by case law from the Dutch courts and the Court of Justice of the EU (CJEU).
Applications of AI, including large language models (LLMs) and machine learning, are a hot topic in every industry, including healthcare, retail, finance, manufacturing, transportation and education. The use of generative AI, which creates data such as images and text based on generative models, is increasingly common, particularly in customer-facing situations through the use of chatbots. Predictive AI, on the other hand, is used to make predictions based on input data. It is used in a variety of applications, including weather forecasting, stock market predictions and disease outbreak predictions. Predictive AI uses machine learning algorithms to analyse historical data and predict future outcomes.
Industry innovations driven by AI and machine learning include autonomous vehicles, chatbots, personalised marketing, predictive maintenance and precision medicine. These innovations benefit businesses by reducing costs, improving efficiency and creating new revenue streams. Consumers benefit from personalised services, improved healthcare and efficient transportation.
The semiconductor industry in the Netherlands is a global leader in supplying the AI sector, with companies such as ASML, ASM International and NXP developing and producing cutting-edge technology.
Many Dutch government entities actively develop and engage in initiatives that aim to facilitate the adoption and advancement of AI for industry use, as well as the use of AI by government entities themselves. Although the government certainly acknowledges risks relating to AI, the general outlook is positive.
In January 2024, the Dutch Minister of Digital Affairs and Kingdom Relations presented the government’s vision on generative AI, highlighting the opportunities of generative AI and describing it as a promising technology, yet also recognising its challenges, particularly relating to safeguarding human wellbeing, sustainability, justice and security. The focus of the Dutch government in relation to generative AI aligns with the government’s broader ambitions regarding digitalisation (Werkagenda Waardengedreven Digitaliseren), which is to ensure that everyone can participate in the digital age, trust the digital world and have control over their digital lives.
The Dutch supervisory authorities in the financial sector – the Dutch Central Bank (De Nederlandsche Bank, or DNB) and the Authority for the Financial Markets (Autoriteit Financiële Markten, or AFM) – support digital innovation, which, more often than not, includes some form of AI, through several initiatives. For example, the AFM & DNB InnovationHub and AFM Regulatory Sandbox provide support in manoeuvring the complicated regulatory landscape.
While other jurisdictions may prefer a “wait and see” stance on how AI unfolds and affects various industries and sectors, the EU and the Netherlands have attempted to adopt – as well as regulate – AI right from the start. In so doing, they have taken a risk-based, horizontal approach that applies across sectors. The general attitude in the Netherlands towards the use of AI is positive.
On 1 August 2024, the EU AI Act came into force, with most provisions applying after a two-year transition period (from 2 August 2026). Certain obligations, such as the bans on prohibited AI practices, have been effective as of 2 February 2025. The EU AI Act is the first comprehensive legal framework on AI worldwide. In addition, there are other regulations that impose (indirect) requirements on the deployment of AI, as indicated in 1.1 General Legal Background, as well as a number of sector-specific laws that address AI for specific market parties (see 3.2 Jurisdictional Law).
The Netherlands currently regulates AI through the recently enacted EU AI Act. There is also specific legislation described in 14.2 Financial Services regarding the use of AI.
On 17 January 2024, the Dutch government published its view on generative AI, emphasising the importance of continuing to monitor and analyse generative AI trends. The Netherlands intends to be a front runner within the EU in the field of safe and responsible generative AI, and the government aims to achieve its objectives by collaborating closely with the relevant companies and leveraging its international connections. It intends to take on a prominent role in the roll-out of AI in the coming years.
In January 2024, the Dutch Ministry of the Interior and Kingdom Relations published a guide to impact assessments on human rights and algorithms. This guide includes extensive explanations on the assessment that government entities have to make when using algorithms and AI. Although the guide is non-binding, government entities are expected to use the model when performing this type of assessment. The use of the guide is also recommended for non-government entities by the Dutch DPA.
The recently enacted EU AI Act applies directly to EU member states as a regulation.
The direct application of the EU AI Act provides for harmonisation advantages, meaning that companies can achieve EU-wide compliance by meeting the requirements of one set of rules. Note that EU member states may still choose to enforce additional or stricter rules.
The EU AI Act is, in many ways, the first of its kind. For this reason, few issues with existing AI-specific local Dutch laws are expected. Moreover, the EU AI Act will override any existing rules in member states due to the EU’s legislative system.
There is no applicable information in this jurisdiction.
There have not been any notable amendments or newly introduced jurisdictional data laws in the Netherlands to foster AI technology. The main reason for this is that existing laws (mainly the GDPR) and EU regulations provide an appropriate regulatory framework.
The DSM Directive fosters AI technology through a text and data mining (TDM) exception that allows the reproduction and extraction of works and other subject matter to which there is lawful access in networks or databases for the purpose of TDM, including for AI training purposes, unless the right holder has expressly reserved that use.
There are no concrete plans for legislative change with respect to data and copyright laws as of the date of this writing.
There is no legislation pending as to the development or use of AI.
As of the date of this publication, there have been no notable decisions from the Dutch courts regarding AI or AI in combination with intellectual property rights (IPRs).
In the Netherlands, the Dutch DPA has been designated as the national co-ordinating authority for risk signalling, advice and collaboration in the supervision of AI and algorithms. The Dutch DPA has instituted a separate division for this purpose, the Algorithm Coordination Directorate (directie Coördinatie Algoritmes, or DCA). The Dutch DPA is also one of the supervisory authorities responsible for monitoring compliance with the EU AI Act.
The Dutch DPA will focus on four areas in 2025 – transparent algorithms, auditing, governance and the EU AI Act.
Financial regulatory authorities (the DNB and AFM) supervise the use of AI in the financial sector. Conduct supervision is carried out by the AFM, which has prioritised supervision of market parties that offer low-threshold products via apps, as well as digital marketing aimed at consumers. One of the AFM’s goals is to prevent consumers from being nudged towards products or services that do not primarily serve their interests. The DNB carries out prudential supervision, which, in relation to AI, focuses on topics such as the soundness, accountability, fairness, ethics and transparency of AI products and services. The authorities have stated that financial supervision specific to AI will be intensified over the coming years.
The Authority for Consumers and Markets (Autoriteit Consument en Markt, or ACM) ensures fair competition between businesses and protects consumer interests. It is the regulatory authority enforcing compliance with the DSA, the DGA and the Data Act. One of the goals in the 2025 annual plan of the ACM is to stimulate an open and fair digital economy, for example by taking action against interface designs that interfere with someone’s decision-making process (dark patterns).
Various regulatory bodies have issued AI-specific guidelines and recommendations.
Except for the general principles for the use of AI in the financial sector, none of the aforementioned initiatives are legally binding. However, they carry significant advisory and policy-shaping weight. Their objectives focus on protecting human rights, ensuring fairness and promoting transparency in AI development and use. These guidelines have improved public awareness, driven responsible innovation and prepared Dutch organisations for the EU AI Act.
In 2021, the Dutch DPA fined the Tax Authorities (Belastingdienst) for discriminatory and unlawful processing of personal data that resulted in serious violations of the GDPR. The Tax Authorities processed the nationality of applicants for childcare allowance and used this data for automated fraud detection, for example by automatically designating applications as an increased risk. This had disastrous consequences, continuing even to this day. Applicants were wrongfully forced to pay back the allowance received, leading to debts, bankruptcies and the wrongful removal of more than 2,000 children from their parents. The use of AI for fraud detection, particularly by the government, remains a topic under strict regulatory scrutiny. In 2024, the Dutch DPA published several findings on unlawful use of AI for fraud detection by government entities.
In 2024 and 2025, the Dutch DPA published a multitude of reports about enforcement actions relating to AI. For example, the Dutch DPA instructed seven holiday parks to terminate their unlawful mandatory use of facial recognition software for entry to swimming pools.
In its strategy for 2025, the Dutch DPA announced it has a broad interpretation of its role as supervisor: that of social director. The Dutch DPA explained that this role is based on three principles: fundamental values are paramount in its work, it focuses on harmful situations and it chooses interventions where the greatest effect is expected. The fundamental values that the Dutch DPA prioritises are non-discrimination, personal autonomy and freedom, the verifiability and transparency of power and the security of personal data. In relation to AI, the Dutch DPA stated that, in its view, a problem exists if an AI application or the processing of personal data is harmful to these fundamental values. According to the Dutch DPA, such problems can arise both within the clear legal duties of the Dutch DPA and outside them, for example if there are not yet any rules for a certain situation. Although the Dutch DPA’s regulatory oversight and enforcement powers are limited to those set out in legislation, market parties in the Netherlands should be aware of this active stance of the Dutch DPA.
In 2024, the DNB issued two fines for the unlicensed offering of crypto services in the Netherlands.
In its decision on appeal of 27 November 2024, the ACM upheld the fines totalling EUR1,125,000 and the binding instructions it had imposed on the gaming company Epic for violations of consumer legislation in the game Fortnite. The ACM found that children were pressured into making purchases by playing on their vulnerabilities, particularly their impulsivity. At the time of publication, Epic had lodged a court appeal against the decision.
Dutch regulators may also issue public warnings about the use of certain products or market parties. For example, the Dutch DPA warned the public against the AI chatbot DeepSeek on 3 February 2025.
In the Netherlands, the Royal Netherlands Standardization Institute (Nederlands Normalisatie Instituut; NEN) facilitates agreements between stakeholders on standards and guidelines. NEN manages over 31,000 standards, including international (International Organization for Standardization (ISO), International Electrotechnical Commission (IEC)), European (European Norm; EN) and national (NEN) standards accepted in the Netherlands. The European Committee for Standardisation (Comité Européen de Normalisation; CEN) and the European Committee for Electrotechnical Standardisation (Comité Européen de Normalisation Électrotechnique; CENELEC) are the European standard-setting bodies with respect to AI. The standards set out by these bodies are generally non-binding.
CEN and CENELEC standards help in meeting requirements and conducting risk assessments. Compliance with these standards is usually not mandatory, but it gives the advantage of “presumption of conformity” to manufacturers, economic operators or conformity assessment bodies. There are no ostensible conflicts with Dutch law.
AI is used across governmental authorities for various Dutch government operations, including administrative inspections and law enforcement. Typical use cases are image recognition, speech and text recognition, machine learning and robotics. The Dutch government’s views on the use of AI are positive, and there are many initiatives to use the technology.
The use of personal data for AI purposes by Dutch government entities is generally subject to the GDPR. However, processing in relation to criminal investigations and proceedings is subject to the Police Data Act (Wet Politiegegevens) and the Judicial and Criminal Records Act (Wet justitiële en strafvorderlijke gegevens), and processing by the intelligence agencies is subject to the Intelligence and Security Services Act (Wet op de inlichtingen- en veiligheidsdiensten).
Dutch law enforcement has developed a protocol for using facial recognition technology (FRT). The protocol provides a decision framework and governance for approving requests to experiment with FRT in light of investigations. Since FRT typically includes the processing of biometric information (ie, unique identifiers of someone’s face), strict restrictions apply under the Police Data Act. These restrictions are similar to the protection of biometric information under Article 9 of the GDPR.
There are two notable decisions by the Dutch civil and administrative courts involving AI, as follows.
Possible uses of AI in national security include analysing intelligence information, enhancing weapon systems, providing battlefield recommendations, aiding in decision-making, cybersecurity, military logistics and command and control. On 15 February 2023, the General Intelligence and Security Service (Algemene Inlichtingen- en Veiligheidsdienst; AIVD) shared principles for defending AI systems that should be considered during the development thereof. These principles include:
In December 2024, the AIVD, the Military Intelligence and Security Service (Militaire Inlichtingen- en Veiligheidsdienst; MIVD) and the National Coordinator for Security and Counterterrorism published an extensive analysis on the effect of AI on national security.
Generative AI has raised several issues, including ethical concerns, data privacy and IPRs.
Ethical concerns revolve around the potential misuse of AI for generating misleading information or deepfakes, or promoting hate speech. To address this, AI developers are implementing stricter usage policies and creating more sophisticated content filters.
Data protection is another significant issue. AI models are trained on vast amounts of data, which may include personal data. To protect this data, AI developers seek to implement robust data anonymisation and encryption techniques. They are also working towards creating models that can learn effectively from less data, or that can handle tasks they were not explicitly trained for, a concept known as zero-shot learning.
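By way of illustration, the following is a minimal sketch – in Python, using hypothetical field names and a placeholder key – of how personal data might be pseudonymised before it enters a training set. Note that keyed hashing amounts to pseudonymisation rather than true anonymisation, so the GDPR generally continues to apply to the resulting data.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would be stored outside the training environment.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise_record(record: dict) -> dict:
    """Replace direct identifiers with keyed hashes and drop free-text fields
    before the record is added to a training corpus."""
    direct_identifiers = {"name", "email", "phone"}
    free_text_fields = {"support_notes"}  # may contain unstructured personal data

    cleaned = {}
    for field, value in record.items():
        if field in free_text_fields:
            continue  # excluded entirely rather than risk leaking personal data
        if field in direct_identifiers:
            digest = hmac.new(PSEUDONYMISATION_KEY, str(value).encode(), hashlib.sha256)
            cleaned[field] = digest.hexdigest()
        else:
            cleaned[field] = value
    return cleaned

print(pseudonymise_record({
    "name": "J. Jansen",
    "email": "j.jansen@example.nl",
    "region": "Amsterdam",
    "support_notes": "called about an invoice",
}))
```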
In terms of intellectual property (IP), the assets in the AI process (such as AI models, training data, input prompts and output) can be protected under various forms of IP law. AI models and training data can be protected as trade secrets or database rights, while input prompts and output can potentially be protected under copyright laws or, possibly, patent laws. The AI tool provider’s terms and conditions significantly influence this protection.
There is also the risk of IP infringement – for example, where AI is trained on copyright-protected material without permission. If the AI’s output closely resembles a copyrighted work, it could also lead to infringement claims.
The GDPR affords data subjects several rights, including the right to rectification and deletion. If an AI outputs false claims about an individual, the individual has the right to have this corrected. If the AI has processed the individual’s personal data without their consent, they have the right to have this data deleted.
Purpose limitation and data minimisation principles require that personal data be collected for specific purposes, and that no more data is collected than is necessary for those purposes.
Under the GDPR, data subjects have several rights that pertain to AI models.
Right to Rectification
If an AI model produces false output related to individuals, the data subjects have the right to have inaccurate personal data rectified under Article 16 of the GDPR. This does not necessarily mean that the entire AI model must be deleted or adjusted. The rectification process can be achieved by integrating a mechanism in the AI model to allow the correction of inaccurate data.
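As a purely illustrative sketch (in Python, with hypothetical names), such a mechanism could take the form of an output-level correction layer that applies verified rectifications before the model’s output is shown, without retraining the underlying model. A production system would typically use more robust entity-level suppression rather than exact string matching.

```python
# Hypothetical correction register: verified rectifications recorded after a data
# subject has successfully exercised the Article 16 GDPR right to rectification.
CORRECTIONS = {
    "Jane Example is a convicted fraudster":
        "Jane Example has never been convicted of fraud",
}

def apply_rectifications(model_output: str, corrections: dict) -> str:
    """Post-process model output so that statements recorded as inaccurate are
    replaced with the rectified version before the output is displayed."""
    for inaccurate, rectified in corrections.items():
        if inaccurate in model_output:
            model_output = model_output.replace(inaccurate, rectified)
    return model_output

draft = "According to our records, Jane Example is a convicted fraudster."
print(apply_rectifications(draft, CORRECTIONS))
```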
Right to Erasure (“Right to Be Forgotten”)
Under Article 17 of the GDPR, a data subject has the right to have their personal data erased without undue delay under certain circumstances – eg, where the data is no longer necessary for the purpose for which it was collected or processed. However, this does not mean the entire AI model would need to be deleted.
Purpose Limitation
Article 5(1)(b) of the GDPR states that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. This can be achieved in AI models through clear communication about how the data will be used and by limiting the use of the data to those purposes only.
Data Minimisation
The GDPR, under Article 5(1)(c), also requires that personal data be relevant and limited to what is necessary in relation to the purposes for which it is processed. This principle can be adhered to in AI models by only collecting and processing the minimal amount of personal data necessary for the model to function as intended.
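A minimal sketch of how data minimisation might be operationalised is set out below (Python; the purposes, field names and whitelist are hypothetical and would normally follow from a DPIA).

```python
# Hypothetical whitelist of fields that are necessary for each processing purpose.
ALLOWED_FIELDS = {
    "churn_prediction": {"customer_id", "tenure_months", "monthly_usage"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields necessary for the stated purpose; drop everything else
    before the data is stored or used for training."""
    allowed = ALLOWED_FIELDS[purpose]
    return {field: value for field, value in record.items() if field in allowed}

raw_record = {
    "customer_id": "C-123",
    "tenure_months": 14,
    "monthly_usage": 38.5,
    "date_of_birth": "1990-01-01",  # not necessary for churn prediction
    "nationality": "NL",            # not necessary and potentially sensitive
}
print(minimise(raw_record, "churn_prediction"))  # only the three whitelisted fields remain
```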
The application of these rights and principles can be complex within the context of AI, particularly when it comes to rectification and erasure. For example, if an AI system has used personal data to “learn” and adapt its algorithms, simply erasing that data might not fully remove its impact on the model.
AI is increasingly used in the legal profession, and its applications are enhancing the way legal services are delivered. Examples of AI-powered tools are voice dictation, document automation, compliance tools and document translation.
There are currently no specific laws, rules or regulations in place on using AI in the legal profession. The Dutch Bar Association recently started providing recommendations about the use of AI in compliance with statutory professional standards. There have been no Dutch court decisions on the matter.
Ethical considerations of AI in the legal profession include:
Given the complexity and evolving nature of AI technologies, issues of liability for personal injury or commercial harm resulting from these technologies are of increasing concern under Dutch and EU laws. While the specifics can vary, liability for AI-enabled technologies generally falls under two main theories – strict liability and negligence.
Strict liability holds that the party responsible for placing the AI technology on the market is liable for any harm caused, regardless of fault or intent. In contrast, negligence requires proof that the party acted unreasonably or failed to take necessary precautions, leading to the harm.
The Dutch Civil Code (DCC) and the EU Product Liability Directive are the main legal frameworks for these theories. They require the claimant to prove that the damage was caused by a defect in the product, and that there is a causal link between the damage and the defect.
Under Dutch and EU laws, human supervision is often required for AI technologies, particularly those with significant potential for harm. For example, autonomous vehicles must have a human driver ready to take control if necessary, and medical AI applications are typically used as decision-support tools for healthcare professionals, not as autonomous decision-makers.
Insurance plays a key role in managing AI liability risks. Businesses can purchase insurance policies to cover potential liability arising from their AI technologies. The terms of these policies can vary, but they typically cover legal defence costs and any damages awarded.
The allocation of liability among supply chain participants is a complex issue. Under Dutch and EU laws, any party involved in the supply chain could potentially be held liable for harm caused by an AI technology. This includes manufacturers, distributors, retailers and even users if they modify the technology in a way that contributes to the harm. However, contracts and insurance policies often allocate liability in specific ways to manage these risks.
In practice, the trend under Dutch and EU laws is to allocate liability based on control and benefit. Parties that have more control over the AI technology or derive more benefit from it are generally held to a higher standard of liability.
Liability arising from the acts or omissions of AI technology acting autonomously would generally be attributed to the business selling or providing the AI products or services. This is based on the principle of strict liability, which holds businesses responsible for the products they place on the market. However, the specifics can vary depending on factors such as the nature of the harm, the level of autonomy of the AI technology and the terms of any contracts or insurance policies.
High-risk AI-enabled technologies are subject to strict liability, while all other AI systems fall under fault-based liability, coupled with an insurance obligation. A back-end operator is, however, only subject to strict liability if not already covered by the Product Liability Directive. The only defence available to the operator is force majeure, and a presumption of fault rests on the operator.
Furthermore, note that the EU AI Act imposes strict obligations not only on the “provider” of a high-risk AI system but also on the “importer”, “distributor” and “deployer” of such systems. The importer needs to verify whether the high-risk AI system is compliant through verification of documentation, whereas the distributor is required to verify the CE (Conformité Européenne) marking.
Currently, there is no proposed legislation on the imposition or allocation of liability with respect to AI. The EU legislator earlier proposed the EU AI Liability Directive to modernise EU civil liability rules for harm caused by AI systems. However, in early 2025, the European Commission decided to withdraw the proposal from the legislative process, because its goals were largely covered by the EU AI Act and the updated Product Liability Directive, and because it risked creating legal overlap, uncertainty and complexity for member states.
Bias in algorithms technically refers to a systematic error introduced by an algorithm that skews results in a particular direction.
These errors are caused by three types of biases:
Such biases may result in discrimination, inequality and racism. If an algorithm treats individuals or groups unfairly based on characteristics such as race, gender, age or religion, it may be in violation of anti-discrimination laws. Areas with a high risk of algorithmic bias include online advertising, credit scoring, hiring and law enforcement. For example, an algorithm used for hiring might be biased against certain groups if it was trained on past hiring decisions that were biased.
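As a simple illustration of how such bias can be detected, the sketch below (Python, with hypothetical screening data) compares selection rates across groups; a markedly low ratio between the lowest and highest rates is a common warning sign that warrants further investigation, although it is not in itself a legal test.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate (share of positive outcomes) per group,
    from (group_label, selected) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += int(selected)
    return {group: positives[group] / totals[group] for group in totals}

# Hypothetical shortlisting outcomes: (group label, shortlisted?)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())
print(rates)  # {'A': 0.75, 'B': 0.25}
print(ratio)  # 0.33 - a low ratio flags a possible disparate impact
```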
Companies can face significant legal and reputational risk if their algorithms are found to be biased. They can be sued for discrimination, fined by regulatory bodies and suffer damage to their reputation.
Regulators are increasingly scrutinising algorithmic bias.
Face recognition technology can be useful, but can also have severely negative effects for data subjects. The systems identify people in photos or videos, or in real time, and are widely used in sectors such as retail, media, the medical sector, entertainment, e-commerce and so on. The use of facial and biometric data can cause privacy, data security, and bias and discrimination issues, resulting in regulatory and ethical violations.
CJEU press release 20/24 of 30 January 2024 states that police authorities may not store the biometric and genetic data of persons who have been convicted by final judgment of an intentional offence, with no time limit other than the death of the person concerned.
Since the use of face recognition involves the automated processing of personal data, both the GDPR and the Law Enforcement Directive (LED) can apply, depending on the capacity in which the data is processed.
The EU AI Act prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes under Article 5(1), subject to narrowly defined exceptions.
When AI is used to automate certain processes, this often includes automated decision-making as regulated under the GDPR. Automated decision-making involves solely automated processing of the personal data of individuals (ie, without any human involvement) that leads to decisions with legal effects concerning that person or similarly significant effects. Article 4(4) of the GDPR defines profiling as a specific form of processing by automated means to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict factors concerning the individual. Examples of automated decision-making are:
According to Article 22 of the GDPR, individuals should have the right not to be subjected to a decision based solely on automated processing, including profiling.
However, automated decision-making is allowed if:
If the automated decision-making is based on the performance of a contract or explicit consent, the individual should at least have the right to seek human intervention on the part of the company making these decisions, to express his or her point of view and to contest the decision. In addition, the company must lay down suitable measures to safeguard the individual’s rights, freedoms and legitimate interests.
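The sketch below (Python; the field names and routing rules are hypothetical) illustrates one way such a safeguard might be wired into a decision pipeline: decisions with legal or similarly significant effects, and any decision the data subject contests, are routed to a human reviewer who can genuinely change the outcome.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # eg, "approve" or "reject"
    legal_effect: bool  # legal or similarly significant effect for the individual?
    contested: bool = False

def route_decision(decision: Decision) -> str:
    """Route a model-generated decision: anything with legal or similarly
    significant effects, or anything contested by the data subject, goes to
    a human reviewer with the authority to change the outcome."""
    if decision.legal_effect or decision.contested:
        return "human_review_queue"
    return "automated_processing"

print(route_decision(Decision(outcome="reject", legal_effect=True)))    # human_review_queue
print(route_decision(Decision(outcome="approve", legal_effect=False)))  # automated_processing
```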
Furthermore, under the GDPR, a data protection impact assessment (DPIA) is mandatory when the envisaged processing is likely to result in a high risk to individuals’ rights and freedoms. The DPIA is a process designed to describe the processing of personal data, assess its necessity and proportionality, and help manage the risks to the rights and freedoms of natural persons resulting from the processing by assessing those risks and determining the measures to address them. According to the Article 29 Working Party Guidelines on Data Protection Impact Assessment, such a high risk is generally present when the processing involves automated decision-making.
Failure to comply with these rules can result in hefty fines. Under the GDPR, a violation of individuals’ rights, including the right not to be subjected to automated decision-making, may result in fines of up to EUR20 million or 4% of the company’s annual turnover for the preceding financial year, whichever is higher. If the company also fails to perform a DPIA where this is required, it risks fines of up to EUR10 million or 2% of the company’s annual turnover, whichever is higher, under the same regulation.
There is no specific regulatory scheme under Dutch or EU law that directly deals with the use of chatbots or other technologies to substitute for services rendered by natural persons. Under EU law, the GDPR provides broad protections for personal data. It requires data controllers to be transparent about their use of personal data, which would include the use of AI technologies such as chatbots. The GDPR also gives individuals the right to access, correct and delete their personal data, and to object to its processing.
As for the disclosure of AI use, the GDPR’s transparency principle requires that individuals be informed about the processing of their personal data. If an AI system is used to make decisions about individuals, the individuals should be informed of this, as well as the logic involved and the significance and consequences of such processing.
Technologies used to make undisclosed suggestions or manipulate the behaviour of consumers primarily include recommendation algorithms and targeted advertising technologies. These algorithms analyse a user’s behaviour and use this information to suggest products or services that the user might be interested in. While these technologies can be beneficial, they also raise privacy and ethical concerns, as they can be used to influence consumer behaviour without their knowledge or consent.
Additionally, under Article 22 of the GDPR, chatbots may generally not serve as the sole decision-maker in consumer approval processes with legal or similarly significant effects (eg, they cannot autonomously approve or reject a loan), unless one of the exceptions applies. There is also a possibility of chatbots manipulating the behaviour of the consumer based on the data that has been used for their training. In addition, AI-powered bots, known as social media bots, can be programmed to mimic human behaviour and can spread misleading information to manipulate public opinion.
AI technology presents a range of new and unique risks that need to be considered in transactional contracts between customers and AI suppliers, particularly in the AI-as-a-service model. Some of these risks, and the ways businesses can address them, are listed below:
Numerous software and applications are specifically designed to streamline hiring and termination processes, including applicant tracking systems (ATS), human resource information systems (HRIS) and various AI-driven tools.
Technology in Hiring
ATS is software that collects and sorts résumés based on given criteria. It automates the process of shortlisting candidates, making it easier for recruiters to find the right talent. AI-driven tools are also used for pre-employment testing and video interviews. They can assess a candidate’s skills, personality traits and even emotional intelligence.
This technology reduces the time spent on screening and shortlisting candidates, ensuring a more efficient hiring process. It also minimises human bias, leading to a more diverse workforce. On the downside, qualified candidates might be overlooked if their résumés do not include specific keywords used by the ATS. Moreover, the lack of human interaction can make the hiring process impersonal.
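The sketch below (Python, with hypothetical keywords and résumé text) shows how a naive, verbatim keyword screen of the kind described above can exclude an otherwise qualified candidate simply because their wording differs from the configured keywords.

```python
# Hypothetical required keywords configured by a recruiter.
REQUIRED_KEYWORDS = {"python", "machine learning"}

def passes_keyword_screen(resume_text: str, required: set) -> bool:
    """Naive ATS-style screen: shortlist only if every keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in required)

print(passes_keyword_screen(
    "Built ML pipelines in Python for fraud detection", REQUIRED_KEYWORDS))
# -> False: "ML" is not matched against "machine learning", so a qualified
#    candidate is screened out - the limitation noted above.
```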
If not properly configured, ATS can potentially discriminate against certain applicants, leading to legal implications under employment laws. Also, there are privacy concerns related to the collection and storage of applicant data.
Technology in Termination
HRIS is used to manage employee data and automate the termination process. It oversees tasks such as the deactivation of access to company systems, final paycheck calculations and exit interview scheduling.
HRIS can ensure a smooth and compliant termination process, reducing the risk of errors and legal complications. It also allows for efficient record-keeping, which is crucial in the event of a legal dispute.
The automated termination process might feel impersonal to the employees. There is also a risk of premature information leaks about the termination.
Legal Risks
If the system is not updated or used correctly, it can lead to legal issues such as non-compliance with labour laws. Additionally, mishandling of employee data can result in privacy breaches and potential legal action.
The use of AI in the evaluation of employee performance and monitoring of employee work has both positive and negative effects. On the positive side, it can create new job opportunities in areas such as data analysis, programming and system maintenance. When working correctly, AI monitoring is also very precise, which reduces the risk of human error. On the negative side, there are fears that AI could replace human jobs. However, as AI technology is not yet sufficiently advanced to completely replace human labour, these jobs are still largely secure.
AI may provide automated recommendations to users based on past orders, preferences and location. Chatbots may function as virtual assistants to provide personalised assistance, and AI optimises food delivery routes by estimating traffic and weather conditions.
Dutch and EU laws have been evolving in response to these changes. The EU has been working on regulations to ensure fair and just working conditions in the platform economy – eg, the Platform Work Directive, adopted in late 2024, which includes providing access to social protection for platform workers and ensuring that they have the right to collective bargaining.
However, none of these initiatives or precedents have specifically addressed the use of AI on digital platforms.
AI applications in the finance industry, such as algorithmic trading, customer service, fraud detection and prevention and compliance with anti-money laundering legislation, are increasingly being used.
There are some financial regulations that specifically cover the use of AI. The main examples are:
In addition, regulations on internal models of banks (pursuant to the Capital Requirements Regulation) and, to some extent, insurers (pursuant to the Solvency II Directive as implemented into the Dutch Financial Supervision Act (Wft)) include specific requirements that AI models must adhere to.
The Medical Devices Regulation (MDR) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) 2017/746 are the EU regulations that govern the use of such technology in healthcare.
The use of AI in healthcare is not without risk. While using health data to train AI systems, there is a potential risk of this sensitive data being unlawfully shared with third parties, resulting in data breaches. Other risks are possible bias and inequality risks due to biased and imbalanced datasets being used for training, structural biases and discrimination, disparities in access to quality equipment and a lack of diversity and interdisciplinarity in development teams.
Health data is considered sensitive data (a special category of personal data) under Article 9 of the GDPR, and using or sharing a patient’s health data for training an AI model requires explicit consent from the patient under the GDPR as well as healthcare-related legislation. Repurposing data without the patient’s knowledge and consent is unlawful.
Under Dutch law, autonomous vehicles are governed by the Dutch Road Traffic Act 1994, which currently requires a driver to be in control of the vehicle at all times. However, the Netherlands is progressive in terms of autonomous vehicle legislation, and it has been conducting public road tests of self-driving cars since 2015.
At the EU level, autonomous vehicles are subject to a range of regulations, including General Safety Regulation (EU) 2019/2144, which mandates certain safety features for new vehicle types from July 2022 and for all new vehicles from July 2024. This includes a driver drowsiness and attention warning, an advanced driver distraction warning, an emergency stop signal, reversing detection and an event data recorder.
The responsibility for accidents or incidents involving autonomous vehicles is a complex issue. According to EU law, the manufacturer of a vehicle could be held liable if a defect in the vehicle causes damage. In the Netherlands, the driver of the vehicle is usually considered responsible for any accidents, even if the vehicle is driving autonomously. However, this may change as autonomous vehicle technology evolves.
Ethical considerations for AI decision-making in critical situations are also a significant concern. For instance, how should an autonomous vehicle be programmed to act in a situation where an accident is unavoidable? Such complex ethical questions have yet to be fully answered.
There have been several attempts at international harmonisation to promote global collaboration and consistency in regulations and standards. For instance, the international regulations of the United Nations Economic Commission for Europe, which include international regulations for driver control assistance systems, entered into force in 2024.
AI usage in manufacturing and its implications are governed by several regulations within the Netherlands and the EU, addressing areas such as product safety and liability, workforce impact and data privacy and security.
Product Safety and Liability
The EU has a number of directives in place to ensure product safety and manage liability in manufacturing. The General Product Safety Directive ensures that only safe products are sold within the EU. If AI is used in the manufacturing process, the manufacturer must ensure the AI does not compromise the safety of the product.
If a product is faulty and causes damage or injury, the Product Liability Directive is applicable. It makes the producer liable for damage caused by a defect in their product.
Workforce Impact
The Netherlands and the EU have regulations in place to protect workers’ rights. The EU’s Charter of Fundamental Rights includes provisions for fair and just working conditions. If AI is being used to replace or augment human workers, it is crucial that worker rights are respected and that any transition is handled ethically and responsibly. The Dutch Working Conditions Act also stipulates that employers must ensure a safe and healthy working environment, which would extend to an environment where AI is used.
Data Privacy and Security
The GDPR applies to any business that processes personal data, including those using AI. If AI is used to process personal data in the manufacturing process, it must be done in a manner that respects privacy rights and ensures data security. The EU also has the NIS2 Directive, which provides EU-wide legislation on cybersecurity.
The use of AI in professional services in the Netherlands and the EU is governed by a variety of regulations. Here are some of the key areas of focus.
Liability and Professional Responsibility
In the Netherlands, the DCC could potentially be applied in cases of AI causing damage. However, determining responsibility in AI-related incidents can be complex due to the nature of machine learning.
Confidentiality
The GDPR applies throughout the EU, including the Netherlands. It mandates that personal data must be handled in a way that ensures its confidentiality and security. Professionals must ensure that the AI systems they use are compliant with GDPR.
IP
In the EU, AI-generated works may not be eligible for copyright protection, as current laws require human authorship. In the Netherlands, the Dutch Copyright Act (DCA) could potentially apply to AI creations, but this is still a matter of debate.
Client Consent
Under the GDPR, there must be a lawful basis (such as consent) for the processing of personal data. The use of consent can have implications for AI systems used in professional services, as data subjects can revoke their consent.
Additional consents are required for the use and sharing of patients’ health data and social security numbers by healthcare providers.
Regulatory Compliance
At the EU level, the EU AI Act aims to ensure that AI is used in a way that respects EU values and regulations. In the Netherlands, the Dutch DPA oversees the use of AI to ensure compliance with the GDPR and other regulations.
AI assets such as models, training data, input prompts and output can be protected by various forms of IPRs, depending on their nature and the jurisdiction in question.
AI Models
The algorithms used in AI models can be protected by patents. In the EU and the Netherlands, a patent can be granted for an invention that is new, involves an inventive step and is susceptible to industrial application. However, mathematical methods as such (which AI algorithms often are) are not considered patentable.
Training Data
Databases can be protected under copyright and/or sui generis database rights. In the EU, a database is protected by copyright if its structure constitutes the author’s own intellectual creation. If a substantial investment has been made in obtaining, verifying or presenting the contents of a database, it may also be protected by a sui generis database right.
Input (Prompts)
Texts used as input prompts can be protected under copyright law if they are original and constitute the author’s own intellectual creation.
Output
The output of an AI tool can also be protected under copyright law if it is original and is the author’s own intellectual creation. However, this is an area of ongoing debate, as it is uncertain whether an AI-generated work can meet the originality requirement since it does not have a human author.
The terms and conditions of the AI tool provider can significantly influence the protection of assets.
Infringements under Dutch and EU IP laws can occur in various ways. For instance, if someone uses a copyrighted database as training data without permission, this may constitute infringement. Unauthorised use of input prompts or output that are protected by copyright could also lead to infringement.
As of the date of this publication, there have been no decisions from the Dutch courts on the inventorship or authorship of an invention or work created by or with AI technology.
AI technologies and data can be protected under the Trade Secrets Act, which protects undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure. The Act defines a trade secret as information that meets three criteria – it is secret, it has commercial value because it is secret and the holder has made reasonable efforts to keep it secret. AI technologies and data often meet these criteria.
Trade secrets in AI can be protected through non-disclosure agreements and confidentiality clauses in contracts.
At the time of this publication, there is no legislation, case law or other guidance determining whether AI-generated works of art and works of authorship can be protected by copyright in the Netherlands. The general view among scholars is that if a piece of work is entirely produced by a machine with minimal or no human intervention, and no individual can assert ownership over it, the work will likely enter the public domain. This viewpoint aligns with the guidelines provided in other jurisdictions, such as those from the US Copyright Office.
However, current AI technologies like ChatGPT, where users input prompts, do not operate entirely autonomously. This raises the question of whether the input prompt can be granted copyright protection. Under EU copyright laws, concepts, ideas and styles cannot receive copyright protection. However, as long as some creative choices are involved, copyright protection will generally apply.
Consequently, it can be argued that protection will often be extended to a specific prompt, as long as it is not just a “mundane or banal expression”. However, the unresolved issue is the extent of protection that can be claimed when one single prompt can yield multiple generative results.
Using OpenAI’s tools involves several IP considerations:
On 5 February 2025, the Dutch Authority for Consumers and Markets (ACM) announced that it will initiate an inquiry into computer-based consumer pricing. The ACM will aim to explore the concrete effects of computerised and personalised pricing. The ACM carries out this type of inquiry when it believes that a market is not working as well as it should and suspects that breaches of the antitrust/competition rules might be a contributory factor. The ACM uses the information obtained in an inquiry to understand a particular market better from the point of view of competition policy. If it finds grounds for doing so, the ACM may – at a later stage – assess whether it needs to open specific investigations to ensure that the antitrust/competition law rules are being respected.

The ACM’s announcement arguably fits into a broader trend in which the ACM is taking a more active stance and advocating for a so-called New Competition Tool. The idea is that this tool should be complementary to the existing competition instruments (Articles 101 and 102 of the Treaty on the Functioning of the European Union (TFEU), the EU Merger Regulation and Dutch equivalents) and be able to address structural competition problems (eg, insufficient effective competition due to barriers to entry and/or switching) that cannot be tackled effectively by the existing competition instruments. It is against that same background – ie, a perceived deficit in the ability of legal instruments within the antitrust space to deal with structural competition problems – that the ACM has recently started to call in below-threshold mergers in the Netherlands. Big tech/AI companies operating in concentrated and/or quickly changing markets would be wise to take this emerging development into account as part of their business dealings in relation to the Netherlands.
The Netherlands currently applies cybersecurity laws such as the Network and Information Systems Security Act (Wet beveiliging netwerk- en informatiesystemen, or Wbni), the GDPR and the Computer Crime Act (Wet Computercriminaliteit) to AI systems and AI-related cyber-risks. These laws protect critical infrastructure, personal data and digital systems from unauthorised access, breaches and cyberattacks, including those involving AI. The NIS2 Directive expands cybersecurity obligations to include AI-based services and platforms, strengthening risk management and incident reporting. The NIS2 Directive should have been implemented into national legislation by October 2024, but implementation has been delayed in the Netherlands until at least Q3 2025.
The EU AI Act has introduced specific cybersecurity requirements for high-risk AI systems to prevent manipulation and ensure safe, trustworthy AI use. The Dutch National Cybersecurity Strategy 2022–28 acknowledges AI’s dual-use potential and calls for secure-by-design AI and improved resilience. Dutch regulators recognise that LLMs lower entry barriers for cybercriminals, enabling faster and more sophisticated social engineering, malware creation and data-driven targeting. The Dutch Cyber Security Council (CSR) and the National Cyber Security Centre (NCSC) have warned about AI’s role in transforming cyberthreats and stressed the need for defensive AI tools. While the EU AI Act addresses some AI-specific risks, no dedicated laws yet exist targeting the misuse of LLMs for cybercrime. Current laws treat AI-assisted attacks the same as traditional ones, though policymakers are debating whether new regulations are needed. The Netherlands is aligning its cybersecurity and AI governance within a broader EU framework, aiming to balance innovation with security and public trust.
Various pieces of ESG legislation apply in the Netherlands, both at a national level and at an EU-wide level. This legislation imposes requirements on certain companies, corporate groups and their business partners, including ESG reporting based on the Corporate Sustainability Reporting Directive, which covers the impact of organisations’ activities in their value chains. In the Netherlands, it is generally permitted to use AI for ESG reporting. This is becoming more widespread, as AI is particularly effective in automating ESG reporting processes such as analysing metrics, analysing and identifying risks and generating insights. AI solutions for ESG reporting need to comply with privacy and data protection laws, including the GDPR, and with other evolving legal and ethical standards. The use of AI can have a negative impact on the environment, as AI systems require substantial computational power, which leads to high energy consumption. Data centres that support AI algorithms contribute to carbon emissions unless powered by renewable energy sources.
Effective AI governance structures should include a dedicated AI governance lead, similar to a data protection officer, to oversee AI-related risks. Multidisciplinary AI ethics or impact committees are essential for assessing the ethical, legal and societal implications of AI systems. AI risk classification processes should be used to categorise AI systems into risk levels, with specific governance requirements for higher-risk applications. A centralised algorithm or model register should track AI system usage, risks and monitoring status for transparency and accountability. Clear standards for explainability and transparency must be established to ensure AI systems are understandable to users and stakeholders.
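By way of illustration, a single entry in such a register could be modelled as follows (a minimal Python sketch; the field names, risk labels and example values are hypothetical and would need to be aligned with the organisation’s own governance framework and the EU AI Act’s categories).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AlgorithmRegisterEntry:
    """One record in a hypothetical centralised algorithm/model register."""
    system_name: str
    owner: str                  # accountable business owner
    purpose: str
    risk_level: str             # eg, "minimal", "transparency", "high", "prohibited"
    dpia_completed: bool
    human_oversight: str        # description of the oversight mechanism
    last_reviewed: date
    known_limitations: list = field(default_factory=list)

entry = AlgorithmRegisterEntry(
    system_name="cv-screening-assistant",
    owner="HR operations",
    purpose="Pre-selection of job applications",
    risk_level="high",
    dpia_completed=True,
    human_oversight="A recruiter reviews and can overturn every shortlist decision",
    last_reviewed=date(2025, 3, 1),
    known_limitations=["Lower accuracy for non-Dutch CVs"],
)
print(entry.risk_level)
```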
Data Privacy and Security
AI systems often rely on large amounts of data, which can include sensitive information. It is crucial to ensure that data is stored and processed securely, and that privacy rights are respected.
Transparency and “Explainability”
AI systems should be designed to be transparent in their operations and decisions. This includes providing clear explanations for AI decisions, especially in critical areas like healthcare or finance.
Bias and Fairness
AI systems can inadvertently perpetuate or amplify existing biases in data. It is important to monitor and mitigate these biases to ensure fair outcomes.
Robustness and Reliability
AI systems should be robust and reliable, with safeguards in place to prevent or mitigate harmful outcomes.
Accountability
There should be clear lines of accountability for AI decisions, including mechanisms for redress when things go wrong.
Practical advice for implementing these best practices effectively includes:
Operationalising AI Risk Management Requirements
AI systems should be classified by risk level based on the EU AI Act’s categories, with high-risk systems receiving the most oversight. Existing DPIAs can be extended to cover AI-specific risks, especially where personal data is involved. Risk management policies for AI should be integrated into current governance, privacy and security frameworks to keep processes efficient and consistent. High-risk AI systems require documented bias, safety and robustness testing, along with clear incident response and escalation procedures. Oversight should be proportionate, focusing governance efforts where the potential impact of AI systems is greatest.
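The sketch below (Python) shows one way to make such proportionate oversight concrete by mapping risk tiers to the internal controls a system must evidence before go-live. The tier names loosely follow the EU AI Act’s categories, but the controls themselves are hypothetical organisational choices rather than requirements taken from the Act.

```python
# Hypothetical mapping of risk tiers to internal go-live controls.
CONTROLS_BY_RISK_TIER = {
    "prohibited": ["do not deploy"],
    "high": [
        "DPIA / fundamental rights impact assessment",
        "documented bias, safety and robustness testing",
        "human oversight procedure",
        "entry in the algorithm register",
        "incident response and escalation plan",
    ],
    "transparency": [
        "disclose AI use to end users",
        "entry in the algorithm register",
    ],
    "minimal": ["entry in the algorithm register"],
}

def required_controls(risk_tier: str) -> list:
    """Return the controls a system must evidence before go-live;
    unknown tiers default to the strictest non-prohibited set."""
    return CONTROLS_BY_RISK_TIER.get(risk_tier, CONTROLS_BY_RISK_TIER["high"])

print(required_controls("transparency"))
```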
As artificial intelligence (AI) adoption accelerates, regulators across the EU and the Netherlands are rapidly introducing frameworks to address legal, ethical, and societal risks – reshaping compliance expectations, operational priorities and market conditions. This contribution highlights the key trends and developments in the Netherlands and the EU.
The EU AI Act, effective from August 2024, marks a major regulatory milestone, introducing a risk-based framework for AI systems. It bans harmful applications and imposes stringent obligations on high-risk AI – with phased implementation through 2026. New requirements around AI literacy and oversight took effect in February 2025, creating immediate legal and compliance priorities for businesses operating in the EU.
In the Netherlands, regulators are shaping national supervision strategies aligned with the AI Act.
The Dutch Data Protection Authority (DPA) is expanding its role, issuing algorithmic risk reports spotlighting high-risk AI, mental health chatbots and transparency gaps. Consultations on meaningful human intervention in AI decision-making and closer co-ordination among sectoral regulators indicate growing regulatory scrutiny.
At the EU level, the AI Continent Action Plan (April 2025) underscores the market shift towards trustworthy, human-centric AI, combining regulatory enforcement with investment in AI infrastructure, skills and innovation.
At the European level, the European Data Protection Board (EDPB) has further clarified the legal bases for AI’s use of personal data, particularly in high-risk contexts. In December 2024, the EDPB published an opinion on the use of personal data for the development and deployment of AI models. This opinion looks at:
It also addresses the use of first- and third-party data. In April 2025, the European Commission launched the AI Continent Action Plan, reinforcing Europe’s leadership in trustworthy, human-centric AI through new investments, infrastructure, and legal frameworks. Alongside this, the European Parliament Research Center (EPRC) published findings on algorithmic discrimination, clarifying how high-risk AI systems may process sensitive data to monitor and correct bias under the General Data Protection Regulation (GDPR) and the AI Act, provided strict safeguards are in place.
These developments signal a tightening regulatory environment where compliance, transparency, and ethical AI practices will increasingly influence market access, investment decisions and operational risk for businesses across the EU.
The EU AI Act Has Been Enacted, With First Provisions Already in Force
The long-anticipated EU AI Act came into force on 1 August 2024, aiming to promote responsible AI development and safeguard citizens’ health, safety and fundamental rights across the EU. Using a risk-based framework, it categorises AI systems into minimal, specific transparency, high and unacceptable risk levels – banning harmful uses like social scoring and imposing strict requirements on high-risk systems such as medical AI and recruitment tools. The Act seeks to foster innovation while ensuring oversight, transparency and human rights protections. On 2 February 2025, bans on prohibited AI practices and AI literacy requirements became effective. The AI Office, established to oversee enforcement, has also begun its work. On 2 August 2025, the rules for general-purpose AI (GPAI) models will become effective. The remainder of the EU AI Act will apply from 2 August 2026.
Additionally, a Code of Practice for GPAI providers is expected by April 2025, following a public consultation. These obligations include requirements around risk management, data quality, transparency and human oversight. Throughout this phased process, the European Commission, AI Office and national supervisory bodies will release additional guidance and standards to support consistent, effective implementation across the EU.
Please see the firm’s contribution in the Netherlands Trends & Developments chapter in the Chambers Artificial Intelligence 2024 Guide for a summary of the EU AI Act.
Developments in the Netherlands
Final recommendation on the supervision of AI
The Dutch Authority for Digital Infrastructure (Rijksinspectie Digitale Infrastructuur; RDI) and the Dutch DPA have presented a final recommendation on the supervision of AI, emphasising a co-ordinated, sectoral approach. As AI continues to rapidly evolve and be applied across various fields, the recommendation stresses the importance of using existing supervisory authorities for different sectors to ensure effective oversight while maintaining their specialised expertise. The AI Act’s rules for responsible AI use require collaboration between sectoral authorities, especially for cross-sector applications like AI in recruitment or decision-making processes. The RDI and Dutch DPA will play co-ordinating roles, facilitating collaboration and preventing fragmented supervision. The recommendation was developed through extensive collaboration with other supervisory bodies and aims to ensure a safe, fair and innovative AI landscape in the Netherlands, safeguarding fundamental rights and enhancing consumer confidence.
AI literacy
As of February 2025, organisations using AI systems are required to ensure their employees possess adequate AI knowledge. While the meaning of this requirement is still abstract, the Dutch DPA – as one of the few regulators in the EU – has provided further insight into how this requirement should be interpreted. The level of AI literacy should correspond to the specific context in which the AI is applied and the potential impact on individuals or groups. For instance, HR staff must recognise that AI systems might carry biases or overlook critical factors, potentially leading to unfair hiring decisions. Similarly, municipal front-desk employees using AI for identity verification need to be aware that these systems may not perform equally for everyone and that results should not be accepted without question. The Dutch DPA has also indicated its plans to engage with stakeholders to explore ways of raising AI awareness and expertise within organisations.
Third Algorithmic Risks Report Netherlands
Every six months, the Dutch DPA publishes a risk report highlighting risks, developments and recommended strategies relating to AI. In July 2024, the Dutch DPA published its third report. This edition emphasises that while AI applications are expanding across various sectors, the mechanisms to manage the associated risks are not keeping pace, leading to potential incidents that could affect citizens, businesses and government entities. The Dutch DPA calls for increased vigilance and proactive measures to ensure responsible AI deployment and to safeguard fundamental rights.
Key concerns include the lack of adequate registration and oversight of high-risk AI applications, which hampers the ability to monitor and mitigate potential harms effectively. The report also points out that many AI systems are being implemented without sufficient transparency, making it difficult to assess their impact on privacy and equality. The Dutch DPA underscores the importance of a balanced approach that fosters innovation while ensuring robust safeguards are in place.
To address these issues, the Dutch DPA recommends enhancing AI literacy among stakeholders, implementing comprehensive risk assessments and establishing clear accountability frameworks. The report serves as a call to action for all parties involved to collaborate in developing a trustworthy AI ecosystem that aligns with societal values and legal standards.
Fourth Algorithmic Risks Report Netherlands
On 12 February 2025, the Dutch DPA released its fourth algorithmic risk report, in which it raises concerns about the safety and reliability of AI chatbot apps designed for virtual friendship and mental health support. A recent Dutch DPA study of nine popular chatbot apps found that many provide unreliable, overly simplistic or even harmful responses, especially during crisis situations. These apps often fail to properly refer vulnerable users to professional help when needed. Many of the chatbots are based on English-language models and perform poorly in Dutch, and even inconsistently in English.
The Dutch DPA opens a consultation on meaningful human intervention in algorithmic decision-making
The Dutch DPA is developing a tool to guide meaningful human intervention in algorithmic decision-making, as required under regulations such as the GDPR. Organisations increasingly use algorithms and AI to make decisions such as credit and job application assessments, but individuals must have the right to human involvement in decisions affecting them. The Dutch DPA emphasises that human intervention should genuinely influence the decision-making process rather than be merely symbolic. Effective implementation of meaningful intervention is crucial, as practical constraints such as time pressure or opaque systems can undermine its effect on outcomes.
To align the tool with real-world practices, the Dutch DPA is inviting input from companies, organisations and experts through a consultation. It provides examples and questions to help organisations understand the factors involved, including human roles, technology, design and processes. The consultation aims to gather feedback on practical experiences and challenges faced by stakeholders in implementing meaningful intervention. The input will be used to refine the tool, and a summary of responses will be published without disclosing identities.
The Dutch government’s vision of generative AI
In 2024, the Dutch Ministry of the Interior and Kingdom Relations published the government-wide vision on the use of generative AI. The vision describes the risks and opportunities associated with generative AI, addresses relevant laws and policies, and sets out practical actions for the responsible development and use of generative AI. Overall, the Dutch government is positive about the use of AI and intends to be a front-runner within the EU in the field of safe and responsible generative AI.
Developments in the EU
The EDPB adopts an opinion on the use of personal data in the development and deployment of AI models
The European Data Protection Board (EDPB) adopted an opinion on the use of personal data in the development and deployment of AI models, focusing on anonymity, legitimate interest and the consequences of unlawful data processing. The opinion addresses when AI models can be considered anonymous, stating that this should be assessed by data protection authorities (DPAs) on a case-by-case basis. For a model to be considered anonymous, it must be highly unlikely that individuals can be identified or that personal data can be extracted from the model through queries. Regarding legitimate interest, the EDPB outlines considerations for DPAs to assess its appropriateness, including a three-step test for determining whether AI uses such as conversational agents or cybersecurity improvements can rely on this legal basis. The opinion also includes criteria for assessing whether individuals may reasonably expect their data to be used, such as the context of data collection and the nature of the relationship with the data controller. Mitigation measures, both technical and procedural, can reduce negative impacts where processing is deemed harmful to individuals. If personal data has been unlawfully processed, this may affect the lawfulness of the AI model’s deployment unless the data has been anonymised. The opinion provides a framework for case-by-case analysis, with further guidelines on specific issues, such as web scraping, under development.
The European Commission publishes the AI Continent Action Plan
The EU’s AI strategy focuses on excellence and trust, aiming to make Europe a global leader in trustworthy, human-centric AI. The AI Continent Action Plan, launched in April 2025, strengthens previous initiatives by promoting safe, ethical and competitive AI technologies. It supports key sectors like healthcare, education, industry and sustainability, while protecting democratic values and fundamental rights. Major components include creating AI factories, InvestAI facilities, AI skills academies and large-scale data infrastructures to boost AI development and adoption. The plan builds on the January 2024 AI Innovation Package, designed to help startups and SMEs develop AI solutions aligned with EU standards.
A major feature of this package is GenAI4EU, which fosters generative AI across strategic industries through open innovation ecosystems. The EU also commits to investing EUR1 billion per year through Horizon Europe and Digital Europe, with the goal of reaching EUR20 billion in annual AI investments by 2030. Access to high-quality data and robust infrastructure, guided by the Data Act and Data Governance Act, is seen as vital for AI excellence. Alongside this, the EU prioritises trustworthy AI and has proposed three legal frameworks: a general AI regulation, a civil liability framework and updates to sectoral safety laws. The EU AI Act introduces a risk-based approach, classifying AI systems into four categories – minimal, specific transparency, high and unacceptable risk – with special rules for GPAI. Notable milestones include the entry into force of the EU AI Act in August 2024 and the launch of InvestAI and AI factories in 2025.
The European AI Office, established in February 2024, plays a key role in co-ordinating these initiatives. The Action Plan is part of a broader journey dating back to 2018, when the EU began co-ordinating AI policy through expert groups, alliances and white papers. These continuous efforts aim to ensure AI innovations are safe, ethical and globally competitive. Together, these actions position Europe as a trusted, responsible leader in AI, balancing innovation with strong safeguards for people and society.
The EPRC publishes findings on algorithmic discrimination and AI regulation
On 26 February 2025, the EPRC released a report addressing algorithmic discrimination and the relationship between the EU AI Act and the GDPR. The report highlights that, while the GDPR generally restricts the processing of special categories of personal data, such processing may be permitted within high-risk AI systems where necessary for bias monitoring, detection and correction, as this serves a “substantial public interest” consistent with both the GDPR and the EU AI Act. The report outlines several conditions for lawful processing, including: implementing strong cybersecurity measures; following GDPR principles such as data minimisation and privacy by design; limiting the use of sensitive data strictly to what is essential for protecting fundamental rights; and ensuring that the processing is grounded in a legal basis such as substantial public interest under the GDPR.
Beethovenstraat 545
1083 HK Amsterdam
Netherlands
+31 651 289 224
+31 20 301 7350
Herald.Jongen@gtlaw.com
www.gtlaw.com