Contributed By Greenberg Traurig, LLP
Current legislation that touches on AI includes the following.
Certain legislation is pending, but is expected to enter into force soon:
The above legislation is supplemented by case law from the Dutch courts and the Court of Justice of the EU (CJEU).
Applications of AI and machine learning are a hot topic in every industry, including healthcare, retail, finance, manufacturing, transportation and education. The use of generative AI, which creates data such as images and text based on generative models, is increasingly common, particularly in customer-facing situations through the use of chatbots. Predictive AI, on the other hand, is used to make predictions based on input data. It is used in a variety of applications, including weather forecasting, stock-market predictions and disease-outbreak predictions. Predictive AI uses machine-learning algorithms to analyse historical data and predict future outcomes.
Industry innovations driven by AI and machine learning include autonomous vehicles, chatbots, personalised marketing, predictive maintenance and precision medicine. These innovations benefit businesses by reducing costs, improving efficiency and creating new revenue streams. Consumers benefit from personalised services, improved healthcare and efficient transportation.
The semiconductor industry in the Netherlands is a global leader in supplying the AI sector, with companies such as ASML, ASM International and NXP developing and producing cutting-edge technology.
Cross-industry cooperative initiatives include the Partnership on AI, a consortium that includes Amazon, Facebook, Google, IBM, and Microsoft. The partnership aims to ensure that AI technologies benefit all of humanity.
Many Dutch government entities actively develop and engage in initiatives that aim to facilitate the adoption and advancement of AI for industry use, as well as the use of AI by government entities themselves. Although the government certainly acknowledges risks relating to AI, the general outlook is positive.
In January 2024, the Dutch Minister of Digital Affairs and Kingdom Relations presented the government’s vision on generative AI, highlighting the opportunities of generative AI and describing it as a promising technology, yet also recognising its challenges, particularly relating to safeguarding human wellbeing, sustainability, justice and security. The focus of the Dutch government in relation to generative AI aligns with the government’s broader ambitions regarding digitalisation (Werkagenda Waardengedreven Digitaliseren), which is to ensure that everyone can participate in the digital age, trust the digital world, and have control over their digital lives.
The Dutch supervisory authorities in the financial sector – the Dutch Central Bank (De Nederlandsche Bank, or DNB) and the Authority for the Financial Markets (Autoriteit Financiële Markten, or AFM) – support digital innovation, which, more often than not, includes some form of AI, through several initiatives. For example, the AFM & DNB InnovationHub and AFM Regulatory Sandbox provide support in manoeuvring the complicated regulatory landscape.
While other jurisdictions may prefer a "wait and see" stance on how AI unfolds and affects various industries and sectors, the EU and the Netherlands have attempted to adopt – as well as regulate – AI right from the start. In so doing, they have taken a risk-based, one-size-fits-all approach. The general attitude in the Netherlands towards the use of AI is positive.
There are currently no general laws in the Netherlands specifically regulating AI. However, on 21 May 2024, the EU Council approved the EU AI Act. The EU AI Act will generally apply 24 months after its entry into force, with certain provisions taking effect earlier and others later. In addition, there are other regulations that impose (indirect) requirements on the deployment of AI, as indicated in 1.1 General Legal Background, as well as a number of sector-specific laws that address AI for specific market parties (see 3.2 Jurisdictional Law).
No general AI legislation has (yet) been enacted as of the date of this publication. However, the EU AI Act has been approved by the EU Council and enters into force 20 days after its publication in the Official Journal of the EU.
There is specific legislation described in 14.2 Financial Services regarding the use of AI.
On 17 January 2024, the Dutch government published its view on generative AI, emphasising the importance of continuing to monitor and analyse generative AI trends. The Netherlands intends to be a front-runner within the EU in the field of safe and responsible generative AI, and the government aims to achieve its objectives by collaborating closely with the relevant companies and leveraging its international connections. It intends to take on a prominent role in the rollout of AI in the coming years.
In January 2024, the Dutch Ministry of the Interior and Kingdom Relations published a guide to impact assessments on human rights and algorithms. This guide includes extensive explanations of the assessment that government entities have to make when using algorithms and AI. Although the guide is non-binding, government entities are expected to use the model when performing this type of assessment. The Dutch DPA also recommends the use of the guide for non-government entities.
The recently approved EU AI Act will directly apply to EU Member States as a regulation. This act will be complemented by the AI Liability Directive, which is still pending and will have to be implemented into Member State law.
The direct application of the EU AI Act provides for harmonisation advantages, meaning that companies can achieve EU-wide compliance by meeting the requirements of one set of rules. Note that EU Member States may still choose to enforce additional or stricter rules.
The EU AI Act is, in many ways, the first of its kind. For this reason, not many issues with existing AI-specific local Dutch laws are expected. Moreover, under the principle of primacy of EU law, the EU AI Act will take precedence over any conflicting rules in Member States.
There is no applicable information in this jurisdiction.
There have not been any notable amendments or newly introduced jurisdictional data laws in the Netherlands to foster AI technology. The main reason for this is that existing laws (mainly the GDPR) and EU regulations provide an appropriate regulatory framework.
The DSM Directive fosters AI technology through a text and data mining (TDM) exception, which permits the reproduction and extraction of lawfully accessible works and other subject matter contained in networks or databases for TDM purposes, including AI training, unless the right holder has expressly reserved that use.
There are no concrete plans for legislative change with respect to data and copyright laws as of the date of this writing.
Aside from the EU AI Act and the EU AI Liability Directive, there is no legislation pending as to the development or use of AI.
As of the date of this publication, there have been no notable decisions from the Dutch courts regarding AI or AI in combination with intellectual property rights.
As of the date of this publication (May 2024), there have been no notable decisions from Dutch courts specifically relating to the definition of AI.
In the Netherlands, the Dutch DPA has been designated as the national coordinating authority for risk signalling, advice and collaboration in the supervision of AI and algorithms. The Dutch DPA has instituted a separate division for this purpose, the directie Coördinatie Algoritmes (DCA).
The Dutch DPA will focus on four areas of attention in 2024 – transparent algorithms, auditing, governance, and the prevention of discriminatory algorithms. In addition, it is expected to be the responsible supervisory authority for monitoring compliance and adherence to the EU AI Act.
Financial regulatory authorities DNB and AFM supervise the use of AI in the financial sector. Conduct supervision is carried out by the AFM, which has prioritised supervision of market parties that offer low-threshold products via apps, as well as digital marketing aimed at consumers. One of the AFM’s goals is to prevent consumers from being nudged towards products or services that do not primarily serve their interests. DNB carries out prudential supervision, which, in relation to AI, focuses on topics such as soundness, accountability, fairness, ethics and transparency of AI products and services. The authorities have stated that financial supervision specific to AI will be intensified over the coming years.
The Authority for Consumers and Markets (Autoriteit Consument en Markt, or ACM) ensures fair competition between businesses and protects consumer interests. It is the regulatory authority enforcing compliance with the Digital Services Act, the Data Governance Act and the Data Act. One of the goals in the 2024 annual plan of the ACM is to stimulate an open and fair digital economy, for example by taking action against interface designs that interfere with someone’s decision-making process (dark patterns).
Regulatory agencies in the Netherlands have not issued any official definitions of AI. The EU AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. Regulators are not expected to deviate materially from this definition.
Please refer to 5.1 Regulatory Agencies.
In 2021, the Dutch DPA fined the Tax Authorities (Belastingdienst) for discriminatory and unlawful processing of personal data, which resulted in serious violations of the GDPR. The Tax Authorities processed the (dual) nationality of applicants for childcare allowance (kinderopvangtoeslag) and used this data for automated fraud detection, for example by automatically designating applications by non-Dutch nationals as posing an increased risk. The practices of the Tax Authorities had disastrous consequences, continuing even to this day. Applicants were wrongfully forced to pay back the allowance received, leading to debts, bankruptcies and the wrongful removal of more than 2,000 children from their parents.
In May 2023, the Dutch DPA announced that it is investigating the use of fraud-detection algorithms by Dutch municipalities. Such algorithms have previously been prohibited by the Dutch courts because they were biased, led to discrimination, and violated the GDPR and the European Convention on Human Rights.
In September 2023, the Dutch DPA announced an investigation into the use of AI aimed at children. In this situation, the Dutch DPA required an unnamed tech company to provide transparency on the operation of a chatbot integrated in an app that is popular with children. The DPA also announced investigations into organisations that use generative AI. For example, it requested that OpenAI provide information on the chatbot ChatGPT.
In November 2023, the Dutch DPA announced that it will supervise the remediation measures of the Employee Insurance Agency (Uitvoeringsinstituut Werknemersverzekeringen, or UWV), which unlawfully used an algorithm to track the online behaviour of benefits recipients.
In the Netherlands, the Royal Netherlands Standardization Institute (NEN) facilitates agreements between stakeholders on standards and guidelines. NEN manages over 31,000 standards, including international (ISO, IEC), European (EN) and national (NEN) standards accepted in the Netherlands. The European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) are the European standard-setting bodies with respect to AI. The standards set out by these bodies are generally non-binding.
CEN and CENELEC standards help in meeting requirements and conducting risk assessments. Compliance with these is usually not mandatory, but it gives the advantage of “presumption of conformity” to manufacturers, economic operators, or conformity assessment bodies. There are no ostensible conflicts with Dutch law.
AI is used across governmental authorities for various Dutch government operations, including administrative inspections and law enforcement. Typical use cases are image recognition, speech and text recognition, machine learning and robotics. The Dutch government’s views on the use of AI are positive, and there are many initiatives to use the technology.
The use of personal data for AI purposes by Dutch government entities is generally subject to the GDPR. However, processing in relation to criminal investigations and proceedings is subject to the Police Data Act (Wet politiegegevens) and the Judicial and Criminal Records Act (Wet justitiële en strafvorderlijke gegevens), and processing by intelligence agencies is subject to the Intelligence and Security Services Act (Wet op de inlichtingen- en veiligheidsdiensten).
Dutch law enforcement has developed a protocol for using facial recognition technology (FRT). The protocol provides a decision framework and governance for approving requests to experiment with FRT in light of investigations. Since FRT typically includes the processing of biometric information (ie, unique identifiers of someone’s face), strict restrictions apply under the Police Data Act. These restrictions are similar to the protection of biometric information under Article 9 of the GDPR.
There are two notable decisions by the Dutch civil and administrative courts involving AI, as follows.
The possible uses of AI in national security include analysing intelligence information, enhancing weapon systems, providing battlefield recommendations, aiding in decision-making, cybersecurity, military logistics, and command and control. On 15 February 2023, the General Intelligence and Security Service (AIVD) shared principles for defending AI systems that should be considered during the development thereof. These principles include:
Generative AI, such as OpenAI’s GPT-3, has raised several issues, including ethical concerns, data privacy and intellectual property (IP) rights.
Ethical concerns revolve around the potential misuse of AI for generating misleading information, deepfakes or promoting hate speech. To address this, AI developers are implementing stricter usage policies and creating more sophisticated content filters.
Data protection is another significant issue. AI models are trained on vast amounts of data, which may include personal data. To protect this data, AI developers seek to implement robust data anonymisation and encryption techniques. They are also working towards creating models that can learn effectively from less data, or even perform tasks without task-specific examples, a concept known as zero-shot learning.
In terms of IP, the assets in the AI process (such as AI models, training data, input prompts and output) can be protected under various forms of IP law. AI models and training data can be protected as trade secrets or database rights, while input prompts and output can potentially be protected under copyright laws or, possibly, patent laws. The AI tool provider’s terms and conditions significantly influence this protection.
There is also the risk of IP infringement – for example, where AI is trained on copyright-protected material without permission. If the AI’s output closely resembles a copyrighted work, it could also lead to infringement claims.
The GDPR affords data subjects several rights, including the right to rectification and deletion. If an AI outputs false claims about an individual, the individual has the right to have this corrected. If the AI has processed the individual’s personal data without a valid legal basis, they have the right to have this data deleted.
Purpose limitation and data minimisation principles require that personal data be collected for specific purposes, and that no more data is collected than is necessary for those purposes.
AI assets such as models, training data, input prompts and output can be protected by various forms of intellectual property rights (IPR), depending on their nature and the jurisdiction in question.
AI Models
The algorithms used in AI models can be protected by patents. In the EU and the Netherlands, a patent can be granted for an invention that is new, involves an inventive step, and is susceptible to industrial application. However, mathematical methods as such (which AI algorithms often are) are not considered patentable.
Training Data
Databases can be protected under copyright and/or sui generis database rights. In the EU, a database is protected by copyright if its structure constitutes the author’s own intellectual creation. If a substantial investment has been made in obtaining, verifying or presenting the contents of a database, it may also be protected by a sui generis database right.
Input (Prompts)
Texts used as input prompts can be protected under copyright law if they are original and constitute the author’s own intellectual creation.
Output
The output of an AI tool can also be protected under copyright law if it is original and is the author’s own intellectual creation. However, this is an area of ongoing debate, as it is uncertain whether an AI-generated work can meet the originality requirement since it does not have a human author.
The terms and conditions of the AI tool provider can significantly influence the protection of assets.
Infringements under Dutch and EU IP laws can occur in various ways. For instance, if someone uses a copyrighted database as training data without permission, this may constitute infringement. Unauthorised use of input prompts or output that are protected by copyright could also lead to infringement.
Under the GDPR, data subjects have several rights that pertain to AI models.
Right to Rectification
If an AI model produces false output related to individuals, the data subjects have the right to have inaccurate personal data rectified under Article 16 of the GDPR. This does not necessarily mean that the entire AI model must be deleted or adjusted. The rectification process can be achieved by integrating a mechanism in the AI model to allow the correction of inaccurate data.
Right to Erasure (“Right to Be Forgotten”)
Under Article 17 of the GDPR, a data subject has the right to have their personal data erased without undue delay under certain circumstances – eg, where the data is no longer necessary for the purpose for which it was collected or processed. However, this does not mean the entire AI model would need to be deleted.
Purpose Limitation
Article 5(1)(b) of the GDPR states that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. This can be achieved in AI models through clear communication about how the data will be used and by limiting the use of the data to those purposes only.
Data Minimisation
The GDPR under Article 5(1)(c) also requires that personal data be relevant, and limited to what is necessary in relation to the purposes for which it is processed. This principle can be adhered to in AI models by only collecting and processing the minimal amount of personal data necessary for the model to function as intended.
The application of these rights and principles can be complex within the context of AI, particularly when it comes to rectification and erasure. For example, if an AI system has used personal data to “learn” and adapt its algorithms, simply erasing that data might not fully remove its impact on the model.
AI is increasingly used in the legal profession, and its applications are changing the way legal services are delivered. AI-powered tools have been used in various functions. Examples are voice dictation, document automation (drafting documents that are up to date and consistent with the laws current at the time) and document translation.
There are currently no laws, rules, or regulations in place on using AI in the legal profession. Neither have there been any Dutch court decisions on the matter.
Ethical considerations of AI in the legal profession include:
Given the complexity and evolving nature of AI technologies, issues of liability for personal injury or commercial harm resulting from these technologies are of increasing concern under Dutch and EU laws. While the specifics can vary, liability for AI-enabled technologies generally falls under two main theories – strict liability and negligence.
Strict liability holds that the party responsible for placing the AI technology on the market is liable for any harm caused, regardless of fault or intent. In contrast, negligence requires proof that the party acted unreasonably or failed to take necessary precautions, leading to the harm.
The Dutch Civil Code and the EU Product Liability Directive are the main legal frameworks for these theories. They require the claimant to prove that the damage was caused by a defect in the product, and that there is a causal link between the damage and the defect.
Under Dutch and EU laws, human supervision is often required for AI technologies, particularly those with significant potential for harm. For example, autonomous vehicles must have a human driver ready to take control if necessary, and medical AI applications are typically used as decision-support tools for healthcare professionals, not as autonomous decision-makers.
Insurance plays a key role in managing AI liability risks. Businesses can purchase insurance policies to cover potential liability arising from their AI technologies. The terms of these policies can vary, but they typically cover legal defence costs and any damages awarded.
The allocation of liability among supply chain participants is a complex issue. Under Dutch and EU laws, any party involved in the supply chain could potentially be held liable for harm caused by an AI technology. This includes manufacturers, distributors, retailers, and even users if they modify the technology in a way that contributes to the harm. However, contracts and insurance policies often allocate liability in specific ways to manage these risks.
In practice, the trend under Dutch and EU laws is to allocate liability based on control and benefit. Parties that have more control over the AI technology or derive more benefit from it are generally held to a higher standard of liability.
Liability arising from the acts or omissions of AI technology acting autonomously would generally be attributed to the business selling or providing the AI products or services. This is based on the principle of strict liability, which holds businesses responsible for the products they place on the market. However, the specifics can vary depending on factors such as the nature of the harm, the level of autonomy of the AI technology and the terms of any contracts or insurance policies.
High-risk AI-enabled technologies are subject to strict liability, while all other AI systems fall under fault-based liability, accompanied by an obligation to carry insurance. However, a back-end operator is only subject to strict liability if not already covered by the Product Liability Directive. The only defence available to the operator is force majeure, and fault is presumed on the part of the operator.
Furthermore, note that the EU AI Act will impose strict obligations not only on the “provider” of a high-risk AI system but also on the “importer”, “distributor” and “deployer” of such systems. The importer must verify that the high-risk AI system has undergone the required conformity assessment procedure and is accompanied by the required documentation, whereas the distributor must verify that the system bears the required CE marking.
The proposed AI Liability Directive sets out to create a new liability regime that ensures legal certainty, enhances consumer trust in AI, and supports consumers’ liability claims for damage caused by AI-enabled products and services.
The Directive provides rules, to be implemented by EU Member States, that apply to AI systems available on or operating within the EU market. The aim is to improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.
An important feature of the Directive is the rebuttable presumption of a causal link in the case of non-compliance or fault on the part of the defendant, which effectively shifts part of the burden of proof to the defendant, often the developer or seller of an AI system. The AI Liability Directive must be read in close conjunction with the EU AI Act, since the Directive relies on terms that are ultimately defined in the Act.
Bias in algorithms technically refers to a systematic error introduced by an algorithm that skews results in a particular direction.
These errors are caused by three types of biases:
Such biases may result in discrimination, inequality, and racism. If an algorithm treats individuals or groups unfairly based on characteristics such as race, gender, age, or religion, it may be in violation of anti-discrimination laws. Areas with a high risk of algorithmic bias include online advertising, credit scoring, hiring and law enforcement. For example, an algorithm used for hiring might be biased against certain groups if it was trained on past hiring decisions that were biased.
Companies can face significant legal and reputational risk if their algorithms are found to be biased. They can be sued for discrimination, fined by regulatory bodies, and suffer damage to their reputation.
Regulators are increasingly scrutinising algorithmic bias. For example, in 2020, the Dutch Data Protection Authority launched an investigation into the use of algorithms by the government.
The data protection risks of using AI technology in business practices predominantly relate to the use of AI in facial recognition technologies (see 11.3 Facial Recognition and Biometrics) and automated decision-making (see 11.4 Automated Decision-Making). Benefits of AI in terms of protecting personal data in business practices include increased accuracy and integrity of personal data and enhanced data security. AI technologies require accurate data for optimal performance, which could lead to an overall optimisation of data protection practices to safeguard data integrity and accuracy. In addition, AI technologies will require top-notch data security practices to protect proprietary and sensitive business information, thereby enhancing the level of data security within a company, which is ultimately beneficial to the protection of personal data.
Facial recognition technology can be useful, but can also have severely negative effects on data subjects. These systems identify people in photos, videos or in real time, and are widely used in sectors such as retail, media, healthcare, entertainment and e-commerce. The use of facial and biometric data can cause privacy, data security, and bias and discrimination issues, resulting in regulatory and ethical violations.
A recent CJEU press release (20/24 of 30 January 2024) states that police authorities may not store the biometric and genetic data of persons who have been convicted by final judgment of an intentional offence without any time limit other than the death of the person concerned.
Since the use of facial recognition involves automated processing of personal data, both the GDPR and the Law Enforcement Directive (LED) apply.
Under Article 5(1), the EU AI Act prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrowly defined exceptions. Companies such as Megvii, Cognitec Systems GmbH, Clarifai Inc, AnyVision and iProov use machine-learning algorithms to search, capture, and analyse facial contours and match them with pre-existing data.
When AI is used to automate certain processes, this often includes automated decision-making as regulated under the GDPR. Automated decision-making involves solely automated processing of personal data of individuals (ie, without any human involvement) that leads to decisions with legal effects concerning that person or similarly significant effects. Article 4(4) of the GDPR defines profiling as a specific form of processing by automated means to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict factors concerning the individual. Examples of automated decision-making are:
According to Article 22 of the GDPR, individuals should have the right not to be subjected to a decision based solely on automated processing, including profiling.
However, automated decision-making is allowed if:
If the automated decision-making is based on the performance of a contract or explicit consent, the individual should at least have the right to obtain human intervention on the part of the company making these decisions, to express his or her point of view and to contest the decision. In addition, the company must implement suitable measures to safeguard the individual’s rights and freedoms and legitimate interests.
Furthermore, under the GDPR, a data protection impact assessment (DPIA) is mandatory when the envisaged processing is likely to result in a high risk to the rights and freedoms of individuals. The DPIA is a process designed to describe the processing, assess its necessity and proportionality, and help manage the resulting risks to the rights and freedoms of natural persons by assessing those risks and determining the measures to address them. According to the Working Party 29 Guidelines on Data Protection Impact Assessment, such a high risk is generally present when the processing involves automated decision-making.
Failure to comply with these rules can result in hefty fines. Under the GDPR, a violation of individuals’ rights, including the right not to be subjected to automated decision-making, may result in fines of up to EUR20 million or 4% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher. If the company also fails to perform a DPIA where this is required, it risks fines of up to EUR10 million or 2% of that annual turnover, whichever is higher, under the same regulation.
There is no specific regulatory scheme under Dutch or EU law that directly deals with the use of chatbots or other technologies to substitute for services rendered by natural persons. Under EU law, the GDPR provides broad protections for personal data. It requires data controllers to be transparent about their use of personal data, which would include the use of AI technologies such as chatbots. The GDPR also gives individuals the right to access, correct, and delete their personal data, and to object to its processing.
As for disclosure of AI use, the GDPR’s transparency principle requires that individuals be informed about the processing of their personal data. If an AI system is used to make decisions about individuals, the individuals should be informed of this, as well as the logic involved and the significance and consequences of such processing.
Technologies used to make undisclosed suggestions or manipulate the behaviour of consumers primarily include recommendation algorithms and targeted advertising technologies. These algorithms analyse a user’s behaviour and use this information to suggest products or services that the user might be interested in. While these technologies can be beneficial, they also raise privacy and ethical concerns, as they can be used to influence consumer behaviour without their knowledge or consent.
Additionally, under Article 22 of the GDPR, chatbots may not serve as the sole basis for decisions that produce legal or similarly significant effects for consumers (eg, they cannot autonomously approve a loan). Chatbots may also manipulate consumer behaviour based on the data used to train them. In addition, AI-powered bots known as social media bots can be programmed to mimic human behaviour and can spread misleading information to manipulate public opinion.
The use of AI technology in price-setting can potentially raise several competition and antitrust issues under Dutch or EU laws. The following are some of the key concerns.
The ACM and the European Commission are actively monitoring these issues, and have indicated that they will take action if they find evidence of anti-competitive behaviour related to the use of AI in price-setting.
AI technology presents a range of new and unique risks that need to be considered in transactional contracts between customers and AI suppliers, particularly in the AI-as-a-service model. The following are some of these risks and the ways businesses can address them.
Numerous software and applications available are specifically designed to streamline hiring and termination processes, including Applicant Tracking Systems (ATS), Human Resource Information Systems (HRIS), and various AI-driven tools.
Technology in Hiring
ATS is software that collects and sorts résumés based on given criteria. It automates the process of shortlisting candidates, making it easier for recruiters to find the right talent. AI-driven tools are also used for pre-employment testing and video interviews. They can assess a candidate’s skills, personality traits, and even emotional intelligence.
This technology reduces the time spent on screening and shortlisting candidates, ensuring a more efficient hiring process. It also minimises human bias, leading to a more diverse workforce. On the downside, qualified candidates might be overlooked if their résumés do not include specific keywords used by the ATS. Moreover, the lack of human interaction can make the hiring process impersonal.
If not properly configured, ATS can potentially discriminate against certain applicants, leading to legal implications under employment laws. Also, there are privacy concerns related to the collection and storage of applicant data.
Technology in Termination
HRIS is used to manage employee data and automate the termination process. It oversees tasks such as the deactivation of access to company systems, final paycheck calculations, and exit interview scheduling.
HRIS can ensure a smooth and compliant termination process, reducing the risk of errors and legal complications. It also allows for efficient record-keeping, which is crucial in the event of a legal dispute.
The automated termination process might feel impersonal to the employees. There is also a risk of premature information leaks about the termination.
Legal Risks
If the system is not updated or used correctly, it can lead to legal issues, such as non-compliance with labour laws. Additionally, mishandling of employee data can result in privacy breaches, which may in turn lead to legal action.
The use of AI in the evaluation of employee performance and monitoring employee work has both positive and negative effects. On the positive side, it can create new job opportunities in areas such as data analysis, programming and system maintenance. When working correctly, AI monitoring is also very precise, which reduces the risk of human error. On the negative side, there are fears that AI could replace human jobs. However, as AI technology is not yet sufficiently advanced to completely replace human labour, these jobs are still largely secure.
Most consumers are unaware that they interact with AI on a daily basis, for example when ordering food online. AI may provide automated recommendations to users based on past orders, preferences and location, chatbots may function as virtual assistants providing personalised assistance, and AI optimises food delivery routes by estimating traffic and weather conditions.
Dutch and EU law has been evolving in response to these changes. The EU has been working on regulations to ensure fair and just working conditions in the platform economy – eg, the proposed Platform Work Directive, which includes providing access to social protection for platform workers and ensuring that they have the right to collective bargaining.
There have been ongoing discussions in the Netherlands about the status of platform workers. For instance, in 2021, a Dutch court ruled that drivers for the food delivery company Deliveroo are employees, not independent contractors, and are therefore entitled to employment benefits.
However, none of these initiatives or precedents have specifically addressed the use of AI in the digital platform.
AI applications in the finance industry – such as algorithmic trading, customer service, fraud detection and prevention, and compliance with anti-money laundering legislation – are increasingly used.
There are some financial regulations that specifically cover the use of AI. The main examples are:
In addition, regulations on internal models of banks (pursuant to the Capital Requirements Regulation) and, to some extent, insurers (pursuant to the Solvency II Directive, as implemented into the Dutch Financial Supervision Act (Wet op het financieel toezicht, or Wft)) include specific requirements that AI models must adhere to.
The Medical Devices Regulation (MDR) (EU) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) (EU) 2017/746 are the regulations that govern the use of such technology in healthcare in the EU.
The use of AI in healthcare is not without risk. When health data is used to train AI systems, there is a potential risk of this sensitive data being unlawfully shared with third parties, resulting in data breaches. Other risks include bias and inequality, due to biased and imbalanced datasets used for training, structural biases and discrimination, disparities in access to quality equipment, and a lack of diversity and interdisciplinarity in development teams.
Health data is considered sensitive data under Article 9 of the GDPR, and sharing a patient’s health data for training an AI model requires explicit consent from the patient under the Helsinki Declaration. Repurposing of data without the patient’s knowledge and consent must be avoided. There is also a risk that such data could be exposed to the general public in the event of a cyberattack.
Under Dutch law, autonomous vehicles are governed by the Dutch Road Traffic Act 1994, which currently requires a driver to be in control of the vehicle at all times. However, the Netherlands is progressive in terms of autonomous-vehicles legislation, and it has been conducting public road tests of self-driving cars since 2015.
At the EU level, autonomous vehicles are subject to a range of regulations, including General Safety Regulation (EU) 2019/2144, which mandates certain safety features for new-vehicle types from July 2022 and for all new vehicles from July 2024. This includes a driver drowsiness and attention warning, an advanced driver distraction warning, an emergency stop signal, reversing detection, and an event data recorder.
Responsibility for accidents or incidents involving autonomous vehicles is a complex issue. According to EU law, the manufacturer of a vehicle could be held liable if a defect in the vehicle causes damage. In the Netherlands, the driver of the vehicle is usually considered responsible for any accidents, even if the vehicle is driving autonomously. However, this may change as autonomous-vehicle technology evolves.
Ethical considerations for AI decision-making in critical situations are also a significant concern. For instance, how should an autonomous vehicle be programmed to act in a situation where an accident is unavoidable? This is a complex ethical question that has yet to be fully answered.
There have been several attempts at international harmonisation to promote global collaboration and consistency in regulations and standards. For instance, the United Nations Economic Commission for Europe (UNECE) has been working on international regulations for autonomous vehicles. However, these efforts are still in the early stages, and there is a long way to go before international consistency is achieved.
AI usage in manufacturing and its implications are governed by several regulations within the Netherlands and the EU, addressing areas such as product safety and liability, workforce impact, and data privacy and security.
Product Safety and Liability
The European Union has a number of directives in place to ensure product safety and manage liability in manufacturing. The General Product Safety Directive ensures that only safe products are sold within the EU. If AI is used in the manufacturing process, the manufacturer must ensure the AI does not compromise the safety of the product.
If a product is faulty and causes damage or injury, the Product Liability Directive is applicable. It makes the producer liable for damage caused by a defect in their product.
Workforce Impact
The Netherlands and the EU have regulations in place to protect workers' rights. The EU’s Charter of Fundamental Rights includes provisions for fair and just working conditions. If AI is being used to replace or augment human workers, it is crucial that worker rights are respected and any transition is handled ethically and responsibly. The Dutch Working Conditions Act also stipulates that employers must ensure a safe and healthy working environment, which would extend to an environment where AI is used.
Data Privacy and Security
The GDPR applies to any business that processes personal data, including those using AI. If AI is used to process personal data in the manufacturing process, it must be done in a manner that respects privacy rights and ensures data security. The EU also has the NIS2 Directive, which provides EU-wide legislation on cybersecurity. It provides legal measures to boost the overall level of cybersecurity in the EU.
The use of AI in professional services in the Netherlands and the European Union is governed by a variety of regulations and guidelines. Here are some of the key areas of focus.
Liability and Professional Responsibility
The proposed AI Liability Directive would include rules on liability. In the Netherlands, the Dutch Civil Code could potentially be applied in cases of AI causing damage. However, determining responsibility in AI-related incidents can be complex due to the nature of machine learning.
Confidentiality
The GDPR applies throughout the EU, including the Netherlands. It mandates that personal data must be handled in a way that ensures its confidentiality and security. Professionals using AI must ensure that the AI systems they use are compliant with GDPR.
IP (Intellectual Property)
In the EU, AI-generated works may not be eligible for copyright protection, as current laws require human authorship. In the Netherlands, the Dutch Copyright Act could potentially apply to AI creations, but this is still a matter of debate.
Client Consent
Under the GDPR, there must be a lawful basis for processing personal data. This could have implications for AI systems used in professional services, especially those that involve data analysis.
Regulatory Compliance
At the EU level, the upcoming EU AI Act aims to ensure that AI is used in a way that respects EU values and regulations. In the Netherlands, the Dutch DPA oversees the use of AI and other data-processing technologies to ensure compliance with GDPR and other regulations.
As of the date of this publication, there have been no decisions from the Dutch courts on inventorship or authorship of an invention or work created by or with AI technology.
AI technologies and data can be protected under the Trade Secrets Act, which protects undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use, and disclosure. The act defines a trade secret as information that meets three criteria – it is secret, it has commercial value because it is secret, and the holder has made reasonable efforts to keep it secret. AI technologies and data often meet these criteria.
The use of trade secrets in AI can be regulated through non-disclosure agreements (NDAs) and confidentiality clauses in contracts.
At the time of this publication, no existing legislation, case law or other guidance addresses whether AI-generated works of art and works of authorship can be protected by copyright in the Netherlands. The general view among scholars is that, if a piece of work is produced entirely by a machine with minimal or no human intervention and no individual can assert ownership over it, the work will enter the public domain. This viewpoint aligns with the guidance provided in other jurisdictions, such as that of the U.S. Copyright Office.
However, current AI technologies such as ChatGPT, where users input prompts, do not operate entirely autonomously. This raises the question of whether the input prompt can be granted copyright protection. Under EU copyright laws, concepts, ideas, and styles cannot receive copyright protection. However, as long as some creative choices are involved, copyright protection will generally apply.
Consequently, it can be argued that protection will often be extended to a specific prompt, as long as it is not just a “mundane or banal expression”. However, the unresolved issue is the extent of protection that can be claimed when one single prompt can yield multiple generative results.
Using OpenAI involves several intellectual property considerations:
Advising corporate boards of directors on the adoption of AI involves addressing several key issues to identify and mitigate risks, as follows.
By addressing these issues, management teams can help ensure that AI is adopted in a way that is ethical, legal, and beneficial for the organisation.
Implementing AI best practices involves addressing the following key issues.
Data Privacy and Security
AI systems often rely on large amounts of data, which can include sensitive information. It is crucial to ensure that data is stored and processed securely, and that privacy rights are respected. This includes complying with regulations such as the GDPR.
Transparency and “Explainability”
AI systems should be designed to be transparent in their operations and decisions. This includes providing clear explanations for AI decisions, especially in critical areas like healthcare or finance.
Bias and Fairness
AI systems can inadvertently perpetuate or amplify existing biases in data. It is important to monitor and mitigate these biases to ensure fair outcomes.
Robustness and Reliability
AI systems should be strong and reliable, with safeguards in place to prevent or mitigate harmful outcomes.
Accountability
There should be clear lines of accountability for AI decisions, including mechanisms for redress when things go wrong.
Practical advice for implementing these best practices effectively includes:
Beethovenstraat 545
1083 HK Amsterdam
The Netherlands
+31 651 289 224
+31 20 301 7350
Herald.Jongen@gtlaw.com
www.gtlaw.com