Artificial Intelligence 2024

Last Updated May 28, 2024

Germany

Law and Practice

Authors



Bird & Bird delivers expertise covering a full range of legal services through more than 1,400 lawyers and legal practitioners across a worldwide network of 32 offices. The firm has built a stellar, global reputation from a deep industry understanding of key sectors and through its sophisticated, pragmatic advice. Bird & Bird is a global leader in advising organisations being changed by digital technology, as well as advising companies who are carving the world’s digital future. AI and generative AI are a cornerstone of the firm’s legal practice, which helps clients leverage AI and digital technology against a backdrop of increasing regulation. Bird & Bird’s longstanding strengths in data protection, commercial and IP law – allied with the team’s technology and communications expertise – mean the firm is ideally placed to work with clients in order to help them reach their full potential for growth.

The general legal background for AI under German law can be summarised as follows.

  • German contract law ‒ there are distinct issues related to contracting for AI services and contracting through AI. The former involves procuring AI services for an organisation, which presents unique legal challenges (eg, regarding the quality and reliability of AI outputs). Contracting through AI raises questions about the validity of contracts entered into by autonomous bots, with contractual obligations potentially imputed to human users.
  • German tort and product liability law ‒ the key concerns are proving defects or breaches of duty, establishing damages, and determining the causal link in AI-related damages.
  • EU and German data protection law ‒ the General Data Protection Regulation (GDPR) and Federal Data Protection Act (Bundesdatenschutzgesetz, or BDSG) are the main legal frameworks in Germany. Although there are no specific AI regulations, general provisions on automated decision-making and consent apply. Challenges include data deletion or correction in AI models, justification for AI model training, and whether the model itself qualifies as personal data.
  • German copyright law ‒ the interaction between AI and copyright raises questions about training AI systems using copyrighted material and protecting AI-generated content. Current copyright law lacks clear answers, even though it includes provisions on text and data mining. Copyright infringements may occur if AI output includes copyrighted works, and liability for such infringements is unsettled.
  • German labour law ‒ the integration of AI in the workplace raises labour law issues, such as managing job losses, addressing improper handling of AI, and concerns about discrimination or misuse of AI solutions. Works councils, which have influence over AI introduction, are also relevant.
  • German consumer protection law ‒ AI-based consumer products and services fall under German consumer protection law, triggering documentation, transparency requirements, and granting consumer rights in case of defects. Sellers of AI products have update obligations, but defining defects and determining when an AI system requires an update is legally uncharted territory.
  • German criminal law ‒ AI development presents unique challenges in criminal law, particularly in terms of foreseeability of harm, appropriate standard of care, and criminal liability of robots and machines. The discussion focuses on adapting criminal categories to technological changes and addressing risks posed by autonomous systems, trust in AI decisions, and permissible risk.

AI is revolutionising industries globally, including in Germany, by enhancing efficiency, innovation, and decision-making processes. Predictive AI has been integrated into mainstream applications for years, whereas generative AI is in a phase of industry implementation, gaining momentum since 2023 and further accelerating in 2024.

Predictive AI

  • Healthcare ‒ AI significantly enhances the ability to diagnose diseases such as cancer from x-ray images, leading to quicker and more accurate treatments.
  • Energy ‒ by predicting peak demand times and optimising cooling processes, AI reduces energy consumption in data centres, contributing to environmental sustainability.
  • Finance ‒ financial institutions utilise AI for identifying patterns indicative of fraudulent credit card payments, enhancing security measures. Additionally, predictive analytics help in crafting personalised investment strategies.
  • Manufacturing ‒ AI predicts machine failures before they occur, reducing unplanned downtime and extending equipment life.

Generative AI

  • Marketing ‒ AI tools generate targeted content and personalised marketing campaigns, increasing customer engagement and driving sales. They analyse customer data to create content that resonates with specific audiences.
  • Programming ‒ in software development, AI accelerates the creation of code, reducing development times and human error. It enables developers to focus on more complex problems by handling routine coding tasks.
  • Customer Service ‒ AI generates responses that mimic human interaction, improving the customer experience while reducing wait times.
  • Architecture and Design ‒ AI aids architects and designers by generating innovative design alternatives, reducing time and costs. It facilitates the exploration of countless possibilities, leading to more creative and functional designs.

In Germany, the government actively supports the adoption and development of AI through targeted funding programmes. These initiatives aim to stimulate AI innovation across various sectors, including public welfare, start-ups, SMEs, and environmental technology. The funding programmes reflect a strategic approach to supporting AI research and application, driving technological progress and socio-economic benefits. Prominent programmes include AI for the Common Good, European EUREKA clusters, Research and Development of AI methods in SMEs, and the DeepTech Future Fund. These programmes provide financial support to projects that enhance social well-being, foster cross-border collaboration, encourage AI engagement in SMEs, and promote innovative start-ups.

Germany has taken a cautious approach to regulating AI by relying on existing legal frameworks rather than creating AI-specific legislation. This technology-neutral regulatory environment is partly driven by the need to align with the EU’s AI-specific draft legislation, such as the EU AI Act and proposals on AI liability. As an EU member state, Germany has limited national regulatory options and must adhere to the overarching EU framework. This has left little room for independent action at the national level.

To date, Germany has not enacted any AI-specific legislation.

Government bodies in Germany have not yet issued AI-specific guidelines, but they have been involved in promoting ethical guidelines for trustworthy AI in specific areas. By way of example, the Federal Ministry for Economic Affairs and Climate Action funded the “ForeSight” project, which integrated ethical considerations into the development and application of smart living services. ForeSight developed a code of ethics based on the “Ethics Guidelines for Trustworthy AI” commissioned by the EC and the “Algo.Rules” from the Bertelsmann Foundation. The code focuses on ethical principles such as respect for human autonomy, avoidance of harm, and fairness and accountability. It provides developers with seven core indicators to assess smart living services.

As part of the EU’s approach to AI, the EU has introduced several legal initiatives to promote trust in AI. While the EU AI Act and the sectoral safety legislation are directly applicable in the EU member states owing to their nature as EU Regulations, the liability provisions have to be transposed into German law owing to their nature as EU Directives.

EU AI Act

The EU AI Act, a cross-sectoral product safety regulation, targets high-risk AI systems and general purpose AI models. It will be directly applicable in all EU member states, including Germany, and will enter into force around June 2024.

Liability Rules

Two EU Directives, the revised Product Liability Directive and the new AI Liability Directive, address liability rules for AI-based products and services. The revised Product Liability Directive has been agreed upon and will enter into force in the first half of 2024. The AI Liability Directive is still being negotiated (for further details, see 10. Theories of Liability).

Sectoral Safety Legislation

Additionally, sectoral safety legislation ‒ for example, the General Product Safety Regulation (GPSR) and the Machinery Regulation (MR) ‒ is being revised to address AI integration into existing product safety frameworks. These regulations aim to ensure the safety and accountability of AI-enabled products within their respective sectors. The GPSR came into force on 12 June 2023 and will apply from 13 December 2024, while the MR came into force on 19 July 2023 and will apply from 20 January 2027. As EU Regulations, they are directly applicable and do not require national implementation.

In the absence of AI-specific national legislation, inconsistencies are unlikely to arise in Germany.

This is not applicable in Germany.

Content Law

To implement Articles 3 and 4 of the EU’s Digital Single Market Directive (the “DSM Directive”), the German government introduced Section 44b and supplemented Section 60d of the German Copyright Act (Urheberrechtsgesetz, or UrhG) on text and data mining. These new rules are essential for AI, as these exemptions generally allow AI developers to scrape data such as text and images from the internet and train their AI on it. However, the main requirements are:

  • the data must be lawfully accessible (eg, freely available on the internet); and
  • there must be no machine-readable opt-out by the rights-holder. 

Data Protection Law

Unfortunately, the same cannot be said for data protection. In contrast to copyright law, the GDPR establishes a strict guardrail for the collection and use of personal data from the internet to train AI models. Meanwhile, German data protection authorities (DPAs) have made no effort to ease the interpretation of the GDPR in a way that would make the use of personal data for AI training easier to justify.

To date, Germany has not proposed any new AI-specific legislation.

There are no landmark rulings yet on the pressing IP issues concerning generative AI in Germany. Unlike in the USA, there have not been any rulings on how to handle training data, particularly in relation to the Text-and-Data-Mining Exemption under Section 44b of the UrhG. This exemption allows for the scraping of works, such as texts and images, for training purposes under certain conditions. There have also been no rulings on the protectability of AI-generated works. However, a number of other rulings have dealt with AI-related issues in a broader sense during the past year, as follows.

  • Labour Court Hamburg, 16 January 2024, 24 BVGa 1/24 ‒ employers generally do not need the consent of the works council to allow employees the optional use of AI tools, provided that employees use their own private accounts. A works council is a group of elected representatives of the employees of a company in Germany. Its role is to represent the interests of employees in discussions with management.
  • ECJ, 7 December 2023, C-634/21 (the “SCHUFA case”) ‒ the applicability of Article 22 of the GDPR depends on three cumulative requirements:
    1. there must be a “decision”;
    2. this decision must be “based solely on automated processing, including profiling”; and
    3. it must “produce legal effects concerning the data subject” or “significantly affect them in a similar way”.

Therefore Article 22 of the GDPR prohibits the automated analysis of data if the result decides whether a contract is made, executed, or cancelled, unless data controllers can rely on limited justifications such as consent and contractual necessity.

  • Federal Patent Court, 11 November 2021, 11 W (pat) 5/21 ‒ only natural persons can be inventors, so AI cannot be an inventor under German patent law. A similar decision can be expected in the future for copyright law, where a human creator is also central to copyright protection.

In Germany, there has not yet been a higher court ruling defining generative AI, nor any high court ruling on copyright issues related to generative AI.

The definition of AI in the future EU AI Act promises to be central. It is likely that further legislation will refer to this definition and that judgments will also refer to this definition. The definition in Article 3(1) of the EU AI Act reads as follows.

  • “AI system” means a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

AI systems are thus defined broadly, emphasising their autonomous nature. The definition is deliberately broad to prevent the EU AI Act from becoming obsolete in the near future. This technology-agnostic definition relies on the element of “autonomy” to distinguish AI systems from other, more deterministic, algorithm-driven types of software. Owing to the primacy of application of EU law, this definition will take precedence over other definitions that may exist.

No German AI Regulator (Yet)

Germany currently lacks a specific “AI regulator” (but will have to appoint one under the EU AI Act in the future). However, German DPAs are assuming a leading role in enforcing the GDPR against companies utilising and offering AI systems in the German market. Although not all AI systems rely on personal data, personal data is often involved in the training and deployment of AI systems. Data protection has emerged as a crucial aspect of AI regulation for two main reasons. First, the concept of personal data is broad and encompasses various types of information processed by AI systems, making data protection rules applicable across sectors. Second, there is significant overlap between the governance of AI and data protection, with ethical considerations, accountability mechanisms, and transparency requirements being fundamental principles of both.

DPAs as the De Facto AI Regulators in Germany

Consequently, DPAs are effectively acting as de facto AI regulators for the time being ‒ actively working to regulate AI systems and likely to continue playing an increasingly important role in governing AI systems and their handling of personal data. Recently, German DPAs also published a position paper outlining the national competences required for the EU AI Act. In this paper, they even argue that German DPAs should be designated as the market surveillance authorities for AI systems in Germany, based on their tasks and expertise.

German DPAs recognise that “AI” refers to the application of machine learning techniques and the use of AI components. They acknowledge the existence of different machine learning methods with distinct characteristics and applications, including considerations such as method selection, design, implementation, training data, and deployment. DPAs also acknowledge that AI systems can pose various risks to individuals’ rights and freedoms, which may be challenging to identify, predict or prove. Consequently, German DPAs require AI-specific measures to mitigate these risks, as there is no universal solution. Processing personal data with an AI component must have a legitimate purpose, a legal basis, and minimise the associated risks. The use of AI systems often entails high risks, necessitating the implementation of rigorous technical and organisational measures, particularly concerning data processing transparency.

German DPAs aim to prevent various harms related to the misuse or mishandling of personal data. In the context of AI, they particularly focus on preventing discriminatory use of personal data and promoting transparency in data processing. DPAs have expressed scepticism regarding the compliance of generative AI systems with the GDPR. They raise concerns about the disruptive nature of generative AI and the potential disregard for data protection principles. However, these concerns should be understood in light of the rapid emergence of new generative AI technology and the regulators’ apprehension about effectively enforcing data protection requirements. It is unlikely that German data protection regulators will outright prohibit the use of generative AI. Instead, they expect organisations using such tools to strike a balance and adequately address the requirements of data protection law.

The German DPAs initiated an investigation into OpenAI’s ChatGPT service in 2023. The DPAs raised questions regarding the compliance of ChatGPT’s data processing with key data protection principles, such as transparency, legal basis, the processing of minors’ data, and information to data subjects. They focused on topics such as personal data collection, its use in machine learning training, storage resulting from machine learning, data transfer to third parties, and user data processing in ChatGPT. The investigation is still ongoing.

In Germany, the national approach to AI standard-setting emphasises the development and adoption of standards specific to key industry sectors. This focus reflects a targeted strategy to ensure AI technologies are implemented responsibly and effectively. The national efforts are primarily oriented towards creating frameworks that guide the ethical, secure and effective use of AI across various domains. These include healthcare, mobility, and environmental sectors, where AI has the potential to drive significant advancements and efficiencies.

At the core of Germany’s standard-setting are collaborations between different stakeholders, including industry leaders, academic institutions, and government entities.

In the EU, AI standardisation involves key players such as the EC, the European Standardisation Organisations (CEN, CENELEC and ETSI) and the national standardisation bodies of the EU member states. These bodies are working together to develop harmonised standards that ensure AI technologies comply with EU regulatory requirements and promote security, privacy and interoperability. This collaborative effort aims to create a standardised framework in line with the regulatory and ethical guidelines outlined in the EU AI Act, thereby ensuring the safe and responsible use of AI technologies across the EU.

The use of AI by government agencies, particularly at the national and local levels, is still in its infancy. It opens up new possibilities for increasing efficiency, while also raising privacy and data protection concerns.

Past and Present

Current and past applications mainly involve simple chatbots to facilitate citizen-government interactions. These chatbots are not based on sophisticated models such as LLMs, but on simpler AI that follows a predetermined, strict decision-tree logic once it has understood the citizen’s input (using AI only in the form of natural language processing). In the justice sector, predictive AI has begun to assist judges by clustering and analysing incoming mass litigation cases – although these applications remain relatively simple and not widespread.
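The following is a minimal, purely illustrative Python sketch of that decision-tree pattern ‒ all intents, keywords and answers are hypothetical and not taken from any real government deployment:

```python
# Minimal sketch of a decision-tree chatbot: a crude NLP step maps free
# text to a known intent, and each intent follows a fixed, predetermined
# branch. All intents and answers are illustrative placeholders.

def classify_intent(user_input: str) -> str:
    """Stand-in for the natural language processing step."""
    text = user_input.lower()
    if "passport" in text:
        return "passport"
    if "register" in text or "address" in text:
        return "registration"
    return "unknown"

# Predetermined, strict decision tree: every intent maps to one fixed answer.
DECISION_TREE = {
    "passport": "Please book an appointment and bring a biometric photo.",
    "registration": "You must register your new address within two weeks.",
    "unknown": "Sorry, I did not understand. Please rephrase your question.",
}

def respond(user_input: str) -> str:
    return DECISION_TREE[classify_intent(user_input)]

print(respond("How do I renew my passport?"))
```

Unlike an LLM, such a system can only ever produce the answers hard-coded into its branches ‒ which is what keeps it predictable, and limited.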

Future

Looking ahead, the landscape will evolve with generative AI, which is expected to significantly outperform existing systems. Future applications are likely to include more sophisticated chatbots for public interaction, based on LLMs that offer unprecedented depth and flexibility in longer conversations. Internally, new “AuthorityGPTs” (counterparts to the currently proliferating “CompanyGPT” phenomenon) will assist civil servants with tasks such as summarising text and preparing administrative acts. Other use cases will include generative AI tools that help courts understand and prepare incoming statements, and even prepare draft court decisions.

Facial Recognition and Biometrics

AI-based facial recognition and biometrics currently do not play a major role in government operations. The upcoming EU AI Act will strictly regulate the application of these two use cases, especially for governments and public authorities. See 11.3 Facial Recognition and Biometrics for more details.

Automated data analysis or evaluation by the State interferes with citizens’ right to informational self-determination. The Federal Constitutional Court, with its judgment of 16 February 2023, decided that the legal regulations for automated data analysis in Hesse and Hamburg are unconstitutional (1 BvR 1547/19, 1 BvR 2634/20). The decisions concern the use of analysis software that compiles and evaluates data from police databases.

The right to informational self-determination is a fundamental right in German law, which allows individuals to decide for themselves when and within what limits information about their private lives should be communicated to others. This right is particularly important in the digital age, where personal data is often collected and used for various purposes, such as marketing, profiling, or surveillance.

Whether a violation of the right to informational self-determination exists depends on a balancing of interests ‒ the interest in data collection (by the State) and the citizen’s interest in preventing this. The weight of the interference of the State is determined in particular by the type and scope of the data that can be processed and the permitted method of data analysis or evaluation. The legislator can control this by regulating the type and scope of the data and limiting the analysis and evaluation method. The broader the possibilities for analysis and evaluation, the greater the burden of justification on the legislator.

The AI Act will play a central role in the future and will massively restrict how governments may use AI for national security (eg, biometric surveillance). In Germany, there is no comparable set of rules ‒ decisions are often scattered across various areas of law and based on fundamental rights considerations (as in 7.2 Judicial Decisions for the evaluation of police data).

The emergence of generative AI technologies raises new legal complexities in several areas beyond IP and data protection ‒ both of which are discussed in 8.2 IP and Generative AI and 8.3 Data Protection and Generative AI. A few of the others are discussed here, as follows.

  • Contractual law ‒ the integration of generative AI outputs in services (eg, AI-created marketing campaigns) and the procurement of AI services for businesses necessitate new contractual frameworks. These frameworks must address liability, performance metrics and IP rights, reflecting the unique nature of AI-generated content, training and services. (For more detail, see 12.1 Procurement of AI Technology.)
  • Regulation ‒ following the enactment of the EU AI Act, businesses are meticulously planning to determine the extent of their affected operations and strategising to comply with new obligations. This includes assessing AI systems’ risk levels, implementing necessary risk management measures, and adhering to transparency requirements. (For more detail, see 11. Legal Issues With Predictive and Generative AI and 3.7 Proposed AI-Specific Legislation and Regulations.)
  • Labour law ‒ the deployment of generative AI in the workplace raises questions regarding employee rights and corporate governance, especially concerning the co-determination rights in the creation and enforcement of internal policies on AI usage. This encompasses employee privacy, surveillance concerns, and the impact on job roles and responsibilities. (For more detail, see 13. AI in Employment.)
  • M&A ‒ in M&A, the due diligence process for AI companies now involves scrutinising ethical AI use, data management practices, compliance with AI legislation, and the valuation of AI-driven assets or capabilities, reflecting the nuanced risks and opportunities presented by generative AI technologies.

Possible IP Protection

The AI technology itself can be protected under copyright law. It is important to differentiate between the different components of the AI technology, such as the AI training algorithm, the AI model architecture, and the training data. The AI model and the AI algorithm may be protected as software according to Section 69a of the UrhG. The training data can be protected as a database (Section 87a of the UrhG). However, the training data itself is typically scraped or licensed from third parties, and individual items of training data (eg, texts or images) are often themselves protected by copyright. The rights then lie with the third party, so the use must either be justified by the text and data mining exception or by a licence.

The input or prompts are often too simple or technically determined (eg, technical requirements for a picture such as format) to be granted copyright protection, because they do not meet the originality requirements set out in Section 2(2) of the UrhG. However, more detailed prompts – in which the author utilised a creative decision-making space ‒ may be protected by copyright. Many prompts could also be compiled in a database and enjoy protection under a database right.

In most cases, however, the output will not be IP-protected. The typical prompt will be too vague, giving the AI a range to produce different results. Therefore, the output cannot be seen as the work of the (human) author and is typically not protected. An exception might be if the user does not leave this range open by using very specific input that predetermines the shape of the output, as might be the case with “auto-filling” lines of code into an existing code that sets the context. Another exception could be where an already protected work is only slightly edited with AI.

Possible IP Infringements

Collecting training data from the internet is generally a reproduction under copyright law, which can be justified as legal text and data mining (Section 44b of the UrhG). This primarily requires that the data is lawfully accessible on the internet (eg, freely available) and that the rights-holder has not declared a machine-readable opt-out (eg, in the website’s robots.txt file or in the company information of the website).
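By way of illustration, such an opt-out is commonly declared in a website’s robots.txt file. The following is a minimal sketch ‒ GPTBot and Google-Extended are crawler tokens published by OpenAI and Google respectively, and whether any particular format satisfies the machine-readability requirement of the opt-out under Section 44b of the UrhG has not yet been tested by the German courts:

```
# robots.txt ‒ illustrative reservation of use against AI-training crawlers

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```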

Even if only a small but recognisable amount of the copyrighted work or parts of it are included in the input or generated output of the work, courts are likely to consider this to be a relevant reproduction or transformation of the work requiring the author’s consent. Private users may be able to rely on their permission to make copies (Section 53 of the UrhG).

The GDPR and generative AI are generally compatible. However, in certain situations, the requirements of the GDPR create difficulties in relation to generative AI that need to be addressed using the risk-based approach of the GDPR. The following issues are not exhaustive but give a flavour of some of the difficulties. Further issues were published in May 2024 in the guidance on generative AI and data protection by German DPAs. These guidelines are the first comprehensive recommendations by German DPAs specifically for generative AI.

Data Subject Rights

For data controllers, it is important to appropriately manage the trade-offs arising from these difficulties and the risk-based approach. By way of example, in the case of inaccurate personal data produced as output by an AI model, the data subject’s right to rectification or erasure may not be enforceable. This is due to the “black box effect”, which makes the identification and deletion of specific data sets from an AI model extremely complex (both technically and logistically), especially if the data has already been integrated into the model and can no longer be uniquely identified. While some German DPAs have required extensive re-training of the model to avoid similar outputs, filtering seems more appropriate – although it is unclear whether German DPAs would accept this.
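A minimal sketch of the filtering approach mentioned above might look as follows ‒ the blocklist and redaction policy are purely hypothetical, and (as noted) it is unclear whether German DPAs would regard such a filter as sufficient:

```python
# Illustrative post-hoc output filter: redact generated text that mentions
# data subjects who have exercised their right to erasure, instead of
# re-training the model. Names and policy are hypothetical examples.

ERASURE_REQUESTS = {"Max Mustermann", "Erika Musterfrau"}

def filter_output(generated_text: str) -> str:
    for name in ERASURE_REQUESTS:
        if name in generated_text:
            # The model weights stay untouched; the objectionable output
            # simply never reaches the user.
            generated_text = generated_text.replace(name, "[redacted]")
    return generated_text

print(filter_output("Max Mustermann was born in 1980."))
# -> "[redacted] was born in 1980."
```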

Data Minimisation

German regulators have so far not put data minimisation and purpose limitation in the spotlight, although ‒ like other GDPR principles ‒ they appear to sit uneasily with generative AI. In terms of data minimisation ‒ which, if taken seriously, could jeopardise the accuracy of outputs ‒ one German regulator has already pointed out that, instead of data minimisation, a wealth of data is needed from a societal perspective to make AI work. This demonstrates that legal discussions around AI are constantly evolving.

Past

Initially, predictive AI tools in legal tech focused primarily on analysing large sets of documents. These tools helped lawyers by clustering documents based on similar content and identifying specific clauses (eg, liability clauses) with greater accuracy than simple keyword searches. In addition, AI has facilitated the extraction of key information from large data sets. Historically, document automation in the legal sector has been predominantly rule-based, failing to realise the potential of AI.

Present

The legal profession is currently experiencing a paradigm shift with the introduction of generative AI technologies. Law firms are increasingly experimenting with standard or fine-tuned LLMs to assist lawyers with various tasks, including answering legal questions, summarising text, brainstorming, and translating documents. Despite these advances, the legal industry faces challenges in effectively integrating LLMs with large amounts of their own data. Current technology solutions – such as Retrieval Augmented Generation (RAG), fine-tuning and knowledge graphs ‒ have yet to provide an off-the-shelf product that allows lawyers to seamlessly interact with thousands of pages of data on a sophisticated level.
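For readers unfamiliar with the technique, the following is a heavily simplified Python sketch of the RAG pattern ‒ the keyword-overlap scoring stands in for the vector-embedding search used in real systems, and llm_complete is a placeholder for an actual LLM API call:

```python
# Minimal sketch of Retrieval Augmented Generation (RAG): retrieve the most
# relevant passages from a firm's own documents and prepend them to the
# prompt, so the model answers grounded in that context.

DOCUMENTS = [
    "Clause 7: liability is capped at 12 months of fees.",
    "Clause 9: either party may terminate with 3 months' notice.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance measure: keyword overlap. Real systems use embeddings.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list:
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def llm_complete(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "[model answer grounded in the retrieved context]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return llm_complete(f"Context:\n{context}\n\nQuestion: {query}")

print(answer("What is the liability cap?"))
```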

Future

Overcoming the current technological challenges of implementing large amounts of proprietary data promises a new era of sophisticated legal AI applications. Potential future use cases include the development of intelligent policy databases, improved contract drafting based on internal preferences, and the analysis of lengthy court opinions to prepare new legal documents. These advances are expected to significantly disrupt the legal profession.

Professional Law

German professional law for lawyers (Berufsrecht der Rechtsanwälte) does not pose insurmountable obstacles to the adoption of AI technologies. Currently, most AI solutions in the legal sector are procured as software as a service (SaaS) models. This approach presents lawyers with challenges similar to those encountered during past cloud outsourcing activities.

Liability and Insurability

Establishing liability for damages caused by generative and predictive AI systems is crucial owing to their potential harmful outcomes. Under German law, as AI itself is not a legal person, liability for damages caused by AI systems must be attributed to the operator or others in the supply chain. Insurability of AI-related damages is closely tied to liability, but as AI blurs the line between human and machine behaviour, it becomes challenging to allocate responsibility and determine insurability. This has sparked a debate on the need for separate AI insurance to cover innovation and development risks.

Liability Issues

From a German legal perspective, liability for AI damages can generally be established through contract law, product liability claims, and tort liability. However, each approach presents difficulties. Proving breach of duty and causality in contract law can be challenging, especially when the inner workings of an AI system are not accessible. Product liability claims face difficulties due to the complexity and opacity of AI systems, including establishing a defect, damage, and causal link. Tort liability is hindered by the lack of regulatory rules for AI safety, complexities in proving fault and causation, and challenges in assessing non-human AI systems.

In conclusion, German law is not adequately equipped to address the unique challenges of AI liability. However, the EU has recognised these limitations and is working on creating a harmonised legal framework to address AI-related challenges in product liability and tort law.

Status Quo

Although there are no local governmental initiatives addressing the issues related to AI liability, the EC has taken steps to regulate AI. In February 2020, the EC published a White Paper and a report on AI safety and liability, which set the stage for updates to product liability legislation in the EU and Germany.

EU Initiatives

The proposed updates include revising the current Product Liability Directive and introducing a new AI Liability Directive. The revised Product Liability Directive maintains strict liability for manufacturers, holding them responsible for harm caused by defective products, including those based on AI. Additionally, victims seeking compensation for damages caused by AI products and services can also rely on fault-based tort liability regimes in EU member states.

The key changes proposed in the EU Directives concern the burden of proof and disclosure obligations. They aim to address information asymmetries between victims and those responsible for AI-related harm. The EU Directives introduce enhanced disclosure obligations for potential tortfeasors and ease the burden of proof for claimants. Presumptions of evidence and orders for prima facie evidence are also proposed to streamline the process of proving liability in product-related cases.

Impact

These changes represent a significant shift in the product liability landscape in the EU and Germany. They have the potential to impact the liability of supply chain actors and shape the legal framework governing AI-based products and services.

Scope

Bias in AI refers to unfair or discriminatory preferences embedded in AI systems, leading to unequal treatment based on characteristics such as race or gender. The EU AI Act, along with the GDPR, addresses bias in high-risk AI systems and requires controllers to mitigate these risks. Currently, best practices for addressing bias in AI are limited and industry efforts in Germany are insufficient.

Bias in AI

Managing the risk of biased outcomes in AI systems requires a tailored approach, considering the specific domain and context. Trade-offs must be made in choosing safeguards for different characteristics and groups. Documentation and justification of the chosen approach ‒ considering privacy, fairness, and the application’s context – ensure accountability for AI risk management decisions.
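By way of illustration only, one simple screening heuristic for biased outcomes ‒ comparing selection rates across groups, loosely inspired by the “four-fifths rule” used in US employment practice ‒ might look as follows; the data and the 0.8 threshold are hypothetical, and the appropriate metric always depends on domain and context:

```python
# Illustrative bias screening for an automated CV pre-selection: compare
# selection rates between two groups. Data and threshold are made up.

from collections import defaultdict

decisions = [  # (group, selected)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / totals[g] for g in totals}
parity_ratio = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio = {parity_ratio:.2f}")

# A ratio well below ~0.8 would flag the system for closer review.
```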

Examples and Issues

Two areas where bias poses significant risks are employment (automated CV pre-selection) and finance (automated investment advice and credit scoring). However, individuals face challenges in proving bias following algorithmic decisions, leading to a lack of case law on compensation claims. Regulatory investigations by German DPAs play a crucial role in identifying bias in AI systems. While enforcement actions are as yet unknown, German DPAs have expressed their concern regarding bias. There is occasional political movement to revise the General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz, or AGG) to include algorithmic decisions, given their increasing importance for consumers.

When it comes to protecting personal data in AI, there are several risks and benefits to consider.

Risks

In terms of risks, the vast amount of personal data required for AI systems creates the risk of data breaches and unauthorised access, which can lead to identity theft and other malicious activities. AI can also produce biased outcomes in areas such as employment, lending or criminal justice, leading to unfair or discriminatory practices.

Potential

On the other hand, AI-powered systems can automate processes, increasing efficiency and productivity to deliver more effective services to individuals. In healthcare, for example, AI can help diagnose diseases, predict outcomes and suggest personalised treatment plans. Personal health data can also facilitate medical research and the development of new therapies.

Data Security

Against this backdrop, data security is one of the crucial elements under the GDPR (though not exclusively) to strike a balance between risks and benefits. For the German market, the recommendations of the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, or BSI) ‒ for example, the recent paper on “Generative AI Models – Opportunities and Risks for Industry and Authorities” ‒ must be taken seriously. While the BSI recognises that risks cannot always be avoided entirely and that it is often a matter of minimising them appropriately, the development of best practices should be closely monitored.

The advent of AI has significantly expanded the capabilities and applications of facial recognition and biometrics. The EU AI Act distinguishes between “post” and “live” biometric identification methods ‒ each of which is associated with different levels of risk and regulatory requirements.

Post Biometric Identification ‒ High-Risk Applications and Regulatory Requirements

Post-biometric identification is classified as a high-risk application under the EU AI Act, requiring a comprehensive set of regulatory requirements to ensure data security and privacy. The only exception to this strict regulation is biometric verification used solely to confirm an individual’s claimed identity.

Live Biometric Identification ‒ Prohibitions and Exceptions

By contrast, live biometric identification faces a general prohibition, especially when applied in real time in publicly accessible spaces for law enforcement purposes. Exceptions to this prohibition are narrowly defined and permitted only under three critical conditions, as follows.

  • Search and rescue operations ‒ specifically targeting the search for victims of serious crimes such as abduction, human trafficking, and sexual exploitation, as well as locating missing persons.
  • Imminent threats to safety ‒ preventing immediate, significant threats to the safety of individuals or preventing genuine and present or foreseeable terrorist attacks.
  • Criminal investigations ‒ localising or identifying individuals suspected of serious criminal offences listed in an annex to the EU AI Act, which must be punishable by substantial custodial sentences in the relevant EU member state.

Liability Across Various Legal Frameworks

The use of facial recognition and biometric data intersects with various legal domains, requiring compliance with the consent rules of each jurisdiction. These include the GDPR in the EU, which treats biometric data as a special category of sensitive personal data.

Relevance

In Germany, the prohibition of automated decision-making (ADM) under Article 22 of the GDPR is relevant to both predictive and generative AI systems. These legal restrictions aim to address concerns about the potential risks and harmful effects of ADM, particularly in areas that significantly impact individuals’ lives. Inaccuracies and biases in ADM processes can have severe consequences, including unfair discrimination and ethical issues.

Scope

Meaningful human involvement is crucial for excluding the strict requirements of Article 22 of the GDPR, as highlighted in the recent SCHUFA case (ECJ, Case C-634/21). When meaningful human involvement is lacking and decisions will have “legal” or “similarly significant” effects on individuals, such as in contractual or vital areas of life (eg, work, finance, and living conditions), data controllers can only rely on limited justifications such as consent and contractual necessity. They must provide comprehensive information about the underlying logic of the decision-making process and implement individual algorithmic due process, allowing individuals to express their views and be heard.

Impact

Non-compliance with the GDPR can lead to significant penalties and there are also reputational risks to consider. If customers perceive automated decision-making as unfair, biased, or lacking transparency, it can undermine their trust in the company.

The EU AI Act sets out a variety of rules targeting different levels of risk and transparency requirements associated with AI systems.

AI Systems With So-Called Specific Transparency Requirements

The EU AI Act contains the following rules for AI systems with specific transparency requirements.

  • Direct interaction with natural persons ‒ AI systems designed for direct interaction with individuals must inform users of their non-human nature unless it is evident under normal circumstances. This is to ensure that individuals are aware when they are interacting with an AI, thereby fostering an environment of informed consent and trust.
  • Synthetic content disclosure ‒ providers of AI systems (particularly those generating synthetic media such as audio, images, videos, or text) are mandated to clearly mark the outputs in a machine-readable manner. This ensures that consumers can easily distinguish between content created by humans and that generated by AI, helping prevent misinformation and maintaining content integrity (a minimal sketch of one possible marking approach follows this list).
  • Deep fake regulation ‒ there is a specific mandate for the disclosure of AI-generated or manipulated content, especially in cases of deep fakes involving images, audio, or video. This regulation aims to combat the spread of misleading or harmful media by ensuring individuals can recognise and understand when content has been artificially altered.
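As a very simple illustration of the machine-readable marking referenced above, a provider might embed a provenance flag in an image’s metadata ‒ here as a PNG text chunk via the Pillow library. The field names are hypothetical: the EU AI Act does not prescribe a specific format, and industry initiatives such as C2PA go considerably further:

```python
# Illustrative provenance marking of an AI-generated image using PNG
# metadata (Pillow). Field names are hypothetical examples.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256))   # stand-in for a generated image
meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("output.png", pnginfo=meta)

# Downstream software can read the flag back:
print(Image.open("output.png").text.get("ai-generated"))  # -> "true"
```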

Further Transparency Obligations (Non-exhaustive)

The EU AI Act also contains further transparency obligations, as follows.

  • High-risk systems ‒ regulations extend to high-risk AI systems, emphasising transparency from providers to downstream deployers. This encompasses clear communication about the capabilities, limitations, and proper usage of AI systems to ensure safe and ethical deployment.
  • General-purpose AI ‒ for general-purpose AI systems, there are transparency requirements regarding the training processes and data. Understanding how an AI system has been trained and what data was used is crucial for assessing its reliability, biases and potential impacts on consumers and society at large.

There are some individual regulations for price-setting using AI technology, as follows.

  • The customer must be informed if the price was automatically personalised (Article 246a, Section 1(1), sentence 1, no 6 of the Introductory Act to the Civil Code (Einführungsgesetz zum Bürgerlichen Gesetzbuch, or EGBGB)).
  • Price-setting using AI (“dynamic pricing”) might violate antitrust law if the same AI is used by several competitors. However, this question has not been definitively answered yet.
  • Price-setting must not be based on discriminatory factors such as race, gender, religion, and age (Section 19 of the AGG).

Similarities to Traditional SaaS Contracts

Many AI solutions are now being procured as a service (SaaS), which has contractual similarities to traditional cloud service negotiations. This includes areas such as availability and fault resolution (service-level agreements), as well as maintenance and support.

Emerging Challenges in AI Contracts

However, the integration of AI presents the following unique challenges that have not previously been encountered in cloud negotiations.

  • Custom model development ‒ there are concerns about the provider developing a model with the customer’s data that could be used for other customers. Contracts need to clearly address ownership and usage rights of developed AI models.
  • Quality and reliability of AI outputs ‒ it is crucial to include clauses that guarantee the quality, reproducibility and correctness of AI-generated output. This will ensure that the AI’s performance meets the customer’s business requirements.
  • Transparency in training data and processes ‒ given the “black box” nature of AI and its lack of explainability, there should be transparency around the data used to train the AI and the processes involved. Contracts should require disclosure of training datasets (or at least summaries thereof) and methodologies to ensure compliance and alignment with business values.
  • Indemnification for third-party rights violation ‒ if AI outputs infringe the rights of third parties, contracts currently sometimes include indemnification clauses. These should detail the scope of liability and the safeguards in place should the AI’s output lead to legal challenges.

There is currently no established market standard that addresses these emerging issues. Lawyers will need to develop individual, bespoke solutions for their clients.

AI technologies have a profound impact on work environments, particularly in the area of personnel decisions. They offer advantages in processing large amounts of data quickly for tasks such as pre-selecting job applicants and creating scoring tables for employee dismissals and performance reviews.

Exclusively Automated Decisions

However, the GDPR restricts exclusively automated decisions in employment relationships. Therefore, decisions with legal implications (eg, hiring, transfers, and dismissals) should generally involve human review or decision-making unless the narrow justifications and strong safeguards under Article 22 of the GDPR can be complied with.

Pre-selection/Support Measures

Pre-selection and support measures utilising AI are permissible but require careful examination. AI can be effectively used in various HR functions, including content generation, automated job descriptions, pre-selection of applicants, reference letters, employee chatbots, and relocation support.

Risks

Nevertheless, there is a notable risk of discriminatory decisions made by AI tools. Employers can be held liable under the AGG if they inadequately programme AI systems, use flawed data or formulas, or neglect regular quality checks. Liability applies regardless of whether the tools are internal or external and irrespective of technical responsibility for errors or discriminatory practices. Indirect discrimination can occur when seemingly neutral criteria end up favouring certain employee groups or genders in practice.

Co-determination Rights

Furthermore, depending on the specific AI tools and set-up, works councils have significant co-determination rights and detailed works agreements must be negotiated with employee representatives. Compliance with these rights and agreements is crucial for the lawful implementation of AI in the workplace.

Evaluation

Performance evaluation using AI tools promises to be more objective and efficient. Manual errors and misconduct can be detected more easily and in an automated manner. There are tools that review performance, analyse individual or group work activities, review manual or automated data and processes such as (travel) expense reimbursements, and many more.

At the same time, there is a risk of violations of the AGG, especially if the programming and/or output is inadequate.

Monitoring

Monitoring employees is subject to strict conditions based on case law. Generally, total surveillance of employees without cause (including covert video or audio surveillance and site surveillance) is not allowed. Exceptions are limited to specific cases and suspicions. Preventive or support measures are permissible as long as they do not create undue surveillance pressure. However, these principles may conflict with new technologies such as voice-based live evaluation of calls and transcription tool reviews.

To align with evolving practices, Germany needs to adapt previous employment laws to the changing nature of work. Furthermore, when processing individual (log) data with specific tools and set-ups, works councils have significant co-determination rights, and detailed works agreements should be negotiated with employee representatives. As there is no established case law in this area, it is crucial to establish reasonable and detailed agreements and guidelines, accompanied by regular checks and training sessions.

Today’s digital platforms and their success would not be imaginable without algorithms. Recommendation algorithms play an essential role, as the typical user only swipes through the recommended content in their feed.

AI plays an essential role for platforms hosting user-generated content, because legislation and case law may expect them to use algorithms, to some extent, to prevent the repetition of a known breach of law on their platform (“notice and stay down”).

Further obligations arise from the European Digital Services Act (DSA), as follows.

  • Platforms that allow user-generated content but limit AI-generated content (eg, in their games) may face transparency obligations in their terms and conditions (Article 14 of the DSA).
  • It remains to be seen if the EC will issue guidelines on AI-based dark patterns, such as deep fakes or AI-generated ads that subtly influence users’/buyers’ decisions (Article 25 of the DSA).
  • AI-generated advertising must also observe the transparency requirements under Article 26 of the DSA (eg, advertising labelling and indication of the parameters for the play-out).

Financial services companies increasingly rely on AI to enhance operational efficiency, customer service, and risk management.

Regulatory Framework for Outsourcing

The outsourcing of IT services in the financial sector is subject to stringent regulations. When an outsourced function is considered a critical or important operation, national and international regulatory frameworks come into play. In Germany, for instance, the Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, or BaFin) sets national standards ‒ whereas at the European level, the European Banking Authority (EBA) provides guidelines. Historically, IT outsourcing in this sector has predominantly involved cloud services, shaping the regulatory approach towards outsourcing.

Comparing AI and Cloud Outsourcing

Upcoming AI outsourcing shares several similarities with cloud-based outsourcing, especially given that cloud solutions provide the infrastructure for AI tools at the application level. Contractual implications for the financial services sector concerning AI will be analogous to those for cloud services, addressing aspects such as data security and risk management — details of which are discussed in 12.1 Procurement of AI Technology.

New Regulatory Challenges Posed by AI

However, AI outsourcing introduces new challenges that future regulations must address:

  • Are existing control mechanisms, such as audits and information sharing, sufficient to ensure an adequate level of security for AI applications?
  • Is there a need for heightened data security standards due to the unique vulnerabilities associated with AI technologies?
  • Do financial institutions need to negotiate specific quality standards to ensure AI systems meet operational requirements?

Regulatory authorities are expected to issue guidelines for AI outsourcing akin to those established for cloud services.

The use of AI in healthcare raises concerns with regard to the sensitivity of the data handled and the potential damage caused by wrong AI decisions or hallucinations.

EU AI Act

In annex III, number 5 of the EU AI Act in its current form, the following AI systems are classified as “high-risk”:

  • AI systems deciding access to health services;
  • AI systems assessing risk and pricing of health insurance; and
  • AI systems used in emergencies.

Additionally, AI used in critical infrastructure is classified as high-risk according to annex III, number 2 of the EU AI Act. The following are considered critical infrastructure: stationary medical treatment, supply of life-sustaining medical products, supply of prescription medicines, and laboratory diagnostics.

Those AI systems must comply with Article 8 et seq of the EU AI Act (eg, with regard to risk management, quality of training data, documentation, transparency, human oversight, and cybersecurity).

GDPR

Additionally, Article 9 of the GDPR sets high requirements for processing genetic, biometric, health or sexual data.

Current Legal Landscape

The levels of driving automation follow the industry standard of the Society of Automotive Engineers, scaling from zero (no automation) to five (fully autonomous). Levels 1 and 2, which involve assisted and semi-automated driving, align with existing German legislation.

Initially, Levels 3 and 4, which involve a higher degree of automation, posed challenges under German law. However, legislative changes in 2017 expanded the scope to allow these levels of automation. Nevertheless, the driver must remain alert and ready to assume control of the vehicle when prompted by the system or when they realise that the conditions for proper use are no longer met.

Autonomous Driving

Level 5 (known as autonomous driving – ie, where there are only passengers and no driver) does not meet current legal requirements and remains prohibited under German law. Car owners are strictly liable under the German Road Traffic Act. However, establishing liability in cases of damage caused by AI remains challenging, as the victim must prove a breach of duty, resulting damage, and the causal link between the two.

At the German level, there is no specific legislation on autonomous vehicles. Regulatory developments in this area will primarily occur at the EU level, such as through the Type Approval Framework Regulation. The future EU AI Act will not address this issue directly but will require the EC to establish the AI-specific accountability requirements from the EU AI Act through delegated acts under the Type Approval Framework Regulation. This is expected to introduce comprehensive requirements for autonomous vehicles in the future.

The manufacturing sector in Germany is rapidly adopting AI, with applications in assembly, packaging, customer service, and open-source robotics.

Autonomous Mobile and Professional Service Robots

There is a growing market for autonomous mobile robots that can navigate uncontrolled environments and interact with humans. Additionally, AI applications in professional service robots (eg, crop detection and sorting objects) are highly valued.

Regulation

The regulation of these technologies in Germany will be governed by the EU’s Machinery Regulation, which will be effective in 2027. This comprehensive EU regulation aims to provide legal certainty and harmonise health and safety requirements for machinery products (including AI-based machinery) throughout the EU. It focuses on the design, construction, and marketing of machinery products in various sectors, including manufacturing.

The use of AI in the professional services sector is governed by a mix of existing regulations and emerging guidelines that address different facets of AI use.

Confidentiality and Data Protection

Confidentiality remains paramount in professional services. The integration of AI must not compromise client confidentiality or data protection standards. Professionals must ensure that AI systems comply with strict data protection regulations, such as the GDPR in the EU, which requires the protection of personal data processed by AI technologies. For further details, please refer to 8.3 Data Protection and Generative AI and 11.2 Data Protection and Privacy.

IP Concerns

The use of AI can raise complex IP issues, particularly in relation to the ownership of AI-generated outputs and the use of proprietary datasets to train AI. Professionals need to navigate these IP concerns to avoid infringement risks and ensure that contracts clearly delineate the IP rights associated with AI-generated work. For further details, please refer to 8.2 IP and Generative AI and 15. Intellectual Property.

Regulatory Compliance

Professionals need to ensure that AI applications comply with sector-specific regulations and codes of conduct. This includes adhering to ethical guidelines set by professional bodies to ensure that AI systems are used in a manner consistent with professional ethics and standards. For further details, please refer to 3.7 Proposed AI-Specific Legislation and Regulations.

Only natural persons can be inventors; therefore, AI cannot be an inventor under the German Patent Act (Federal Patent Court, 11 November 2021, 11 W (pat) 5/21). The same applies under copyright law, where only a natural person can be an author.

According to Section 2 no 1 of the Trade Secrets Protection Act (Gesetz zum Schutz von Geschäftsgeheimnissen, or GeschGehG) a trade secret is information:

  • that is not publicly available and of commercial value;
  • that is protected by appropriate non-disclosure measures; and
  • the owner of which must have a legitimate interest in non-disclosure.

The first and last requirements will be met in most cases, given that the material AI technology – such as the AI model and the training data – is kept secret (provided it is not an open-source model).

Appropriate measures include, for example, encryption and non-disclosure agreements with contractual penalties for breach. Unauthorised use of a trade secret can also be a criminal offence under Section 23 of the GeschGehG. Reliable legal protection of AI is therefore possible through the law of trade secrets.
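
By way of technical illustration only, the following is a minimal sketch of one such “appropriate measure” – encrypting a trained model artifact at rest. It assumes a Python environment with the widely used third-party “cryptography” package installed; the file names are hypothetical.

```python
# Minimal sketch: encrypting trained AI model weights at rest as one
# "appropriate non-disclosure measure" in the sense of the GeschGehG.
# Assumes: pip install cryptography; file names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep the key secret, eg in a secrets manager
cipher = Fernet(key)

# Encrypt the model weights before storing or transferring them.
with open("model_weights.bin", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("model_weights.bin.enc", "wb") as f:
    f.write(ciphertext)

# Only processes holding the key can recover the plaintext weights.
plaintext = cipher.decrypt(ciphertext)
```

Encryption of this kind merely supports the argument that the secret was protected by appropriate measures; it does not replace contractual and organisational safeguards.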

To date, there is no case law on copyright protection for AI-generated content. In the literature, however, there is broad consensus that such content is not protected in most cases, with protection arising only in specific constellations (eg, where AI acts as “auto-fill” for existing works such as code – see 8.2 IP and Generative AI).

It also remains to be seen whether courts will lower the threshold marginally, so that a sufficiently detailed prompt satisfies the requirements for copyright protection – despite the leeway of the AI tool – where the defining, copyright-relevant features of the output were already recognisably laid out in the input.

As a result, the use of AI tools can raise IP questions in the same manner as human-created works. The human user is still obligated to ensure that the input is free of third-party rights and that the output does not infringe on others’ rights before using it. Users should also always check what rights they themselves grant to the providers of the AI tools. Is the AI developer allowed to use the content for training? Are they allowed to view it (data protection and confidentiality issues)?

If a person creates a work themselves or commissions it, they either know of possible third-party rights from their own research and work, or they can hold the creator liable (through indemnity clauses). With AI tools, users often do not know how the tool’s creation process works. While leading AI tool providers do offer indemnity clauses, these are typically subject to conditions and cannot protect against a rights-holder’s cease-and-desist demands.

When advising corporate boards on identifying and mitigating the risks of adopting AI, it is essential to approach the task systematically and comprehensively.

Identification of AI Application and Purpose

  • Clarify the use of AI ‒ start by defining the specific AI tools the company intends to use and their intended purpose.
  • Understand the business context ‒ consider the company’s industry, size and operational geography, as these factors have a significant impact on the legal landscape, including compliance and risk exposure.
  • Data usage ‒ analyse the type of data that will be input into the AI tool and how the output will be used, both internally and externally.

Legal Areas Impacted by AI Implementation

  • Map the impacted legal areas ‒ based on the initial assessment, compile a list of legal areas that are likely to be affected by the AI implementation, such as contract law, regulations, data protection, and IP rights.
  • Industry-specific regulations ‒ identify any industry-specific legal requirements or standards that may apply to the intended use of AI within the organisation.

Developing Holistic AI Compliance Strategies

  • Develop a holistic AI compliance strategy that incorporates the findings from the previous steps. This strategy should address identified legal risks and ensure that the organisation’s use of AI tools complies with applicable laws and regulations.
  • In many cases, skilled AI legal professionals play a critical role – not as an obstacle, but as an enabler of AI technology, helping companies navigate the complex legal landscape while reaping the benefits of AI.

Implementing specific AI best practices requires addressing key issues to ensure effectiveness, manageability and proportionality for businesses. The following steps have proven to work in practice.

  • Identify regulatory requirements for specific use cases ‒ AI compliance is always use case-specific and different regulations may apply to different use cases. One size does not fit all.
  • Conduct a risk assessment ‒ start by conducting a thorough risk assessment to identify potential risks and challenges associated with AI implementation in the specific business context, based on the applicable legal requirements identified above.
  • Prioritise explainability and transparency ‒ focus on implementing AI systems that are explainable and transparent. This will foster trust, facilitate audits, and help mitigate potential risks associated with a lack of transparency.
  • Invest in data quality and bias mitigation ‒ ensure the quality, accuracy and representativeness of the data used to train AI models. Implement processes to identify and address biases in training data and algorithms, and regularly monitor and evaluate the performance of AI systems to identify and address any biases or unfair outcomes (see the illustrative sketch after this list).
  • Develop an AI governance framework ‒ establish a comprehensive AI governance framework that outlines the policies, procedures and accountability mechanisms for developing, deploying and monitoring AI. This framework should cover data governance, model development, algorithmic transparency, bias mitigation, and ongoing evaluation to ensure responsible and ethical AI practices.
  • Continually learn and adapt ‒ stay on top of the evolving landscape of AI technologies and best practices. Foster a culture of continuous learning and improvement within the organisation. Stay informed of emerging regulatory developments and adapt the organisation’s AI practices accordingly.
  • Collaboration and knowledge sharing ‒ engage in collaboration and knowledge-sharing with industry peers, academia and regulators. This collaborative approach can help shape industry standards and contribute to the development of effective and proportionate AI regulation.
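
By way of illustration of the monitoring step above, the following is a minimal sketch – not a legal or statistical standard. It assumes Python with pandas; the column names and the 80% (“four-fifths”) threshold are illustrative assumptions only.

```python
# Minimal sketch: monitoring AI output for disparities in favourable-outcome
# rates across groups. Column names and the 0.8 threshold (the informal
# "four-fifths" heuristic) are illustrative assumptions.
import pandas as pd

def selection_rate_report(df: pd.DataFrame,
                          group_col: str = "group",
                          pred_col: str = "prediction") -> pd.Series:
    """Return the favourable-outcome rate per group and flag disparities."""
    rates = df.groupby(group_col)[pred_col].mean()
    ratio = rates.min() / rates.max()
    if ratio < 0.8:  # a prompt for human review, not a legal conclusion
        print(f"Warning: outcome disparity detected (ratio {ratio:.2f})")
    return rates

# Toy example: group B receives favourable outcomes far less often.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],   # 1 = favourable outcome
})
print(selection_rate_report(df))
```

A flagged disparity is a trigger for human investigation of the training data and model, not proof of unlawful discrimination.
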
Bird & Bird LLP

Carl-Theodor-Strasse 6
Düsseldorf
40213
Germany

+49 (0)211 2005 6000

+49 (0)211 2005 6011

duesseldorf@twobirds.com
https://www.twobirds.com/

Trends and Developments


Authors



Aitava is an innovative boutique law firm for AI and IT law, specialising in AI and IT outsourcing projects. Aitava’s team is located throughout Germany, with its headquarters in Munich. Aitava is agile, interdisciplinary and efficient. Agile – because no two projects are the same and the highly available team delivers tailor-made and concrete solutions, focusing on clients and standing closely by their side in every project phase. Interdisciplinary – because the firm combines legal, business and technical expertise to offer new perspectives on existing challenges. Efficient – because the team has been working together for years and is supported by smart knowledge management and automated processes in the background.

General Overview

The world is experiencing a turning point, and AI is the central key technology. Whether voice assistants, translation tools, personalised recommendations, self-driving cars, AI-supported health diagnoses or predictive maintenance – AI is expanding human capabilities, and new value creation opportunities are emerging in the process. More and more companies are discovering the innovative value of data and information – and learning that the data economy raises the following questions.

  • How can data be used and shared in a legally secure way?
  • Is it possible to legally secure one’s own share of value creation?
  • How can AI output be used in a legally compliant manner?

AI is not only powerful, but also vulnerable. AI models can make complex “black box” decisions using logic that is often incomprehensible. In addition, the quality of the underlying data is critical to AI’s performance. If training data is unrepresentative or misclassified, this deficit carries over into AI output and can cause immediate damage, such as discrimination.

New AI Legislation on the Horizon

The AI regulation applicable to Germany originates primarily from the EU.

A number of European and German laws and regulations that apply to AI are already in force, as follows.

  • The EU General Data Protection Regulation (GDPR) remains an important guardrail in the context of AI training and AI applications and must be implemented through clever technical and legal design. The transparency, purpose limitation and data minimisation required by Article 5(1) of the GDPR often conflict with the technical requirements of AI. In the case of particularly sensitive data, the especially strict requirements of Article 9 of the GDPR must also be observed. Where sensible, the silver bullet is the use of anonymous or synthetic data: leveraging such data sets can pave the way for innovative AI development while staying within the bounds of the GDPR, balancing technological advancement with data protection principles.
  • European copyright law protects texts, music, images and computer programs from unauthorised use but contains numerous stumbling blocks, particularly when it comes to AI training and generative AI. There is a threat of infringement of third-party property rights and the risk that one’s own AI model or its output may not be used. In the worst-case scenario, the persons involved can be prosecuted.

Section 44b of the German Copyright Act (Urheberrechtsgesetz, or UrhG) contains a text and data mining exception permitting AI training with copyrighted content (subject to a machine-readable reservation of rights by the rights-holder) and is thus one step ahead of the “fair use” debate in the USA. This legislation paves the way for more legally secure experimentation and innovation within the AI sector by providing a specific framework for the use of copyrighted material. Importantly, this approach can serve as a model for other jurisdictions grappling with similar issues, suggesting a path forward that respects both IP rights and the need for AI advancement.

Nevertheless, careful legal interpretation and compliance efforts are essential to ensure that the use of copyrighted content in AI training does not overstep the boundaries set by this law. This highlights the ongoing challenge of aligning fast-paced technological innovation with existing legal frameworks.

  • The European Data Act, which entered into force in January 2024 and will apply from 12 September 2025, regulates data extensively and provides that a data licence is required in order to use user-generated data for AI training. The market value of certain data is also likely to decline. In addition, considerable obligations will be imposed on providers and manufacturers that are difficult to reconcile with the GDPR. Its key term is “access by design”.

The European Data Act emphasises fairness, non-discrimination and transparency in B2B data access conditions, ensuring that contractual terms are reasonable and do not impose undue restrictions on data access or use. Data holders are barred from making the process of accessing or using data unduly difficult for users, which reinforces the principle of autonomy and prevents manipulation through interface design or function.

Furthermore, the European Data Act sets out specific rules for the protection of trade secrets, balancing the need for data access with the protection of sensitive information. In cases where security or trade secrets are at risk, access to data can be restricted.

The following European legislation applicable to AI has been adopted.

  • The EU AI Act has been adopted at a political level and is expected to be published in the Official Journal of the European Union in April 2024. It will strictly regulate general-purpose AI and high-risk AI, in particular, and will give AI providers a lot of homework. Numerous software solutions are classified by the EU AI Act as high-risk AI systems. This classification results in far-reaching catalogues of obligations with regard to data quality, explainability, documentation and IT design, among other things, as well as a compliance test. The EU AI Act is to apply horizontally – ie, across all business and industry sectors.
  • The EU Cyber Resilience Act has been adopted at a political level and is expected to come into force later in 2024. It aims to enhance the cybersecurity of a wide range of products with digital elements and introduces mandatory requirements for the security of hardware and software products made available on the EU market, including AI systems. These include obligations for manufacturers to ensure that their products are free from known vulnerabilities at the time of release and to provide regular updates to address any new vulnerabilities discovered post-launch. The EU Cyber Resilience Act also mandates that manufacturers report significant cybersecurity incidents to a designated authority. Its goal is to strengthen the EU’s overall cybersecurity posture, protect consumers, and ensure the integrity and availability of digital products and services.

AI Is Increasingly Becoming a Compliance Topic

Compliance is not a reactive process, but a proactive one. The focus is on setting a strategic course – for example, in product development or the establishment of databases. The EU AI Act will soon enter into force and the European Data Act will soon become applicable. Those who think through the new set of obligations at the design stage will be at least one step ahead of the competition. Anyone who rests on the status quo or concentrates solely on GDPR issues, on the other hand, endangers their own business model.

AI deployment is a matter for executives, in two respects. First, management needs a clear understanding of the functionality and legal dimension of a planned AI deployment in order to make strategic transformation decisions. Second, important corporate decisions require extensive preparation. Here, the use of AI can serve to create an appropriate information basis – in some cases, it may even be required.

This evolution towards AI as a compliance issue reflects a broader understanding that AI technologies are not just tools for enhancing efficiency and innovation but also carry significant ethical, legal and social implications. As such, organisations are tasked not only with harnessing the power of AI but also with ensuring that their AI initiatives comply with an increasingly complex web of regulations and standards.

By way of example, the introduction of the EU AI Act highlights the need for businesses to assess and classify their AI systems based on the level of risk they pose. High-risk applications – such as those involving biometric identification or critical infrastructure – will face stricter regulatory requirements, including with regard to transparency, accuracy, and security measures. These obligations underscore the importance of incorporating compliance considerations into the AI development life cycle, from initial design to deployment and monitoring.

Moreover, the compliance landscape for AI is complicated by the international scope of many AI systems. Companies operating across borders must navigate not only the regulations of the EU but also those of other jurisdictions in which they operate. This global dimension necessitates a sophisticated, well-coordinated compliance strategy that can adapt to the diverse legal environments and cultural expectations regarding AI.

Every Company Needs AI Guidelines (and Further Risk Mitigation Measures)

The incorporation of AI into business processes and on company devices brings with it a complex array of considerations, both in terms of opportunity and risk. The transformative potential of AI across various sectors – from automating mundane tasks to providing insights from data analytics – is undeniable. However, the deployment of AI technologies must be navigated carefully to safeguard the company against the following significant risks.

  • Data entered by employees is accessible to third parties – many AI providers continue to use input data for analysis purposes and have corresponding usage rights granted in their terms and conditions. In return for the “free” use of AI, companies pay with their data. If confidential internal company information (eg, research results, customer data, and key performance indicators) is entered into an AI tool, there is a risk of uncontrolled dissemination of sensitive business secrets and personal data – a worst-case scenario for any company (for one technical mitigation, see the illustrative sketch after this list).
  • Training and use of generative AI can lead to the infringement of third-party property rights – several copyright class action lawsuits are pending in the USA, including against GitHub, Stability AI (Stable Diffusion), Midjourney and the art platform DeviantArt. Companies also face trouble if the AI generates output that is – unnoticed – too similar to existing works and they use this output.
  • AI hallucinates – AI models, especially large language models and other generative techniques, can produce outputs that are convincingly inaccurate (known colloquially as “hallucinations”). In other words, the model invents facts with great conviction, and it is often very difficult to tell whether the AI is hallucinating. If such output is implemented without being checked, there are considerable liability risks (especially if the information is critical to decision-making processes or public disclosures).
  • Lack of copyright protection for AI-generated creations – in addition, there is no copyright protection if AI generates texts, images or software autonomously. Copyright law in the EU only protects human creations. Employees therefore need clear guidelines as to which AI output may only be used as a source of inspiration and which may be used 1:1.
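
One technical mitigation for the first risk above is a redaction gate in front of external AI tools. The following is a minimal sketch using only Python’s standard library; the regex patterns are illustrative assumptions and far from exhaustive – production use would call for a vetted data loss prevention solution.

```python
# Minimal sketch: redact obvious identifiers (email addresses, phone-like
# numbers) from a prompt before it is sent to an external AI provider.
# The patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace matches with placeholders so raw identifiers never leave the company."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Contact Ms Weber at a.weber@example.com or +49 211 555 0100."))
# -> Contact Ms Weber at [EMAIL REDACTED] or [PHONE REDACTED].
```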

The use of AI is a balancing act for management. The ideal approach is to evaluate the specific planned use of AI based on the actual relevant technical and legal risks and to find tailored solutions. Businesses must proactively assess the specific applications of AI, weighing the technological and legal implications to devise bespoke strategies that mitigate risk while capitalising on the benefits of AI. This process involves the following.

  • Developing and implementing AI usage guidelines – clear, actionable policies should be established, delineating how AI tools can be used safely and effectively within the organisation. These guidelines must address, among other things, data handling, copyright considerations, and the verification of AI-generated information. The operational decisions for or against certain AI tools are then documented in the form of internal AI guidelines and introduced to employees in a binding manner.
  • Conducting thorough risk assessments – before integrating AI into business processes, a comprehensive analysis of the associated risks (ranging from data privacy to IP concerns) must be undertaken.
  • Employee training and awareness – ensuring that all team members understand the potential risks and guidelines for AI use is crucial. Regular training sessions can help build a culture of responsible AI usage.
  • Continuous monitoring and review – AI technologies and the legal landscape around them are evolving rapidly. Ongoing monitoring of both AI tool performance and regulatory developments is necessary to ensure that AI usage remains compliant and effective.
  • Engaging with AI ethically – beyond legal compliance, companies should consider the ethical implications of AI use, including issues of bias, fairness and transparency.

By adopting a comprehensive approach to AI integration that encompasses technical diligence, legal compliance and ethical considerations, companies can navigate the complexities of AI deployment, thereby ensuring that they harness its potential while minimising associated risks.

Outlook

Regulation and compliance obligations for AI will continue to increase. Only those who engage with these issues at an early stage will be able to gain a competitive advantage from the use of AI. As the landscape of AI evolves, proactive engagement with regulatory changes and ethical considerations will become a cornerstone of innovation and market leadership. Companies that prioritise transparency, accountability, and the ethical use of AI will not only navigate the complex regulatory environment more effectively but will also build trust with customers and stakeholders. This trust is essential in a digital economy where data privacy and security are paramount. Moreover, organisations that excel in embedding ethical AI practices into their operations will likely see enhanced brand reputation and customer loyalty, translating into long-term success. In essence, the future will belong to those who view compliance not as a burden but as an opportunity to lead in the responsible development and application of AI technologies.

Aitava

Walhallastrasse 36
80639 Munich
Germany

+49 89 9233 4290

mail@aitava.com
www.aitava.com
