The general legal background for AI under German law can be summarised as follows.
AI is revolutionising industries globally, including in Germany, by enhancing efficiency, innovation, and decision-making processes. Predictive AI has been integrated into mainstream applications for years, whereas generative AI is in a phase of industry implementation, gaining momentum since 2023 and further accelerating in 2024.
In Germany, the government actively supports the adoption and development of AI through targeted funding programmes. These initiatives aim to stimulate AI innovation across various sectors, including public welfare, start-ups, SMEs, and environmental technology. The funding programmes reflect a strategic approach to supporting AI research and application, driving technological progress and socio-economic benefits. Prominent programmes include AI for the Common Good, the European EUREKA clusters, Research and Development of AI Methods in SMEs, and the DeepTech Future Fund. These programmes provide financial support to projects that enhance social well-being, foster cross-border collaboration, encourage AI engagement in SMEs, and promote innovative start-ups.
Germany has taken a cautious approach to regulating AI by relying on existing legal frameworks rather than creating AI-specific legislation. This technology-neutral regulatory environment is partly driven by the need to align with the EU’s AI-specific draft legislation, such as the EU AI Act and proposals on AI liability. As an EU member state, Germany has limited national regulatory options and must adhere to the overarching EU framework. This has left little room for independent action at the national level.
To date, Germany has not enacted any AI-specific legislation.
Government bodies in Germany have not yet issued AI-specific guidelines, but they have been involved in promoting ethical guidelines for trustworthy AI in specific areas. By way of example, the Federal Ministry for Economic Affairs and Climate Action funded the “ForeSight” project, which integrated ethical considerations into the development and application of smart living services. ForeSight developed a code of ethics based on the “Ethics Guidelines for Trustworthy AI” commissioned by the EC and the “Algo.Rules” from the Bertelsmann Foundation. The code focuses on ethical principles such as respect for human autonomy, avoidance of harm, fairness, and accountability. It provides developers with seven core indicators to assess smart living services.
As part of the EU’s approach to AI, several legal initiatives have been introduced to promote trust in AI. While the EU AI Act and the sectoral safety legislation are directly applicable in the EU member states as EU Regulations, the liability provisions must be transposed into German law as EU Directives.
EU AI Act
The EU AI Act, a cross-sectoral product safety regulation, targets high-risk AI systems and general-purpose AI models. It will be directly applicable in all EU member states, including Germany, and is expected to enter into force around June 2024.
Liability Rules
Two EU Directives, the revised Product Liability Directive and the new AI Liability Directive, address liability rules for AI-based products and services. The revised Product Liability Directive has been agreed upon and will enter into force in the first half of 2024. The AI Liability Directive is still being negotiated (for further details, see 10. Theories of Liability).
Sectoral Safety Legislation
Additionally, sectoral safety legislation ‒ for example, the General Product Safety Regulation (GPSR) and the Machinery Regulation (MR) ‒ is being revised to address the integration of AI into existing product safety frameworks. These regulations aim to ensure the safety and accountability of AI-enabled products within their respective sectors. The GPSR came into force on 12 June 2023 and will apply from 13 December 2024, while the MR came into force on 19 July 2023 and will apply from 20 January 2027. As EU Regulations, they are directly applicable and do not require national implementation.
In the absence of AI-specific national legislation, inconsistencies are unlikely to arise in Germany.
This is not applicable in Germany.
Content Law
To implement Articles 3 and 4 of the EU’s Digital Single Market Directive (the “DSM Directive”), the German government introduced Section 44b and supplemented Section 60d of the German Copyright Act (Urheberrechtsgesetz, or UrhG) on text and data mining. These new rules are essential for AI, as the exemptions generally allow AI developers to scrape data such as text and images from the internet and train their AI on it. The main requirements, however, are that the works are lawfully accessible online and that the rights-holder has not declared a machine-readable opt-out (see 8.2 IP and Generative AI).
Data Protection Law
Unfortunately, the same cannot be said for data protection. In contrast to copyright law, the GDPR establishes a strict guardrail for the collection and use of personal data from the internet to train AI models. Meanwhile, German data protection authorities (DPAs) have made no effort to ease the interpretation of the GDPR in a way that would make the use of personal data for AI training easier to justify.
To date, Germany has not proposed any new AI-specific legislation.
There are no landmark rulings yet on the pressing IP issues concerning generative AI in Germany. Unlike in the USA, there have not been any rulings on how to handle training data, particularly in relation to the Text-and-Data-Mining Exemption under Section 44b of the UrhG. This exemption allows for the scraping of works, such as texts and images, for training purposes under certain conditions. There have also been no rulings on the protectability of AI-generated works. However, a number of other rulings have dealt with AI-related issues in a broader sense during the past year, as follows.
In the SCHUFA case (ECJ, Case C-634/21), for example, the ECJ confirmed that Article 22 of the GDPR prohibits the automated analysis of data if the result determines whether a contract is made, executed, or cancelled, unless data controllers can rely on limited justifications such as consent or contractual necessity.
In Germany, there has not yet been a higher court ruling defining generative AI, nor has any high court ruled on copyright issues related to generative AI.
The definition of AI in the future EU AI Act promises to be central. It is likely that further legislation and court judgments will refer to this definition. Article 3(1) of the EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
No German AI Regulator (Yet)
Germany currently lacks a specific “AI regulator” (but will have to appoint one under the EU AI Act in the future). However, German DPAs are assuming a leading role in enforcing the GDPR against companies utilising and offering AI systems in the German market. Although not all AI systems rely on personal data, personal data is often involved in the training and deployment of AI systems. Data protection has emerged as a crucial aspect of AI regulation for two main reasons. First, the concept of personal data is broad and encompasses various types of information processed by AI systems, making data protection rules applicable across sectors. Second, there is significant overlap between the governance of AI and data protection, with ethical considerations, accountability mechanisms, and transparency requirements being fundamental principles of both.
DPAs as the De Facto AI Regulators in Germany
Consequently, DPAs are effectively acting as de facto AI regulators for the time being ‒ actively working to regulate AI systems and likely to continue playing an increasingly important role in governing AI systems and their handling of personal data. Recently, German DPAs also published a position paper outlining the national competences required for the EU AI Act. In this paper, they even argue that German DPAs should be designated as the market surveillance authorities for AI systems in Germany, based on their tasks and expertise.
German DPAs recognise that “AI” refers to the application of machine learning techniques and the use of AI components. They acknowledge the existence of different machine learning methods with distinct characteristics and applications, including considerations such as method selection, design, implementation, training data, and deployment. DPAs also acknowledge that AI systems can pose various risks to individuals’ rights and freedoms, which may be challenging to identify, predict, or prove. Consequently, German DPAs require AI-specific measures to mitigate these risks, as there is no universal solution. Processing personal data with an AI component must have a legitimate purpose and a legal basis, and must minimise the associated risks. The use of AI systems often entails high risks, necessitating the implementation of rigorous technical and organisational measures, particularly concerning data processing transparency.
German DPAs aim to prevent various harms related to the misuse or mishandling of personal data. In the context of AI, they particularly focus on preventing discriminatory use of personal data and promoting transparency in data processing. DPAs have expressed scepticism regarding the compliance of generative AI systems with the GDPR. They raise concerns about the disruptive nature of generative AI and the potential disregard for data protection principles. However, these concerns should be understood in light of the rapid emergence of new generative AI technology and the regulators’ apprehension about effectively enforcing data protection requirements. It is unlikely that German data protection regulators will outright prohibit the use of generative AI. Instead, they expect organisations using such tools to strike a balance and adequately address the requirements of data protection law.
The German DPAs initiated an investigation into OpenAI’s ChatGPT service in 2023. The DPAs raised questions regarding the compliance of ChatGPT’s data processing with key data protection principles, such as transparency, legal basis, data processing of minors, and information to data subjects. They focused on topics such as personal data collection, its use in machine learning training, storage resulting from machine learning, data transfer to third parties, and user data processing in ChatGPT. The investigation is still ongoing.
In Germany, the national approach to AI standard-setting emphasises the development and adoption of standards specific to key industry sectors. This focus reflects a targeted strategy to ensure AI technologies are implemented responsibly and effectively. The national efforts are primarily oriented towards creating frameworks that guide the ethical, secure and effective use of AI across various domains. These include healthcare, mobility, and environmental sectors, where AI has the potential to drive significant advancements and efficiencies.
At the core of Germany’s standard-setting are collaborations between different stakeholders, including industry leaders, academic institutions, and government entities.
In the EU, AI standardisation involves key players such as the EC, the European Standardisation Organisations (CEN, CENELEC and ETSI) and the national standardisation bodies of the EU member states. These bodies are working together to develop harmonised standards that ensure AI technologies comply with EU regulatory requirements and promote security, privacy and interoperability. This collaborative effort aims to create a standardised framework in line with the regulatory and ethical guidelines outlined in the EU AI Act, thereby ensuring the safe and responsible use of AI technologies across the EU.
The use of AI by government agencies, particularly at the national and local levels, is still in its infancy. It opens up new possibilities for increasing efficiency, while also raising privacy and data protection concerns.
Past and Present
Current and past applications mainly involve simple chatbots to facilitate citizen-government interactions. These chatbots are not based on sophisticated models such as LLMs, but on simpler AI that follows a predetermined, strict decision-tree logic after understanding the citizen’s input (using AI only in the form of natural language processing). In the justice sector, predictive AI has begun to assist judges by clustering and analysing incoming mass litigation cases – although these applications remain relatively simple and not widespread.
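Purely for illustration, the following minimal Python sketch shows the pattern described above ‒ a simple keyword matcher (standing in for the natural language processing step) routing input into a fixed decision tree. All intents, keywords, and answers are invented assumptions, not taken from any actual government chatbot.

```python
import re

# Illustrative sketch (all intents and answers invented): a rule-based
# citizen chatbot in which a simple keyword matcher - the only "AI"
# component - routes the input into a predetermined decision tree.

ANSWERS = {
    "passport": "Passports are issued by the citizens' office. Please book an appointment online.",
    "registration": "To register your address, please bring your lease agreement.",
    "fallback": "Sorry, I did not understand. Could you rephrase your question?",
}

KEYWORDS = {
    "passport": {"passport", "travel", "id"},
    "registration": {"register", "registration", "address", "move"},
}

def classify_intent(text: str) -> str:
    """Stand-in for NLP: lowercase tokens are matched against fixed keyword sets."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    for intent, words in KEYWORDS.items():
        if tokens & words:
            return intent
    return "fallback"

def answer(text: str) -> str:
    return ANSWERS[classify_intent(text)]

print(answer("How do I get a new passport?"))  # -> passport branch
print(answer("I moved and need to register"))  # -> registration branch
```

An LLM-based chatbot, by contrast, would not be limited to these predefined branches, which is precisely the leap described in the next paragraph.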
Future
Looking ahead, the landscape will evolve with generative AI, which is expected to significantly outperform existing systems. Future applications are likely to include more sophisticated chatbots for public interaction, based on LLMs that offer unprecedented depth and flexibility in longer conversations. Internally, new “AuthorityGPTs” (counterparts to the currently proliferating “CompanyGPT” phenomenon) will assist civil servants with tasks such as summarising text and preparing administrative acts. Other use cases will include generative AI tools that help courts understand and prepare incoming statements, and even draft court decisions.
Facial Recognition and Biometrics
AI-based facial recognition and biometrics currently do not play a major role in government operations. The upcoming EU AI Act will strictly regulate the application of these two use cases, especially for governments and public authorities. See 11.3 Facial Recognition and Biometrics for more details.
Automated data analysis or evaluation by the State interferes with citizens’ right to informational self-determination. In its judgment of 16 February 2023, the Federal Constitutional Court held that the statutory provisions on automated data analysis in Hesse and Hamburg are unconstitutional (1 BvR 1547/19, 1 BvR 2634/20). The rulings concern the use of analysis software that compiles and evaluates data from police databases.
The right to informational self-determination is a fundamental right in German law, which allows individuals to decide for themselves when and within what limits information about their private lives should be communicated to others. This right is particularly important in the digital age, where personal data is often collected and used for various purposes, such as marketing, profiling, or surveillance.
Whether a violation of the right to informational self-determination exists depends on a balancing of interests ‒ the interest in data collection (by the State) and the citizen’s interest in preventing this. The weight of the interference of the State is determined in particular by the type and scope of the data that can be processed and the permitted method of data analysis or evaluation. The legislator can control this by regulating the type and scope of the data and limiting the analysis and evaluation method. The broader the possibilities for analysis and evaluation, the greater the burden of justification on the legislator.
The AI Act will play a central role in the future and will massively restrict how governments may use AI for national security (eg, biometric surveillance). In Germany, there is no comparable set of rules ‒ decisions are often scattered across various areas of law and based on fundamental rights considerations (as in 7.2 Judicial Decisions for the evaluation of police data).
The emergence of generative AI technologies raises new legal complexities in several areas beyond IP and data protection ‒ the latter two of which will be discussed in 8.2 IP and Generative AI and 8.3 Data Protection and Generative AI. A few of the others are discussed here, as follows.
Possible IP Protection
The AI technology itself can be protected under copyright law. It is important to differentiate between the various components of the AI technology, such as the AI training algorithm, the AI model architecture, and the training data. The AI model and the AI algorithm may be protected as software under Section 69a of the UrhG. The training data can be protected as a database (Section 87a of the UrhG). However, training data is typically scraped or licensed from third parties, and the individual items (eg, texts or images) are often themselves protected by copyright. The rights then lie with the third party, so the use must be justified either by the text and data mining exception or by a licence.
The input or prompts are often too simple or technically determined (eg, technical requirements for a picture such as format) to be granted copyright protection, because they do not meet the requirements on originality set out in Section 2(2) of the UrhG. However, more detailed prompts – in which the author utilised a creative decision-making space ‒ may be protected by copyright. Many prompts could also be compiled in a database and enjoy protection under a database right.
In most cases, however, the output will not be IP-protected. The typical prompt will be too vague, giving the AI a range within which to produce different results. Therefore, the output cannot be seen as the work of the (human) author and is typically not protected. An exception might be if the user does not leave this range open by using very specific input that predetermines the shape of the output, as might be the case with “auto-filling” lines of code into existing code that sets the context. Another exception could be where an already protected work is only slightly edited with AI.
Possible IP Infringements
Collecting training data from the internet is generally a reproduction under copyright law, which can be justified as legal text and data mining (Section 44b of the UrhG). This primarily requires that the data is lawfully accessible on the internet (eg, freely available) and that the rights-holder has not declared a machine-readable opt-out (eg, in the website’s robots.txt file or in its company information, provided the declaration is machine-readable).
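By way of illustration only, the following Python sketch (standard library only) shows how a scraper might honour such a machine-readable opt-out in a robots.txt file before collecting training data. The crawler name “ExampleAIBot” and the robots.txt content are hypothetical assumptions, not requirements of Section 44b of the UrhG.

```python
# Minimal sketch: checking a machine-readable robots.txt opt-out
# before scraping a page for text and data mining purposes.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt published by a rights-holder, opting a
# fictitious AI training crawler out of the entire site.
robots_txt = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

url = "https://example.com/articles/some-text.html"
if parser.can_fetch("ExampleAIBot", url):
    print("No opt-out found - scraping may be justifiable under Section 44b UrhG.")
else:
    print("Machine-readable opt-out found - do not scrape for TDM.")
```

Whether other machine-readable formats (eg, opt-out declarations in website terms) suffice remains a matter of legal interpretation, not of tooling.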
Even if only a small but recognisable amount of the copyrighted work or parts of it are included in the input or the generated output, courts are likely to consider this a relevant reproduction or transformation of the work requiring the author’s consent. Private users may be able to rely on the private copying exception (Section 53 of the UrhG).
The GDPR and generative AI are generally compatible. However, in certain situations, the requirements of the GDPR create difficulties in relation to generative AI that need to be addressed using the risk-based approach of the GDPR. The following issues are not exhaustive but give a flavour of some of the difficulties. Further issues were published in May 2024 in the guidance on generative AI and data protection by German DPAs. These guidelines are the first comprehensive recommendations by German DPAs specifically for generative AI.
Data Subject Rights
For data controllers, it is important to appropriately manage the trade-offs arising from these difficulties and the risk-based approach. By way of example, in the case of inaccurate personal data produced as output by an AI model, the data subject’s right to rectification or erasure may not be enforceable. This is due to the “black box effect”, which makes the identification and deletion of specific data sets from an AI model extremely complex (both technically and logistically), especially if the data has already been integrated into the model and can no longer be uniquely identified. While some German DPAs have required extensive re-training of the model to avoid similar outputs, filtering seems more appropriate – although it is unclear whether German DPAs would accept this.
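As a purely hypothetical sketch of what such output filtering might look like in practice (not a technique endorsed by German DPAs), a provider could maintain a suppression list of personal data that data subjects have successfully objected to, and redact it from the model’s output before delivery. The names used here are invented.

```python
import re

# Hypothetical suppression list: personal data that data subjects have
# successfully asked to be removed from outputs (names are invented).
SUPPRESSED = ["Max Mustermann", "Erika Musterfrau"]

def filter_output(model_output: str) -> str:
    """Redact suppressed personal data from generated text before it
    reaches the user - an output-side alternative to re-training."""
    for term in SUPPRESSED:
        model_output = re.sub(re.escape(term), "[redacted]", model_output,
                              flags=re.IGNORECASE)
    return model_output

raw = "According to our records, Max Mustermann was born in 1975."
print(filter_output(raw))  # -> "According to our records, [redacted] was born in 1975."
```

The legal question, as noted above, is whether such downstream filtering satisfies the rights to rectification and erasure when the underlying model remains unchanged.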
Data Minimisation
With regard to data minimisation and purpose limitation – as with other issues reflecting apparent contradictions between the GDPR and generative AI – German regulators have so far not put this in the spotlight. In terms of data minimisation – which, if taken seriously, could jeopardise the accuracy of outputs ‒ one German regulator has already pointed out that, instead of data minimisation, a wealth of data is needed from a societal perspective to make AI work. This demonstrates that legal discussions around AI are constantly evolving.
Past
Initially, predictive AI tools in legal tech focused primarily on analysing large sets of documents. These tools helped lawyers by clustering documents based on similar content and identifying specific clauses (eg, liability clauses) with greater accuracy than simple keyword searches. In addition, AI has facilitated the extraction of key information from large data sets. Historically, document automation in the legal sector has been predominantly rule-based, failing to realise the potential of AI.
Present
The legal profession is currently experiencing a paradigm shift with the introduction of generative AI technologies. Law firms are increasingly experimenting with standard or fine-tuned LLMs to assist lawyers with various tasks, including answering legal questions, summarising text, brainstorming, and translating documents. Despite these advances, the legal industry faces challenges in effectively integrating LLMs with large amounts of their own data. Current technology solutions – such as Retrieval Augmented Generation (RAG), fine-tuning and knowledge graphs ‒ have yet to provide an off-the-shelf product that allows lawyers to seamlessly interact with thousands of pages of data on a sophisticated level.
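To make the RAG pattern mentioned above concrete, here is a schematic Python sketch under strong simplifying assumptions: a bag-of-words similarity replaces a real embedding model, the three “documents” are invented, and the final LLM call is only indicated by a comment.

```python
import math
from collections import Counter

# Toy document store standing in for a firm's internal knowledge base.
DOCUMENTS = [
    "Liability clauses in our standard SaaS contracts cap damages at 12 months of fees.",
    "Our data processing agreements follow the GDPR Article 28 requirements.",
    "Internal policy: court deadlines must be double-checked by a second lawyer.",
]

def vectorise(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorise(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, vectorise(d)), reverse=True)[:k]

question = "What do our contracts say about liability caps?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # In a real system, this prompt would now be sent to an LLM.
```

The practical difficulty described above lies not in this retrieval step itself, but in making it work reliably across thousands of heterogeneous legal documents.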
Future
Overcoming the current technological challenges of implementing large amounts of proprietary data promises a new era of sophisticated legal AI applications. Potential future use cases include the development of intelligent policy databases, improved contract drafting based on internal preferences, and the analysis of lengthy court opinions to prepare new legal documents. These advances are expected to significantly disrupt the legal profession.
Professional Law
German professional law for lawyers (Berufsrecht der Rechtsanwälte) does not pose insurmountable obstacles to the adoption of AI technologies. Currently, most AI solutions in the legal sector are procured as software as a service (SaaS) models. This approach presents lawyers with challenges similar to those encountered during past cloud outsourcing activities.
Liability and Insurability
Establishing liability for damages caused by generative and predictive AI systems is crucial owing to their potential harmful outcomes. Under German law, as AI itself is not a legal person, liability for damages caused by AI systems must be attributed to the operator or others in the supply chain. Insurability of AI-related damages is closely tied to liability, but as AI blurs the line between human and machine behaviour, it becomes challenging to allocate responsibility and determine insurability. This has sparked a debate on the need for separate AI insurance to cover innovation and development risks.
Liability Issues
From a German legal perspective, liability for AI damages can generally be established through contract law, product liability claims, and tort liability. However, each approach presents difficulties. Proving breach of duty and causality in contract law can be challenging, especially when the inner workings of an AI system are not accessible. Product liability claims face difficulties due to the complexity and opacity of AI systems, including establishing a defect, damage, and causal link. Tort liability is hindered by the lack of regulatory rules for AI safety, complexities in proving fault and causation, and challenges in assessing non-human AI systems.
In conclusion, German law is not adequately equipped to address the unique challenges of AI liability. However, the EU has recognised these limitations and is working on creating a harmonised legal framework to address AI-related challenges in product liability and tort law.
Status Quo
Although there are no local governmental initiatives addressing the issues related to AI liability, the EC has taken steps to regulate AI. In February 2020, the EC published a White Paper and a report on AI safety and liability, which set the stage for updates to product liability legislation in the EU and Germany.
EU Initiatives
The proposed updates include revising the current Product Liability Directive and introducing a new AI Liability Directive. The revised Product Liability Directive maintains strict liability for manufacturers, holding them responsible for harm caused by defective products, including those based on AI. Additionally, victims seeking compensation for damages caused by AI products and services can also rely on fault-based tort liability regimes in EU member states.
The key changes proposed in the EU Directives concern the burden of proof and disclosure obligations. They aim to address information asymmetries between victims and those responsible for AI-related harm. The EU Directives introduce enhanced disclosure obligations for potential tortfeasors and ease the burden of proof for claimants. Presumptions of evidence and orders for prima facie evidence are also proposed to streamline the process of proving liability in product-related cases.
Impact
These changes represent a significant shift in the product liability landscape in the EU and Germany. They have the potential to impact the liability of supply chain actors and shape the legal framework governing AI-based products and services.
Scope
Bias in AI refers to unfair or discriminatory preferences embedded in AI systems, leading to unequal treatment based on characteristics such as race or gender. The EU AI Act, along with the GDPR, addresses bias in high-risk AI systems and requires controllers to mitigate these risks. Currently, best practices for addressing bias in AI are limited and industry efforts in Germany are insufficient.
Bias in AI
Managing the risk of biased outcomes in AI systems requires a tailored approach, considering the specific domain and context. Trade-offs must be made in choosing safeguards for different characteristics and groups. Documentation and justification of the chosen approach ‒ considering privacy, fairness, and the application’s context – ensure accountability for AI risk management decisions.
Examples and Issues
Two areas where bias poses significant risks are employment (automated CV pre-selection) and finance (automated investment advice and credit scoring). However, individuals face challenges in proving bias following algorithmic decisions, leading to a lack of case law on compensation claims. Regulatory investigations by German DPAs play a crucial role in identifying bias in AI systems. While no enforcement actions are known yet, German DPAs have expressed their concern regarding bias. There is occasional political movement to revise the General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz, or AGG) to include algorithmic decisions, given their increasing importance for consumers.
When it comes to protecting personal data in AI, there are several risks and benefits to consider.
Risks
In terms of risks, the vast amount of personal data required for AI systems creates the risk of data breaches and unauthorised access, which can lead to identity theft and other malicious activities. AI can also produce biased outcomes in areas such as employment, lending, or criminal justice, leading to unfair or discriminatory practices.
Potential
On the other hand, AI-powered systems can automate processes, increasing efficiency and productivity to deliver more effective services to individuals. In healthcare, for example, AI can help diagnose diseases, predict outcomes and suggest personalised treatment plans. Personal health data can also facilitate medical research and the development of new therapies.
Data Security
Against this backdrop, data security is one of the crucial elements under the GDPR (though not exclusively) for striking a balance between risks and benefits. For the German market, the recommendations of the German Federal Office for Information Security (eg, the recent paper on “Generative AI Models – Opportunities and Risks for Industry and Authorities”) must be taken seriously. While the Office recognises that risks cannot always be avoided entirely and that it is often a matter of minimising them appropriately, the development of best practices should be closely monitored.
The advent of AI has significantly expanded the capabilities and applications of facial recognition and biometrics. The EU AI Act distinguishes between “post” and “live” biometric identification methods ‒ each of which is associated with different levels of risk and regulatory requirements.
Post Biometric Identification ‒ High-Risk Applications and Regulatory Requirements
Post-biometric identification is classified as a high-risk application under the EU AI Act and is subject to a comprehensive set of regulatory requirements to ensure data security and privacy. The only exception to this strict regulation is biometric verification used solely to confirm an individual’s claimed identity.
Live Biometric Identification ‒ Prohibitions and Exceptions
By contrast, live biometric identification faces a general prohibition, especially when applied in real time in publicly accessible spaces for law enforcement purposes. Exceptions to this prohibition are narrowly defined and permitted only under three critical conditions, as follows.
Liability Across Various Legal Frameworks
The use of facial recognition and biometric data intersects with various legal domains, requiring compliance with the consent rules of each jurisdiction. These include the GDPR in the EU, which treats biometric data as a special category of (sensitive) personal data.
Relevance
In Germany, the prohibition of automated decision-making (ADM) under Article 22 of the GDPR is relevant to both predictive and generative AI systems. These legal restrictions aim to address concerns about the potential risks and harmful effects of ADM, particularly in areas that significantly impact individuals’ lives. Inaccuracies and biases in ADM processes can have severe consequences, including unfair discrimination and ethical issues.
Scope
Meaningful human involvement is crucial for excluding the strict requirements of Article 22 of the GDPR, as highlighted in the recent SCHUFA case (ECJ, Case C-634/21). When meaningful human involvement is lacking and decisions have “legal” or “similarly significant” effects on individuals, such as in contractual or vital areas of life (eg, work, finance, and living conditions), data controllers can only rely on limited justifications such as consent and contractual necessity. They must provide comprehensive information about the logic underlying the decision-making process and implement individual algorithmic due process, allowing individuals to express their views and be heard.
Impact
Non-compliance with the GDPR can lead to significant penalties and there are also reputational risks to consider. If customers perceive automated decision-making as unfair, biased, or lacking transparency, it can undermine their trust in the company.
The EU AI Act sets out a variety of rules targeting different levels of risk and transparency requirements associated with AI systems.
AI Systems With So-Called Specific Transparency Requirements
The EU AI Act contains the following rules for AI systems with specific transparency requirements.
Further Transparency Obligations (Non-Exhaustive)
The EU AI Act also contains further transparency obligations, as follows.
There are some individual regulations for price-setting using AI technology, as follows.
Similarities to Traditional SaaS Contracts
Many AI solutions are now being procured as a service (SaaS), which has contractual similarities to traditional cloud service negotiations. This includes areas such as availability and fault resolution (service-level agreements), as well as maintenance and support.
Emerging Challenges in AI Contracts
However, the integration of AI presents the following unique challenges that have not previously been encountered in cloud negotiations.
There is currently no established market standard that addresses these emerging issues. Lawyers will need to develop individual, bespoke solutions for their clients.
AI technologies have a profound impact on work environments, particularly in the area of personnel decisions. They offer advantages in processing large amounts of data quickly for tasks such as pre-selecting job applicants and creating scoring tables for employee dismissals and performance reviews.
Exclusively Automated Decisions
However, the GDPR restricts exclusively automated decisions in employment relationships. Therefore, decisions with legal implications (eg, hiring, transfers, and dismissals) should generally involve human review or decision-making unless the narrow justifications and strong safeguards under Article 22 of the GDPR can be complied with.
Pre-selection/Support Measures
Pre-selection and support measures utilising AI are permissible but require careful examination. AI can be effectively used in various HR functions, including content generation, automated job descriptions, pre-selection, reference letters, employee chatbots, and relocation support.
Risks
Nevertheless, there is a notable risk of discriminatory decisions made by AI tools. Employers can be held liable under the AGG if they program AI systems inadequately, use flawed data or formulas, or neglect regular quality checks. Liability applies regardless of whether the tools are internal or external and irrespective of technical responsibility for errors or discriminatory practices. Indirect discrimination can occur when seemingly neutral criteria end up favouring certain employee groups or genders in practice.
Co-determination Rights
Furthermore, depending on the specific AI tools and set-up, works councils have significant co-determination rights and detailed works agreements must be negotiated with employee representatives. Compliance with these rights and agreements is crucial for the lawful implementation of AI in the workplace.
Evaluation
Performance evaluation using AI tools promises to be more objective and efficient. Manual errors and misconduct can be detected more easily and in an automated manner. There are tools that review performance, analyse individual or group work activities, and check manual or automated data and processes such as (travel) expense reimbursements, among many others.
At the same time, there is a risk of violations of the AGG, especially if the programming and/or output is inadequate.
Monitoring
Monitoring employees is subject to strict conditions based on case law. Generally, total surveillance of employees without cause (including covert video or audio surveillance and site surveillance) is not allowed. Exceptions are limited to specific cases and suspicions. Preventive or support measures are permissible as long as they do not create undue surveillance pressure. However, these principles may conflict with new technologies such as voice-based live evaluation of calls and transcription tool reviews.
To align with evolving practices, Germany needs to adapt previous employment laws to the changing nature of work. Furthermore, when processing individual (log) data with specific tools and set-ups, works councils have significant co-determination rights, and detailed works agreements should be negotiated with employee representatives. As there is no established case law in this area, it is crucial to establish reasonable and detailed agreements and guidelines, accompanied by regular checks and training sessions.
Today’s digital platforms and their success would not be imaginable without algorithms. Recommendation algorithms play an essential role, as the typical user only swipes through the recommended content in their feed.
AI plays an essential role for platforms hosting user-generated content, because legislation and case law may expect them to use algorithms, to some extent, to prevent the repetition of a known breach of law on their platform (“notice and stay down”).
Further obligations arise from the European Digital Services Act (DSA), as follows.
Financial services companies increasingly rely on AI to enhance operational efficiency, customer service, and risk management.
Regulatory Framework for Outsourcing
The outsourcing of IT services in the financial sector is subject to stringent regulations. When an outsourced function is considered a critical or important operation, national and international regulatory frameworks come into play. In Germany, for instance, the Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, or BaFin) sets national standards ‒ whereas at the European level, the European Banking Authority (EBA) provides guidelines. Historically, IT outsourcing in this sector has predominantly involved cloud services, shaping the regulatory approach towards outsourcing.
Comparing AI and Cloud Outsourcing
AI outsourcing shares several similarities with cloud-based outsourcing, especially given that cloud solutions provide the infrastructure for AI tools at the application level. The contractual implications for the financial services sector concerning AI will be analogous to those for cloud services, addressing aspects such as data security and risk management ‒ details of which are discussed in 12.1 Procurement of AI Technology.
New Regulatory Challenges Posed by AI
However, AI outsourcing introduces new challenges that future regulations must address:
Regulatory authorities are expected to issue guidelines for AI outsourcing akin to those established for cloud services.
The use of AI in healthcare raises concerns with regard to the sensitivity of the data handled and the potential damage caused by wrong AI decisions or hallucinations.
EU AI Act
In Annex III, number 5 of the EU AI Act in its current form, the following AI systems are classified as “high-risk”: systems used to evaluate eligibility for essential public assistance benefits and services (including healthcare services), systems for risk assessment and pricing in life and health insurance, and systems for evaluating and classifying emergency calls or dispatching emergency services, including emergency healthcare patient triage.
Additionally, AI used in critical infrastructure is classified as high-risk according to Annex III, number 2 of the EU AI Act. The following are considered critical infrastructure: stationary medical treatment, supply of life-sustaining medical products, supply of prescription medicines, and laboratory diagnostics.
Those AI systems must comply with Article 8 et seq of the EU AI Act (eg, with regard to risk management, quality of training data, documentation, transparency, human oversight, and cybersecurity).
GDPR
Additionally, Article 9 of the GDPR sets high requirements for processing genetic, biometric, health or sexual data.
Current Legal Landscape
The levels follow the industry standard of the Society of Automotive Engineers (SAE), scaling from zero (no automation) to five (fully autonomous). Levels 1 and 2, which involve assisted and semi-automated driving, align with existing German legislation.
Initially, Levels 3 and 4, which have a higher degree of automation, posed challenges under German law. However, legislative changes in 2017 expanded the scope to allow these levels of automation. Nevertheless, the driver must remain alert and ready to assume control of the vehicle when prompted by the system or when they realise that the conditions for proper use are no longer met.
Autonomous Driving
Level 5 (known as autonomous driving – ie, where there are only passengers and no driver) does not meet current legal requirements and remains prohibited under German law. Car owners are strictly liable under the German Road Traffic Act. However, establishing liability in cases of damage caused by AI remains challenging, as the victim must prove a breach of duty, resulting damage, and the causal link between the two.
At the German level, there is no specific legislation on autonomous vehicles. Regulatory developments in this area will primarily occur at the EU level, such as through the Type Approval Framework Regulation. The future EU AI Act will not address this issue directly but will require the EC to establish the AI-specific accountability requirements from the EU AI Act through delegated acts under the Type Approval Framework Regulation. This is expected to introduce comprehensive requirements for autonomous vehicles in the future.
The manufacturing sector in Germany is rapidly adopting AI, with applications in assembly, packaging, customer service, and open-source robotics.
Autonomous Mobile and Professional Service Robots
There is a growing market for autonomous mobile robots that can navigate uncontrolled environments and interact with humans. Additionally, AI applications in professional service robots (eg, crop detection and sorting objects) are highly valued.
Regulation
The regulation of these technologies in Germany will be governed by the EU’s Machinery Regulation, which will apply from 20 January 2027. This comprehensive EU regulation aims to provide legal certainty and harmonise health and safety requirements for machinery products (including AI-based machinery) throughout the EU. It focuses on the design, construction, and marketing of machinery products in various sectors, including manufacturing.
The use of AI in the professional services sector is governed by a mix of existing regulations and emerging guidelines that address different facets of AI use.
Confidentiality and Data Protection
Confidentiality remains paramount in professional services. The integration of AI must not compromise client confidentiality or data protection standards. Professionals must ensure that AI systems comply with strict data protection regulations, such as the GDPR in the EU, which requires the protection of personal data processed by AI technologies. For further details, please refer to 8.3 Data Protection and Generative AI and 11.2 Data Protection and Privacy.
IP Concerns
The use of AI can raise complex IP issues, particularly in relation to the ownership of AI-generated outputs and the use of proprietary datasets to train AI. Professionals need to navigate these IP concerns to avoid infringement risks and ensure that contracts clearly delineate the IP rights associated with AI-generated work. For further details, please refer to 8.2 IP and Generative AI and 15. Intellectual Property.
Regulatory Compliance
Professionals need to ensure that AI applications comply with sector-specific regulations and codes of conduct. This includes adhering to ethical guidelines set by professional bodies to ensure that AI systems are used in a manner consistent with professional ethics and standards. For further details, please refer to 3.7 Proposed AI-Specific Legislation and Regulations.
Only natural persons can be inventors; therefore, AI cannot be an inventor under the German Patent Act (Federal Patent Court, 11 November 2021, 11 W (pat) 5/21). The same applies to copyright: only natural persons can be authors.
According to Section 2 no 1 of the Trade Secrets Protection Act (Gesetz zum Schutz von Geschäftsgeheimnissen, or GeschGehG), a trade secret is information that: (i) is secret and therefore of economic value; (ii) is subject to reasonable confidentiality measures by its lawful holder; and (iii) in which there is a legitimate interest in confidentiality.
The first and last requirements may be fulfilled in most cases, given that the material AI technology such as the AI model and the training data is kept secret (as long as it is not an open-source model).
Appropriate measures could include, for example, encryption and non-disclosure agreements with contractual penalties in case of breach. Unauthorised use of a trade secret can be a criminal offence according to Section 23 of the GeschGehG. Reliable legal protection of AI is therefore possible through the law of trade secrets.
To date, there is no case law on the copyright protection of AI-generated content. In the literature, however, there is a broad consensus that there is no protection in most cases but only in specific constellations (eg, AI as “auto-fill” for existing works such as code ‒ see 8.2 IP and Generative AI).
It also remains to be seen whether courts will lower the requirements marginally so that, even with a slightly more detailed prompt and despite the leeway of the AI tools, they will satisfy copyright protection if only the defining and copyright-creating features of the output were already recognisably laid out in the input.
As a result, the use of AI tools can raise IP questions in the same manner as human-created works. The human user is still obligated to ensure that the input is free of third-party rights and that the output does not infringe on others’ rights before using it. Users should also always check what rights they themselves grant to the providers of the AI tools. Is the AI developer allowed to use the content for training? Are they allowed to view it (data protection and confidentiality issues)?
If a person creates a work themselves or has it created, they either know possible third-party rights from their research/work or can hold the third party liable (through indemnity clauses). With AI tools, people often do not know how the creation process of the AI tool works. While leading AI tool providers also offer indemnity clauses, these are typically subject to conditions and cannot protect against cease-and-desist demands by the rights-holder.
When advising corporate boards on identifying and mitigating the risks of adopting AI, it is essential to approach the task systematically and comprehensively.
Identification of AI Application and Purpose
Legal Areas Impacted by AI Implementation
Developing Holistic AI Compliance Strategies
Implementing specific AI best practices requires addressing key issues to ensure effectiveness, manageability, and proportionality for businesses. The following steps have proven to work in practice.
Carl-Theodor-Strasse 6
Düsseldorf
40213
Germany
+49 (0)211 2005 6000
+49 (0)211 2005 6011
duesseldorf@twobirds.com
https://www.twobirds.com/

General Overview
The world is experiencing a turning point. AI is the central key technology. Whether voice assistants, translation tools, personalised recommendations, self-driving cars, AI-supported health diagnoses or predictive maintenance – AI is expanding human capabilities. In the process, new value creation opportunities are emerging. More and more companies are learning about the innovative value of data and information and, in the process, that the data economy raises the following questions.
AI is not only powerful, but also vulnerable. AI models can make complex “black box” decisions using logic that is often incomprehensible. In addition, the quality of the underlying data is critical to AI’s performance. If training data is unrepresentative or misclassified, this deficit carries over into AI output and can cause immediate damage, such as discrimination.
New AI Legislation on the Horizon
The AI regulation applicable to Germany originates primarily from the EU.
A number of European and German laws and regulations that apply to AI are already in force, as follows.
Section 44b of the German Copyright Act (Urheberrechtsgesetz, or UrhG) contains a permission for AI training with copyrighted content and thus is one step ahead of the “fair use” debate in the USA. This legislation paves the way for more legally secure experimentation and innovation within the AI sector by providing a specific framework for the use of copyrighted material. Importantly, this approach can serve as a model for other jurisdictions grappling with similar issues, suggesting a path forward that respects both IP rights and the need for AI advancement.
Nevertheless, careful legal interpretation and compliance efforts are essential to ensure that the use of copyrighted content in AI training does not overstep the boundaries set by this law. This highlights the ongoing challenge of aligning fast-paced technological innovation with existing legal frameworks.
The European Data Act emphasises fairness, non-discrimination and transparency in B2B data access conditions, ensuring that contractual terms are reasonable and do not impose undue restrictions on data access or use. Data holders are barred from making the process of accessing or using data unduly difficult for users, which reinforces the principle of autonomy and prevents manipulation through interface design or function.

Furthermore, the European Data Act sets out specific rules for the protection of trade secrets, balancing the need for data access with the protection of sensitive information. In cases where security or trade secrets are at risk, access to data can be restricted.
The following European legislation applicable to AI has been adopted.
AI Is Increasingly Becoming a Compliance Topic
Compliance is not a reactive process, but a proactive one. The focus is on setting a strategic course – for example, in product development or the establishment of databases. The EU AI Act and the European Data Act will come into force soon. Those who engage with the new set of obligations conceptually will be at least one step ahead of the competition. On the other hand, anyone who rests on the status quo or concentrates solely on GDPR issues is endangering their own business model.
AI deployment is a matter for executives, in two respects. First, management needs a clear understanding of the functionality and legal dimension of a planned AI deployment in order to make strategic transformation decisions. Second, important corporate decisions require extensive preparation. Here, the use of AI can serve to create an appropriate information basis – in some cases, it may even be required.
This evolution towards AI as a compliance issue reflects a broader understanding that AI technologies are not just tools for enhancing efficiency and innovation but also carry significant ethical, legal and social implications. As such, organisations are tasked not only with harnessing the power of AI but also with ensuring that their AI initiatives comply with an increasingly complex web of regulations and standards.
By way of example, the introduction of the EU AI Act highlights the need for businesses to assess and classify their AI systems based on the level of risk they pose. High-risk applications – such as those involving biometric identification or critical infrastructure – will face stricter regulatory requirements, including with regard to transparency, accuracy, and security measures. These obligations underscore the importance of incorporating compliance considerations into the AI development life cycle, from initial design to deployment and monitoring.
Moreover, the compliance landscape for AI is complicated by the international scope of many AI systems. Companies operating across borders must navigate not only the regulations of the EU but also those of other jurisdictions in which they operate. This global dimension necessitates a sophisticated, well-coordinated compliance strategy that can adapt to the diverse legal environments and cultural expectations regarding AI.
Every Company Needs AI Guidelines (and Further Risk Mitigation Measures)
The incorporation of AI into business processes and on company devices brings with it a complex array of considerations, both in terms of opportunity and risk. The transformative potential of AI across various sectors – from automating mundane tasks to providing insights from data analytics – is undeniable. However, the deployment of AI technologies must be navigated carefully to safeguard the company against the following significant risks.
The use of AI is a balancing act for management. The ideal approach is to evaluate the specific planned use of AI based on the actual relevant technical and legal risks and to find tailored solutions. Businesses must proactively assess the specific applications of AI, weighing the technological and legal implications to devise bespoke strategies that mitigate risk while capitalising on the benefits of AI. This process involves the following.
By adopting a comprehensive approach to AI integration that encompasses technical diligence, legal compliance and ethical considerations, companies can navigate the complexities of AI deployment, thereby ensuring that they harness its potential while minimising associated risks.
Outlook
Regulation and compliance obligations for AI will continue to increase. Only those who deal with these issues at an early stage will be able to gain a competitive advantage from the use of AI. As the landscape of AI evolves, proactive engagement with regulatory changes and ethical considerations will become a cornerstone of innovation and market leadership. Companies that prioritise transparency, accountability, and the ethical use of AI will not only navigate the complex regulatory environment more effectively but will also build trust with customers and stakeholders. This trust is essential in a digital economy where data privacy and security are paramount. Moreover, organisations that excel in embedding ethical AI practices into their operations will likely see enhanced brand reputation and customer loyalty translating into long-term success. In essence, the future will belong to those who view compliance not as a burden but as an opportunity to lead in the responsible development and application of AI technologies.