Artificial Intelligence 2025 Comparisons

Last Updated May 22, 2025

Contributed By Bird & Bird

Law and Practice

Bird & Bird delivers expertise covering a full range of legal services through more than 1,400 lawyers and legal practitioners across a worldwide network of 32 offices. The firm has built a stellar, global reputation from a deep industry understanding of key sectors and through its sophisticated, pragmatic advice. Bird & Bird is a global leader in advising organisations being changed by digital technology, as well as advising companies who are shaping the world’s digital future. AI and generative AI are a cornerstone of the firm’s legal practice, which helps clients leverage AI and digital technology against a backdrop of increasing regulation. Bird & Bird’s long-standing strengths in data protection, commercial and IP law – allied with the team’s technology and communications expertise – mean the firm is ideally placed to work with clients in order to help them reach their full potential for growth.

The general legal background for AI under German law can be summarised as follows.

  • German contract law ‒ there are distinct issues related to contracting for AI services and contracting through AI. The former involves procuring AI services for an organisation, which presents unique legal challenges (eg, regarding the quality and reliability of AI outputs). Contracting through AI raises questions about the validity of contracts entered into by autonomous bots, with contractual obligations potentially being imputed to human users.
  • German tort and product liability law ‒ the key concerns are proving defects or breaches of duty, establishing damages, and determining the causal link in AI-related damages.
  • EU and German data protection law ‒ the General Data Protection Regulation (GDPR) and Federal Data Protection Act (Bundesdatenschutzgesetz, or BDSG) are the main legal frameworks in Germany. Although there are no specific AI regulations, general provisions on automated decision-making and consent apply. Challenges include data deletion or correction in AI models, justification for AI model training, and whether the model itself qualifies as personal data.
  • German copyright law – the use of generative AI to produce works such as texts, images, music, videos and code raises major questions under EU/German copyright law. These questions affect the entire value chain – from training AI systems on copyrighted material to ensuring IP compliance during the input process, and protecting AI-generated content. Copyright infringements may occur if AI output includes copyrighted works in an identical or recognisable form, and liability for such infringements remains unsettled.
  • German labour law ‒ the integration of AI in the workplace raises labour law issues, such as managing job losses, addressing improper handling of AI, and concerns about discrimination or misuse of AI solutions. Works councils, which have influence over AI introduction, are also relevant.
  • German consumer protection law ‒ AI-based consumer products and services fall under German consumer protection law, triggering documentation and transparency requirements and granting consumers rights in the case of defects. Sellers of AI products have update obligations, but defining defects and determining when an AI system requires an update are legally uncharted territory.
  • German criminal law ‒ AI development presents unique challenges in criminal law, particularly in terms of foreseeability of harm, appropriate standard of care, and criminal liability of robots and machines. The discussion focuses on adapting criminal categories to technological changes and addressing risks posed by autonomous systems, trust in AI decisions, and permissible risk.

AI continues to revolutionise industries globally, including in Germany, by enhancing efficiency, innovation and decision-making processes. Predictive AI has been integrated into mainstream applications for years. Generative AI, by contrast, is moving beyond its initial phase of industry implementation: it gained momentum in 2023, accelerated further in 2024 and is now showing early 2025 trends towards increasingly agentic AI.

Predictive AI

Healthcare

AI significantly enhances the ability to diagnose diseases such as cancer from x-ray images, leading to quicker and more accurate treatments. In 2024 and early 2025, remote AI-assisted monitoring solutions have further streamlined patient care.

Energy

By predicting peak demand times and optimising cooling processes, AI reduces energy consumption in data centres, contributing to environmental sustainability. Recent advancements focus on extending these predictive capabilities to grid-wide efficiency measures.

Finance

Financial institutions utilise AI for identifying patterns indicative of fraudulent credit card payments, enhancing security measures. Additionally, predictive analytics have become more sophisticated, shaping personalised investment strategies and AI-driven risk assessments.

Manufacturing

AI predicts machine failures before they occur, reducing unplanned downtime and extending equipment life. Continued IoT integration in 2024–2025 has strengthened real-time performance tracking and predictive maintenance.

Generative AI

Marketing

AI tools generate targeted content and personalised marketing campaigns, increasing customer engagement and driving sales. Since 2024, large language models (LLMs) have enabled more nuanced, multilingual campaign strategies.

Programming

In software development, AI accelerates code creation, reducing development times and human error. Increasingly agentic AI approaches can now autonomously refactor codebases, freeing developers to tackle complex tasks.

Customer service

AI generates responses that mimic human interaction, improving the customer experience while reducing wait times. Advancements in 2025 have allowed chatbots to handle ever more intricate scenarios, enhancing user satisfaction.

Architecture and design

AI aids architects and designers by generating innovative design alternatives, reducing time and costs. Larger, more advanced generative models introduced through 2024–2025 facilitate broader exploration, leading to designs that balance creativity and functionality.

Germany continues to strengthen its government-backed AI ecosystem, guided by an updated national AI strategy and increased budget allocations through 2025. The Federal Government focuses on translating AI research into practical applications, particularly in public welfare, environmental sustainability, and support for SMEs and start-ups. Key initiatives include:

  • AI for the Common Good – expanded to support AI-driven public health projects, such as diagnostic tools in university hospitals, addressing societal challenges;
  • KI4KMU programme – increased funding for SMEs to adopt AI, with grants for pilot testing, infrastructure, and training in areas such as production and logistics;
  • European EUREKA Clusters – enhanced cross-border collaborations for AI R&D, focusing on manufacturing intelligence and sustainable solutions, boosting Europe’s global AI competitiveness; and
  • DeepTech & Climate Fonds (DTCF) – expanded investments in cutting-edge fields such as resource-efficient computing and synthetic data, with stronger links to start-up financing initiatives.

Additional measures include expanded high-performance computing (HPC) infrastructure through the National High Performance Computing Alliance and new research funding via the German Research Foundation (DFG) to nurture top AI talent.

Germany has taken a cautious approach to regulating AI by relying on existing legal frameworks rather than creating AI-specific legislation. This technology-neutral regulatory environment is partly driven by the need to align with EU-level AI legislation and proposals, such as the EU AI Act and the proposals on AI liability. As an EU member state, Germany has limited national regulatory options and must adhere to the overarching EU framework, which leaves little room for independent action at the national level.

To date, Germany has not enacted any AI-specific legislation.

Government bodies in Germany have not yet issued AI-specific guidelines, but they have been involved in promoting ethical guidelines for trustworthy AI in specific areas. For example, the Federal Ministry for Economic Affairs and Climate Action funded the “ForeSight” project, which integrated ethical considerations into the development and application of smart living services. ForeSight developed a code of ethics based on the “Ethics Guidelines for Trustworthy AI” commissioned by the European Commission (EC) and the “Algo.Rules” from the Bertelsmann Foundation. The code focuses on ethical principles such as respect for human autonomy, avoidance of harm, and fairness and accountability. It provides developers with seven core indicators to assess smart living services.

The EU has introduced several legal initiatives to promote trust in AI. While the EU AI Act and the sectoral safety legislation are directly applicable in the EU member states owing to their nature as EU Regulations, the liability provisions have to be transposed into German law owing to their nature as EU Directives.

EU AI Act

The EU AI Act, a cross-sectoral product safety regulation, targets high-risk AI systems and general-purpose AI models. It entered into force in August 2024 and is directly applicable in all EU member states, including Germany, with its obligations taking effect in stages.

Liability rules

One new EU Directive, the revised Product Liability Directive, addresses liability rules for products including AI-based products and services. The revised Product Liability Directive entered into force on 9 December 2024 and will need to be transposed into German law within two years from that date. Meanwhile, the AI Liability Directive, which was also proposed by the EC, has been withdrawn by the Commission (for further details, see 10. Liability for AI).

Sectoral safety legislation

Sectoral safety legislation ‒ for example, the General Product Safety Regulation (GPSR) and the Machinery Regulation (MR) ‒ has been revised to address AI integration into existing product safety frameworks. These regulations aim to ensure the safety and accountability of AI-enabled products within their respective sectors. The GPSR came into force on 12 June 2023 and became applicable in December 2024, while the MR came into force on 19 July 2023 and will apply from 20 January 2027. As EU Regulations, they do not require national implementation.

In the absence of AI-specific national legislation, inconsistencies between German law and the EU framework are unlikely to arise.

This is not applicable in Germany.

Content Law

To implement Articles 3 and 4 of the EU’s Digital Single Market Directive (the “DSM Directive”), the German government introduced Section 44b and amended Section 60d of the German Copyright Act (Urheberrechtsgesetz, or UrhG) on text and data mining.

These new rules are essential for gathering AI training data, as the exemptions generally allow AI developers to scrape data – such as text and images – from the internet and use it to train their models under specific conditions. The main requirements for this statutorily permitted text and data mining are as follows:

  • the data must be lawfully accessible (eg, freely available on the internet); and
  • the rights-holder must not have opted out in an appropriate manner (eg, in a machine-readable format such as the robots.txt file on their website) – an illustrative check is sketched after this list.
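
Purely by way of illustration, the following minimal Python sketch shows how a crawler operator might honour such a machine-readable opt-out before collecting training data. The robots.txt entries, the crawler name “ExampleAIBot” and the URL are hypothetical; Section 44b of the UrhG does not prescribe any particular format or tooling, and whether a given opt-out format is legally “appropriate” remains a question of copyright law rather than of the technical check itself.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt entries of a rights-holder's website, declaring an
    # opt-out for a (fictional) AI-training crawler called "ExampleAIBot".
    robots_lines = [
        "User-agent: ExampleAIBot",
        "Disallow: /",
    ]

    def may_scrape(user_agent: str, url: str) -> bool:
        """Return True only if the robots.txt opt-out does not block this crawler."""
        parser = RobotFileParser()
        parser.parse(robots_lines)
        return parser.can_fetch(user_agent, url)

    # The rights-holder has opted out for this crawler, so collection is barred.
    print(may_scrape("ExampleAIBot", "https://example.com/image.jpg"))  # False
    # A crawler not named in the opt-out is not blocked by this robots.txt alone.
    print(may_scrape("OtherBot", "https://example.com/image.jpg"))      # True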

Data Protection Law

Unfortunately, the foregoing cannot be said for data protection. In contrast to copyright law, the GDPR establishes strict guardrails for the collection and use of personal data from the internet to train AI models. Meanwhile, German data protection authorities (DPAs) have made no effort to ease the interpretation of the GDPR in a way that would make the use of personal data for AI training easier to justify, and neither did the European Data Protection Board (EDPB) in its December 2024 opinion on AI models.

To date, Germany has not proposed any new AI-specific legislation.

In Germany, the first landmark copyright-related decision concerning generative AI was handed down in 2024 by the Hamburg District Court in the LAION case. Higher court rulings and established case law on copyright and generative AI are still missing – similar to further decisions on other pressing legal issues in this field.

However, a number of other rulings have dealt with AI-related issues in a broader sense in recent years. The most relevant are summarised below.

District Court of Hamburg, 27 September 2024, 310 O 227/23

In a landmark 2024 ruling, the Hamburg District Court dismissed a copyright infringement claim against LAION, a non-profit providing open datasets for AI training, including one image belonging to the plaintiff. The court held that LAION could rely on Germany’s scientific text and data mining exception (Section 60d UrhG /Article 3 DSM Directive), recognising dataset creation as a valid scientific activity and finding no commercial intent.

While the court did not need to decide on the commercial TDM exception (Section 44b UrhG), its reasoning – particularly referencing the EU AI Act – signals that generative AI model training may fall under this broader exception if rights-holders have not opted out. It took a flexible view on the format of opt-outs, suggesting that even natural language notices may suffice depending on technological context. However, the decision is not final and has been appealed by the plaintiff.

Labour Court of Hamburg, 16 January 2024, 24 BVGa 1/24

Employers generally do not need the consent of the works council to permit employees the optional use of AI tools, provided that the employees access those tools through their own private accounts. A works council is a group of elected representatives of the employees of a company in Germany. Its role is to represent the interests of employees in discussions with management.

CJEU, 7 December 2023, C-634/21 (the “SCHUFA case”)

The applicability of Article 22 of the GDPR depends on three cumulative requirements:

  • there must be a “decision”;
  • this decision must be “based solely on automated processing, including profiling”; and
  • it must “produce legal effects concerning the data subject” or “significantly affect them in a similar way”.

Therefore, Article 22 of the GDPR prohibits the automated analysis of data if the result decides whether a contract is made, executed or cancelled, unless data controllers can rely on limited justifications such as consent and contractual necessity.

Federal Patent Court, 11 November 2021, 11 W (pat) 5/21

Only natural persons can be inventors, so AI cannot be an inventor under German patent law. A similar decision can be expected in the future for copyright law, where a human creator is also central to copyright protection.

Federal Court of Justice, June 2024, X ZB 5/22

The court ruled that AI-generated inventions can be protected under patent law if a human contributor is named as the inventor. Listing the AI itself as the inventor is not allowed, but naming a person who influenced the AI system is sufficient. This contrasts with the USA, which requires substantial human contribution, and the UK, where AI-devised inventions are unprotectable.

No German AI Regulator (Yet)

Germany currently lacks a specific “AI regulator” (but will have to designate one under the EU AI Act). The first draft of the German AI Market Surveillance Act outlines a comprehensive framework for overseeing AI systems within Germany under the EU AI Act. Central to this draft is the designation of the Federal Network Agency (Bundesnetzagentur, or BNetzA) as the primary market surveillance authority and notifying authority. However, German DPAs are also assuming a leading role in enforcing the GDPR against companies utilising and offering AI systems in the German market.

Although not all AI systems rely on personal data, personal data is often involved in the training and deployment of AI systems. Data protection has emerged as a crucial aspect of AI regulation for two main reasons. First, the concept of personal data is broad and encompasses various types of information processed by AI systems, making data protection rules applicable across sectors. Second, there is significant overlap between the governance of AI and data protection, with ethical considerations, accountability mechanisms and transparency requirements being fundamental principles of both.

DPAs as the De Facto AI Regulators in Germany

DPAs are effectively acting as de facto AI regulators for the time being ‒ actively working to regulate AI systems and likely to continue playing an increasingly important role in governing AI systems and their handling of personal data. Recently, German DPAs also published a position paper outlining the national competences required for the EU AI Act. In this paper, they even argue that German DPAs should be designated as the market surveillance authorities for AI systems in Germany, based on their tasks and expertise.

German regulators have issued several AI-specific guidelines in the last 18 months, although no dedicated national AI law is in force beyond the directly applicable EU AI Act.

In May 2024, the German Data Protection Conference (Datenschutzkonferenz, or DSK) – a body of federal and state data protection authorities – published AI and data protection guidance for AI deployers. This non-binding guidance highlights GDPR compliance steps, warning against fully automated decisions (eg, in hiring) and urging transparency about AI logic. It recommends measures such as documentation, impact assessments and training to mitigate risks of unlawful data processing or bias.

Moreover, on 17 December 2024, the EDPB adopted Opinion 28/2024, addressing critical data protection aspects of the processing of personal data in the context of AI models. The opinion is directly relevant in Germany, where the GDPR applies to AI models that process personal data and holds such technologies to its stringent data protection standards.

Sectoral regulators are also active – for example, the Federal Office for Information Security (BSI) released secure AI development guidelines in 2023 with best practices on AI design, deployment and operation.

While these guidelines and “orientation aids” are not legally binding, they shape AI governance by setting expectations for responsible AI use. They also lay the groundwork for future oversight – notably, Germany must designate a national AI supervisory authority under the EU AI Act. In this context, the first draft of the German AI Market Surveillance Act outlines a comprehensive framework for overseeing AI systems within Germany. Central to this draft is the designation of the BNetzA as the primary market surveillance authority and notifying authority, tasked with ensuring compliance and monitoring AI systems. The BNetzA’s role involves overseeing the implementation of the AI Act, which includes managing AI regulatory sandboxes and co-ordinating with other competent authorities to support their tasks.

The German DPAs initiated an investigation into OpenAI’s ChatGPT service in 2023. The DPAs raised questions regarding the compliance of ChatGPT’s data processing with key data protection principles, such as transparency, legal basis, data processing of minors, and information to data subjects. They focused on topics such as personal data collection, its use in machine-learning training, storage resulting from machine learning, data transfer to third parties, and user data processing in ChatGPT. This remains ongoing even in 2025, highlighting the complexities of reconciling the GDPR with generative AI services.

In Germany, the national approach to AI standard-setting emphasises the development and adoption of standards specific to key industry sectors. This focus reflects a targeted strategy to ensure that AI technologies are implemented responsibly and effectively. The national efforts are primarily oriented towards creating frameworks that guide the ethical, secure and effective use of AI across various domains. These include healthcare, mobility and environmental sectors, where AI has the potential to drive significant advancements and efficiencies.

At the core of Germany’s standard-setting are collaborations between different stakeholders, including industry leaders, academic institutions and government entities. Much of the standard-setting, particularly under the EU AI Act, will take place at the EU level through expected CEN and CENELEC standards (see 6.2 International Standard-Setting Bodies).

In the EU, AI standardisation involves key players such as the EC, the European Standardisation Organisations (CEN, CENELEC and ETSI) and the national standardisation bodies of the EU member states. These bodies are working together to develop harmonised standards that ensure that AI technologies comply with EU regulatory requirements and promote security, privacy and interoperability. This collaborative effort aims to create a standardised framework in line with the regulatory and ethical guidelines outlined in the EU AI Act, thereby ensuring the safe and responsible use of AI technologies across the EU.

The use of AI by government agencies – particularly at the national and local levels – is still at an early stage but is evolving rapidly. It offers new opportunities to increase efficiency, improve public services and support internal processes, while also raising significant concerns around privacy, data protection, transparency and accountability.

Past and Present

Initial applications have largely focused on rule-based chatbots to support citizen-government interactions. These systems typically rely on limited natural language processing and follow rigid decision-tree logic, without the flexibility or learning capabilities of modern AI. In the judicial sector, early predictive analytics tools are being used to assist in mass litigation management – eg, by clustering and prioritising incoming cases – though adoption remains limited in scope and scale.

Recent Developments

Since the release of LLMs, a growing number of governments are exploring generative AI use cases. Some administrations have begun piloting more advanced chatbots and document summarisation tools internally. For example, prototypes of “AuthorityGPTs” (akin to the corporate “CompanyGPT” trend) are being tested to support civil servants in drafting administrative acts, summarising citizen input and generating responses. Courts are also beginning to evaluate AI-driven tools to assist in the analysis and drafting of court decisions, although widespread deployment is still pending.

In addition, several countries – including Germany, France and the Netherlands – have launched national strategies or guidelines for public sector use of AI, emphasising human oversight, fundamental rights compliance, and risk-based governance in line with the EU AI Act. Some public sector bodies have also appointed dedicated AI officers or ethics boards to oversee deployments.

Future Outlook

Looking ahead, generative AI is expected to significantly enhance both external (citizen-facing) and internal (administrative) government functions. Chatbots powered by LLMs will enable more natural and responsive interactions with citizens, while internal tools will support knowledge management, policy drafting and regulatory analysis. Key challenges will include ensuring legal compliance, especially under the EU AI Act, and maintaining public trust through transparency and accountability measures.

Facial Recognition and Biometrics

Currently, AI-based facial recognition and biometric systems are not widely used in day-to-day government operations within the EU. However, they remain politically and legally sensitive topics. The EU AI Act regulates such applications. It introduces a general prohibition on real-time remote biometric identification in public spaces, with narrowly defined exceptions for law enforcement. For further details, see 11.2 Facial Recognition and Biometrics.

Automated data analysis or evaluation by the State interferes with citizens’ right to informational self-determination. In its judgment of 16 February 2023, the Federal Constitutional Court held that the legal provisions on automated data analysis in Hesse and Hamburg are unconstitutional (1 BvR 1547/19, 1 BvR 2634/20). The decisions concern the use of analysis software that compiles and evaluates data from police databases.

The right to informational self-determination is a fundamental right in German law, which allows individuals to decide for themselves when and within what limits information about their private lives should be communicated to others. This right is particularly important in the digital age, where personal data is often collected and used for various purposes, such as marketing, profiling or surveillance.

Whether a violation of the right to informational self-determination exists depends on a balancing of interests ‒ the interest in data collection (by the State) and the citizen’s interest in preventing this. The weight of the interference of the State is determined in particular by the type and scope of the data that can be processed and the permitted method of data analysis or evaluation. The legislature can control this by regulating the type and scope of the data, and by limiting the analysis and evaluation method. The broader the possibilities for analysis and evaluation, the greater the burden of justification on the legislature.

The EU AI Act will play a central role in the future and will significantly restrict how governments may use AI for public security and law enforcement purposes (eg, biometric surveillance). In Germany, there is no comparable set of rules ‒ decisions are often scattered across various areas of law and based on fundamental rights considerations (as in 7.2 Judicial Decisions for the evaluation of police data).

The emergence of generative AI technologies raises new legal complexities in several areas beyond IP and data protection ‒ please see 15.1 IP and Generative AI. A few are discussed here, as follows.

  • Contractual law ‒ the integration of generative AI outputs in services (eg, AI-created marketing campaigns) and the procurement of AI services for businesses necessitate new contractual frameworks. These frameworks must address liability, performance metrics and IP rights, reflecting the unique nature of AI-generated content, training and services. (For more detail, see 12.1 Procurement of AI Technology.)
  • Regulation ‒ following the entry into force of the EU AI Act, businesses are meticulously planning to determine the extent of their affected operations and strategising to comply with new obligations. This includes assessing AI systems’ risk levels, implementing necessary risk management measures, and adhering to transparency requirements. (For more detail, see 11. Specific Legal Issues With Predictive and Generative AI and 3.7 Proposed AI-Specific Legislation and Regulations.)
  • Labour law ‒ the deployment of generative AI in the workplace raises questions regarding employee rights and corporate governance, especially concerning the co-determination rights in the creation and enforcement of internal policies on AI usage. This encompasses employee privacy, surveillance concerns, and the impact on job roles and responsibilities. (For more detail, see 13. AI in Employment.)
  • M&A ‒ in M&A, the due diligence process for AI companies now involves scrutinising ethical AI use, data management practices, compliance with AI legislation, and the valuation of AI-driven assets or capabilities, reflecting the nuanced risks and opportunities presented by generative AI technologies.

In Germany, there has not yet been a higher court ruling defining generative AI, nor is there any high court ruling on copyright issues related to generative AI.

The definition of AI in the EU AI Act promises to be central. It is likely that further legislation will refer to this definition, and that judgments will as well. The definition in Article 3(1) of the EU AI Act reads as follows.

  • “AI system” means a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
  • AI systems are defined broadly, with an emphasis on their autonomous nature. This definition is deliberately broad to prevent the EU AI Act from becoming obsolete in the near future. The technology-agnostic definition relies on the element of “autonomy” to distinguish AI systems from other, more deterministic, algorithm-driven types of software. Owing to the primacy of EU law, this definition will take precedence over other definitions that may exist.
  • In February 2025, the EC published guidelines for this definition. These guidelines provide further explanations for each aspect of the definition, with a clear emphasis on the “ability to infer”. In a positive sense, the guidelines outline various machine-learning approaches that enable this ability. At the same time, they list systems – particularly those primarily based on mathematical or statistical methods – that do not possess this ability and should therefore not fall within the scope of the AI Act.

Possible IP Protection

AI technology itself can be protected under copyright law. It is important to differentiate between the different components of the AI technology, such as the AI training algorithm, the AI model architecture and the training data. The AI model and the AI algorithm may be protected as software under Section 69a of the UrhG. The training data can be protected as a database (Section 87a of the UrhG). However, the training data is typically scraped from third-party sources, and individual items of training data are often themselves protected by copyright (texts or images, for example). The rights then lie with the third party, so the use must either be justified by the text and data mining exception or by a licence.

The inputs or prompts are often too simple or technically constrained (eg, specifying technical requirements for a picture such as format) to qualify for copyright protection, as they do not meet the originality threshold required under CJEU case law. However, more detailed prompts – where the author exercises creative discretion – may qualify for copyright protection. Additionally, many prompts could be stored in a structured collection and may be eligible for protection under database rights.

In many cases, however, the output will not be protected by IP rights if the AI deployer does not provide sufficient context to control the generative process. A typical short and simple prompt – as commonly used – is often too vague, allowing the AI to generate a wide range of possible results. As a consequence, the output cannot be attributed to a (human) author and is therefore generally not protected. An exception may arise when the user provides highly specific input that effectively predetermines the shape and content of the output – for example, when “auto-filling” lines of code within an existing codebase that provides a clear contextual framework. Another possible exception is when a pre-existing, protected work is only slightly modified using AI.

Possible IP Infringements

Collecting training data from the internet generally constitutes a reproduction under copyright law. This can be justified under the text and data mining exception in Section 44b of the UrhG, provided certain conditions are met. In particular, the data must be lawfully accessible online (eg, freely available) and the rights-holder must not have opted out in an appropriate manner (eg, in a machine-readable format, such as a robots.txt file, or in the website’s company information in a format detectable via Optical Character Recognition (OCR)). If a third party’s copyrighted work is included in an identical or recognisably similar manner in the input and/or output, courts are likely to consider each instance a relevant reproduction or transformation that requires the author’s consent. In the absence of a licence or a statutory exception, such use may constitute copyright infringement. However, private users may be able to rely on the private copying exception under Section 53 of the UrhG.

The GDPR and generative AI are generally compatible. However, in certain situations, the requirements of the GDPR create difficulties in relation to generative AI that need to be addressed using the risk-based approach of the GDPR. The following issues are not exhaustive but give an idea of some of the difficulties. Further issues were published in May 2024 in the guidance on generative AI and data protection by German DPAs. These guidelines were the first comprehensive recommendations by German DPAs specifically for generative AI.

Data Subject Rights

For data controllers, it is important to appropriately manage the trade-offs arising from these difficulties and the risk-based approach. For example, in the case of inaccurate personal data produced as output by an AI model, the data subject’s right to rectification or erasure may not be enforceable. This is due to the “black box effect”, which makes the identification and deletion of specific datasets from an AI model extremely complex (both technically and logistically), especially if the data has already been integrated into the model and can no longer be uniquely identified. While some German DPAs have required extensive re-training of the model to avoid similar outputs, filtering seems more appropriate – although it is unclear whether German DPAs would accept this.
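
By way of illustration only, the sketch below shows the filtering approach in its simplest conceivable form: a post-processing step that redacts personal data covered by erasure or rectification requests before the output reaches the user. The register of names, the example output and the redaction logic are hypothetical assumptions; whether such filtering would satisfy Articles 16 and 17 of the GDPR in the eyes of German DPAs remains, as noted above, an open question.

    import re

    # Hypothetical register of names covered by erasure or rectification requests;
    # in practice a controller would maintain this per data subject and context.
    erasure_requests = {"Max Mustermann", "Erika Musterfrau"}

    def filter_output(model_output: str) -> str:
        """Redact personal data covered by erasure requests from raw model output."""
        filtered = model_output
        for name in erasure_requests:
            filtered = re.sub(re.escape(name), "[redacted]", filtered, flags=re.IGNORECASE)
        return filtered

    raw_output = "According to the model, Max Mustermann lives in Berlin."
    print(filter_output(raw_output))  # "According to the model, [redacted] lives in Berlin."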

Data Minimisation

German regulators have so far not put data minimisation and purpose limitation in the spotlight, even though these principles – like several others – appear to sit uneasily with generative AI. In terms of data minimisation, which, if taken seriously, could jeopardise the accuracy of outputs, one German regulator has already pointed out that, rather than minimal data, a wealth of data is needed from a societal perspective to make AI work. This demonstrates that legal discussions around AI are constantly evolving.

Past

Initially, predictive AI tools in legal tech focused primarily on analysing large sets of documents. These tools helped lawyers by clustering documents based on similar content and identifying specific clauses (eg, liability clauses) with greater accuracy than simple keyword searches. In addition, AI has facilitated the extraction of key information from large datasets. Historically, document automation in the legal sector has been predominantly rule-based, failing to realise the potential of AI.

Present

The legal profession is currently experiencing a paradigm shift with the introduction of generative AI technologies. Law firms are increasingly experimenting with standard or fine-tuned LLMs to assist lawyers with various tasks, including answering legal questions, summarising text, brainstorming and translating documents. Despite these advances, the legal industry faces challenges in effectively integrating LLMs with large amounts of its own data. Current technology solutions – such as Retrieval Augmented Generation (RAG), fine-tuning and knowledge graphs ‒ have yet to provide an off-the-shelf product that allows lawyers to seamlessly interact with thousands of pages of data on a sophisticated level.
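
To make the RAG pattern mentioned above more concrete, the sketch below shows its basic shape: candidate passages from a firm’s own documents are retrieved first and then handed to an LLM together with the question. The sample clauses, the naive word-overlap retrieval and the placeholder ask_llm function are illustrative assumptions only; production systems rely on vector embeddings, dedicated retrieval infrastructure and a real LLM API.

    # Minimal Retrieval Augmented Generation (RAG) sketch: naive keyword retrieval
    # over an in-memory document store, followed by a prompt for a placeholder LLM.
    documents = {
        "nda_clause.txt": "The receiving party shall keep all disclosed information confidential.",
        "liability_clause.txt": "Liability for slight negligence is excluded except for cardinal duties.",
        "term_clause.txt": "This agreement runs for an initial term of two years.",
    }

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Rank documents by simple word overlap with the question (stand-in for embeddings)."""
        question_words = set(question.lower().split())
        ranked = sorted(
            documents.values(),
            key=lambda text: len(question_words & set(text.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def ask_llm(prompt: str) -> str:
        """Placeholder for an LLM API call; a real system would send the prompt to a model."""
        return f"[LLM answer based on a prompt of {len(prompt)} characters]"

    question = "Is liability for slight negligence excluded?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(ask_llm(prompt))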

Future

Overcoming the current technological challenges of implementing large amounts of proprietary data promises a new era of sophisticated legal AI applications. The emerging trend of agentic AI systems will also have an impact on their use in legal contexts. Such systems are capable of independently breaking down complex tasks into smaller subtasks, conducting autonomous research, and, ideally, even accessing parts of the internal IT infrastructure (for example, in order to store a result directly in accordance with predefined specifications).

Professional Law

For lawyers, German professional law (Berufsrecht der Rechtsanwälte) does not pose insurmountable obstacles to the adoption of AI technologies. Currently, most AI solutions in the legal sector are procured as software as a service (SaaS) models. This approach presents lawyers with challenges similar to those encountered during past cloud outsourcing activities.

Liability and Insurability

Establishing liability for damages caused by generative and predictive AI systems is crucial owing to their potential harmful outcomes. Under German law, as AI itself is not a legal person, liability for damages caused by AI systems must be attributed to the operator or others in the supply chain. Insurability of AI-related damages is closely tied to liability, but as AI blurs the line between human and machine behaviour, it becomes challenging to allocate responsibility and determine insurability. This has sparked a debate on the need for separate AI insurance to cover innovation and development risks.

Liability Issues

From a German legal perspective, liability for AI damages can generally be established through contract law, product liability claims and tort liability. However, each approach presents difficulties. Proving breach of duty and causality in contract law can be challenging, especially when the inner workings of an AI system are not accessible. Product liability claims face difficulties due to the complexity and opacity of AI systems, including establishing a defect, damage and causal link. Tort liability is hindered by the lack of regulatory rules for AI safety, complexities in proving fault and causation, and challenges in assessing non-human AI systems.

In conclusion, German law is not adequately equipped to address the unique challenges of AI liability. However, the EU has recognised these limitations and is working on creating a harmonised legal framework to address AI-related challenges in product liability and tort law.

Status Quo

Although there are no local governmental initiatives addressing the issues related to AI liability, the EC has taken steps to regulate AI. In February 2020, the EC published a White Paper and a report on AI safety and liability, which set the stage for updates to product liability legislation in the EU and Germany.

EU Initiatives

The proposed updates include revising the current Product Liability Directive and introducing a new AI Liability Directive. The Product Liability Directive maintains strict liability for manufacturers, holding them responsible for harm caused by defective products, including those based on AI. Under the proposed AI Liability Directive, it was intended that victims seeking compensation for damages caused by AI products and services could also rely on fault-based tort liability regimes in EU member states. However, in a rare move to reduce regulatory density in the EU, the EC withdrew the AI Liability Directive in February 2025. This decision has sparked controversy, with critics arguing that it undermines adequate protection for victims of AI-related harm. It remains to be seen whether the Commission will introduce an updated proposal, though this seems unlikely in the current climate.

The key changes in the revised Product Liability Directive concern the burden of proof and disclosure obligations. They aim to address information asymmetries between victims and those responsible for AI-related harm. The revised Product Liability Directive empowers courts to order potential tortfeasors to disclose relevant evidence and eases the burden of proof for claimants. Rebuttable presumptions of defectiveness and causation are also provided for to streamline the process of proving liability in product-related cases.

Impact

Although the revised Product Liability Directive clarifies certain aspects of liability for AI-based products, the withdrawal of the AI Liability Directive moderates its overall impact on AI liability. With no dedicated EU-level rules specific to AI fault-based claims, supply chain actors remain subject primarily to the existing strict liability approach under the revised Product Liability Directive and relevant national legislation. While the Directive’s enhanced disclosure requirements and updated burden-of-proof rules will help address information asymmetries in product liability cases, the broader overhaul of AI liability originally envisioned under the proposed AI Liability Directive is no longer in view. As a result, the liability framework for AI-based products in Germany and the EU will see relatively incremental change. Nonetheless, the ongoing policy debate signals that future legislative action at either the national or EU level – especially if AI-related harm becomes more prevalent – cannot be ruled out.

Scope

Bias in AI refers to unfair or discriminatory preferences embedded in AI systems, leading to unequal treatment based on characteristics such as race or gender. The EU AI Act, along with the GDPR, addresses bias in high-risk AI systems and requires controllers to mitigate these risks. Currently, best practices for addressing bias in AI are limited and industry efforts in Germany are insufficient.

Bias in AI

Managing the risk of biased outcomes in AI systems requires a tailored approach, considering the specific domain and context. Trade-offs must be made in choosing safeguards for different characteristics and groups. Documentation and justification of the chosen approach ‒ considering privacy, fairness and the application’s context – ensure accountability for AI risk management decisions.

Examples and Issues

Two areas where bias poses significant risks are employment (automated CV pre-selection) and finance (automated investment advice and credit scoring). However, individuals face challenges in proving bias following algorithmic decisions, leading to a lack of case law on compensation claims. Regulatory investigations by German DPAs play a crucial role in identifying bias in AI systems. While enforcement actions are not yet known, German DPAs have expressed their concern regarding bias. There is occasional political movement to revise the General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz, or AGG) to include algorithmic decisions, given their increasing importance for consumers.

The advent of AI has significantly expanded the capabilities and applications of facial recognition and biometrics. The EU AI Act distinguishes between “post” and “live” biometric identification methods ‒ each of which is associated with different levels of risk and regulatory requirements.

Post-Biometric Identification ‒ High-Risk Applications and Regulatory Requirements

Post-biometric identification is classified as a high-risk application under the EU AI Act, requiring a comprehensive set of regulatory requirements to ensure data security and privacy. The only exception to this strict regulation is biometric verification used solely to confirm an individual’s claimed identity.

Live Biometric Identification ‒ Prohibitions and Exceptions

In contrast to the foregoing, live biometric identification faces a general prohibition, especially when applied in real time in publicly accessible spaces for law enforcement purposes. Exceptions to this prohibition are narrowly defined and limited to the following three categories of cases.

  • Search and rescue operations ‒ specifically targeting the search for victims of serious crimes such as abduction, human trafficking and sexual exploitation, as well as locating missing persons.
  • Imminent threats to safety ‒ preventing immediate, significant threats to the safety of individuals or preventing genuine and present or foreseeable terrorist attacks.
  • Criminal investigations ‒ localising or identifying individuals suspected of serious criminal offences that are listed in an annex to the EU AI Act and are punishable by substantial custodial sentences in the respective EU member state.

Liability Across Various Legal Frameworks

The use of facial recognition and biometric data intersects with various legal domains, requiring compliance with the consent rules of each jurisdiction. These include the GDPR in the EU, which treats biometric data as a special category of sensitive data.

Relevance

In Germany, the prohibition of automated decision-making (ADM) under Article 22 of the GDPR is relevant to both predictive and generative AI systems. These legal restrictions aim to address concerns about the potential risks and harmful effects of ADM, particularly in areas that significantly impact individuals’ lives. Inaccuracies and biases in ADM processes can have severe consequences, including unfair discrimination and ethical issues.

Scope

Meaningful human involvement is crucial for excluding the strict requirements of Article 22 of the GDPR, as highlighted in the recent SCHUFA case (CJEU, Case C-634/21). When meaningful human involvement is lacking and decisions will have “legal” or “similarly significant” effects on individuals (eg, in contractual or vital areas of life such as work, finance and living conditions), data controllers can only rely on limited justifications such as consent and contractual necessity. They must provide comprehensive information about the decision-making process’s underlying logic and implement individual algorithmic due process, allowing individuals to express their views and to be heard.

Impact

Non-compliance with the GDPR can lead to significant penalties, and there are also reputational risks to consider. If customers perceive ADM as unfair, biased or lacking transparency, it can undermine their trust in the company.

The EU AI Act sets out a variety of rules targeting different levels of risk and transparency requirements associated with AI systems.

AI Systems With So-Called Specific Transparency Requirements

The EU AI Act contains the following rules for AI systems with specific transparency requirements.

  • Direct interaction with natural persons ‒ AI systems designed for direct interaction with individuals must inform users of their non-human nature unless it is evident under normal circumstances. This is to ensure that individuals are aware when they are interacting with an AI, thereby fostering an environment of informed consent and trust.
  • Synthetic content disclosure ‒ providers of AI systems (particularly those generating synthetic media such as audio, images, videos or text) are mandated to clearly mark the outputs in a machine-readable manner; a purely illustrative sketch of such a marking follows after this list. This ensures that consumers can easily distinguish between content created by humans and that generated by AI, helping prevent misinformation and maintaining content integrity.
  • Deepfake regulation ‒ there is a specific mandate for the disclosure of AI-generated or manipulated content, especially in cases of deepfakes involving images, audio or video. This regulation aims to combat the spread of misleading or harmful media by ensuring that individuals can recognise and understand when content has been artificially altered.
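
As an illustration of what a “machine-readable” marking could look like, the sketch below writes a small JSON provenance file alongside a generated asset. The field names and the sidecar approach are assumptions made for the example only; the EU AI Act does not prescribe this format, and harmonised standards and emerging provenance specifications (such as C2PA) are expected to determine what providers must actually implement.

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def write_provenance_sidecar(generated_file: str, model_name: str) -> Path:
        """Write a machine-readable JSON sidecar marking a file as AI-generated.

        The field names below are illustrative only and are not mandated by the EU AI Act.
        """
        metadata = {
            "ai_generated": True,
            "generator_model": model_name,
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
        sidecar = Path(generated_file + ".provenance.json")
        sidecar.write_text(json.dumps(metadata, indent=2))
        return sidecar

    # Example: mark a (hypothetical) generated marketing image as synthetic content.
    print(write_provenance_sidecar("campaign_image.png", "example-image-model"))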

Further Transparency Obligations (Non-exhaustive)

The EU AI Act also contains further transparency obligations, as follows.

  • High-risk systems ‒ regulations extend to high-risk AI systems, emphasising transparency from providers to downstream deployers. This encompasses clear communication about the capabilities, limitations and proper usage of AI systems to ensure safe and ethical deployment.
  • General-purpose AI ‒ for general-purpose AI systems, there are transparency requirements regarding the training processes and data. Understanding how an AI system has been trained and what data was used is crucial for assessing its reliability, biases and potential impacts on consumers and society at large.

Similarities to Traditional SaaS Contracts

Many AI solutions are now being procured as a service (SaaS), which has contractual similarities to traditional cloud service negotiations. This includes areas such as availability and fault resolution (service-level agreements), as well as maintenance and support.

Emerging Challenges in AI Contracts

The integration of AI presents the following unique challenges that have not previously been encountered in cloud negotiations.

  • Custom model development ‒ there are concerns about the provider developing a model with the customer’s data that could be used for other customers. Contracts need to clearly address ownership and usage rights of developed AI models.
  • Quality and reliability of AI outputs ‒ it is crucial to include clauses that guarantee the quality, reproducibility and correctness of AI-generated output. This will ensure that the AI’s performance meets the customer’s business requirements.
  • Transparency in training data and processes ‒ given the problems of the “black box” nature of AI and lack of explainability, there should be transparency around the data used to train the AI and the processes involved. Contracts should require disclosure of training datasets (or at least summaries thereof) and methodologies to ensure compliance and alignment with business values.
  • Indemnification for third-party rights violation ‒ if AI outputs infringe the rights of third parties, contracts currently sometimes include indemnification clauses. These should detail the scope of liability and the safeguards in place should the AI’s output lead to legal challenges.

There is currently no established market standard that addresses these emerging issues. Lawyers will need to develop individual, bespoke solutions for their clients.

AI technologies have a profound impact on work environments, particularly in the area of personnel decisions. They offer advantages in processing large amounts of data quickly for tasks such as pre-selecting job applicants and creating scoring tables for employee dismissals and performance reviews.

Exclusively Automated Decisions

The GDPR restricts exclusively automated decisions in employment relationships. Therefore, decisions with legal implications (eg, hiring, transfers and dismissals) should generally involve human review or decision-making unless the narrow justifications and strong safeguards under Article 22 of the GDPR can be complied with.

Pre-Selection/Support Measures

Pre-selection and support measures utilising AI are permissible but require careful examination as well as understanding of the tools in use. AI can be effectively used in various HR functions, including generating content, automating job descriptions, pre-selections, reference letters, employee chatbots and relocation support.

Risks

There is a notable risk of discriminatory decisions made by AI tools. Employers can be held liable, not only under the AGG, if they inadequately programme AI systems, use flawed data or formulas, or neglect regular quality checks. Liability applies regardless of whether the tools are internal or external and irrespective of technical responsibility for errors or discriminatory practices. Indirect discrimination can occur when seemingly neutral criteria end up favouring certain employee groups or genders in practice.

Co-Determination Rights

Depending on the specific AI tools and set-up, works councils have significant co-determination rights and detailed works agreements must be negotiated with employee representatives. Compliance with these rights and agreements is crucial for the lawful implementation of AI in the workplace.

Evaluation

Performance evaluation using AI tools promises to be more objective and efficient. Manual errors and misconduct can be detected more easily and in an automated manner. There are tools that review performance, analyse individual or group work activities, and check manual or automated data and processes such as (travel) cost reimbursements, among many others.

At the same time, there is a risk of violations of laws, especially if the programming and/or output is inadequate, and an open feedback culture to improve the systems used is advisable.

Monitoring

Monitoring employees is subject to strict conditions based on case law. Generally, total surveillance of employees without cause (including covert video or audio surveillance and site surveillance) is not allowed. Exceptions are limited to specific cases and suspicions. Preventative or support measures are permissible as long as they do not create undue surveillance pressure. However, these principles may conflict with new technologies such as voice-based live evaluation of calls and transcription tool reviews.

On the other hand, employers must establish regular checks and compliance mechanisms to ensure that employees use AI in a safe and compliant way. When processing individual (log) data with AI tools, works councils (where they exist) have significant co-determination rights, and detailed works agreements should be negotiated with the employee representatives. As there is no established case law in this area, it is crucial to put in place reasonable and detailed agreements and guidelines, accompanied by regular checks and training sessions.

Conclusion

In the end, it is the workforce that uses the AI tools in practice. Regular reviews and general monitoring of what is used and how are essential to maintain compliance, and employers should document measures to ensure legal adherence. Communication with employees is crucial to address concerns and provide informed guidance.

Employers must ensure transparency, fulfil information and co-determination obligations, train their employees and mitigate risks such as bias and discrimination or infringement of IP rights, etc.

Today’s digital platforms and their success would be unimaginable without algorithms. Recommendation algorithms play a central role, as users typically engage only with the content shown in their feed.

AI is particularly significant for platforms hosting user-generated content, as legal frameworks and case law may, to some extent, require the use of algorithmic tools to prevent the recurrence of known legal violations on the platform (“notice and stay down”).

Further obligations arise from the European Digital Services Act (DSA). Platforms that allow user-generated content but restrict AI-generated content – such as in-game assets – may be subject to transparency obligations regarding their terms and conditions pursuant to Article 14 of the DSA.

It remains to be seen whether the EC will issue guidance on AI-driven dark patterns, including deepfakes or AI-generated advertisements that subtly manipulate user or consumer decisions (Article 25 DSA). AI-generated advertising must comply with the transparency requirements set out in Article 26 DSA, including clear labelling and disclosure of the main parameters used to determine the targeting and delivery of the advertisement.

Financial services companies increasingly rely on AI to enhance operational efficiency, customer service and risk management.

Regulatory Framework for Outsourcing and Cybersecurity

The outsourcing of IT services in the financial sector is subject to stringent regulations. When an outsourced function is considered a critical or important operation, national and international regulatory frameworks come into play. In Germany, for instance, the Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, or BaFin) sets national standards ‒ whereas at the European level, the European Banking Authority (EBA) provides guidelines. The same regulatory mechanism is employed by the sector-specific cybersecurity regulation in the financial sector on an EU level, the Digital Operational Resilience Act (DORA).

Comparing AI and Cloud Outsourcing

Upcoming AI outsourcing shares several similarities with cloud-based outsourcing, especially given that cloud solutions provide the infrastructure for AI tools at the application level. Contractual implications for the financial services sector concerning AI will be analogous to those for cloud services, addressing aspects such as data security and risk management – details of which are discussed in 12.1 Procurement of AI Technology.

New Regulatory Challenges Posed by AI

AI outsourcing introduces new challenges that future regulations must address, including the following questions.

  • Are existing control mechanisms, such as audits and information sharing, sufficient to ensure an adequate level of security for AI applications?
  • Is there a need for heightened data security standards due to the unique vulnerabilities associated with AI technologies?
  • Do financial institutions need to negotiate specific quality standards to ensure that AI systems meet operational requirements?

Regulatory authorities are expected to issue guidelines for AI outsourcing akin to those established for cloud services.

The use of AI in healthcare raises concerns regarding the sensitivity of the data handled and the potential harm caused by incorrect AI decisions or hallucinations.

EU AI Act

In Annex III, No 5 of the EU AI Act, the following AI systems are classified as “high-risk”:

  • AI systems deciding access to health services;
  • AI systems assessing risk and pricing of health insurances; and
  • AI systems used in emergencies.

Those AI systems must comply with Article 8 et seq of the EU AI Act (eg, with regard to risk management, quality of training data, documentation, transparency, human oversight and cybersecurity).

GDPR

Article 9 of the GDPR also sets strict requirements for processing genetic, biometric and health data, as well as data concerning a person’s sex life or sexual orientation.

Current Legal Landscape

Germany generally follows the SAE classification from Level 0 (no automation) through Level 5 (full automation). Levels 1 and 2, which centre on assisted and partially automated driving functions (eg, adaptive cruise control, lane-keep assist), continue to align with existing German law.

Legislative changes have clarified the status of higher levels of automation: Level 3 (conditional automation) and Level 4 (high automation). A law adopted in 2021, often referred to as the Act on Autonomous Driving or the Autonomous Driving Act, explicitly allows SAE Level 4 vehicles to operate in approved operating areas on public roads without a human driver physically being in the vehicle. However, a remote technical supervisor must be able to deactivate the vehicle remotely and authorise certain manoeuvres.

Autonomous Driving

Level 5 (known as autonomous driving – ie, where there are only passengers and no driver) is not permitted under the current legal framework and remains prohibited under German law. Car owners are strictly liable under the German Road Traffic Act. Beyond this strict liability, however, establishing liability for damage caused by AI remains challenging, as the victim must prove a breach of duty, the resulting damage, and the causal link between the two.

Regulatory developments in this area will primarily occur at the EU level, such as through the Type Approval Framework Regulation. The EU AI Act does not address this issue directly but requires the EC to establish the AI Act’s accountability requirements for vehicles through delegated acts under the Type Approval Framework Regulation. This is expected to introduce comprehensive requirements for autonomous vehicles in the future.

The manufacturing sector in Germany is rapidly adopting AI, with applications in assembly, packaging, customer service and open-source robotics.

Autonomous Mobile and Professional Service Robots

There is a growing market for autonomous mobile robots that can navigate uncontrolled environments and interact with humans. Additionally, AI applications in professional service robots (eg, crop detection and sorting objects) are highly valued.

Regulation

The regulation of these technologies in Germany will be governed by the EU’s Machinery Regulation, which will be effective in 2027. This comprehensive EU regulation aims to provide legal certainty and harmonise health and safety requirements for machinery products (including AI-based machinery) throughout the EU. It focuses on the design, construction and marketing of machinery products in various sectors, including manufacturing.

The use of AI in the professional services sector is governed by a mix of existing regulations and emerging guidelines that address different facets of AI use.

Confidentiality and Data Protection

Confidentiality remains paramount in professional services. The integration of AI must not compromise client confidentiality or data protection standards. Professionals must ensure that AI systems comply with strict data protection regulations, such as the GDPR in the EU, which requires the protection of personal data processed by AI technologies. For further details, please refer to 8.2 Data Protection and Generative AI.

IP Concerns

The use of AI can raise complex IP issues, particularly in relation to the ownership of AI-generated outputs and the use of proprietary datasets to train AI. Professionals need to navigate these IP concerns to avoid infringement risks and ensure that contracts clearly delineate the IP rights associated with AI-generated work. For further details, please refer to 15. Intellectual Property and 15.1 IP and Generative AI.

Regulatory Compliance

Professionals need to ensure that AI applications comply with sector-specific regulations and codes of conduct. This includes adhering to ethical guidelines set by professional bodies to ensure that AI systems are used in a manner consistent with professional ethics and standards. For further details, please refer to 3.7 Proposed AI-Specific Legislation and Regulations.

Possible IP Protection

AI technology itself can be protected under copyright law. It is important to differentiate between the different components of the AI technology, such as the AI training algorithm, the AI model architecture and the training data. The AI model and the AI algorithm may be protected as software under Section 69a of the UrhG. The training data can be protected as a database (Section 87a of the UrhG). However, the training data is typically scraped from third parties, and individual items of training data are often themselves protected by copyright (if they are texts or images, for example). The rights then lie with the third party, so the use must be justified either by the text and data mining exception or by a licence.

The inputs or prompts are often too simple or technically constrained (eg, specifying technical requirements for a picture such as format) to qualify for copyright protection, as they do not meet the originality threshold required under CJEU case law. However, more detailed prompts – where the author exercises creative discretion – may qualify for copyright protection. Additionally, many prompts could be stored in a structured collection and may be eligible for protection under database rights.

In many cases, however, the output will not be protected by intellectual property (IP) rights if the AI deployer does not provide sufficient context to control the generative process. A typical short and simple prompt – as commonly used – is often too vague, allowing the AI to generate a wide range of possible results. As a consequence, the output cannot be attributed to a (human) author and is therefore generally not protected. An exception may arise when the user provides highly specific input that effectively predetermines the shape and content of the output – for example, when “auto-filling” lines of code within an existing codebase that provides a clear contextual framework. Another possible exception is when a pre-existing, protected work is only slightly modified using AI.

Possible IP Infringements

Collecting training data from the internet generally constitutes a reproduction under copyright law. This can be justified under the text and data mining exception in Section 44b of the German Copyright Act (UrhG), provided certain conditions are met. In particular, the data must be lawfully accessible online (eg, freely available) and the rights-holder must not have opted out in an appropriate manner – that is, in a machine-readable format, such as through a robots.txt file or in the website’s company information, using a format detectable via Optical Character Recognition (OCR). If a third party’s copyrighted work is included in an identical or recognisably similar manner in the input and/or output, courts are likely to consider each instance a relevant reproduction or transformation that requires the author’s consent. In the absence of a licence or a statutory exception, such use may constitute copyright infringement. However, private users may be able to rely on the private copying exception under Section 53 of the UrhG.
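By way of illustration only, the following minimal Python sketch (using the standard library’s urllib.robotparser) shows how the operator of an AI crawler might check for a machine-readable opt-out declared in a robots.txt file before collecting material for text and data mining. The crawler name “ExampleAIBot” and the URLs are hypothetical placeholders, and a robots.txt entry is only one of several conceivable opt-out formats.

  from urllib import robotparser

  # Hypothetical example: check a site's robots.txt before collecting
  # material for text and data mining (Section 44b of the UrhG).
  ROBOTS_URL = "https://example.com/robots.txt"  # placeholder URL
  USER_AGENT = "ExampleAIBot"                    # hypothetical AI crawler name

  parser = robotparser.RobotFileParser()
  parser.set_url(ROBOTS_URL)
  parser.read()  # fetches and parses the robots.txt file

  page = "https://example.com/articles/sample-text.html"
  if parser.can_fetch(USER_AGENT, page):
      print("No machine-readable opt-out found for this crawler; "
            "other checks (eg, terms of use) remain advisable.")
  else:
      print("The rights-holder has opted out for this crawler; "
            "the page should not be used for text and data mining.")

The sketch treats a Disallow entry for the crawler’s user agent as an opt-out; in practice, organisations may also need to honour other reservation formats.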

Only natural persons can be authors or inventors; therefore, AI cannot be recognised as an inventor under the German Patent Act (Federal Patent Court, 11 November 2021, 11 W (pat) 5/21). The same applies to copyrights (Section 7 of the German Copyright Act).

According to Section 2 No 1 of the Trade Secrets Protection Act (Gesetz zum Schutz von Geschäftsgeheimnissen, or GeschGehG) a trade secret is information:

  • that is not publicly available and of commercial value;
  • that is protected by appropriate non-disclosure measures; and
  • whose owner has a legitimate interest in non-disclosure.

The first and last requirements may be met, provided that the core AI technology – such as the model and training data – is kept confidential (assuming it is not an open-source model).

Appropriate safeguards (second requirement) may include encryption and non-disclosure agreements with contractual penalties in cases of breach. Unauthorised use of a trade secret can also constitute a criminal offence under Section 23 of the GeschGehG. Therefore, trade secret law offers a viable and effective legal framework for protecting AI systems.

There is broad consensus in the literature that AI output is not protected when the user provides little context (eg, only a short prompt) and the AI has significant creative leeway in generating the output. The situation may be different, however, when the human provides substantial context – for example, by having the AI complete a line of code or proofread a text. In such cases, the AI merely modifies or supplements an existing work, which generally retains its protection.

(See 8.2 Data Protection and Generative AI.)

It also remains to be seen whether courts will marginally lower the requirements so that, even with a slightly more detailed prompt and despite the leeway retained by the AI tool, the output will qualify for copyright protection where its defining, copyright-relevant features were already recognisably laid out in the input.

No information is available on this topic.

Acqui-Hires and Mergers

German competition regulators are alert to big tech “killer acquisitions” of AI start-ups that could stifle innovation. Merger control rules were adjusted to catch such deals even if targets have low turnover (eg, via a EUR400 million transaction-value threshold introduced in 2017). In November 2023, Germany’s 11th Competition Act amendment went further: after a sector inquiry, the Bundeskartellamt can now require notification of any future acquisitions in that sector regardless of size (new Section 32f GWB). This tool lets enforcers scrutinise acqui-hire deals in nascent AI markets that previously escaped review.

Algorithmic Collusion

Regulators also see risks of price-fixing through AI. The Bundeskartellamt has studied scenarios where pricing algorithms lead to tacit collusion without direct human agreements. While no AI-specific cartel law exists, existing rules on concerted practices apply equally to collusion via algorithms. The authority has signalled that companies “can’t escape responsibility for collusion by hiding behind a computer program” and is even developing AI-powered screening tools to detect suspicious pricing patterns.

Data-Driven Market Power

Germany has been a frontrunner in tackling digital market dominance built on data – a key concern in AI markets. The 10th GWB amendment (2021) introduced Section 19a GWB, allowing early oversight of large digital firms deemed to have “paramount significance” across markets. Using these powers, the Bundeskartellamt secured commitments from Google in 2023 to give users more control over data-combining across services, curbing Google’s data-fuelled competitive advantage. Additionally, dominant firms leveraging AI might face abuse-of-dominance scrutiny – for instance, if they use algorithms to self-preference their services or deny rivals access to crucial data.

Germany’s IT Security Act 2.0 (2021) already mandates robust protections for critical infrastructure, covering systems utilising AI. Implementation of the EU’s NIS2 Directive (due from October 2024) remains pending, as the related Cybersecurity Strengthening Act draft stalled amidst political disagreement. Despite delays, current laws oblige organisations using AI-based systems to implement rigorous cybersecurity measures and incident reporting. The Federal Office for Information Security (BSI) has issued an AI Cloud Services Compliance Catalogue and highlights increasing risks posed by generative AI, such as facilitating sophisticated cyberattacks, phishing and malware creation. Agencies recommend enhanced cybersecurity practices and defensive AI solutions, affirming that existing criminal laws apply fully to AI-driven cybercrime.

In Germany, ESG reporting obligations are expanding under the EU’s Corporate Sustainability Reporting Directive (CSRD) from 2025, covering around 13,000 companies. Using AI tools (eg, LLMs and analytics) to fulfil ESG reporting is permitted and increasingly common, enhancing efficiency, accuracy and compliance. To address AI’s environmental impacts, Germany enacted the Energy Efficiency Act (EnEfG) (2023), requiring data centres (key AI infrastructure) to use at least 50% renewable energy by 2024 and 100% by 2027, alongside efficiency improvements. Additionally, Germany’s AI strategy promotes sustainable AI practices, aligning innovation with climate and ESG goals, emphasising human rights and ethical AI governance.

Implementing specific AI governance and compliance best practices requires addressing key issues to ensure effectiveness, manageability and proportionality for businesses. The following steps have proven effective for organisations in practice.

  • Identify regulatory requirements for specific use cases ‒ AI compliance is always use case-specific and different regulations may apply to different use cases. One size does not fit all.
  • Conduct a risk assessment ‒ start by conducting a thorough risk assessment to identify potential risks and challenges associated with AI implementation in the specific business context, based on the applicable above-mentioned legal requirements.
  • Prioritise explainability and transparency ‒ focus on implementing AI systems that are explainable and transparent. This will foster trust, facilitate audits, and help mitigate potential risks associated with a lack of transparency.
  • Invest in data quality and bias mitigation ‒ ensure the quality, accuracy and representativeness of data used to train AI models. Implement processes to identify and address biases in training data and algorithms. Regularly monitor and evaluate the performance of AI systems to identify and address any biases or unfair outcomes (see the illustrative sketch after this list).
  • Develop an AI governance framework ‒ establish a comprehensive AI governance framework that outlines the policies, procedures and accountability mechanisms for developing, deploying and monitoring AI. This framework should cover data governance, model development, algorithmic transparency, bias mitigation, and ongoing evaluation to ensure responsible and ethical AI practices.
  • Continually learn and adapt ‒ stay on top of the evolving landscape of AI technologies and best practices. Foster a culture of continuous learning and improvement within the organisation. Stay informed of emerging regulatory developments and adapt the organisation’s AI practices accordingly.
  • Collaboration and knowledge-sharing ‒ engage in collaboration and knowledge-sharing with industry peers, academia and regulators. This collaborative approach can help shape industry standards and contribute to the development of effective and proportionate AI regulation.
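By way of illustration, the following minimal Python sketch shows one simple way the monitoring step above could be operationalised: comparing selection rates between two groups as a rough indicator of potentially biased outcomes. The decision data, the group definitions and the tolerance threshold are hypothetical assumptions and would need to be defined by each organisation (and, where relevant, agreed with the works council).

  # Illustrative only: compare selection rates between two groups as one
  # simple indicator of potentially biased outcomes of an AI system.

  def selection_rate(outcomes):
      """Share of favourable decisions (1 = favourable, 0 = unfavourable)."""
      return sum(outcomes) / len(outcomes) if outcomes else 0.0

  # Hypothetical decision logs, grouped by a protected characteristic.
  group_a = [1, 0, 1, 1, 0, 1]
  group_b = [0, 0, 1, 0, 0, 1]

  gap = abs(selection_rate(group_a) - selection_rate(group_b))
  THRESHOLD = 0.2  # illustrative tolerance; to be set by the organisation

  if gap > THRESHOLD:
      print(f"Selection-rate gap of {gap:.2f} exceeds the tolerance - review the model and training data.")
  else:
      print(f"Selection-rate gap of {gap:.2f} is within the tolerance - continue regular monitoring.")

A single metric of this kind is no substitute for a broader fairness assessment; it merely illustrates how regular, documented checks can be built into day-to-day operations.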
Bird & Bird

Carl-Theodor-Strasse 6
Düsseldorf
40213
Germany

+49 0 211 2005 6000

+49 0 211 2005 6011

duesseldorf@twobirds.com
www.twobirds.com
