Artificial Intelligence 2025 Comparisons

Last Updated May 22, 2025

Contributed By Jeantet

Law and Practice

Authors



Jeantet is an independent law firm founded in 1924, celebrating a century of excellence in business law. Operating in over 150 jurisdictions, the firm is renowned for its expertise and forward-looking approach. In 2025, Jeantet further strengthened its IP tech and data practice by integrating the distinguished Aramis team, co-led by Benjamin May and David Roche, enhancing its position in intellectual property and technology law. The practice, now composed of twelve lawyers, including three partners, is co-headed by Benjamin May and Frédéric Sardain. It offers a comprehensive range of services in intellectual property, technology law, data protection, and artificial intelligence. Providing both strategic advice and litigation support, the team navigates regulatory challenges and contractual matters while regularly publishing on AI-related legal developments.

In France, there are no laws specifically dedicated to AI or machine learning, but several European regulations govern its use in areas like risks and governance, privacy, intellectual property, and liability.

  • The AI Act, which came into force on 1 August 2024, categorises AI systems based on their risk level, with strict requirements for high-risk systems. Prohibitions on unacceptable-risk practices have applied since 2 February 2025, obligations for general-purpose AI models apply from 2 August 2025, and most remaining obligations, including those for high-risk systems, apply from 2 August 2026.
  • The General Data Protection Regulation (GDPR) governs personal data processing, including for AI models, alongside France’s Loi Informatique et Libertés.
  • The Data Act (EU) 2023/2854 regulates data sharing, especially for IoT, impacting AI systems dependent on data.
  • The NIS 2 Directive (EU) 2022/2555, enhancing cybersecurity across the EU, sets standards that impact AI systems’ resilience and security.
  • DSA (EU) 2022/2065 regulates digital platforms, including those powered by AI.
  • The Product Liability Directive (PLD), revised in 2024, introduces significant changes to liability rules. It must be transposed into French law by 9 December 2026, at which point it will extend product liability to software, including AI systems, in order to address AI-related harms.
  • Intellectual property laws in the French Intellectual Property Code apply to AI, addressing ownership of AI-generated content and the protection of algorithms and patents.

Since the launch of France’s National Strategy for Artificial Intelligence (SNIA) in 2018, the country has rapidly emerged as a global AI leader.

By 2023, France was Europe’s top destination for AI investments, attracting global tech giants like Google, Meta, OpenAI, and Microsoft to set up research centres.

The AI ecosystem includes over 1,000 start-ups, 4,000 researchers, and a growing pool of AI-trained graduates. French companies such as Aircall, Alan, Dataiku, and Mistral AI are driving innovation across sectors like healthcare, manufacturing, and logistics, with Mistral AI in particular raising significant capital.

Generative AI (eg, ChatGPT) is widely adopted, especially for customer interactions and content generation, while predictive AI is optimising operations and decision-making. At the February 2025 AI Summit in Paris, Air France highlighted its use of AI-driven predictive maintenance through its Prognos system, which has been anticipating technical failures and optimising aircraft maintenance since the 2000s, with more recent applications in baggage handling, eco-piloting, and resource management.

Adoption is particularly strong among mid-sized firms, though SMEs are also increasingly integrating AI.

Since its launch in 2019, the Jean Zay supercomputer has been at the forefront of high-performance computing for research, undergoing several upgrades to expand its processing power. Most recently, a EUR40 million investment under the France 2030 programme is set to further enhance its capabilities.

Building on this momentum, France is securing new partnerships to strengthen its AI computing infrastructure. Among them, the government has signed an agreement with UK-based Fluidstack to develop one of the world’s largest low-carbon supercomputers. Backed by EUR10 billion, the project will deploy 500,000 GPUs by 2026 and reach one gigawatt of capacity by 2028, powered by France’s nuclear energy.

France is actively advancing AI through its national strategy, integrated into the broader “France 2030” programme. AI is a key priority for the country’s future, with the government heavily investing to position France as a global leader in AI. This strategy builds on the 2018 Villani report, which recommended actions across research, data governance, talent development, and ethics.

The roadmap is structured in three key phases:

  • Phase 1 (2018–2022) focused on strengthening research and infrastructure: the government invested EUR1.5 billion to enhance AI research, create AI institutes, and expand high-performance computing facilities such as the Jean Zay supercomputer.
  • Phase 2 (2022–2025) prioritises AI adoption and talent development, allocating EUR560 million for AI education and helping 400 SMEs implement AI technologies. The focus is on sectors like healthcare, energy, transportation, and emerging technologies such as generative and trusted AI.
  • Phase 3 (2025 onwards) includes a EUR2.5 billion investment, alongside a EUR109 billion international funding initiative announced by President Macron during the Paris AI Summit.

In addition to these investments, the French government is supporting AI adoption through public-private partnerships, facilitating public sector experimentation, such as the collaboration between France Travail and Mistral AI, and increasing the public procurement threshold to encourage AI innovators. The government is also focusing on developing 35 key sites for data centres to support AI infrastructure and innovation.

The regulation of artificial intelligence in France is primarily shaped at the European level rather than through national legislative initiatives, in pursuit of regulatory harmonisation across the European Union (see 1.1 General Legal Background). This approach seeks to foster innovation while protecting citizens’ fundamental rights within the EU.

No AI-specific law is currently in force in France.

Pursuant to the “Law for a Digital Republic” of 7 October 2016, certain general principles regarding the rights of individuals subject to individual decisions based on algorithmic processing, such as transparency, are already in force. This legislation applies to decisions made by the French administration. Any individual must be explicitly informed that an individual decision will be made on the basis of algorithmic processing, and the administration must, upon request, communicate to the individual the rules defining this processing and the main characteristics of its implementation.

These principles are applicable to AI-driven tools, to some extent.

French governmental bodies have issued several recommendations and guidelines to support the ethical deployment of AI, with a focus on best practices.

  • The 2018 Villani Report: This was the first comprehensive initiative outlining France’s approach to AI, serving as a roadmap for AI development and deployment (see 2.2 Involvement of Governments in AI Innovation).
  • March 2022: The French Council of State Report emphasised “trustworthy public AI” principles like transparency, human supervision, and sustainability.
  • February 2024: The French National Assembly Report addressed generative AI challenges, recommending expanded class actions and specialised AI litigation jurisdictions.
  • June 2024: The French Competition Authority opinion highlighted competitive risks posed by tech giants in generative AI and called for increased regulatory frameworks, better access to computing power, and transparency (see 16.1 Emerging Antitrust Issues in AI).
  • 2024–2025: The CNIL Guidelines for AI providers relate to applying GDPR principles to AI system design, deployment, and fine-tuning (see 3.6 Data, Information or Content Laws).
  • February 2025: The INRIA and Ministry of Ecological Transition report stresses AI’s role in ecological transition but warns of its environmental impact, urging the development of energy-efficient, sustainable AI tools (see 18.1 ESG Dimensions of AI).

France’s approach to AI regulation is largely shaped by EU regulations (see 1.1 General Legal Background).

France, which has been supporting European-level AI regulation since 2018, has recently pushed back on some aspects, particularly around generative AI. In 2023, France, Italy, and Germany opposed including foundation models in the AI Act, preferring a more gradual focus on AI applications rather than regulating the underlying technology itself. This shift in stance aligns with the interests of AI innovators like Mistral AI.

Furthermore, the European Commission’s decision to remove the AI liability directive from its 2025 work programme highlights a growing emphasis on prioritising innovation over stringent regulation.

The EU AI Act will take precedence over national laws, replacing any potential conflicting French regulation. However, differences may arise in implementation due to local policy preferences.

France has recently favoured self-regulation and a gradual approach, especially regarding generative AI, rather than strict, uniform regulations (see 3.4.1 Jurisdictional Commonalities).

There is no applicable information in this jurisdiction.

To date, French law has not been substantially amended, whether as regards data protection or information and content rules, to foster AI technology.

The French Data Protection Authority, the CNIL, published guidelines throughout 2024 and 2025 to help AI developers comply with data protection laws (see 5.2 Regulatory Directives).

Additionally, Law Proposal No 1630, presented in 2023, seeks to clarify IP rights for AI-generated works. It proposes that ownership would go to the human authors of the works that contributed to the AI creation and mandates labelling AI-generated works. It also introduces a taxation system benefiting collective management organisations. However, the proposal has faced criticism for its practicality and technical challenges and has not yet been discussed in Parliament.

In France, AI legislation is largely shaped by EU initiatives, particularly the EU AI Act and the revised PLD.

Rather than pursuing national-level laws, France focuses on aligning with these EU frameworks to foster innovation while addressing risks.

The EU AI Act (fully applicable by August 2026) categorises AI systems by risk and imposes obligations for transparency and ethics, targeting high-risk AI applications. It aims to prevent harms such as discrimination and bias, with some provisions already applicable in 2025.

The revised PLD focuses on product liability for AI technologies, clarifying manufacturers’ responsibilities and providing avenues for redress in case of AI-related harm.

To date, French courts have not had the occasion to deal with cases involving AI systems. There are various reasons for the lack of court rulings. For example, the anticipation of EU legislation and the scarcity of specific legislation might have created a situation where there have not been any questions on the interpretation of law that would require settlement in court. Also, disputes settled or resolved through alternative dispute resolution mechanisms would not have generated court rulings.

However, on 12 March 2025, French organisations representing publishers and authors announced legal proceedings against Meta, alleging large-scale copyright infringement through the unauthorised use of their works to train generative AI models. The lawsuit, brought before the Paris Judicial Court by the SNE, SGDL, and SNAC, marks the first case in France addressing AI-driven copyright violations. Meta, which leveraged a dataset containing 200,000 books – including French titles – to enhance its Llama model, claims “fair use”, while plaintiffs argue AI development should not violate creators’ rights.

Regulatory agencies in France operate independently from the government, overseeing specific sectors with enforcement powers, including sanctions and soft law guidance.

There is as yet no formally designated AI regulator, but the CNIL is expected to be appointed as France’s AI Act market surveillance authority under Article 70 of the AI Act, expanding its role beyond data protection. It has already issued significant guidance on AI, established an AI division, and received support from the Council of State and the National Assembly for this designation.

Several other existing agencies are addressing AI-related issues within their respective areas of expertise. For example, the Defender of Rights focuses on algorithmic bias within the human rights sector and the French Competition Authority has warned of anti-competitive risks in generative AI, advocating for improved regulation and fair access to computing resources. (See 16.1 Emerging Antitrust Issues in AI).

Additionally, at the February 2025 AI Summit in Paris, the French government announced the establishment of INESIA, an AI Safety Institute focused on systemic risks, model reliability and regulatory support. Although not a regulatory agency, and without the need to establish a new legal structure, it will co-ordinate the national stakeholders involved in AI evaluation and security, including the National Agency for the Security of Information Systems (ANSSI), the National Institute for Research in Digital Science and Technology (Inria), the National Laboratory for Metrology and Testing (LNE), and the Digital Regulatory Expertise Hub (PEReN).

France has yet to implement binding AI-specific directives, relying instead on soft law through agency recommendations.

The CNIL leads AI regulation, focusing on GDPR compliance and data protection. Since May 2023, it has issued 12 practical guides and, in February 2025, released two recommendations confirming GDPR’s adequacy for AI. These recommendations propose practical solutions for informing individuals and enabling their rights, especially when personal data is used to train AI models.

Moreover, the CSPLA (French Supreme Council for Literary and Artistic Property) issued a report in December 2024 on AI Act implementation for rights-holders. It addresses Article 53’s copyright compliance requirements, proposing a data summary model balancing transparency and business confidentiality while aligning with DSM Directive “opt-out” clauses.

The OPECST (Parliamentary Office for the Evaluation of Scientific and Technological Choices) November 2024 report, requested by the Senate, assesses AI advancements, particularly generative AI. It criticises the AI Act as complex and unfriendly to innovation. Amongst its 18 recommendations, it calls for EU-led global AI governance and adapting copyright legislation.

At this stage no major enforcement actions have been taken by regulatory agencies in France specifically targeting AI-related violations but there have been notable regulatory decisions involving AI technologies.

A French Competition Authority decision dated 20 March 2024 notably fined Google EUR250 million for failing to comply with commitments related to press publishers’ neighbouring rights. The decision cited Google’s AI system, Bard, and criticised Google for not providing a technical solution allowing publishers and press agencies to opt out of Bard’s use of their content while still displaying protected content. This was considered an unfair practice, hindering negotiation efforts for fair remuneration with right holders.

Additionally, on 31 December 2024, the CNIL imposed its first sanction related to artificial intelligence on a company operating a chatbot utilising AI. The fine was relatively modest (EUR5,000) but it signals the CNIL’s increasing involvement in addressing AI-related issues and marks a significant step in regulating AI technologies in France.

Most of the norms and standards in France are predicated upon international or European standards, such as ISO, IEC, CEN and CENELEC standards. Within France, the Association Française de Normalisation (AFNOR) is the national delegation tasked with representing French interests in the formulation of international and European standards.

For more information, see 6.2 International Standard-Setting Bodies.

Under the AI Act, certification bodies are entrusted with certifying high-risk AI systems prior to them being placed on the European market.

AFNOR, in collaboration with other national EU delegations, has undertaken a stakeholder consultation process, engaging with start-ups to draft “operational” certification standards adapted to the realm of AI. To date, several international standards have been promulgated, including ISO/IEC 42001 on AI Management Systems and ISO/IEC 23894, which provides recommendations for AI risk management that may be applied by various types of industries and businesses for the conception and deployment of AI systems.

The most recent reference framework regarding AI, Spec 2314, was published in June 2024. At the request of the Ministry for Ecological Transition and Territorial Cohesion, this free framework outlines calculation methodologies and best practices for measuring the environmental impact of AI.

Governmental authorities in France have extensively incorporated AI into various sectors. For instance, agencies like the Directorate General of Public Finances (DGFiP) have implemented AI projects such as the CFVR initiative to enhance tax control operations, exemplifying the government’s commitment to leveraging AI for administrative efficiency.

Similarly, in 2024 the French Ministry of Justice announced its own AI plan focusing on four priority use cases: interview transcription, research assistance, translation and case summaries. An internal AI solution for the automatic transcription of interviews is currently in development and is expected to be operational in 2025.

The use of AI technologies by law enforcement agencies in France has also sparked debate, particularly over Briefcam, an Israeli video analytics tool with facial recognition capabilities, reportedly employed for years by French police for surveillance purposes without proper declaration or oversight. The Interior Ministry has announced an administrative investigation into the matter, while the CNIL has initiated a control procedure to assess the extent of facial recognition usage by law enforcement (see 7.2 Judicial Decisions).

Considering these developments, the enforcement of the AI Act introduces strict rules on certain AI applications to protect citizens’ rights. Practices such as the untargeted scraping of facial images to build facial recognition databases have been prohibited as a matter of principle since 2 February 2025. Real-time biometric identification systems will only be deployable under strict conditions, including temporal and geographical limitations, and will require judicial or administrative authorisation, primarily for locating missing persons or preventing terrorist attacks.

There are no pending legal actions related to government use of AI, but recent rulings highlight concerns over automated decision-making.

In an 18 November 2024 decision (Case No 472912), the Conseil d’Etat ruled that human verification by sworn officers is required before issuing parking fines, in compliance with the GDPR.

On 5 December 2024, following an investigation by the media outlet Disclose, the CNIL issued a formal notice to the Interior Ministry and six municipalities regarding their use of Briefcam, a video analysis software. The CNIL found that Briefcam’s live facial recognition feature did not comply with legal requirements. The Ministry reported only one instance of its use in a judicial case, but the CNIL has ordered the feature to be disabled or restricted.

AI represents critical technology in the defence sector for France, supporting autonomous navigation, planning, decision support and analysis of massive datasets. Since 2019, the Ministry of Armed Forces has been establishing close ties with the French scientific AI community, funding projects to strengthen national sovereignty.

Under the 2024–2030 military programming law, France has allocated EUR10 billion to AI and algorithm development, aiming to enhance the army’s autonomous data processing capabilities for faster and more precise strategic and tactical decisions.

Although the use of AI in defence and national security is not governed by the AI Act, France applies certain ethical principles in this field. For instance, in 2020 the Ministry of Armed Forces tasked a committee with addressing ethical issues related to the use of AI in the defence sector.

France actively experiments with AI-enabled surveillance devices. For the 2024 Paris Olympics, intelligent cameras were authorised for security purposes on an experimental basis until 31 March 2025, “solely for the purpose of ensuring the security of sports, recreational, or cultural events that, due to their scale of attendance or circumstances, are particularly exposed to risks of terrorist acts or serious harm to individuals” (Law of 19 May 2023, concerning the Olympic and Paralympic Games of 2024). Following positive results, the Paris police prefect intends to extend the use of AI for image analysis in security operations.

Additionally, in May 2024, the French Ministry of Armed Forces launched the “Amiad” initiative, with an overall budget of EUR300 million, to enable France to maintain sovereign control over defence AI and minimise reliance on foreign powers. One of the primary focuses of Amiad is the development of AI language models tailored to military operations, considering the sensitive and strategic nature of the data involved. To support this initiative, Amiad has also contracted the acquisition of Europe’s most powerful classified supercomputer for defence AI, in collaboration with HPE and Orange.

Emerging issues with generative AI include concerns over IP rights, data protection (see 8.2 Data Protection and Generative AI), image rights and fake news.

Image rights, part of privacy law, cover all elements of personality, such as a person’s appearance and voice. Unauthorised deepfakes violate these rights and are criminally sanctioned.

To combat AI-generated deepfakes, France amended Article 226-8 of the Penal Code on 21 May 2024. It criminalises distributing AI-generated content using someone’s image or voice without consent unless the use of AI is clearly indicated. Penalties include up to one year in prison and a EUR15,000 fine, increasing to two years and EUR45,000 for online distribution. Sexual deepfakes (Article 226-8-1) carry even harsher penalties, of up to three years in prison and EUR75,000 in fines.

Generative AI also fuels fake news and scams. Although France has laws against disinformation in electoral campaigns, broader political deepfakes remain unregulated.

Other legal tools, such as copyright, image rights, privacy laws, the DSA, and the AI Act, may help address AI-driven misinformation, but enforcement is challenged by anonymous online actors.

In France, data protection laws – primarily governed by the GDPR and the French Data Protection Act – impose obligations on AI providers and deployers regarding the processing of personal data.

As data controllers, they must be able to respond to data subjects’ requests for rectification or deletion of data, not by removing entire models but by identifying and deleting the specific training data concerned. This requires the provider to be technically capable of identifying the relevant data and metadata relating to the data subject.

Moreover, the CNIL stresses the importance of data minimisation from the AI system’s conception stage. It has issued several thematic fact sheets on this matter, including an updated guide in February 2025, formulating practical recommendations for developers and deployers of AI systems on handling data subject rights.

With regard to generative AI, the CNIL recommends establishing internal processes to query models (ie, using a list of carefully chosen queries) to verify whether any data about an individual may have been memorised. It also highlights the risk of correlation errors, where AI outputs, owing to interactions with external knowledge bases, may falsely suggest to individuals that personal information about them is held. Controllers are encouraged to assess carefully how their AI systems’ interactions with other service providers may affect the handling of a data subject request.
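By way of illustration, such a query-based verification process might be sketched as follows. This is a minimal example assuming a generic query_model callable that wraps the deployed system; the probe templates and field names are illustrative choices, not CNIL-prescribed wording.

```python
"""Minimal sketch of a memorisation probe for a generative model."""
from typing import Callable, List

# Illustrative probe templates; a real list would be curated per use case.
PROBE_TEMPLATES = [
    "Who is {name}?",
    "What is {name}'s email address or phone number?",
    "List any personal details you know about {name}.",
]

def probe_for_personal_data(
    query_model: Callable[[str], str],  # assumption: wraps the deployed model
    name: str,
    identifiers: List[str],             # known identifiers to look for (eg, an email)
) -> List[dict]:
    """Run the curated query list and flag outputs that echo known identifiers."""
    findings = []
    for template in PROBE_TEMPLATES:
        prompt = template.format(name=name)
        output = query_model(prompt)
        hits = [i for i in identifiers if i.lower() in output.lower()]
        if hits:
            findings.append({"prompt": prompt, "matched": hits})
    return findings

# Usage: probe_for_personal_data(my_model, "Jane Doe", ["jane.doe@example.com"])
```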

For AI models trained on web-scraped data, the CNIL recommends implementing technical solutions like “push-back lists” or opt-out mechanisms to facilitate individuals’ right to object.
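A simple filtering step of this kind might look like the sketch below; the one-domain-per-line list format and the source_url record field are assumptions made for illustration, not a prescribed CNIL format.

```python
"""Illustrative "push-back list" filter for web-scraped training data."""
from urllib.parse import urlparse

def load_pushback_list(path: str) -> set[str]:
    """Load opted-out domains, one per line (eg, 'example.com')."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def filter_scraped_records(records: list[dict], opted_out: set[str]) -> list[dict]:
    """Drop any record whose source URL belongs to an opted-out domain."""
    kept = []
    for record in records:
        domain = urlparse(record["source_url"]).netloc.lower()
        blocked = domain in opted_out or any(
            domain.endswith("." + d) for d in opted_out  # cover subdomains too
        )
        if not blocked:
            kept.append(record)
    return kept
```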

For AI systems with continuous learning, the CNIL recommends that controllers implement strict monitoring to anticipate and prevent data drifts.
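As an illustration of such monitoring, the sketch below compares live input distributions against a reference sample using a two-sample Kolmogorov-Smirnov test; the significance threshold is an arbitrary illustrative choice, not a regulatory figure.

```python
"""Minimal data drift monitor for a continuously learning system."""
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray,
                 alpha: float = 0.01) -> dict:
    """Flag feature columns whose live distribution has drifted from the
    reference (training-time) distribution, per a two-sample KS test."""
    drifted = {}
    for col in range(reference.shape[1]):
        statistic, p_value = ks_2samp(reference[:, col], live[:, col])
        if p_value < alpha:  # distribution shift is statistically significant
            drifted[col] = {"statistic": float(statistic), "p_value": float(p_value)}
    return drifted

# Usage: drift = detect_drift(train_sample, last_week_inputs); alert if non-empty.
```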

Overall, the CNIL highly encourages AI actors to invest in privacy-preserving practices and integrate data protection by design throughout AI system development and deployment.

Use Cases

AI adoption in France’s legal sector is growing, though reliable AI solutions remain limited due to the early stage of training models on dedicated legal databases. Nevertheless, there are many use cases for AI in the legal sector, such as legal research based on natural language (eg, Doctrine, Lefebvre Dalloz and Lexis360), contract analysis (eg, Della AI), predictive analysis identifying the potential outcome based on precedents (eg, Case Law Analytics or Predictice) and e-discovery platforms to analyse a vast amount of documents including contracts during legal due diligence (eg, Relativity).

In 2023, a firm operating in France partnered with Harvey, an AI-powered legal assistant built on OpenAI models.

Ethical Concerns

A LexisNexis survey found that 85% of legal professionals have ethical concerns about AI, particularly its reliability in legal reasoning. French lawyers, as part of their ethical obligations, must have the required expertise to provide informed advice to clients; lacking competence may trigger a lawyer’s professional responsibility. In this context, AI-assisted legal advice must be supervised by a qualified lawyer.

To address these challenges, the government published the “Generative Artificial Intelligence and Legal Professions: Act Rather Than Suffer” report in December 2024, providing recommendations to ensure AI aligns with ethical and professional standards.

In France, in the absence of specific AI liability regulation, the ordinary civil liability regime (droit commun) remains applicable to AI-related damage, relying on the principles of fault, causation and harm.

Current legal frameworks already provide some mechanisms to address liability for damages caused by AI systems such as intellectual property violations, defamation and data protection breaches arising from AI-generated content.

French law also prohibits clauses limiting liability for bodily injury – a key consideration given AI’s potential physical risks. However, determining responsibility for AI-caused harm remains complex due to the involvement of multiple stakeholders and difficulties in establishing a clear causal link.

The revised Product Liability Directive seeks to address these challenges by clarifying liability rules for AI systems (see 10.2 Regulatory).

Furthermore, without greater clarity on exposure to risks, AI-related insurance is expected to face similar difficulties to those encountered in the emerging cyber-risk market, necessitating careful definition of exclusions and coverage limitations.

AI-related liability is increasingly addressed at the European level, with the EU AI Act being a key framework for regulating AI liability in Europe.

Additionally, the revised PLD, adopted by the European Parliament on 13 March 2024, includes AI systems within the scope of “products” and eases the burden of proof. It removes the EUR500 threshold, introduces disclosure-of-evidence mechanisms, and allows compensation for the loss of data used for non-professional purposes. Notably, it requires an EU-based business to bear liability for damage, even where products are purchased online from outside the EU. The revised PLD must be implemented into national law by December 2026.

Initially proposed in 2022, the AI Liability Directive (AILD) aimed to streamline claims for AI-related damage by presuming causality in specific cases and easing the burden of proof. However, on 11 February 2025, the European Commission withdrew the AILD from its 2025 work programme due to dwindling support and concerns over its impact on AI innovation. While critics argue this creates legal uncertainty and favours Big Tech, others see it as a step toward more flexible AI regulation.

The growing reliance on algorithms and machine learning in decision-making processes has raised significant concerns regarding algorithmic bias. Institutions like the Defender of Rights, through its recommendations “Fighting Against Discrimination Produced by Algorithms and AI”, and the CNIL, in its 2025-2028 strategic plan, have emphasised the need for ethical AI to respect individual rights. The latter aims to mitigate AI-related risks by enhancing knowledge-sharing in the AI ecosystem, clarifying the legal framework, and strengthening regulation of algorithmic bias.

Regarding administrative decision-making, Article L.311-3-1 of the Code of Relations between the Public and the Administration (CRPA) requires that individual decisions based on algorithmic processing must be explicitly disclosed to the individual concerned. This includes information on the algorithm’s role, data sources, parameters and operational logic. The rules defining this processing and its main characteristics must be communicated by the administration to the individual upon request, as stipulated by the Law for a Digital Republic of 7 October 2016.

Furthermore, a legislative proposal of 6 December 2023 aims to address discrimination through individual and statistical testing practices, and would create a committee of stakeholders to conduct prospective studies on the risks associated with AI-based algorithms, so as to ensure their fair and non-discriminatory use.

On the technical front, researchers from Inria (National Institute for Research in Digital Science and Technology) have developed “FairGrad”, an open-source software designed to correct algorithmic bias by prioritising data from disadvantaged groups in machine learning processes.
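The underlying idea, giving more weight during training to groups that currently suffer higher loss, can be sketched as follows. This is a simplified illustration in the spirit of FairGrad, not the Inria implementation; the model choice (logistic regression) and all rates are assumptions.

```python
"""Simplified group-reweighted gradient descent, FairGrad-style sketch."""
import numpy as np

def fair_logistic_sgd(X, y, groups, epochs=200, lr=0.1, eta=0.05):
    """X: (n, d) features; y: (n,) 0/1 labels; groups: (n,) group ids (ndarray)."""
    w = np.zeros(X.shape[1])
    group_ids = np.unique(groups)
    group_w = {g: 1.0 for g in group_ids}           # per-group sample weights
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))          # sigmoid predictions
        sample_w = np.array([group_w[g] for g in groups])
        grad = X.T @ (sample_w * (p - y)) / len(y)  # weighted logistic gradient
        w -= lr * grad
        # Raise the weight of groups whose average loss exceeds the mean,
        # ie, prioritise data from currently disadvantaged groups.
        losses = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        for g in group_ids:
            gap = losses[groups == g].mean() - losses.mean()
            group_w[g] = max(0.1, group_w[g] + eta * gap)
    return w
```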

Additionally, initiatives like Confiance.ai and Positive AI have developed ethical guidelines, technical standards and best practices to ensure transparency, fairness, and trust in AI systems.

In France, the use of facial recognition and biometrics has predominantly sparked legal concerns in law enforcement (see 7.1 Government Use of AI and 7.3 National Security). To date, biometric recognition techniques are not governed by a specific legal framework; they fall under data protection law, which requires the individual’s prior consent unless a specific exception applies.

To fill this gap, a 2023 law proposal aims to regulate the use of biometrics and facial recognition in public spaces, particularly by public authorities, to prevent mass surveillance. Under this proposal, biometric identification in public spaces would require the individual’s prior consent, and any categorisation or rating of individuals would be prohibited.

However, limited exceptions are proposed, including access control at public events exposed to terrorist threats, post-hoc biometric data processing for judicial or intelligence purposes, and real-time biometric processing on an experimental basis to combat terrorism and serious crime.

Other administrative bodies, such as the CNCDH (Commission Nationale Consultative des Droits de l’Homme), have also issued recommendations regarding the use of biometrics and facial recognition in video surveillance, urging public authorities to take action to regulate the use of such technologies.

Beyond public use, biometrics are increasingly employed by companies for employee identification. Such applications must be carefully assessed to ensure compliance with data protection and employment laws.

Automated decision-making in France is governed by several legal frameworks, with key concerns around transparency, non-discrimination, and the ability to challenge decisions.

Under Article 22 of the GDPR and Article 47 of the French Data Protection Law, individuals are protected against fully automated decisions, ensuring transparency and the right to request human review. In sectors like taxation, automated systems are used, but they must meet strict legal standards to ensure fairness.

The CNIL oversees compliance, with penalties for non-compliance, especially regarding informing individuals about automated decision-making. Discriminatory outcomes can result in criminal charges.

In banking, AI-driven decision-making is used for credit scoring and is considered high-risk under the AI Act. The ACPR (Autorité de Contrôle Prudentiel et de Résolution) announced in July 2024 that it is developing an audit methodology for AI systems used by banks and insurers.

Use of Chatbots

Advancements in generative and predictive AI have made it increasingly difficult for individuals to discern when they are interacting with AI, particularly with the widespread use of chatbots. These conversational agents, deployed by both private and public entities, offer round-the-clock assistance to users, necessitating clear transparency obligations, as emphasised in the 2018 Villani report and in international principles such as those of the G20, and now formalised in the AI Act.

On 11 March 2025, the European Commission released the third version of the Code of Practice for General-Purpose AI, focusing on transparency under Article 53 of the AI Act. Signatories must maintain model documentation for ten years, disclose contact details, and ensure the quality and security of the information, promoting transparency by sharing some data publicly.

Additionally, data protection regulations require that individuals be informed when chatbots process their personal data. Chatbots must be covered in privacy policies and governed by distinct terms of use.

Ethical Concerns

The National Consultative Ethics Committee (CCNE) in France has examined the ethical risks of chatbots, stressing the risk of users anthropomorphising bots. This aligns with Article 50 of the AI Act, which requires chatbot providers to disclose their AI nature to users, except for systems authorised for crime prevention and detection purposes.

The CCNE also advocates for ethically designed chatbots, ensuring traceable responses and compliance with data protection laws, emphasising transparency and user awareness to mitigate privacy risks and manipulation.

Nudge

The AI Act prohibits the use of conversational agents deploying manipulative “nudging” techniques that materially distort consumer behaviour. Such techniques may also be considered misleading commercial practices under French consumer law, especially when they conceal crucial information from consumers.

As the future implementation of the AI Act imposes specific obligations on actors within the AI supply chain, including suppliers, users and importers, transactional contracts between customers and AI suppliers need to clearly define roles and responsibilities to address new and unique risks associated with AI technology.

This includes delineating obligations related to data privacy, security, accountability and compliance with regulatory requirements.

Businesses must ensure that contractual agreements reflect these considerations to mitigate potential legal and operational challenges arising from AI deployment.

Technology and Benefits

AI is increasingly used in recruitment to automate CV screening, skills assessments, and initial candidate interactions, improving efficiency and allowing recruiters to focus on human engagement. It can also promote diversity, equity, and inclusion (DEI) by reducing unconscious bias, though this depends on the neutrality of training data.

In France, France Travail has implemented AI tools like Chat FT and Match FT to enhance job matching. Match FT interacts with preselected candidates via SMS, gathering real-time insights on their availability, constraints, and preferences, helping advisers better connect employers with job seekers.

Risks and Liability

Despite these advantages, AI-driven hiring carries risks, particularly regarding bias and discrimination. If training data reflects existing inequalities, AI could reinforce them, potentially leading to violations of anti-discrimination laws.

For terminations, French labour law imposes strict procedural safeguards that AI cannot replace. Economic dismissals, for example, require a prior interview with an employer representative. Delegating such decisions to AI could result in wrongful dismissal claims.

Under the AI Act, recruitment tools like CV analysers and AI-driven promotion systems are classified as high-risk applications (Annex III). Companies using them must comply with the deployer obligations under Article 26, ensuring transparency, human oversight, and risk management to mitigate legal and ethical risks.

Strict Regulation of Monitoring Tools

Under French labour law, employee evaluation and monitoring tools must meet strict conditions. Employers cannot impose permanent surveillance unless justified by the nature of the task and proportionate to the intended purpose.

Before implementing a monitoring tool, employers must consult employee representative bodies if such bodies exist, as these tools impact working conditions. AI-driven monitoring systems are not exempt from these requirements and must respect employee privacy and avoid discriminatory biases.

GDPR and Employer Liability

Employers must comply with GDPR principles, ensuring transparency and lawfulness in data processing. Additionally, under the French Labour Code, employers have a duty to safeguard employees’ physical and mental health. AI-powered monitoring tools could increase workplace stress, potentially harming mental well-being and exposing employers to liability.

For digital platform companies like car services and food delivery, the use of AI has become commonplace, particularly in pricing strategies such as surge pricing, which is based on demand-supply dynamics and is a prime example of AI’s impact on pricing tactics. However, concerns about fairness and transparency have surfaced alongside these innovations.

From a regulatory perspective, provisions such as Article L.221-5 of the French Consumer Code require companies to disclose their use of AI-driven pricing mechanisms, like surge pricing, to consumers. This transparency requirement aims to empower consumers to make informed purchasing decisions.

Furthermore, the “SREN law”, effective from 21 May 2024, aims to secure and regulate the digital space in alignment with the European Digital Services Act. Among other measures, the SREN law imposes a penalty of up to one year in prison and a EUR15,000 fine for disseminating AI-generated visual or audio content featuring a person without their consent, unless it is clearly marked as AI-generated. This increases to two years’ imprisonment and a EUR45,000 fine if the content is shared via an online communication service (Article 15 of the SREN law, amending Article 226-8 of the Penal Code).

AI is reshaping financial services, transforming cost centres into profit centres through automation, cost savings, and revenue generation. Financial institutions leverage AI for customer engagement (chatbots, personalised recommendations) and for fraud detection and risk management, with some institutions pooling data resources to develop collective solutions that improve efficiency and security.

Although AI offers significant advantages, its adoption comes with risks, particularly in relation to data protection, regulatory compliance, and bias in decision-making processes. The use of AI in financial services must be carefully managed to prevent algorithmic discrimination, which could lead to unintended exclusionary practices. Financial institutions must ensure that AI-driven decision-making remains transparent, fair, and compliant with existing regulations.

Recognising the growing impact of AI, the ACPR has set up a task force to assess AI adoption in the banking and insurance sectors, focusing on the opportunities, risks, and regulatory implications. In parallel, the Ministry of Finance and the DGFiP have implemented AI-driven tools for fraud detection, undeclared property identification, property valuation, and optimisation of public spending control.

An October 2024 Court of Auditors’ report on AI within the DGFiP highlights the need for social dialogue and human resources considerations in AI deployment. The report calls for AI performance indicators, assessments of quality and user satisfaction, and the creation of an AI incubator to foster internal expertise. It also emphasises that AI should be applied selectively, focusing on processes where its impact can be most beneficial.

The integration of AI into healthcare systems presents both opportunities and challenges.

Regulatory bodies such as the French National Agency for the Safety of Medicines and Health Products (ANSM) and the French National Authority for Health (HAS) provide guidelines for the use of AI in software as a medical device (SaMD) and related technologies. These regulations address concerns such as the potential treatment risks associated with AI, including hidden bias in training data that can lead to algorithmic biases affecting patient care outcomes. Compliance with strict data protection laws, such as the GDPR, is essential to safeguard patient privacy when using personal health information to train machine learning algorithms.

AI-powered medical decision support systems (MDSS) offer the promise of improving diagnostic accuracy and treatment selection. However, concerns about potential risks, including diagnostic errors and breaches of patient privacy, highlight the need for robust regulatory oversight. The liability landscape surrounding MDSS use encompasses fault-based liability, such as diagnostic errors attributed to healthcare professionals, and product defect liability, which may arise from software malfunctions. To address these concerns, rigorous testing, validation and ongoing monitoring of AI systems are essential to ensure compliance with regulations such as the GDPR, the Medical Device Regulation (MDR) and forthcoming application of legislation like the AI Act and the PLD.

Moreover, the obligation to inform patients about the use of MDSS underscores the importance of transparency and patient autonomy, although questions persist regarding the extent and timing of this obligation.

Despite these challenges, the PLD aims to streamline the burden of proof in AI-related liability cases, demonstrating efforts to adapt regulatory frameworks to the evolving landscape of healthcare AI in France.

At the February 2025 AI Action Summit, Minister of Health and Access to Care, Yannick Neuder, shared a report on AI in healthcare in France, outlining the progress and future perspectives of AI implementation in the French healthcare system and detailing public sector actions. The report highlights actions to assess risks and benefits for patients and professionals, support innovation, and establish a framework for evaluation, regulation, and ethical principles. AI is seen as a strategic tool to enhance patient care, improve efficiency, and support public health policies while reducing the burden on healthcare professionals.

In France, AI integration in autonomous vehicles is primarily governed by the Law of 5 July 1985, also known as the “Badinter Law”. This law addresses civil liability in road accidents involving motor vehicles, and remains applicable even in cases involving autonomous vehicles, as it focuses on the involvement of the motor vehicle itself, regardless of its automation level.

Recent regulatory developments have also addressed the criminal liability of autonomous vehicle manufacturers. Order No 2021-443, issued on 14 April 2021, establishes a framework for holding manufacturers criminally liable in the event of accidents occurring while the vehicle is in automatic mode. This regulation clarifies the responsibilities of manufacturers in the case of accidents during autonomous operation, contributing to the legal structure for autonomous mobility.

For defective autonomous vehicles, manufacturers’ liability can be pursued under the revised Product Liability Directive (PLD), adopted in March 2024. This directive enables manufacturers to be held accountable for defects, ensuring consumer protection and safety standards in autonomous vehicle deployment.

The integration of AI into France’s manufacturing sector is closely linked to the country’s relocalisation efforts, aimed at addressing the challenges of globalisation and reducing technological dependence on China. Government-backed initiatives such as the France 2030 plan promote AI’s role in modernising production methods and enhancing industrial competitiveness.

Despite these opportunities, AI adoption raises concerns regarding workforce displacement due to automation. While AI supports the French government’s economic sovereignty goals, especially for critical products, it also necessitates workforce development strategies. These strategies focus on equipping workers with the skills to collaborate effectively with AI technologies, ensuring their relevance in the evolving industrial landscape.

At this stage, there is no specific legislation for AI in manufacturing, but existing legal frameworks address concerns related to AI integration. Notably, the collection and processing of sensitive data in AI-driven manufacturing must comply with the GDPR. Labour laws also address issues such as job displacement, workplace safety, and equitable treatment in an increasingly automated environment.

Looking ahead, the transposition of the amended Product Liability Directive (PLD), set for December 2026, will introduce specific regulations concerning liability for products integrating AI, ensuring manufacturers adhere to strict safety standards and comprehensive risk assessments.

Regulations on AI in professional services are still evolving, with the AI Act becoming fully applicable in 2026. This legislation imposes obligations on professional users, including those whose employees use AI systems. These obligations include establishing technical and organisational measures aligned with instructions for use, adequate human oversight, impact assessments focusing on fundamental rights, governance and transparency. These requirements are particularly relevant when users deploy AI systems that generate or manipulate text, audio or visual content that may be perceived as authentic by the public.

In France, employers are civilly liable for damages caused by their employees within the scope of their duties, underscoring the need for employers to ensure the accuracy and reliability of AI systems in the workplace. Employers should provide adequate training on AI capabilities, limitations, and risks, while also establishing internal codes of conduct and procedures for human oversight and intervention in AI-driven decisions.

Data security and confidentiality are significant considerations, especially when AI systems rely on sensitive employee data. Employers must implement robust measures to protect against breaches or unauthorised access, ensuring compliance with data protection regulations.

To date, incidents related to AI in professional services have been relatively rare in France. However, on 14 February 2025, the interim relief judge of Nanterre suspended the deployment of new AI software applications in a company following a request from its Works Council (CSE). The CSE argued the tools had been introduced without consultation, in violation of its rights. While the employer claimed the tools were still in an experimental phase, the court ruled that the partial use of the software by employees constituted implementation, not experimentation, and was thus unlawful for lack of prior consultation with the CSE.

IP Protection

Protection of AI models

AI models, built on sophisticated algorithms and vast datasets, may be protected through copyright, trade secrets, and patents, though their legal status varies depending on their specificities. Copyright may apply to the model’s source code if it meets the originality requirement, but the underlying mathematical algorithms often fall outside its scope. Trade secrets may offer strong protection, especially for proprietary data and processes that define the model’s functionality. Patents may also be an option for novel and non-obvious AI-related inventions, particularly for innovative architectures or technical solutions. However, under current French law, AI models as a whole lack dedicated IP protection. As AI advances, legal frameworks may evolve to offer more tailored protections for AI models.

Protection of input/training data

Original creations used to train AI may be protected by copyright, as well as neighbouring rights such as performers’ rights, phonogram producers’ rights, and press publishers’ rights. The sui generis right of the database producer may apply if the training data is extracted from a database that involved substantial investment in its creation, verification, or presentation.

Protection of output data

Output data may be protected by copyright if AI is used as an aid in the creative process, with human involvement beyond providing mere instructions to the AI. However, fully AI-generated content should not be protected by copyright under French law.

AI Tool Providers’ Terms and Conditions

An AI tool provider may contractually determine asset protection with respect to the input and output data of the generative AI tool through its terms and conditions of use. For instance, OpenAI states in its terms of use that, to the extent permitted by applicable law, the user retains ownership of the input and output data. More generally, AI tool providers incorporate good practices into their T&Cs, providing either that nothing in them grants a right to use customer data as training data for the purpose of testing or improving AI technologies, or, conversely, that all necessary rights have been obtained to use the training data for such purposes.

Alongside these contractual protections, 2024 guidelines from authors’ and producers’ organisations regulate AI use in audiovisual and cinematic productions. These ensure that authors cannot be forced to use AI-generated content and must approve any AI use in their work. Producers can use AI for tasks like localisation and promotion, but must inform authors in advance. AI systems must obtain appropriate licensing for copyrighted content before use.

IP Infringement

Training an AI system with copyrighted or neighbouring rights-protected content without authorisation may constitute infringement. Additionally, the extraction or reuse of a significant part of a database could violate database sui generis rights. Data collection might also breach confidentiality or business secrecy agreements.

In France and Europe, the legal framework provides some protection for AI developers. The “text and data mining” exception (Article L.122-5-3 of the French Intellectual Property Code) can neutralise copyright or database rights, as it applies to AI systems using automated analysis to extract information from text and data. However, right-holders can opt out, preventing their content from being used for AI training. Collecting societies, such as SACEM, have opted out on behalf of their members, so AI tool developers must seek prior authorisation from such organisations. AI-generated content can also replicate works used for training, leading to potential IP infringement if identifiable works are reproduced.

Inventor/Co-Inventor

The French Intellectual Property Office (INPI) has not yet ruled on whether AI can be designated as an inventor, but this appears incompatible with Article R. 612-10 of the French Intellectual Property Code, which requires an inventor to be a natural person.

Similarly, the 2024 EPC Guidelines confirm that only natural persons can be designated as inventors, following decision J8/20 (DABUS), which aligns with rulings in other jurisdictions, including the UK Supreme Court in Thaler v Comptroller-General of Patents, Designs and Trade Marks.

Author/Co-author

French courts have not yet addressed whether AI can qualify as an author, but existing case law suggests it cannot.

The Court of Cassation has ruled that legal entities cannot be authors (Cass, civ. 15 January 2015, No 13-23.566), implying that only natural persons can hold authorship. Furthermore, the requirement of originality, which necessitates human creative intent, would exclude AI-generated works, as AI does not exercise free and personal creative choices.

See 15.1 IP and Generative AI.

See 3.6 Data, Information or Content Laws.

See 15.1 IP and Generative AI.

In June 2024, the French Competition Authority published an opinion on the competitive dynamics of the generative AI sector, identifying several key issues that could shape market competition.

Control Over AI Infrastructure and Market Access

The Authority highlights the reliance on high-performance computing, cloud services, and large datasets, often controlled by dominant players such as US tech giant Nvidia. This concentration risks restricting access for smaller AI firms. The Authority recommends closer oversight of partnerships and vertical integration to prevent monopolistic practices.

Strategic Acquisitions and “Acqui-Hires”

Tech companies increasingly acquire AI start-ups for talent rather than products (“acqui-hires”), often avoiding merger scrutiny due to low financial thresholds. The Authority calls for greater transparency in minority investments and expanded oversight to safeguard innovation.

Algorithmic Collusion and the Role of AI in Price Fixing

AI-powered pricing strategies could lead to algorithmic collusion, where pricing algorithms autonomously co-ordinate without explicit human intervention. This poses a challenge for regulators, who may need new tools to detect such behaviour.

Abuse of Market Power Through Data and Interoperability Restrictions

The dominance of tech firms over vast consumer data raises concerns that restricted data sharing or interoperability could lock out competitors. The Authority is exploring whether AI service providers should face ex-ante obligations to ensure fair competition, particularly regarding data portability and interoperability, echoing concerns under the Digital Markets Act (DMA).

Regulatory Outlook

The Authority proposes ten recommendations addressing these concerns, aligning with broader European initiatives like the DMA, the AI Act, and the Data Act. No immediate legislative changes are proposed but increased scrutiny of AI-related transactions, data access policies, and the role of dominant firms in market dynamics is expected in the coming years.

Cybersecurity legislation is adapting to address the emerging risks associated with AI technologies, especially as large language models lower entry barriers for malicious actors and facilitate faster, more sophisticated cyberattacks. These attacks often involve malware, social engineering, and data analysis, making cybercrimes more effective.

The French National Cybersecurity Agency (ANSSI) reported a 15% rise in security incidents in 2024, totalling 4,386 events, with threats coming from both cybercriminals and state-sponsored hacktivists, particularly from Russia and China. ANSSI highlighted the growing risk of AI being used for malicious purposes, including phishing and malware creation, requiring constant monitoring of AI misuse in cybersecurity.

A February 2024 report, Building Trust in AI through a Cyber Risk Approach, co-signed by 19 international and five national partners, identifies cybersecurity risks to AI systems and provides strategic recommendations to better integrate cybersecurity into AI system development.

The CNIL’s 2025-2028 strategic plan aims to strengthen cybersecurity awareness, as 61% of French respondents reported cyberattacks. The plan focuses on co-operation within the security ecosystem, assisting data breach victims, developing privacy solutions, and ensuring compliance with cybersecurity rules.

The Corporate Sustainability Reporting Directive (CSRD) (EU) 2022/2464 mandates that large companies report on sustainability within their regular management reports or publish a separate sustainability statement. This requirement began in 2024 for large, listed companies and financial institutions, with a progressive implementation plan.

AI plays a significant role in advancing ESG goals by streamlining data management, ensuring transparency, and improving the accuracy of sustainability reporting. However, it also introduces certain risks: the widespread use of automation and AI could lead to job reductions, potentially harming employee well-being and negatively impacting a company’s social responsibility reputation. Furthermore, the energy consumption of AI systems, particularly in large-scale operations, poses a challenge to meeting environmental targets. If not managed properly, the adoption of AI could increase energy consumption and undermine the environmental goals of companies.

In response to these concerns, the INRIA and the Ministry of Ecological Transition published a position paper in February 2025. The document outlines the challenges posed by AI’s environmental impact. Although AI can support ecological transition by optimising energy consumption and improving climate modelling, its intensive deployment also results in considerable resource consumption, high electricity demand, water usage, and the generation of electronic waste.

The position paper identifies five major challenges in reconciling AI development with ecological concerns. These include improving the environmental performance of AI technologies through energy efficiency, optimising hardware architecture, and better data management. It also stresses the importance of developing more specialised AI models that require less energy and are trained on reliable, precise data to avoid the overreliance on energy-consuming general-purpose models. The paper further emphasises the need for more accurate methods to evaluate the environmental footprint of AI technologies and the application of circular economy principles to AI infrastructure. Lastly, it advocates for promoting frugal AI tools by encouraging sustainable practices and raising awareness through specialised conferences and accessible educational content.

The AI Act establishes product-specific regulations for artificial intelligence systems marketed within the EU. It applies to all businesses developing, providing, or deploying AI systems and includes specific provisions for generative AI models, which is particularly relevant as its use continues to expand in applications like content creation, customer service, and more.

Under the AI Act, companies must assess their AI systems’ risk levels and adapt their compliance accordingly (a schematic sketch of this tiering follows the list below):

  • Unacceptable-risk AI systems have been banned from the market since 2 February 2025.
  • High-risk AI systems must obtain a CE marking, which signifies compliance with the AI Act.
  • Low or minimal risk AI systems face fewer regulatory obligations, but businesses must still provide clear user information, such as ensuring transparency about AI usage and its capabilities. Additionally, they can opt to adhere to voluntary codes of conduct.
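For teams building internal compliance tooling, this tiering can be represented as a simple lookup, as sketched below; the tier names track the AI Act, but the obligation summaries are the shorthand used above, not statutory text.

```python
"""Schematic AI Act risk tiers as a compliance lookup (illustrative)."""
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED_OR_MINIMAL = "limited_or_minimal"

# Shorthand summaries of the obligations described above, not statutory text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [
        "Banned from the EU market since 2 February 2025",
    ],
    RiskTier.HIGH: [
        "Conformity assessment and CE marking",
        "Risk management, documentation and human oversight",
    ],
    RiskTier.LIMITED_OR_MINIMAL: [
        "Clear user information and transparency about AI usage",
        "Optional adherence to voluntary codes of conduct",
    ],
}

def compliance_checklist(tier: RiskTier) -> list[str]:
    """Return the shorthand obligations for a given risk tier."""
    return OBLIGATIONS[tier]

# Usage: compliance_checklist(RiskTier.HIGH)
```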

For generative AI models, the compliance requirements will vary depending on whether the model is open-source or not. The regulation distinguishes between basic and systemic risk models, with “systemic” risk determined by factors such as the scale of AI deployment, the computing power required, and the potential for business applications to impact users and industries.

Companies need to prepare for compliance now, as the first obligations took effect within six months of the AI Act’s entry into force.

This requires businesses to:

  • map their AI systems;
  • conduct necessary risk assessments and tests;
  • prepare and provide required documentation; and
  • establish ongoing governance.

As AI adoption expands across industries in France, the focus on risk management has led to the emergence of specialised compliance tools, such as Naaia, designed to help organisations seeking to streamline their compliance efforts and navigate AI-related challenges.

Jeantet AARPI

11 rue Galilée
75116 Paris
France

+33 1 45 05 80 08

communication@jeantet.fr
www.jeantet.fr