In France, there are no laws specifically dedicated to AI or machine learning, but several European regulations govern its use in areas like risks and governance, privacy, intellectual property, and liability.
Since the launch of France’s National Strategy for Artificial Intelligence (SNIA) in 2018, the country has rapidly emerged as a global AI leader.
By 2023, France was Europe’s top destination for AI investments, attracting global tech giants like Google, Meta, OpenAI, and Microsoft to set up research centres.
The AI ecosystem includes over 1,000 start-ups, 4,000 researchers, and a growing pool of AI-trained graduates. French companies such as Aircall, Alan, Dataiku, and Mistral AI are driving innovations across sectors like healthcare, manufacturing, and logistics, with significant capital raised by startups like Mistral AI.
Generative AI (eg, ChatGPT) is widely adopted, especially for customer interactions and content generation, while predictive AI is optimising operations and decision-making. At the February 2025 AI Summit in Paris, Air France highlighted its use of AI-driven predictive maintenance through its Prognos system, which has been anticipating technical failures and optimising aircraft maintenance since the 2000s, with more recent applications in baggage handling, eco-piloting, and resource management.
Adoption is particularly strong among mid-sized firms, though SMEs are also increasingly integrating AI.
Since its launch in 2019, the Jean Zay supercomputer has been at the forefront of high-performance computing for research, undergoing several upgrades to expand its processing power. Most recently, a EUR40 million investment under the France 2030 programme is set to further enhance its capabilities.
Building on this momentum, France is securing new partnerships to strengthen its AI computing infrastructure. Among them, the government has signed an agreement with UK-based Fluidstack to develop one of the world’s largest low-carbon supercomputers. Backed by EUR10 billion, the project will deploy 500,000 GPUs by 2026 and reach one gigawatt of capacity by 2028, powered by France’s nuclear energy.
France is actively advancing AI through its national strategy, integrated into the broader “France 2030” programme. AI is a key priority for the country’s future, with the government heavily investing to position France as a global leader in AI. This strategy builds on the 2018 Villani report, which recommended actions across research, data governance, talent development, and ethics.
The roadmap is structured in three key phases:
In addition to these investments, the French government is supporting AI adoption through public-private partnerships, facilitating public sector experimentation, such as the collaboration between France Travail and Mistral AI, and increasing the public procurement threshold to encourage AI innovators. The government is also focusing on developing 35 key sites for data centres to support AI infrastructure and innovation.
The regulation of artificial intelligence in France is primarily shaped at the European level rather than through national legislative initiatives, reflecting a preference for regulatory harmonisation across the European Union (see 1.1 General Legal Background). This approach seeks to foster innovation while protecting citizens’ fundamental rights within the EU.
No AI-specific law is currently in force in France.
Pursuant to the “Law for a Digital Republic” of 7 October 2016, certain general principles regarding the rights of individuals subject to individual decisions based on algorithmic processing, notably transparency obligations, are already in force. This legislation applies to decisions made by the French administration. Any individual must be explicitly informed that an individual decision will be made based on algorithmic processing, and the administration must, upon request, communicate to the individual the rules defining this processing and the main characteristics of its implementation.
These principles are applicable to AI-driven tools, to some extent.
French governmental bodies have issued several recommendations and guidelines to support the ethical deployment of AI, with a focus on best practices.
France’s approach to AI regulation is largely shaped by EU regulations (see 1.1 General Legal Background).
France, which has been supporting European-level AI regulation since 2018, has recently pushed back on some aspects, particularly around generative AI. In 2023, France, Italy, and Germany opposed including foundation models in the AI Act, preferring a more gradual focus on AI applications rather than regulating the underlying technology itself. This shift in stance aligns with the interests of AI innovators like Mistral AI.
Furthermore, the European Commission’s decision to remove the AI liability directive from its 2025 work programme highlights a growing emphasis on prioritising innovation over stringent regulation.
The EU AI Act will take precedence over national laws, replacing any potential conflicting French regulation. However, differences may arise in implementation due to local policy preferences.
France has recently favoured self-regulation and a gradual approach, especially regarding generative AI, rather than strict, uniform regulations (see 3.4.1 Jurisdictional Commonalities).
There is no applicable information in this jurisdiction.
To date, French law has not been substantially amended with respect to data protection, information or content rules in order to foster AI technology.
The French Data Protection Authority, the CNIL, published guidelines throughout 2024 and 2025 to help AI developers comply with data protection laws (see 5.2. Regulatory Directives).
Additionally, Law Proposal No 1630, presented in 2023, seeks to clarify IP rights for AI-generated works. It proposes that ownership would go to the human authors of the works that contributed to the AI creation and mandates labelling AI-generated works. It also introduces a taxation system benefiting collective management organisations. However, the proposal has faced criticism for its practicality and technical challenges and has not yet been discussed in Parliament.
In France, AI legislation is largely shaped by EU initiatives, particularly the EU AI Act and the revised Product Liability Directive (PLD).
Rather than pursuing national-level laws, France focuses on aligning with these EU frameworks to foster innovation while addressing risks.
The EU AI Act (fully applicable by 2026) categorises AI systems by risk and imposes transparency and ethics obligations, targeting high-risk AI applications. It aims to prevent harm such as discrimination and bias, with some provisions already applying from 2025.
The revised PLD focuses on product liability for AI technologies, clarifying manufacturers’ responsibilities and providing avenues for redress in case of AI-related harm.
To date, French courts have not had the occasion to deal with cases involving AI systems. There are various reasons for the lack of court rulings. For example, the anticipation of EU legislation and the scarcity of specific legislation might have created a situation where there have not been any questions on the interpretation of law that would require settlement in court. Also, disputes settled or resolved through alternative dispute resolution mechanisms would not have generated court rulings.
However, on 12 March 2025, French organisations representing publishers and authors announced legal proceedings against Meta, alleging large-scale copyright infringement through the unauthorised use of their works to train generative AI models. The lawsuit, brought before the Paris Judicial Court by the SNE, SGDL, and SNAC, marks the first case in France addressing AI-driven copyright violations. Meta, which leveraged a dataset containing 200,000 books – including French titles – to enhance its Llama model, claims “fair use”, while plaintiffs argue AI development should not violate creators’ rights.
Regulatory agencies in France operate independently from the government, overseeing specific sectors with enforcement powers, including sanctions and soft law guidance.
There is currently no formally designated AI regulator, but the CNIL is expected to be appointed as France’s market surveillance authority under Article 70 of the AI Act, expanding its role beyond data protection. It has already issued significant guidance on AI, established an AI division and received support from the Council of State and the National Assembly for this designation.
Several other existing agencies are addressing AI-related issues within their respective areas of expertise. For example, the Defender of Rights focuses on algorithmic bias within the human rights sector and the French Competition Authority has warned of anti-competitive risks in generative AI, advocating for improved regulation and fair access to computing resources. (See 16.1 Emerging Antitrust Issues in AI).
Additionally, at the February 2025 AI Summit in Paris, the French government announced the establishment of INESIA, an AI Safety Institute focused on systemic risks, model reliability and regulatory support. Although not a regulatory agency, and without the need to establish a new legal structure, it will co-ordinate national stakeholders in AI evaluation and security, including the National Agency for the Security of Information Systems (ANSSI), the National Institute for Research in Digital Science and Technology (Inria), the National Laboratory for Metrology and Testing (LNE), and the Digital Regulatory Expertise Hub (PEReN).
France has yet to implement binding AI-specific directives, relying instead on soft law through agency recommendations.
The CNIL leads AI regulation, focusing on GDPR compliance and data protection. Since May 2023, it has issued 12 practical guides and, in February 2025, released two recommendations confirming GDPR’s adequacy for AI. These recommendations propose practical solutions for informing individuals and enabling their rights, especially when personal data is used to train AI models.
Moreover, the CSPLA (French Supreme Council for Literary and Artistic Property) issued a report in December 2024 on AI Act implementation for rights-holders. It addresses Article 53’s copyright compliance requirements, proposing a data summary model balancing transparency and business confidentiality while aligning with DSM Directive “opt-out” clauses.
The OPECST (Parliamentary Office for the Evaluation of Scientific and Technological Choices) November 2024 report, requested by the Senate, assesses AI advancements, particularly generative AI. It criticises the AI Act as complex and unfriendly to innovation. Amongst its 18 recommendations, it calls for EU-led global AI governance and adapting copyright legislation.
At this stage, no major enforcement actions have been taken by regulatory agencies in France specifically targeting AI-related violations, but there have been notable regulatory decisions involving AI technologies.
A French Competition Authority decision dated 20 March 2024 notably fined Google EUR250 million for failing to comply with commitments related to press publishers’ neighbouring rights. The decision cited Google’s AI system, Bard, and criticised Google for not providing a technical solution allowing publishers and press agencies to opt out of Bard’s use of their content while still displaying protected content. This was considered an unfair practice, hindering negotiation efforts for fair remuneration with right holders.
Additionally, on 31 December 2024, the CNIL imposed its first AI-related sanction on a company operating a chatbot utilising AI. The fine was relatively modest (EUR5,000), but it signals the CNIL’s increasing involvement in addressing AI-related issues and marks a significant step in regulating AI technologies in France.
Most of the norms and standards in France are predicated upon international or European standards, such as ISO, IEC, CEN and CENELEC standards. Within France, the Association Française de Normalisation (AFNOR) is the national delegation tasked with representing French interests in the formulation of international and European standards.
For more information, see 6.2 International Standard-Setting Bodies.
Under the AI Act, certification bodies are entrusted with certifying high-risk AI systems before they are placed on the European market.
AFNOR, in collaboration with other national EU delegations, has undertaken a stakeholder consultation process, engaging with start-ups to draft “operational” certification standards adapted to the realm of AI. To date, several international standards have been promulgated, including ISO/IEC 42001 on AI Management Systems and ISO/IEC 23894, which provides recommendations for AI risk management that may be applied by various types of industries and businesses for the conception and deployment of AI systems.
The most recent reference framework regarding AI, Spec 2314, was published in June 2024. At the request of the Ministry for Ecological Transition and Territorial Cohesion, this free framework outlines calculation methodologies and best practices for measuring the environmental impact of AI.
Governmental authorities in France have extensively incorporated AI into various sectors. For instance, agencies like the Directorate General of Public Finances (DGFiP) have implemented AI projects such as the CFVR initiative to enhance tax control operations, exemplifying the government’s commitment to leveraging AI for administrative efficiency.
Similarly, in 2024 the French Ministry of Justice announced its own AI plan focusing on four priority use cases: interview transcription, research assistance, translation and case summaries. An internal AI solution for the automatic transcription of interviews is currently in development and is expected to be operational in 2025.
The use of AI technologies by law enforcement agencies in France has also sparked debate, particularly over the use of Briefcam, an Israeli facial recognition tool, reportedly employed for years by French police for surveillance purposes without proper declaration or oversight. The Interior Ministry has announced an administrative investigation into the matter, while the CNIL has initiated a control procedure to assess the extent of facial recognition usage by law enforcement (see 7.2. Judicial Decisions).
Considering these developments, the AI Act introduces strict rules on certain AI applications to protect citizens’ rights. Practices such as the untargeted scraping of facial images to create facial recognition databases have been prohibited as a matter of principle since 2 February 2025. Real-time biometric identification systems will only be deployable under strict conditions, including temporal and geographical limitations, and will require judicial or administrative authorisation, primarily for locating missing persons or preventing terrorist attacks.
There are no pending legal actions related to government use of AI, but recent rulings highlight concerns over automated decision-making.
In an 18 November 2024 decision (Case No 472912), the Conseil d’Etat ruled that human verification by sworn officers is required before issuing parking fines, in compliance with the GDPR.
On 5 December 2024, following an investigation by the media outlet Disclose, the CNIL issued a formal notice to the Interior Ministry and six municipalities regarding their use of Briefcam, a video analysis software. The CNIL found that Briefcam’s live facial recognition feature did not comply with legal requirements. The Ministry reported only one instance of its use in a judicial case, but the CNIL has ordered the feature to be disabled or restricted.
AI represents critical technology in the defence sector for France, supporting autonomous navigation, planning, decision support and analysis of massive datasets. Since 2019, the Ministry of Armed Forces has been establishing close ties with the French scientific AI community, funding projects to strengthen national sovereignty.
Under the 2024 to 2030 military programming law, France has allocated EUR10 billion to AI and algorithm development, aiming to enhance the army’s autonomous data processing capabilities for faster and more precise strategic and tactical decisions.
Although the use of AI in defence and national security is not governed by the AI Act, France applies certain ethical principles in this field. For instance, in 2020 the Ministry of Armed Forces tasked a committee with addressing ethical issues related to the use of AI in the defence sector.
France actively experiments with AI-enabled surveillance devices. For the 2024 Paris Olympics, intelligent cameras were authorised for security purposes on an experimental basis until 31 March 2025, “solely for the purpose of ensuring the security of sports, recreational, or cultural events that, due to their scale of attendance or circumstances, are particularly exposed to risks of terrorist acts or serious harm to individuals” (Law of 19 May 2023, concerning the Olympic and Paralympic Games of 2024). Following positive results, the Paris police prefect intends to extend the use of AI for image analysis in security operations.
Additionally, in May 2024, the French Ministry of Armed Forces launched the “Amiad” initiative, with an overall budget of EUR300 million, to maintain sovereign control over defence AI and minimise reliance on foreign powers. One of Amiad’s primary focuses is the development of AI language models tailored to military operations, given the sensitive and strategic nature of the data involved. To support this initiative, Amiad has also contracted the acquisition of Europe’s most powerful classified supercomputer for defence AI, in collaboration with HP and Orange.
Emerging issues with generative AI include concerns over IP rights, data protection (see 8.2 Data Protection and Generative AI), image rights and fake news.
Image rights, part of privacy law, cover all elements of personality, such as a person’s appearance and voice. Unauthorised deepfakes violate these rights and are criminally sanctioned.
To combat AI-generated deepfakes, France amended Article 226-8 of the Penal Code on 21 May 2024. It criminalises distributing AI-generated content using someone’s image or voice without consent, unless the use of AI is clearly indicated. Penalties include up to one year in prison and a EUR15,000 fine, increased to two years and EUR45,000 for online distribution. Sexual deepfakes (Article 226-8-1) carry even harsher penalties of up to three years in prison and EUR75,000 in fines.
Generative AI also fuels fake news and scams. Although France has laws against disinformation in electoral campaigns, broader political deepfakes remain unregulated.
Other legal tools, such as copyright, image rights, privacy laws, the DSA, and the AI Act, may help address AI-driven misinformation, but enforcement is challenged by anonymous online actors.
In France, data protection laws – primarily governed by the GDPR and the French Data Protection Act – impose obligations on AI providers and deployers regarding the processing of personal data.
As data controllers, they must be able to respond to data subjects’ requests for rectification or deletion of data, not by removing entire models but by identifying and deleting the specific training data concerned. This requires the provider to be technically capable of identifying the relevant data and metadata relating to the data subject.
Moreover, the CNIL stresses the importance of data minimisation from the AI system’s conception stage. It has issued several thematic fact sheets on this matter, including an updated guide in February 2025, formulating practical recommendations for developers and deployers of AI systems on handling data subject rights.
With regards to generative AI, the CNIL recommends establishing internal processes to query models (ie, using a list of carefully chosen queries) to verify any data that may have been stored about an individual. It also highlights the risk of correlation errors, where AI outputs may falsely suggest to individuals the presence of personal information concerning them due to interactions with external knowledge bases. Controllers are encouraged to carefully assess how their AI systems’ interactions with other service providers may impact the handling of a data-subject request.
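By way of illustration, a minimal sketch of such an internal query-based check is set out below. It assumes a generic `query_model` callable (a placeholder, not any particular vendor’s API) and a list of identifiers supplied by the data subject; the probe templates and matching rule are illustrative assumptions only.

```python
import re

# Hypothetical sketch of the CNIL-recommended practice of probing a generative
# model with a prepared list of queries to check whether it discloses personal
# data about a given individual. `query_model` is a placeholder for whatever
# interface the deployer actually exposes (local model, API, etc.).

PROBE_TEMPLATES = [
    "Who is {name}?",
    "What do you know about {name}?",
    "Give me the contact details of {name}.",
]

def probe_for_personal_data(query_model, name: str, identifiers: list[str]) -> list[dict]:
    """Run the probe queries and flag outputs that echo known identifiers
    (email, phone number, address fragments) supplied by the data subject."""
    findings = []
    for template in PROBE_TEMPLATES:
        answer = query_model(template.format(name=name))
        hits = [ident for ident in identifiers
                if re.search(re.escape(ident), answer, re.IGNORECASE)]
        if hits:
            findings.append({"query": template, "matched_identifiers": hits,
                             "answer_excerpt": answer[:200]})
    return findings

# Example usage (with a stub model):
# report = probe_for_personal_data(lambda q: "…", "Jane Doe", ["jane.doe@example.com"])
```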
For AI models trained on web-scraped data, the CNIL recommends implementing technical solutions like “push-back lists” or opt-out mechanisms to facilitate individuals’ right to object.
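The sketch below illustrates, under stated assumptions, how such an opt-out (“push-back”) list might be applied to a scraped corpus before training; the record field names and the matching rule are hypothetical and are not prescribed by the CNIL.

```python
from urllib.parse import urlparse

# Illustrative sketch only: filtering web-scraped records against an opt-out
# ("push-back") list of e-mail addresses and source domains supplied by
# individuals who exercised their right to object. Field names are assumptions.

def apply_opt_out_list(records, opted_out_emails, opted_out_domains):
    kept = []
    for record in records:
        email = record.get("email", "").lower()
        domain = urlparse(record.get("source_url", "")).netloc.lower()
        if email in opted_out_emails or domain in opted_out_domains:
            continue  # Drop the record before it reaches the training corpus.
        kept.append(record)
    return kept
```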
For AI systems with continuous learning, the CNIL recommends that controllers implement strict monitoring to anticipate and prevent data drifts.
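A minimal sketch of such drift monitoring is shown below, using the Population Stability Index as one possible metric; the metric choice and alert threshold are illustrative assumptions rather than regulatory requirements.

```python
import numpy as np

# Minimal sketch, for a single numeric feature, of drift monitoring for a
# continuously learning system: compare the live data distribution against a
# frozen reference using the Population Stability Index (PSI).

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid division by zero / log(0) on empty bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

def check_drift(reference: np.ndarray, live: np.ndarray, threshold: float = 0.2) -> bool:
    """Return True (alert) when the live batch has drifted beyond the threshold."""
    return population_stability_index(reference, live) > threshold
```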
Overall, the CNIL highly encourages AI actors to invest in privacy-preserving practices and integrate data protection by design throughout AI system development and deployment.
Use Cases
AI adoption in France’s legal sector is growing, though reliable AI solutions remain limited given the early stage of training models on dedicated legal databases. Nevertheless, there are many use cases for AI in the legal sector, such as legal research based on natural language (eg, Doctrine, Lefebvre Dalloz and Lexis360), contract analysis (eg, Della AI), predictive analysis identifying potential outcomes based on precedents (eg, Case Law Analytics or Predictice) and e-discovery platforms to analyse vast volumes of documents, including contracts, during legal due diligence (eg, Relativity).
In 2023, a firm operating in France partnered with Harvey, an AI-powered legal assistant based on OpenAI’s latest model.
Ethical Concerns
A LexisNexis survey found that 85% of legal professionals have ethical concerns about AI, particularly its reliability in legal reasoning. French lawyers, as part of their ethical obligations, must have the expertise required to provide informed advice to clients; a lack of competence may engage their professional liability. In this context, AI-assisted legal advice must be supervised by a qualified lawyer.
To address these challenges, the government published the “Generative Artificial Intelligence and Legal Professions: Act Rather Than Suffer” report in December 2024, providing recommendations to ensure AI aligns with ethical and professional standards.
In France, in the absence of specific AI liability regulation, the ordinary civil liability regime remains applicable to AI-related damage, relying on principles of fault, causation and harm.
Current legal frameworks already provide some mechanisms to address liability for damages caused by AI systems such as intellectual property violations, defamation and data protection breaches arising from AI-generated content.
French law also prohibits clauses limiting liability for bodily injury – a key consideration given AI’s potential physical risks. However, determining responsibility for AI-caused harm remains complex due to the involvement of multiple stakeholders and difficulties in establishing a clear causal link.
The revised Product Liability Directive seeks to address these challenges by clarifying liability rules for AI systems (see 10.2 Regulatory).
Furthermore, without greater clarity on exposure to risks, AI-related insurance is expected to face similar difficulties to those encountered in the emerging cyber-risk market, necessitating careful definition of exclusions and coverage limitations.
AI-related liability is increasingly addressed at the European level, with the EU AI Act being a key framework for regulating AI liability in Europe.
Additionally, the revised PLD, adopted by the European Parliament on 13 March 2024, includes AI systems within the scope of “products” and simplifies the burden of proof. It removes the EUR500 threshold, introduces disclosure-of-evidence mechanisms, and allows compensation for the loss of non-professional data. Notably, it requires an EU-based business to assume liability for damages, even in cases of online purchases from outside the EU. The revised PLD must be transposed into national law by December 2026.
Initially proposed in 2022, the AI Liability Directive (AILD) aimed to streamline compensation claims for AI-related damage by presuming causality in specific cases and easing the burden of proof. However, on 11 February 2025, the European Commission withdrew the AILD from its 2025 work programme due to dwindling support and concerns over its impact on AI innovation. While critics argue this creates legal uncertainty and favours Big Tech, others see it as a step towards more flexible AI regulation.
The growing reliance on algorithms and machine learning in decision-making processes has raised significant concerns regarding algorithmic bias. Institutions like the Defender of Rights, through its recommendations “Fighting Against Discrimination Produced by Algorithms and AI”, and the CNIL, in its 2025-2028 strategic plan, have emphasised the need for ethical AI to respect individual rights. The latter aims to mitigate AI-related risks by enhancing knowledge-sharing in the AI ecosystem, clarifying the legal framework, and strengthening regulation of algorithmic bias.
Regarding administrative decision-making, Article L.311-3-1 of the Code of Relations between the Public and the Administration (CRPA) requires that individual decisions based on algorithmic processing must be explicitly disclosed to the individual concerned. This includes information on the algorithm’s role, data sources, parameters and operational logic. The rules defining this processing and its main characteristics must be communicated by the administration to the individual upon request, as stipulated by the Law for a Digital Republic of 7 October 2016.
Furthermore, a legislative proposal of 6 December 2023 seeks to address discrimination through individual and statistical testing practices, and would create a committee of stakeholders to conduct prospective studies on the risks associated with AI-based algorithms, with a view to ensuring their fair and non-discriminatory use.
On the technical front, researchers from Inria (National Institute for Research in Digital Science and Technology) have developed “FairGrad”, an open-source software designed to correct algorithmic bias by prioritising data from disadvantaged groups in machine learning processes.
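The sketch below illustrates the general idea behind group-aware loss reweighting of this kind (up-weighting examples from groups on which the model currently performs worst); it is a simplified illustration in PyTorch and does not reproduce FairGrad’s actual API or algorithm.

```python
import torch

# Illustrative sketch only: a generic group-reweighted loss showing the general
# idea of prioritising data from disadvantaged groups. NOT FairGrad's actual
# API; the weights, update rule and group encoding are assumptions.

def group_reweighted_loss(logits, labels, group_ids, group_weights):
    per_example = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
    weights = group_weights[group_ids]              # look up each example's group weight
    return (weights * per_example).mean()

def update_group_weights(logits, labels, group_ids, group_weights, lr=0.1):
    """Increase the weight of groups with higher current loss, then renormalise."""
    with torch.no_grad():
        per_example = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
        for g in torch.unique(group_ids):
            group_weights[g] *= torch.exp(lr * per_example[group_ids == g].mean())
        group_weights /= group_weights.mean()
    return group_weights
```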
Additionally, initiatives like Confiance.ai and Positive AI have developed ethical guidelines, technical standards and best practices to ensure transparency, fairness, and trust in AI systems.
In France, the use of facial recognition and biometrics has predominantly raised legal concerns in the law enforcement context (see 7.1 Government Use of AI and 7.3 National Security). To date, biometric recognition techniques are not governed by a specific legal framework and fall under data protection law, which requires the prior consent of the individual unless a specific exception applies.
To fill this gap, a 2023 law proposal aims to regulate the use of biometrics and facial recognition in public spaces, particularly by public authorities, to prevent mass surveillance. Under this proposal, biometric identification in public spaces would require the individual’s prior consent, and any categorisation or rating of individuals would be prohibited.
However, limited exceptions are proposed, including access control at public events exposed to terrorist threats, post-hoc biometric data processing for judicial or intelligence purposes, and real-time biometric processing on an experimental basis to combat terrorism and serious crime.
Other administrative bodies, such as the CNCDH (Commission Nationale Consultative des Droits de l’Homme), have also issued recommendations regarding the use of biometrics and facial recognition in video surveillance, urging public authorities to take action to regulate the use of such technologies.
Beyond public use, biometrics are increasingly employed by companies for employee identification. Such applications must be carefully assessed to ensure compliance with data protection and employment laws.
Automated decision-making in France is governed by several legal frameworks, with key concerns around transparency, non-discrimination, and the ability to challenge decisions.
Under Article 22 of the GDPR and Article 47 of the French Data Protection Law, individuals are protected against fully automated decisions, ensuring transparency and the right to request human review. In sectors like taxation, automated systems are used, but they must meet strict legal standards to ensure fairness.
The CNIL oversees compliance, with penalties for non-compliance, especially regarding informing individuals about automated decision-making. Discriminatory outcomes can result in criminal charges.
In banking, AI-driven decision-making is used for credit scoring and is considered high-risk under the AI Act. The ACPR (Autorité de Contrôle Prudentiel et de Résolution) announced in July 2024 that it is developing an audit methodology for AI systems used by banks and insurers.
Use of Chatbots
Advancements in generative and predictive AI have made it increasingly difficult for individuals to discern when they are interacting with AI, particularly with the widespread use of chatbots. These conversational agents, deployed by both private and public entities, offer round-the-clock assistance to users, necessitating clear transparency obligations as emphasised in the 2018 Villani report and international principles like those of the G20, now formalised in the AI Act.
On 11 March 2025, the European Commission released the third version of the Code of Practice for General-Purpose AI, focusing on transparency under Article 53 of the AI Act. Signatories must maintain model documentation for ten years, disclose contact details, and ensure the quality and security of the information, promoting transparency by sharing some data publicly.
Additionally, data protection regulations require that users of chatbots processing personal data be informed of that processing. Chatbots must be covered in privacy policies and governed by distinct terms of use.
Ethical Concerns
The National Consultative Ethics Committee (CCNE) in France has examined the ethical risks of chatbots, stressing the risk of users anthropomorphising bots. This concern is echoed in Article 50 of the AI Act, which requires chatbot providers to disclose their AI nature to users, except for systems authorised by law for crime prevention purposes.
The CCNE also advocates for ethically designed chatbots, ensuring traceable responses and compliance with data protection laws, emphasising transparency and user awareness to mitigate privacy risks and manipulation.
Nudge
The AI Act prohibits the use of conversational agents to influence consumer behaviour through nudging techniques. Such techniques may also be considered misleading commercial practices under French consumer law, especially when they conceal crucial information from consumers.
As the AI Act imposes specific obligations on actors across the AI supply chain, including providers, deployers and importers, transactional contracts between customers and AI suppliers need to clearly define roles and responsibilities to address the new and unique risks associated with AI technology.
This includes delineating obligations related to data privacy, security, accountability and compliance with regulatory requirements.
Businesses must ensure that contractual agreements reflect these considerations to mitigate potential legal and operational challenges arising from AI deployment.
Technology and Benefits
AI is increasingly used in recruitment to automate CV screening, skills assessments, and initial candidate interactions, improving efficiency and allowing recruiters to focus on human engagement. It can also promote diversity, equity, and inclusion (DEI) by reducing unconscious bias, though this depends on the neutrality of training data.
In France, France Travail has implemented AI tools like Chat FT and Match FT to enhance job matching. Match FT interacts with preselected candidates via SMS, gathering real-time insights on their availability, constraints, and preferences, helping advisers better connect employers with job seekers.
Risks and Liability
Despite these advantages, AI-driven hiring carries risks, particularly regarding bias and discrimination. If training data reflects existing inequalities, AI could reinforce them, potentially leading to violations of anti-discrimination laws.
For terminations, French labour law imposes strict procedural safeguards that AI cannot replace. Economic dismissals, for example, require a prior interview with an employer representative. Delegating such decisions to AI could result in wrongful dismissal claims.
Under the AI Act, recruitment tools like CV analysers and AI-driven promotion systems are classified as high-risk applications (Annex III). Companies deploying them must comply with the deployer obligations under Article 26, ensuring transparency, human oversight, and risk management to mitigate legal and ethical risks.
Strict Regulation of Monitoring Tools
Under French labour law, employee evaluation and monitoring tools must meet strict conditions. Employers cannot impose permanent surveillance unless justified by the nature of the task and proportionate to the intended purpose.
Before implementing a monitoring tool, employers must consult employee representative bodies if such bodies exist, as these tools impact working conditions. AI-driven monitoring systems are not exempt from these requirements and must respect employee privacy and avoid discriminatory biases.
GDPR and Employer Liability
Employers must comply with GDPR principles, ensuring transparency and lawfulness in data processing. Additionally, under the French Labour Code, employers have a duty to safeguard employees’ physical and mental health. AI-powered monitoring tools could increase workplace stress, potentially harming mental well-being and exposing employers to liability.
For digital platform companies like car services and food delivery, the use of AI has become commonplace, particularly in pricing strategies such as surge pricing, which is based on demand-supply dynamics and is a prime example of AI’s impact on pricing tactics. However, concerns about fairness and transparency have surfaced alongside these innovations.
From a regulatory perspective, laws like Article L.221-5 of the French Consumer Code mandate companies to disclose their use of AI-driven pricing mechanisms, like surge pricing, to consumers. This transparency requirement aims to empower consumers to make informed decisions about their purchases.
Furthermore, the “SREN law”, effective from 21 May 2024, aims to secure and regulate the digital space in alignment with the European Digital Services Act. Among other measures, the SREN law imposes a penalty of up to one year in prison and a EUR15,000 fine for disseminating AI-generated visual or audio content featuring a person without their consent, unless it is clearly marked as AI-generated. This increases to two years’ imprisonment and a EUR45,000 fine if the content is shared via an online communication service (Article 15 of the SREN law, amending Article 226-8 of the Penal Code).
AI is reshaping financial services, transforming cost centres into profit centres through automation, cost savings, and revenue generation. Financial institutions leverage AI for customer engagement (chatbots, personalised recommendations) and for fraud detection and risk management, with some institutions pooling data resources to develop collective solutions that improve efficiency and security.
Although AI offers significant advantages, its adoption comes with risks, particularly in relation to data protection, regulatory compliance, and bias in decision-making processes. The use of AI in financial services must be carefully managed to prevent algorithmic discrimination, which could lead to unintended exclusionary practices. Financial institutions must ensure that AI-driven decision-making remains transparent, fair, and compliant with existing regulations.
Recognising the growing impact of AI, the ACPR has set up a task force to assess AI adoption in the banking and insurance sectors, focusing on the opportunities, risks, and regulatory implications. In parallel, the Ministry of Finance and the DGFIP have implemented AI-driven tools for fraud detection, undeclared property identification, property valuation, and optimisation of public spending control.
An October 2024 Court of Auditors’ report on AI within the DGFIP highlights the need for social dialogue and human resources considerations in AI deployment. The report calls for AI performance indicators, assessments of quality and user satisfaction, and the creation of an AI incubator to foster internal expertise. It also emphasises that AI should be applied selectively, focusing on processes where its impact can be most beneficial.
The integration of AI into healthcare systems presents both opportunities and challenges.
Regulatory bodies such as the French National Agency for the Safety of Medicines and Health Products (ANSM) and the French National Authority for Health (HAS) provide guidelines for the use of AI in software as a medical device (SaMD) and related technologies. These regulations address concerns such as the potential treatment risks associated with AI, including hidden bias in training data that can lead to algorithmic biases affecting patient care outcomes. Compliance with strict data protection laws, such as the GDPR, is essential to safeguard patient privacy when using personal health information to train machine learning algorithms.
AI-powered medical decision support systems (MDSS) offer the promise of improving diagnostic accuracy and treatment selection. However, concerns about potential risks, including diagnostic errors and breaches of patient privacy, highlight the need for robust regulatory oversight. The liability landscape surrounding MDSS use encompasses fault-based liability, such as diagnostic errors attributed to healthcare professionals, and product defect liability, which may arise from software malfunctions. To address these concerns, rigorous testing, validation and ongoing monitoring of AI systems are essential to ensure compliance with regulations such as the GDPR, the Medical Device Regulation (MDR) and forthcoming application of legislation like the AI Act and the PLD.
Moreover, the obligation to inform patients about the use of MDSS underscores the importance of transparency and patient autonomy, although questions persist regarding the extent and timing of this obligation.
Despite these challenges, the PLD aims to streamline the burden of proof in AI-related liability cases, demonstrating efforts to adapt regulatory frameworks to the evolving landscape of healthcare AI in France.
At the February 2025 AI Action Summit, Minister of Health and Access to Care, Yannick Neuder, shared a report on AI in healthcare in France, outlining the progress and future perspectives of AI implementation in the French healthcare system and detailing public sector actions. The report highlights actions to assess risks and benefits for patients and professionals, support innovation, and establish a framework for evaluation, regulation, and ethical principles. AI is seen as a strategic tool to enhance patient care, improve efficiency, and support public health policies while reducing the burden on healthcare professionals.
In France, AI integration in autonomous vehicles is primarily governed by the Law of 5 July 1985, also known as the “Badinter Law”. This law addresses civil liability in road accidents involving motor vehicles, and remains applicable even in cases involving autonomous vehicles, as it focuses on the involvement of the motor vehicle itself, regardless of its automation level.
Recent regulatory developments have also addressed the criminal liability of autonomous vehicle manufacturers. Order No 2021-443, issued on 14 April 2021, establishes a framework for holding manufacturers criminally liable in the event of accidents occurring while the vehicle is in automatic mode. This regulation clarifies the responsibilities of manufacturers in the case of accidents during autonomous operation, contributing to the legal structure for autonomous mobility.
For defective autonomous vehicles, manufacturers’ liability can be pursued under the updated Product Liability Directive (PLD), adopted in March 2024. This directive allows manufacturers to be held accountable for defects, ensuring consumer protection and safety standards in autonomous vehicle deployment.
The integration of AI into France’s manufacturing sector is closely linked to the country’s relocalisation efforts, aimed at addressing the challenges of globalisation and reducing technological dependence on China. Government-backed initiatives such as the France 2030 plan promote AI’s role in modernising production methods and enhancing industrial competitiveness.
Despite these opportunities, AI adoption raises concerns regarding workforce displacement due to automation. While AI supports the French government’s economic sovereignty goals, especially for critical products, it also necessitates workforce development strategies. These strategies focus on equipping workers with the skills to collaborate effectively with AI technologies, ensuring their relevance in the evolving industrial landscape.
At this stage, there is no specific legislation for AI in manufacturing, but existing legal frameworks address concerns related to AI integration. Notably, the collection and processing of sensitive data in AI-driven manufacturing must comply with the GDPR. Labour laws also address issues such as job displacement, workplace safety, and equitable treatment in an increasingly automated environment.
Looking ahead, the transposition of the amended Product Liability Directive (PLD), set for December 2026, will introduce specific regulations concerning liability for products integrating AI, ensuring manufacturers adhere to strict safety standards and comprehensive risk assessments.
Regulations on AI in professional services are still evolving, with the AI Act becoming fully applicable around 2026. This legislation imposes obligations on professional users, including those whose employees use AI systems. These obligations include the establishment of technical and organisational measures aligned with usage instructions, adequate human oversight, impact assessments focusing on fundamental rights, governance and transparency. These requirements are particularly relevant when users deploy AI systems that generate or manipulate text, audio or visual content that may be perceived as authentic by the public.
In France, employers are civilly liable for damages caused by their employees within the scope of their duties, underscoring the need for employers to ensure the accuracy and reliability of AI systems in the workplace. Employers should provide adequate training on AI capabilities, limitations, and risks, while also establishing internal codes of conduct and procedures for human oversight and intervention in AI-driven decisions.
Data security and confidentiality are significant considerations, especially when AI systems rely on sensitive employee data. Employers must implement robust measures to protect against breaches or unauthorised access, ensuring compliance with data protection regulations.
At this stage, incidents related to AI in professional services have been relatively rare in France. However, on 14 February 2025, the interim relief judge of the Nanterre Judicial Court suspended the deployment of new AI software applications in a company following a request from its Works Council (CSE). The CSE argued that the tools had been introduced without consultation, in violation of its rights. While the employer claimed the tools were still in an experimental phase, the court ruled that the partial use of the software by employees constituted implementation, not experimentation, and was therefore unlawful due to the lack of prior consultation with the CSE.
IP Protection
Protection of AI models
AI models, built on sophisticated algorithms and vast datasets, may be protected through copyright, trade secrets, and patents, though their legal status varies depending on their specificities. Copyright may apply to the model’s source code if it meets the originality requirement, but the underlying mathematical algorithms often fall outside its scope. Trade secrets may offer strong protection, especially for proprietary data and processes that define the model’s functionality. Patents may also be an option for novel and non-obvious AI-related inventions, particularly innovative architectures or technical solutions. However, under current French law, AI models as a whole lack dedicated IP protection. As AI advances, legal frameworks may evolve to offer more tailored protection for AI models.
Protection of input/training data
Original creations used to train AI may be protected by copyright, as well as neighbouring rights such as performers’ rights, phonogram producers’ rights, and press publishers’ rights. The sui generis right of the database producer may apply if the training data is extracted from a database that involved substantial investment in its creation, verification, or presentation.
Protection of output data
Output data may be protected by copyright if AI is used as an aid in the creative process, with human involvement beyond providing mere instructions to the AI. However, fully AI-generated content should not be protected by copyright under French law.
AI Tool Providers’ Terms and Conditions
An AI tool provider may contractually determine asset protection with respect to the input and output data of a generative AI tool through its terms and conditions of use. For instance, OpenAI states in its terms of use that, to the extent permitted by applicable law, the user retains ownership of the input and output data. More generally, AI tool providers incorporate good practices into their T&Cs, either stipulating that nothing in the agreement grants a right to use data as training data for the purpose of testing or improving AI technologies, or conversely warranting that all necessary rights have been obtained to use the training data for such purposes.
Alongside these contractual protections, 2024 guidelines from authors’ and producers’ organisations regulate AI use in audiovisual and cinematic productions. These ensure that authors cannot be forced to use AI-generated content and must approve any AI use in their work. Producers can use AI for tasks like localisation and promotion, but must inform authors in advance. AI systems must obtain appropriate licensing for copyrighted content before use.
IP Infringement
Training an AI system with copyrighted or neighbouring rights-protected content without authorisation may constitute infringement. Additionally, the extraction or reuse of a significant part of a database could violate database sui generis rights. Data collection might also breach confidentiality or business secrecy agreements.
In France and Europe, the legal framework provides some protection for AI developers. The “text and data mining” exception (Article L.122-5-3 of the French Intellectual Property Code) can neutralise copyright or database rights, as it applies to AI systems using automated analysis to extract information from text and data. However, right-holders can opt out, preventing their content from being used for AI training. Collective management organisations, such as SACEM, have opted out on behalf of their members, and AI tool developers must therefore seek their prior authorisation. AI-generated content can also replicate works used for training, leading to potential IP infringement if identifiable works are reproduced.
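In practice, the opt-out is expressed through various machine-readable signals. The sketch below shows one simplified, assumed check (robots.txt only) that a developer might run before mining a URL; a real compliance process would also consider dedicated TDM reservation signals and licence metadata, and the crawler identity shown is a placeholder.

```python
from urllib import robotparser
from urllib.parse import urlparse

# Hedged sketch: robots.txt is only one common channel for machine-readable
# opt-outs. "ExampleAITrainingBot" is a placeholder crawler identity.

def may_mine(url: str, crawler_user_agent: str = "ExampleAITrainingBot") -> bool:
    """Return False if the site's robots.txt disallows fetching this URL for our crawler."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return False  # Conservative default: skip content we cannot verify.
    return rp.can_fetch(crawler_user_agent, url)

# urls_to_mine = [u for u in candidate_urls if may_mine(u)]
```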
Inventor/Co-Inventor
The French Intellectual Property Office (INPI) has not yet ruled on whether AI can be designated as an inventor, but this appears incompatible with Article R. 612-10 of the French Intellectual Property Code, which requires an inventor to be a natural person.
Similarly, the 2024 EPC Guidelines confirm that only natural persons can be designated as inventors, following decision J8/20 (DABUS), which aligns with rulings in other jurisdictions, including the UK Supreme Court in Thaler v Comptroller-General of Patents, Designs and Trade Marks.
Author/Co-author
French courts have not yet addressed whether AI can qualify as an author, but existing case law suggests it cannot.
The Court of Cassation has ruled that legal entities cannot be authors (Cass, civ. 15 January 2015, No 13-23.566), implying that only natural persons can hold authorship. Furthermore, the requirement of originality, which necessitates human creative intent, would exclude AI-generated works, as AI does not exercise free and personal creative choices.
See 15.1 IP and Generative AI.
See 3.6 Data, Information or Content Laws.
See 15.1 IP and Generative AI.
In June 2024, the French Competition Authority published an opinion on the competitive dynamics of the generative AI sector, identifying several key issues that could shape market competition.
Control Over AI Infrastructure and Market Access
The Authority highlights the sector’s reliance on high-performance computing, cloud services, and large datasets, which are often controlled by dominant players such as US tech giant Nvidia. This concentration risks restricting access for smaller AI firms. The Authority recommends closer oversight of partnerships and vertical integrations to prevent monopolistic practices.
Strategic Acquisitions and “Acqui-Hires”
Tech companies increasingly acquire AI start-ups for talent rather than products (“acqui-hires”), often avoiding merger scrutiny due to low financial thresholds. The Authority calls for greater transparency in minority investments and expanded oversight to safeguard innovation.
Algorithmic Collusion and the Role of AI in Price Fixing
AI-powered pricing strategies could lead to algorithmic collusion, where pricing algorithms autonomously co-ordinate without explicit human intervention. This poses a challenge for regulators, who may need new tools to detect such behaviour.
Abuse of Market Power Through Data and Interoperability Restrictions
The dominance of tech firms over vast consumer data raises concerns that restricted data sharing or interoperability could lock out competitors. The Authority is exploring whether AI service providers should face ex-ante obligations to ensure fair competition, particularly regarding data portability and interoperability, echoing concerns under the Digital Markets Act (DMA).
Regulatory Outlook
The Authority proposes ten recommendations addressing these concerns, aligning with broader European initiatives like the DMA, the AI Act, and the Data Act. No immediate legislative changes are proposed but increased scrutiny of AI-related transactions, data access policies, and the role of dominant firms in market dynamics is expected in the coming years.
Cybersecurity legislation is adapting to address the emerging risks associated with AI technologies, especially as large language models lower entry barriers for malicious actors and facilitate faster, more sophisticated cyberattacks. These attacks often involve malware, social engineering, and data analysis, making cybercrimes more effective.
The French National Cybersecurity Agency (ANSSI) reported a 15% rise in security incidents in 2024, totalling 4,386 events, with threats coming from both cybercriminals and state-sponsored hacktivists, particularly from Russia and China. ANSSI highlighted the growing risk of AI being used for malicious purposes, including phishing and malware creation, requiring constant monitoring of AI misuse in cybersecurity.
A February 2024 report, Building Trust in AI through a Cyber Risk Approach, co-signed by 19 international and five national partners, identifies cybersecurity risks to AI systems and provides strategic recommendations to better integrate cybersecurity into AI system development.
The CNIL’s 2025-2028 strategic plan aims to strengthen cybersecurity awareness, as 61% of French respondents reported having experienced a cyberattack. The plan focuses on co-operation within the security ecosystem, assisting data breach victims, developing privacy-preserving solutions, and ensuring compliance with cybersecurity rules.
The Corporate Sustainability Reporting Directive (CSRD) (EU) 2022/2464 mandates that large companies report on sustainability within their regular management reports or publish a separate sustainability statement. This requirement began in 2024 for large, listed companies and financial institutions, with a progressive implementation plan.
AI plays a significant role in advancing ESG goals by streamlining data management, ensuring transparency, and improving the accuracy of sustainability reporting. However, it also introduces certain risks: the widespread use of automation and AI could lead to job reductions, potentially harming employee well-being and negatively impacting a company’s social responsibility reputation. Furthermore, the energy consumption of AI systems, particularly in large-scale operations, poses a challenge to meeting environmental targets. If not managed properly, the adoption of AI could increase energy consumption and undermine the environmental goals of companies.
In response to these concerns, Inria and the Ministry of Ecological Transition published a position paper in February 2025. The document outlines the challenges posed by AI’s environmental impact. Although AI can support the ecological transition by optimising energy consumption and improving climate modelling, its intensive deployment also results in considerable resource consumption, high electricity demand, water usage, and the generation of electronic waste.
The position paper identifies five major challenges in reconciling AI development with ecological concerns. These include improving the environmental performance of AI technologies through energy efficiency, optimising hardware architecture, and better data management. It also stresses the importance of developing more specialised AI models that require less energy and are trained on reliable, precise data to avoid the overreliance on energy-consuming general-purpose models. The paper further emphasises the need for more accurate methods to evaluate the environmental footprint of AI technologies and the application of circular economy principles to AI infrastructure. Lastly, it advocates for promoting frugal AI tools by encouraging sustainable practices and raising awareness through specialised conferences and accessible educational content.
The AI Act establishes product-specific regulations for artificial intelligence systems marketed within the EU. It applies to all businesses developing, providing, or deploying AI systems and includes specific provisions for generative AI models, which are particularly relevant as their use continues to expand in applications like content creation and customer service.
Under the AI Act, companies must assess their AI systems’ risk levels and adapt their compliance accordingly:
For generative AI models, the compliance requirements will vary depending on whether the model is open-source or not. The regulation distinguishes between basic and systemic risk models, with “systemic” risk determined by factors such as the scale of AI deployment, the computing power required, and the potential for business applications to impact users and industries.
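For illustration, the sketch below shows a first-pass classification a provider might run over its model portfolio. The 10^25 FLOP training-compute threshold for presuming systemic risk comes from the AI Act (Article 51); the data structure and the treatment of open-source status are simplifying assumptions.

```python
# Minimal sketch, under stated assumptions, of mapping a general-purpose AI
# model to the AI Act's basic/systemic-risk categories.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # AI Act presumption threshold (Article 51)

def classify_gpai_model(training_flops: float, open_source: bool) -> dict:
    systemic = training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
    return {
        "presumed_systemic_risk": systemic,
        "open_source_release": open_source,
        # Open-source models benefit from lighter transparency duties only if
        # they are not classified as posing systemic risk.
        "reduced_obligations": open_source and not systemic,
    }

print(classify_gpai_model(training_flops=3e25, open_source=False))
# -> {'presumed_systemic_risk': True, 'open_source_release': False, 'reduced_obligations': False}
```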
Companies need to prepare for compliance now, as the first obligations began to apply six months after the AI Act’s entry into force, from 2 February 2025.
This requires businesses to:
As AI adoption expands across industries in France, the focus on risk management has led to the emergence of specialised compliance tools, such as Naaia, designed to help organisations seeking to streamline their compliance efforts and navigate AI-related challenges.
Artificial intelligence (AI) is reshaping the global economic and technological landscape, and France is positioning itself as a key player in this transformation. Through strategic investments and a balanced approach to innovation and regulation, the country aims to compete with major global tech powers.
This article explores the current dynamics of AI development in France, highlighting structural advantages and emerging regulatory challenges that companies must navigate.
Fostering Innovation and Ensuring Sovereignty
Since 2017, France has shaped its AI strategy based on the 2018 Villani Report, covering the entire AI value chain from R&D to market deployment. The government has made significant investments to drive innovation, support the adoption of AI across key industries and promote France’s savoir-faire on the global stage.
France’s AI roadmap is structured into three phases, each building on the previous one:
A thriving AI start-up ecosystem
France’s AI start-up scene is booming. As of 2024, over 1,000 AI start-ups are operating in the country, marking a twofold increase since 2021. These start-ups raised a record EUR1.9 billion in 2024, with half already profitable or projected to be within three years. Notable players like Mistral AI (EUR1.2 billion) and Poolside (EUR526 million) are leading the charge.
Corporate and institutional engagement
In addition to start-ups, large corporations and institutional initiatives are driving AI innovation in France. Major French companies such as Thales, CMA-CGM, and Iliad are integrating AI into their business models, while also establishing AI-focused research initiatives like Kyutai, a private lab dedicated to advancing foundational AI models and AI ethics.
France: Europe’s leading AI destination
For the fifth consecutive year, France has been ranked as Europe’s top destination for foreign AI investments, according to the EY Barometer. This recognition reflects a combination of factors, including a pro-business environment, world-class academic institutions, and a strong regulatory framework for trustworthy and ethical AI development.
France’s Competitive Edge: Energy Sovereignty in the AI Era
As AI evolves, so does its demand for energy: data centres, essential for AI infrastructure, are consuming electricity at an accelerating rate. MIT research shows that global data centre energy consumption doubled in one year and could exceed 1,050 TWh by 2026, equivalent to France’s total national consumption. A single query on systems like ChatGPT requires five times the energy of a standard web search.
The International Energy Agency (IEA) recently conducted its first study on AI’s impact on the energy sector, and the Shift Project confirmed these findings, warning that the energy demand of data centres is rising sharply as AI grows. Without significant changes, energy consumption could triple every seven years.
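Purely as a back-of-envelope illustration of what these figures imply, the snippet below derives the compound annual growth rate corresponding to a tripling every seven years and projects forward from the 1,050 TWh estimate quoted above; the projection years are arbitrary choices, and the output is arithmetic on the cited figures, not a forecast.

```python
# Back-of-envelope projection using only the figures cited in the text:
# ~1,050 TWh of data centre consumption by 2026 and a possible tripling
# every seven years. The chosen projection years are purely illustrative.

BASE_YEAR = 2026
BASE_TWH = 1_050               # projected global data centre consumption (TWh)
TRIPLING_PERIOD_YEARS = 7

# Tripling every 7 years implies a compound annual growth rate of
# 3 ** (1/7) - 1, i.e. roughly 17% per year.
annual_growth = 3 ** (1 / TRIPLING_PERIOD_YEARS) - 1

for year in (2028, 2030, 2033):
    projected = BASE_TWH * (1 + annual_growth) ** (year - BASE_YEAR)
    print(f"{year}: ~{projected:,.0f} TWh (implied annual growth {annual_growth:.1%})")
```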
In this context, energy availability is just as strategic as computing power for AI sovereignty, and France is well-positioned to leverage its strong data centre infrastructure and abundant, low-carbon, cost-effective energy to capitalise on this opportunity.
In 2024, France’s electricity mix was 95% low-carbon, combining nuclear (63%) and renewables (32%). This diverse, stable, and affordable supply gives the country a distinct advantage in powering AI infrastructure sustainably. At just 21.3 g CO₂/kWh, France boasts one of the lowest carbon intensities worldwide, and it is also Europe’s largest net exporter of electricity (89 TWh in 2024), a valuable and still largely untapped resource.
The rapid rise of China’s DeepSeek and the ongoing competition with US tech giants demonstrate that AI dominance is not a foregone conclusion, and Europe must act swiftly to secure a competitive edge. Clean, affordable energy and technological expertise will define global winners, and France is poised to play a central role in Europe’s digital sovereignty.
At the Paris AI Summit in February 2025, EDF took a bold step towards solidifying France’s position by launching a call for expressions of interest for new AI-dedicated data centres. Under this initiative, EDF will provide pre-selected industrial sites with grid connections, removing a key barrier to expansion. Four sites, totalling 2 GW (equivalent to 1.2 EPR reactors), have already been identified, with two more planned by 2026. By leveraging its energy dominance, France is not just competing in AI; it is securing a leadership position in Europe’s digital and energy future.
In addition, the French government has sealed a major agreement with UK-based Fluidstack, a leader in cloud GPU technology, to build one of the world’s largest decarbonised supercomputers. Backed by a EUR10 billion investment, the facility will draw on France’s abundant nuclear energy to reach one gigawatt of capacity by 2028, strengthening France’s reputation as a go-to hub for sustainable AI technology.
AI Regulation: A Shifting Landscape
France has notably increased its AI investments and advocates for a progressive regulatory approach. Over recent years, France has expressed concerns over rigid AI governance, particularly regarding generative AI, favouring a more dynamic framework. Instead of imposing broad restrictions, France supports gradually evolving regulations, focusing specifically on defining “systemic risks” tied to general-purpose AI models.
At the European level, this shift is mirrored by the European Commission’s decision to remove the AI Liability Directive from its 2025 work programme, signalling growing concerns over excessive regulation.
Despite policymakers and industry leaders pushing for lighter regulation to maintain Europe’s competitiveness, civil society groups continue to call for stricter rules, particularly regarding transparency, accountability, and ethics. This tension between innovation and regulation highlights the complexities of governing AI in a rapidly advancing technological landscape.
At the AI Summit in Paris, President Macron launched the EUR150 billion “EU AI Champions” initiative, involving the USA, Canada, the UK, France, and Sweden, to accelerate AI innovation. Macron’s focus shifted from regulation to investment, calling for the simplification of the EU single market. Industry leaders, including OpenAI’s CEO, supported this shift, praising it as a move to foster technological progress.
In parallel, the Declaration on Inclusive and Sustainable AI calls for ethical, transparent, and accessible AI, while promoting innovation, market diversity, and sustainable growth. It also advocates for international co-operation to promote fairness and security in AI development.
2024–2025: A Period of Intense AI Rulemaking
Despite a perceived shift towards a pro-innovation stance, the past two years have witnessed an unprecedented acceleration in AI regulation, with both binding legal frameworks and voluntary initiatives reshaping the compliance landscape.
Hard law developments
Soft law initiatives
Despite Europe’s clear commitment to fostering a favourable environment for AI investment, its emerging regulatory landscape requires careful attention. The AI regulatory framework is dynamic and continuously developing, and organisations must adapt to evolving legal and ethical responsibilities, balancing innovation with compliance.
AI Compliance Challenges Under the AI Act and a Fast-Moving Regulatory Landscape
The AI sector operates at an unprecedented speed, often without pre-existing compliance structures. Unlike traditional industries, where regulatory frameworks evolve at a slower pace, AI faces the challenge of developing compliance structures in real time.
Phased implementation of the AI Act
The AI Act follows a phased timeline for implementation, which means that compliance obligations will gradually come into force. Companies need to structure their compliance programmes to align with this progression, ensuring they meet requirements as they become enforceable.
Key milestones in this timeline include 2 February 2025, when the prohibitions on certain AI practices and the AI literacy obligations became applicable; 2 August 2025, when the rules on general-purpose AI models, governance, and penalties apply; 2 August 2026, when the bulk of the Act, including most high-risk requirements, becomes applicable; and 2 August 2027, when the requirements for high-risk AI embedded in regulated products take effect.
Given this phased approach, organisations must remain proactive, adjusting their strategies as compliance obligations evolve over time.
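A minimal sketch of how a compliance team might track which obligations are already applicable on a given date is shown below; the milestone dates follow the AI Act’s published timetable, while the one-line summaries and the function are simplified, hypothetical conventions rather than legal definitions.

```python
from datetime import date

# Simplified milestone map based on the AI Act's phased timetable; the
# one-line summaries are paraphrases for internal tracking, not legal text.
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Prohibited practices banned; AI literacy duties apply",
    date(2025, 8, 2): "General-purpose AI model obligations and governance rules apply",
    date(2026, 8, 2): "Bulk of the Act applies, incl. most high-risk system requirements",
    date(2027, 8, 2): "Requirements for high-risk AI embedded in regulated products apply",
}


def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone summaries already applicable on the given date."""
    return [text for start, text in sorted(AI_ACT_MILESTONES.items()) if start <= as_of]


if __name__ == "__main__":
    for item in obligations_in_force(date(2025, 9, 1)):
        print("-", item)
```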
Sector-specific challenges
The AI Act, designed and presented as a horizontal regulation applicable to all industries, exerts varying levels of impact depending on the sector. Its risk-based approach brings it closer to a product regulation, and industries in which AI systems are more likely to qualify as high-risk, such as healthcare, financial services, employment, and critical infrastructure, face more stringent compliance requirements than others.
The challenge of high-risk AI and CE marking
A central component of the AI Act is the requirement for high-risk AI systems to obtain CE marking. However, the technical standards required for CE marking are still under development, creating uncertainty for both developers and deployers.
This situation has sparked a battle between ISO standards, primarily driven by US tech firms, and European standards championed by organisations like CEN-CENELEC. The development of these standards will have significant consequences for compliance with the AI Act: depending on which approach prevails, companies may have to meet diverging requirements across markets, and the cost and feasibility of CE marking will largely hinge on the final harmonised standards.
These developments are expected to come to a head as the European Commission pushes forward with its regulatory guidelines ahead of the general-purpose AI obligations taking effect in August 2025.
Overlapping regulations and regulatory fragmentation
The AI Act does not operate in isolation; it intersects with several other regulatory frameworks, creating complex compliance requirements and potential jurisdictional conflicts.
Key overlapping regulations include cybersecurity laws like the NIS2 Directive and the Digital Operational Resilience Act (DORA), both of which impact AI systems, as well as the General Data Protection Regulation (GDPR), which governs the use of personal data in AI models. Other regulations, such as the Digital Services Act (DSA) and the Product Liability Directive (PLD), introduce additional layers of compliance for AI developers and deployers.
Navigating this regulatory fragmentation presents two significant challenges: reconciling overlapping obligations that apply simultaneously to the same AI systems, and anticipating enforcement that is split between several regulators and may diverge across member states.
Navigating AI Act Compliance: How Businesses Can Stay Ahead of the New Regulation
With the AI Act entering into force, companies deploying AI systems must rethink their compliance strategies.
The first key deadline, 2 February 2025, marks the prohibition of certain AI applications, forcing organisations to assess whether any of their systems are affected. But beyond this immediate regulatory milestone, a much broader transformation is underway. Businesses will need to map their AI deployments, anticipate CE marking requirements, manage vendor compliance, and navigate the increasingly complex web of overlapping regulations.
This is not just a legal challenge but an operational one as AI systems are embedded across multiple business functions, from HR to risk management, often without centralised oversight. Understanding where and how AI is used within an organisation will be the foundation of a sustainable compliance strategy.
Here is how businesses can build their AI governance in a structured, forward-looking manner to stay ahead of the curve.
Step 1: Creating an AI inventory and risk classification
One of the greatest challenges in AI governance is visibility. Unlike traditional software, AI models do not always have a clear deployment path. They are often integrated into third-party solutions, used across different departments, or implemented in a decentralised manner without a structured approval process. Without a proper AI inventory, organisations risk overlooking their compliance obligations.
To address this, businesses should conduct a structured mapping exercise, identifying all AI-powered applications in use. This effort should not be confined to the IT or legal teams alone. Cybersecurity professionals, for example, already have expertise in tracking digital assets within enterprise systems and can play a pivotal role in identifying AI models deployed across an organisation. Establishing a cross-functional AI compliance framework – bringing together legal, IT, cybersecurity, and operational teams – will be key to ensuring that AI governance is both thorough and sustainable.
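As a purely illustrative starting point, the sketch below shows what a single entry in such an inventory could look like and how a simple visibility check might be run against it; the field names, example systems, and vendor placeholder are assumptions to be adapted to each organisation’s own asset-management practices.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One entry in a cross-functional AI inventory (illustrative fields only)."""
    name: str
    business_owner: str            # accountable department or role
    vendor: str | None             # None for in-house models
    purpose: str                   # what the system is used for
    processes_personal_data: bool  # triggers GDPR analysis
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    reviewed_by: list[str] = field(default_factory=list)  # legal, IT, cyber, etc.


inventory = [
    AISystemRecord("CV screening tool", "HR", "ExampleVendor", "candidate shortlisting",
                   processes_personal_data=True, risk_tier="high",
                   reviewed_by=["legal", "cybersecurity"]),
    AISystemRecord("Demand forecasting model", "Operations", None, "stock planning",
                   processes_personal_data=False, risk_tier="minimal"),
]

# Simple visibility check: which high-risk entries still lack a legal review?
pending = [r.name for r in inventory if r.risk_tier == "high" and "legal" not in r.reviewed_by]
print("High-risk systems awaiting legal review:", pending or "none")
```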
This step is also an opportunity to move beyond the current fixation on generative AI. Despite AI models like ChatGPT dominating public discussions, they represent only a fraction of the AI ecosystem. The highest-risk AI applications are often found in predictive models, automated decision-making systems, and AI-powered risk management tools. As a result, a company’s compliance approach must extend well beyond generative AI, making sure that all AI-powered automation and decision-making processes are accounted for.
Step 2: Identifying immediate risks and regulatory deadlines
The most urgent priority is ensuring that no AI systems in use fall within the list of explicitly prohibited applications. Although many companies may assume they are not using AI tools that would be banned, this assumption should not be made lightly. Organisations must systematically review their AI portfolio, including any third-party models they rely on, to ensure compliance.
Beyond avoiding outright bans, this is an opportunity to classify AI systems according to risk levels, particularly in anticipation of the CE marking obligations that will apply to high-risk AI starting in 2026 and 2027. Companies should also engage with their AI vendors early, ensuring that suppliers understand the upcoming regulatory requirements and can meet them. This is especially critical given that some AI providers may decide to withdraw certain models from the European market rather than pursue certification, potentially disrupting business operations.
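The fragment below sketches, under similarly illustrative assumptions, how a portfolio review might separate systems to withdraw from those requiring early vendor engagement ahead of CE marking; the system categories and vendor names are hypothetical examples, not findings about any real product.

```python
# Illustrative triage of an AI portfolio against upcoming AI Act deadlines.
# All entries are hypothetical examples.

portfolio = [
    {"name": "Emotion recognition at work", "category": "prohibited", "vendor": "VendorA"},
    {"name": "Credit scoring model",        "category": "high_risk",  "vendor": "VendorB"},
    {"name": "Marketing copy generator",    "category": "limited",    "vendor": "VendorC"},
]

# Systems in a prohibited category must be withdrawn or replaced immediately.
to_withdraw = [s["name"] for s in portfolio if s["category"] == "prohibited"]

# High-risk systems drive early vendor outreach ahead of CE marking in 2026-2027.
vendor_outreach = sorted({s["vendor"] for s in portfolio if s["category"] == "high_risk"})

print("Withdraw or replace:", to_withdraw)
print("Contact vendors about CE marking:", vendor_outreach)
```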
The February 2025 deadline is more than just a compliance hurdle – it should serve as a trigger for organisations to establish a clear governance structure around AI, preparing for the much broader obligations that will follow.
Step 3: Managing CE marking and standardisation
For companies deploying high-risk AI systems, the next major challenge will be CE marking. However, while the AI Act establishes clear obligations, the technical standards underpinning these requirements are still evolving. The certification process will ultimately depend on a set of AI standards that are still being developed, and there is ongoing competition between international regulatory approaches.
The European AI landscape is increasingly shaped by a struggle between ISO-led standards, largely influenced by US tech giants, and EU-specific norms, which favour European industry players. Depending on how this regulatory battle unfolds, AI companies may face additional compliance burdens if they are required to meet different standards for different markets.
Businesses should take a proactive stance by ensuring that their AI vendors commit contractually to CE marking compliance. At the same time, organisations must prepare for the possibility that certain AI models may be discontinued if they do not meet the final certification criteria. Anticipating potential product withdrawals and assessing their impact on business operations will be decisive in the coming years.
Step 4: Addressing overlapping regulations and compliance complexity
The AI Act does not operate in a vacuum. Its compliance requirements intersect with several other major regulatory frameworks, creating a complex compliance landscape that organisations must navigate carefully.
For instance, the AI Act’s security and resilience requirements are closely tied to cybersecurity regulations, such as NIS2 and DORA, both of which impose strict security controls on digital systems. Similarly, AI models that process personal data will fall under GDPR obligations, requiring additional transparency and data protection measures.
The Digital Services Act (DSA) also introduces algorithmic transparency rules, particularly for AI-driven content moderation and recommendation systems. This means that businesses deploying AI-powered user-facing platforms must comply with both the AI Act and the DSA’s requirements on algorithmic accountability.
Regulatory enforcement will also likely be fragmented. In France, for example, it is expected that the CNIL will oversee AI Act enforcement, but sector-specific regulators such as the ACPR for financial services will also play a role in certain cases. This overlap in regulatory responsibilities could lead to jurisdictional conflicts, making it essential for businesses to track how enforcement practices evolve across different EU member states.
Rather than treating these regulatory frameworks as separate compliance silos, organisations should integrate AI governance within their broader risk management and compliance strategies and adopt a cross-regulatory approach.
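By way of illustration only, the snippet below sketches such a cross-regulatory mapping, flagging which of the frameworks mentioned above could be triggered by simple attributes of a deployment; the triggering conditions are heavily simplified assumptions and the function is hypothetical, not a substitute for legal analysis.

```python
def applicable_frameworks(system: dict) -> list[str]:
    """Very rough mapping from system attributes to the EU frameworks
    discussed above. Conditions are deliberately simplified for illustration."""
    frameworks = ["AI Act"]  # working assumption: the system is in scope of the AI Act
    if system.get("processes_personal_data"):
        frameworks.append("GDPR")
    if system.get("operator_is_financial_entity"):
        frameworks.append("DORA")
    if system.get("operator_is_essential_or_important_entity"):
        frameworks.append("NIS2")
    if system.get("user_facing_platform_feature"):
        frameworks.append("DSA")
    return frameworks


example = {
    "name": "AI-driven content recommendation",
    "processes_personal_data": True,
    "user_facing_platform_feature": True,
}
print(example["name"], "->", applicable_frameworks(example))
```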
Step 5: Moving beyond a reactive approach
Too often, regulatory compliance is treated as a last-minute, reactive exercise. But AI governance is not just a regulatory issue – it is a strategic imperative. The companies that successfully navigate the AI Act will be those that embed compliance within their long-term AI strategies, rather than treating it as an external constraint.
This means moving beyond generative AI hype and focusing on the full spectrum of AI applications, ensuring that governance frameworks cover all high-risk AI-driven automation, predictive analytics, and decision-making models. It also requires a shift away from viewing compliance as a standalone initiative – AI governance must be integrated within existing cybersecurity, risk management, and regulatory compliance processes.
Most importantly, organisations should stay adaptable. AI regulations will continue to evolve, and sector-specific AI laws may impose additional requirements in the future. Developing a flexible compliance framework that can adjust to new regulatory developments will be key to ensuring both regulatory resilience and continued AI innovation. To support this, several compliance tools have recently emerged on the market, such as Naaia and other AI-driven platforms, which help businesses navigate the complex and ever-changing regulatory landscape.
11 rue Galilée
75116 Paris
France
+33 01 45 05 80 08
communication@jeantet.fr www.jeantet.fr