In France, there are currently no specific laws that exclusively govern artificial intelligence (AI) or machine learning. Nonetheless, several laws and regulations may apply to AI and its applications in various domains, such as privacy, data protection and intellectual property.
The General Data Protection Regulation (GDPR) applies to the processing of personal data, including by AI and machine learning models that use personal data. The French Data Protection Act (Loi Informatique et Libertés) regulates the processing of personal data in France. The GDPR can be invoked in the event of personal data leaks, granting data subjects a right to seek compensation for damages.
Regarding bodily or material harm caused by faulty AI, liability concepts such as the special liability for defective products, “fault-based liability” and “liability for things” can apply.
Intellectual property as set out in the French Intellectual Property Code can also apply to AI systems, training, and input and output data, to a certain extent.
Since the launch of the National Strategy for Artificial Intelligence (SNIA) in 2018, France has seen a significant rise in the number of unicorns, reaching 29 in 2023, with 16 focusing on AI-driven propositions. Notable companies include Aircall, Alan, Algolia, ContentSquare, Dataiku, EcoVadis, Exotec, Ivalua, ManoMano, Meero, Mirakl, Owkin, Payfit, Qonto, Spendesk and Younited.
France was home to 590 AI start-ups in 2022, up from 502 in 2021, collectively raising over EUR3.2 billion throughout the year, representing an impressive sixfold increase compared to the EUR556 million raised in 2018. Recent additions to the French AI start-up ecosystem include Mistral AI, which was founded in April 2023 and specialises in generative artificial intelligence, quickly becoming a prominent player in the field. Following a substantial funding round of EUR385 million in December 2023, Mistral AI's valuation soared to nearly EUR2 billion, solidifying its position as one of the leading European companies in the AI domain.
These innovative players are driving numerous projects in product and service development based on AI, partly supported by state investment programmes. The total amount of national and regional aid, coupled with Bpifrance's financing mechanisms, reached EUR1.5 billion in 2022 – a tenfold increase in public investment compared to 2018.
From 2017 to 2021, France maintained its 7th global and 3rd European ranking in AI-related scientific conference publications, attracting numerous companies to establish or reinforce their AI laboratories in the country, including Alphabet (Google), Cisco, Criteo, DeepMind, Fujitsu, HPE, IBM, Intel, Meta, Microsoft, NaverLabs, Samsung, SAP and Uber.
According to a May 2023 study by consulting firm BVA, AI adoption is widespread across various French industries, with over 35% of companies with ten or more employees either using AI already or being in the process of implementation. The agriculture sector leads in AI usage at 58%, followed by the industry sector at 50%, with significant adoption rates also seen in the finance and commerce sectors, at 44% and 40% respectively. However, AI use remains relatively low in the construction and personal services sectors, with adoption rates below 30%. Larger companies with at least 200 employees tend to have higher AI adoption rates, at 45%. Decision support systems, natural language processing and robotics are the primary AI application areas in France.
This growing adoption of AI across French industries has spurred an increased focus on risk management, leading to the introduction of the first dedicated risk management tools, such as naaia, in 2022, designed to tackle the hurdles of AI implementation. These solutions lay the foundation for comprehensive AI management frameworks that facilitate responsible and efficient AI utilisation.
France has formulated a national strategy for AI under the France 2030 programme, identifying AI as a key priority for the country's future development. This strategy builds upon the recommendations outlined in the 2018 Villani report, which provided comprehensive analysis and proposed strategies across various AI-related domains, such as research, data governance, talent development and ethics. Incorporated within the France 2030 programme, these recommendations guide the specific actions and initiatives of the French government to foster AI innovation, ensure responsible data usage, nurture AI talent and promote ethical AI practices.
Furthermore, the Commission Nationale de l'Informatique et des Libertés (CNIL) has been actively addressing the challenges posed by AI, expanding its focus to include augmented cameras, generative AI, large language models and chatbots. The CNIL's action plan, published in May 2023, revolves around four key pillars: understanding the functioning of AI systems and their impact on individuals; enabling and supervising the development of privacy-respecting AI; federating and supporting innovative players in the French and European AI ecosystem; and auditing and controlling AI systems to protect individuals.
Efforts to regulate artificial intelligence are primarily being undertaken at the European level rather than at the national level, to harmonise regulations across the European Union and promote a unified approach to governing artificial intelligence.
The main regulatory instruments include the AI Act, the proposed Artificial Intelligence Liability Directive (AILD) and the revised Product Liability Directive (PLD).
No AI-specific law is currently in force in France.
Pursuant to the Law for a Digital Republic of 7 October 2016, certain general principles regarding the rights of individuals subject to individual decisions based on algorithmic processing, akin to transparency obligations, are already in force. This legislation applies to decisions made by the French administration. Any individual must be explicitly informed that an individual decision will be made on the basis of algorithmic processing, and the administration must, upon request, communicate to the individual the rules defining this processing and the main characteristics of its implementation.
These principles are applicable to AI-driven tools, to some extent.
Recommendations and guidelines issued by French governmental bodies mainly aim to provide guidance and best practices for the ethical deployment of AI. In this regard, the 2018 Villani report is the very first public initiative of this kind, serving as a comprehensive roadmap (see 2.2 Involvement of Governments in AI Innovation).
Since the emergence of generative AI in 2022, several governmental bodies have made AI a top priority.
The regulatory landscape surrounding AI in France will largely be shaped by the recently adopted AI Act, set to be fully implemented by 2026, and the national legislation implementing the proposed AILD and PLD. France's AI strategy is aligned with the fundamental principles outlined in the EU proposals, and the government has consistently expressed strong support for European-scale regulation of AI since 2018.
However, in recent months France has shown resistance to overly rigid regulations concerning AI and generative AI and, in an unexpected turn of events, joined with Italy and Germany as dissenting voices amidst the finalisation of the AI Act and internal turmoil within OpenAI in November 2023. These countries opposed the inclusion of foundation models within the legal framework, advocating instead for a gradual evolution of regulations. They emphasised mandatory self-regulation through codes of conduct and focused on regulating AI applications rather than the technology itself. This stance was influenced by pressure from influential AI champions within their borders, such as Mistral AI in France and Aleph Alpha in Germany.
Notably, this position contradicted earlier support for AI regulation expressed by figures like Cédric O, a former French Secretary of State for Digital Affairs and current board member of Mistral AI.
No AI-specific jurisdictional laws are currently in place in France, so there are no flagrant inconsistencies with proposed EU regulations. Like many other EU member states, France is primarily looking to the proposed EU regulations to shape its AI regulatory framework.
This section is not applicable.
To date, French law has not been substantially amended, whether on data protection aspects or on information and content rules, to foster AI technology. Legislative reforms are expected following the entry into force of the AI Act.
However, a law proposal specific to AI and copyright (Law Proposal No 1630, aimed at providing a framework for AI under copyright law) was presented in September 2023, with the objective of amending the French Intellectual Property Code to offer better protection to authors and artists facing the rapid development of generative AI. The proposal seeks to clarify the ownership of IP rights in works generated by an AI system without direct human intervention: such rights would be owned by the “authors or successors in title of the works which made it possible to conceive the said artificial work”. It would also create an obligation to label any work created through an AI system as such, specifying the names of the authors whose works led to the creation of the artificial work. Finally, it proposes to create a taxation system for the benefit of collective management organisations where a work is generated by “AI from works whose origin remains uncertain”; such taxation would be payable by the company exploiting the AI system to generate the work.
This law proposal has been criticised for its lack of pragmatism and the technical difficulties it raises, notably regarding the obligation to identify the works used. In addition, it could contribute to a fragmentation of national legislation, contradicting the AI Act's main objective of applying harmonised regulation across the EU.
In France, AI legislative strategy mirrors broader European goals for global AI leadership. In line with the 2018 Villani report, France prioritises alignment with key EU initiatives such as the AI Act, the PLD and the proposed AILD rather than proposing national-level AI-specific legislation. These initiatives aim to establish a harmonised regulatory framework that is conducive to fostering innovation while addressing potential risks associated with AI implementation.
The EU AI Act was adopted in March 2024 and is scheduled to take full effect by 2026. It categorises AI systems based on risk levels and imposes specific regulatory obligations, with certain provisions – notably those concerning prohibited AI practices – to be enforced as early as the end of 2024. The Act aims to improve transparency, accountability and the ethical use of AI technologies while preventing potential harms such as discrimination, bias and infringement of fundamental rights.
In addition, France aligns its AI strategy with the future implementation of the recently adopted revised PLD and, possibly, the proposed AILD. The AILD addresses fault-based liability for AI-related incidents, proposing mechanisms for compensating damage caused by AI systems and allocating responsibility for faults attributable to AI. The revised PLD, for its part, focuses on product liability issues associated with AI technologies, providing clarity on manufacturers' responsibilities and facilitating recourse for individuals affected by AI-related incidents.
To date, French courts have not had occasion to deal with cases involving AI systems. Several factors may explain this absence of case law: the anticipation of EU legislation and the scarcity of specific national legislation may have meant that no questions of legal interpretation have yet required settlement in court, and potential disputes may have been settled or resolved through alternative dispute resolution mechanisms, which do not generate court rulings.
However, a significant case involving Google and its automated suggestion system, Google Suggest, has been brought before the French Supreme Court. In 2011, a company sued Google when the term “swindler” appeared in the search suggestions associated with its name. Initially, Google was found guilty of “public insults”, as the judges considered that Google was not totally neutral in its data processing and could not hide behind the automatic nature of the process, given the “possibility of human control over the functionality”. The Supreme Court overturned the decision and ruled that Google could not be held liable for the automatic and random process of its suggestion functionality, as it did not have the intention to create or endorse the suggested remarks. This judgment represents a reversal of previous case law, in which Google had been held responsible for the content of its suggestions based on pre-sorting and the potential for subsequent control.
Considering the lack of court rulings pertaining to AI, no AI definition has yet been used by the French courts. However, and going forward, French courts would likely apply the definition of an AI system as set forth by the AI Act, which is based on the OECD definition.
Regulatory agencies in France operate independently from the French government, with the primary objective of overseeing specific sectors and enforcing applicable regulations. These agencies possess a wide range of powers, including the ability to levy sanctions or injunctions against entities found to be non-compliant. They also have the power to interpret regulations, and may issue guidelines in the form of soft-law instruments.
There is no regulatory agency specialising in AI matters, although several regulatory agencies have started to work on AI-driven questions and the impact of AI on their respective domains of expertise. For example, the Defender of Rights has been working on algorithmic bias, especially within the HR sphere, and in February 2024 the French Competition Authority began a national public consultation regarding the French market for generative AI tools.
The CNIL may play a key role in the coming years. Pursuant to the AI Act (latest version dated 13 March 2024), each member state shall designate a market surveillance authority for the purpose of supervising the application of the AI Act at the national level. The CNIL is being considered for this role in France. It has already issued significant guidance on AI and has announced the creation of a department specialised in AI within its services. Its functions are expected to go beyond data protection matters with regard to AI, as its mission would be to supervise the application of the AI Act. The French Council of State and the Assemblée Nationale have already communicated in favour of the CNIL's appointment as national supervisor for France.
No precise definition of AI has yet been applied by regulatory agencies, which are expected to apply the definitions set by the AI Act.
There is currently no dedicated authority tasked with addressing issues related specifically to AI; French regulatory agencies address AI within the context of their respective fields of expertise. The CNIL, for example, aims to facilitate the advancement of AI technology while ensuring a robust framework for safeguarding personal data and data subjects' privacy; to that end, it has issued guidelines dedicated to professionals in the AI sector to help ensure their compliance with the GDPR.
There are no enforcement actions by regulatory agencies as such.
Nonetheless, a French Competition Authority decision dated 20 March 2024 notably fined Google EUR250 million for failing to comply with commitments related to press publishers' neighbouring rights. The decision cited Google's AI system, Bard, and criticised Google for not providing a technical solution allowing publishers and press agencies to opt out of Bard's use of their content while keeping their protected content displayed in search results. This was considered an unfair practice, hindering negotiation efforts for fair remuneration with right holders.
Most of the norms and standards in France are predicated upon international or European standards, such as ISO, IEC, CEN and CENELEC standards. Within France, the Association Française de Normalisation (AFNOR) is the national delegation tasked with representing French interests in the formulation of international and European standards.
For more information, see 6.2 International Standard-Setting Bodies.
Under the AI Act, certification bodies are entrusted with certifying high-risk AI systems before they are placed on the European market.
AFNOR, in collaboration with other national EU delegations, has undertaken a stakeholder consultation process, engaging with start-ups to draft “operational” certification standards adapted to the realm of AI. To date, several international standards have been promulgated, including ISO/IEC 42001 on AI Management Systems and ISO/IEC 23894, which provides recommendations for AI risk management that may be applied by various types of industries and businesses for the conception and deployment of AI systems.
Governmental authorities in France have extensively incorporated AI into various sectors. For instance, agencies like the Directorate General of Public Finances (DGFiP) have implemented AI projects such as the CFVR initiative to enhance tax control operations, exemplifying the government's commitment to leveraging AI for administrative efficiency.
However, the use of AI technologies by law enforcement agencies in France has sparked debate, particularly concerning the utilisation of Briefcam software. This Israeli technology, which incorporates facial recognition capabilities, has reportedly been employed by French police forces for surveillance purposes for the past eight years, without proper declaration or oversight. The Ministry of the Interior has announced an administrative investigation into the matter, while the CNIL has initiated a control procedure to assess the extent of facial recognition usage by law enforcement.
Furthermore, the upcoming legislation for the Olympics (see 7.3 National Security) has been met with controversy among organisations like Amnesty International, due to its provisions involving the use of AI-assisted cameras to identify abnormal situations in certain events.
Considering these developments, the impending implementation of the AI Act will introduce strict regulations on certain AI applications to protect citizens' rights. Applications such as the untargeted extraction of facial images for facial recognition databases will be prohibited. Real-time biometric identification systems will only be deployable under strict conditions, including temporal and geographical limitations, and will require judicial or administrative authorisation, primarily for locating missing persons or preventing terrorist attacks.
Given the lack of judicial decisions pertaining to AI, there are no known pending actions related to government use of AI.
AI represents a critical technology for France's defence sector, with a wide range of applications such as autonomous navigation, planning, decision support and the analysis of massive datasets. The French Ministry of Armed Forces has been investing in the field since 2019, establishing close ties with the French scientific community specialising in AI. For example, the Ministry provides financial support for the development of innovative AI projects, with the aim of preserving France's sovereignty in this strategic domain.
As part of the military programming law for the years 2024 to 2030, the French government has announced substantial investments, including EUR10 billion in AI and algorithm development. The objective is to equip the French army with autonomous data processing capabilities, enabling it to make strategic and tactical decisions more rapidly and with increased precision.
Although the use of AI in defence and national security is not governed by the provisions of the AI Act, France has nevertheless chosen to apply certain ethical principles in this field. For instance, in 2020 the French Ministry of Armed Forces established a committee tasked with addressing ethical issues related to the use of AI in the defence sector.
Finally, the French government is not reluctant to experiment with AI-enabled surveillance devices. In connection with the Olympic Games in Paris, and on an experimental basis until 31 March 2025, the use of intelligent cameras relying on algorithmic analysis (expressly excluding facial recognition) is authorised “solely for the purpose of ensuring the security of sports, recreational, or cultural events that, due to their scale of attendance or circumstances, are particularly exposed to risks of terrorist acts or serious harm to individuals” (Law of 19 May 2023, concerning the Olympic and Paralympic Games of 2024).
Several emerging issues relating to generative AI have been raised, notably concerning IP rights (see 8.2 IP and Generative AI), data protection (see 8.3 Data Protection and Generative AI), image rights and the proliferation of fake news.
Image rights in France are a component of the right to privacy and cover all elements of personality, such as a person's physical appearance and voice. A deepfake generated without the individual's prior consent would typically violate that individual's image rights, a violation sanctioned under French criminal law.
Considering the proliferation of deepfakes, particularly of pornographic and child pornographic content, the French government has proposed to adapt its punitive arsenal by creating a specific criminal offence sanctioning the non-consensual dissemination of sexually explicit deepfakes with two years' imprisonment and a EUR60,000 fine. This text is under debate at the Assemblée Nationale.
Alongside the infringement of individuals' image rights, generative AI also contributes to the proliferation of fake news and online scams. France has a specific piece of legislation designed to prevent the proliferation of fake news and disinformation; however, this text applies only to electoral campaigns and does not contemplate the impact of political deepfakes beyond that scope.
Other legal grounds could be used to fight the proliferation of fake news through AI-generated content, such as copyright, image rights, privacy laws or the EU Digital Services Act and, going forward, the AI Act. Nonetheless, the current regulatory landscape presents certain gaps, notably with regard to the identification of perpetrators acting anonymously online.
IP Protection
Protection of input/training data
Among the enforceable rights, the original creations used to train AI may be protected by copyright. Neighbouring rights may also be involved, including performers' rights, phonogram producers' rights and press publishers' rights. The sui generis right of the database producer may also apply if the training data has been extracted from a database whose creation, verification or presentation of content attests to a substantial investment.
Protection of output data
Output data may be protected by copyright if the AI has only been used as an aid in the creative process and there is an element of human creation that goes beyond mere instructions given to the AI. If the creation is fully generated by AI, it should not be protected by copyright under French law (see 15.1 Applicability of Trade Secrecy and Similar Protection).
AI Tool Providers' Terms and Conditions
An AI tool provider may contractually determine asset protection with respect to the input and output data of the generative AI tool through their terms and conditions of use.
For instance, OpenAI states in its terms and conditions of use that, to the extent permitted by applicable law, the user retains ownership of the input and output data. More generally, AI tool providers incorporate good practices into their T&Cs, stipulating either that nothing shall be deemed to grant a right to use the data as training data for the purpose of testing or improving AI technologies or, conversely, that all necessary rights have been obtained to use the training data for such purposes.
IP Infringement
Regarding input data, infringement of various IP rights may be claimed. Training an AI system by reproducing content protected by copyright or neighbouring rights without prior authorisation is likely to be infringing. Similarly, the extraction or re-use of a substantial part of a database by an AI system may constitute an infringement of the database sui generis right. Finally, the data may have been collected unlawfully, in breach of a confidentiality clause or through unauthorised access to trade secrets.
In France and Europe, the risk is partly limited by a legal framework favourable to AI. Copyright and the sui generis right of the database producer can be neutralised, at least in part, by the “text and data mining” exception (Article L.122-5-3 of the French Intellectual Property Code). This exception would apply to most AI systems, since text and data mining is defined as “the implementation of a technique for the automated analysis of text and data in digital form in order to extract information, in particular patterns, trends and correlations” – a key stage in machine learning based on training data. However, right holders can opt out and refuse to allow their content to be used for text and data mining, and therefore oppose its use as training data for AI. It should be noted that collective management organisations such as the SACEM have already opted out on behalf of their members; entities training AI tools on the SACEM's repertoire should therefore request its prior authorisation. The opt-out may, however, be exercised by “any means”, leaving AI publishers in a state of great technical uncertainty as to how to implement right holders' opt-outs.
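By way of illustration only, one machine-readable channel through which such an opt-out is commonly expressed is a site's robots.txt file, which a crawler assembling training data can consult before collecting content. The following minimal Python sketch assumes a hypothetical crawler user agent ("ExampleTrainingBot"); it is not a complete compliance mechanism, precisely because the opt-out may be expressed "by any means":

```python
from urllib import robotparser
from urllib.parse import urlsplit

CRAWLER_UA = "ExampleTrainingBot"  # hypothetical crawler user agent

def may_collect(url: str) -> bool:
    """Check the target site's robots.txt before using a page as training data.

    robots.txt is only one possible opt-out channel: under Article
    L.122-5-3 of the French IP Code, the reservation may be expressed
    "by any means", so this check alone cannot rule out an opt-out.
    """
    parts = urlsplit(url)
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file
    return parser.can_fetch(CRAWLER_UA, url)

print(may_collect("https://example.com/articles/some-work.html"))
```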
Concerning output data, AI-generated content can itself reproduce the works that were used to train the system, which may then be identifiable in the generated content, thereby infringing IP rights.
In France, data protection laws – primarily the GDPR, supplemented at national level by the French Data Protection Act – address the rights of data subjects concerning generative AI and data processing activities.
Regarding the exercise of data subject rights, providers or deployers of AI systems, acting as data controllers under the GDPR, must be able to respond to data subjects’ requests for the rectification or deletion of data. Data deletion in response to such requests should not entail the removal of the entire AI model but simply the removal of the relevant data from the training dataset. This necessitates the provider being technically capable of identifying the relevant data and metadata relating to the data subject.
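As a purely illustrative sketch of erasure at the training-dataset level – assuming a hypothetical tabular dataset keyed by a "subject_id" column – the removal might look as follows; in practice, reliably identifying the relevant data and metadata is the difficult part:

```python
import pandas as pd

def erase_subject(training_df: pd.DataFrame, subject_id: str) -> pd.DataFrame:
    """Remove every record attributable to one data subject from the training set.

    The AI model itself is not deleted; the filtered dataset is used for
    subsequent (re)training. Real pipelines must also purge copies,
    backups and derived artefacts holding the same data.
    """
    mask = training_df["subject_id"] != subject_id
    return training_df.loc[mask].reset_index(drop=True)

# Example usage with a toy dataset.
df = pd.DataFrame({"subject_id": ["a1", "b2", "a1"], "text": ["x", "y", "z"]})
df = erase_subject(df, "a1")  # only subject "b2"'s record remains
```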
Moreover, the principle of minimisation should be applied from the conception stage of the AI system. The establishment of a training dataset must take into account constraints related to the protection of personal data. The CNIL published several thematic fact sheets on this matter in October 2023 (updated in March 2024 following a public consultation) regarding the conception and deployment of AI systems involving the processing of personal data. It is recommended to ensure that the collected data is relevant to the intended purpose, and the CNIL recommends implementing technical mechanisms for cleaning the database so as to establish a high-quality training dataset that respects data subjects' privacy.
In the case of AI systems with continuous learning, the CNIL recommends paying particular attention to the anticipation of potential data drifts. Data controllers are encouraged to establish precise monitoring of the data used to ensure its relevance and compliance with basic GDPR principles.
Use Cases
In France, the use of AI in the legal sector is booming, but the offer of reliable AI-based solutions in the legal field remains limited, as the training of AI models based on dedicated legal databases is in its infancy.
Nevertheless, there are many use cases for AI in the legal sector, such as natural language legal research (eg, Doctrine and Lexis360), contract analysis (eg, Della AI), predictive analysis identifying potential outcomes based on precedents (eg, Case Law Analytics or Predictice) and e-discovery platforms used to analyse vast volumes of documents, including contracts, during legal due diligence (eg, Relativity).
In early 2023, an international firm operating in France announced a partnership with Harvey, an AI platform specifically designed to provide legal services, based on the latest OpenAI model.
Ethical Concerns
In a recent survey carried out by LexisNexis, 85% of the legal professionals surveyed expressed ethical concerns regarding the use of AI. These concerns are directly linked to the lack of reliability of AI systems when responding to legal questions. French lawyers, as part of their ethical obligations, must have the expertise required to provide informed advice to clients; a lack of competence may trigger the lawyer's professional liability. In this context, the use of AI in the provision of legal advice must remain supervised by the lawyer, as a legal professional cannot delegate their work to an AI system given the risk of errors.
In the absence of specific regulation, there is no inherent barrier preventing the application of ordinary civil liability regimes (droit commun) to address liability issues related to damages caused by AI systems in France. These mechanisms are rooted in the principles of fault, causality and harm, and form the backbone of legal recourse for addressing AI-related damages.
Current legal frameworks already provide some mechanisms to address liability for damages caused by AI systems. For example, training generative AI systems using data collected from the internet may potentially violate intellectual property rights, generate defamatory or disparaging content, or infringe upon the protection of personal data. It is also worth noting that French law prohibits clauses limiting liability for bodily injury – a consideration particularly relevant given the potential physical risks associated with AI technologies.
However, the complexity of the technology and the involvement of multiple stakeholders introduce a particular difficulty: accurately identifying the responsible party and establishing a clear causal link between the fault and the harm are particularly complex tasks. This underscores the importance of the directives proposed by the European Commission, which acknowledge these challenges and seek to address them (see 10.2 Regulatory).
Furthermore, it is reasonable to anticipate that, without greater clarity on exposure to risks, insuring against AI-related risks will likely be challenging, akin to the emerging cyber-risk insurance market. As with cyber-insurance, which has seen exclusions for specific risks in policies, navigating the complexities of AI insurance will necessitate careful consideration of coverage limitations and exclusions.
Although AI-related liability issues have not been fully addressed through legislation, there are ongoing efforts at the European level to address them, namely through the AILD and the PLD.
AILD
The AILD aims to make it easier to hold the tortfeasor liable for damages caused by AI by applying a reversed burden of proof in situations where it is difficult for the injured party to prove a causal link. It also empowers courts to order pre-trial discovery on relevant evidence when a high-risk AI system is suspected of causing damage. However, the AILD has not yet been adopted.
PLD
The PLD modifies the existing product liability rules and was adopted by the European Parliament on 13 March 2024. It includes AI systems within the scope of “products” and eases the burden of proof by removing the EUR500 threshold and introducing discovery mechanisms and presumptions. It also extends compensable damages to include the loss of data not used for professional purposes. The PLD requires that there always be a business based in the EU to assume liability for damages caused by defective products, even for online purchases made outside the EU.
The increasing use of algorithms and machine learning in decision-making processes raises concerns about algorithmic bias, as highlighted by institutions like the CNIL and the Defender of Rights. Despite the absence of specific regulations governing algorithms in France, concerns persist, particularly regarding their impact on online platforms and the gig economy, as noted in the Senate's report on the “Uberisation of society”.
Regarding administrative decision-making, Article L.311-3-1 of the Code of Relations between the Public and the Administration (CRPA) requires that individual decisions based on algorithmic processing must be explicitly disclosed to the individual concerned. The rules defining this processing and its main characteristics must be communicated by the administration to the individual upon request, as stipulated by the Law for a Digital Republic of 7 October 2016.
Furthermore, a law proposal of 6 December 2023 aims to address discrimination through individual and statistical testing practices, and proposes the creation of a committee of stakeholders to conduct prospective studies on the risks associated with AI-based algorithms, with a view to ensuring the fair and non-discriminatory use of algorithms.
In addition, companies and professional associations such as Confiance.ai and Positive.ai have developed ethical guidelines for AI use. For example, Confiance.ai has developed AI ethics frameworks, technical norms and best practices to ensure AI systems are transparent, trustworthy and bias-free.
In France, data protection and privacy are governed by the GDPR and the French Data Protection Act, which apply at all stages of an AI system's life cycle, from conception to deployment.
Conception
Generative AI and deep learning models require substantial training data, often containing personal data that may trigger GDPR application, particularly when suppliers use web-scraping methods to access publicly available data for training.
Suppliers must assess if personal data processing is required for AI system operation and, if so, ensure compliance with GDPR principles such as transparency, lawfulness, minimisation and determination of the adequate data retention periods. It is also recommended to conduct a data protection impact assessment if the AI system entails a high risk for the individuals concerned (eg, processing of sensitive data such as health data).
The constitution of the training database must always comply with the basic principles of the GDPR. Even if the AI supplier is not responsible for the collection of personal data, it must verify that the personal data used has not been collected unlawfully. Data subjects must also be informed of the processing of their data for the purpose of constituting a training database and must be able to exercise their rights.
Deployment
The supplier, acting as data controller, must ensure lawful data processing throughout the deployment of an AI system. The CNIL advises implementing data cleansing to rectify errors, identifying personal data upstream for AI system optimisation, and considering anonymisation or pseudonymisation to mitigate confidentiality risks. Regular audits should also be considered, to detect discriminatory biases or errors.
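By way of illustration only (a sketch, not a CNIL-endorsed implementation), keyed pseudonymisation of direct identifiers can be performed with an HMAC, keeping records linkable for training purposes while reserving re-identification to whoever holds the secret key:

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"  # hypothetical key management

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (eg, an email address) with a keyed pseudonym.

    The output is deterministic, so the same person maps to the same
    pseudonym across records. Note that this is pseudonymisation, not
    anonymisation: the data remains personal data under the GDPR.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example usage on a single record.
record = {"email": "jean.dupont@example.com", "purchase": 42.0}
record["email"] = pseudonymise(record["email"])
```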
Data Security
AI system suppliers must implement GDPR-compliant security measures, including securing data collection with encryption and robust authentication methods. The CNIL also emphasises the encryption of backups and the monitoring of access logs in order to trace data duplications.
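As a minimal sketch of one such measure – authenticated symmetric encryption of a backup before storage, here using the widely available cryptography library purely by way of example – key generation and storage are assumed to be handled by a separate key management system:

```python
from cryptography.fernet import Fernet

# In production the key would come from a key management system, never from code.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_backup(plaintext: bytes) -> bytes:
    """Encrypt a serialised backup blob before writing it to storage."""
    return fernet.encrypt(plaintext)

def decrypt_backup(token: bytes) -> bytes:
    """Restore a backup blob; raises InvalidToken if the ciphertext was altered."""
    return fernet.decrypt(token)

backup = b"serialised training dataset snapshot"
assert decrypt_backup(encrypt_backup(backup)) == backup
```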
In France, the utilisation of facial recognition and biometric information has predominantly sparked legal concerns within law enforcement circles (see 7.1 Government Use of AI and 7.3 National Security).
The imminent implementation of the AI Act in France will introduce, by the end of 2024, explicit prohibitions on specific AI applications, including those incorporating facial recognition technology. The AI Act prohibits applications deemed detrimental to citizens' rights, such as biometric categorisation systems using sensitive attributes and the indiscriminate extraction of facial images for facial recognition databases. It imposes restrictions on emotional recognition in workplaces and educational institutions, predictive policing reliant solely on individual profiling, and systems manipulating human behaviour or exploiting vulnerabilities.
However, exemptions are provided for law enforcement entities, permitting limited deployment of biometric identification systems under stringent conditions, primarily for locating missing persons or thwarting terrorist activities. These biometric systems necessitate judicial or administrative authorisation, subject to rigorous temporal and geographical constraints. Furthermore, deploying such systems for retrospective identification is deemed high-risk, mandating judicial authorisation tied to criminal offences.
Automated decision-making technology in France falls under various regulations and legal frameworks. Article 22 of the GDPR safeguards individuals against adverse impacts of fully automated decision-making, ensuring transparency and the right to challenge decisions based on personal data.
In specific sectors, particularly within governmental and regulatory domains, automated decision-making processes are employed for tasks such as tax computations, all without human intervention, but they must adhere to stringent legal standards to ensure transparency, fairness and accountability.
Consumer protection laws, exemplified by Article L.221-5 of the French Consumer Code, require companies to disclose the use of automated decision-making for personalised pricing, ensuring transparency in pricing strategies.
Entities using such technology are overseen by the CNIL for regulatory compliance, with non-compliance being subject to fines and penalties. In addition, discriminatory outcomes from automated decision-making can lead to criminal charges under French law, emphasising the importance of fairness and rights protection in automated systems for companies.
Use of Chatbots
Advancements in AI, particularly in generative and predictive AI, pose challenges for individuals in discerning their interaction with AI, notably evident in the widespread use of chatbots. These conversational agents deployed by various entities, whether private or public, offer round-the-clock assistance to users. The imperative for transparency and usage clarity, emphasised in the 2018 Villani report and international principles like those of the G20, extends naturally to chatbots and is codified in the AI Act.
In addition, compliance with data protection regulations mandates that individuals be informed of the processing of their personal data – a requirement pertinent to chatbots given their involvement in data retention and processing, which necessitates their inclusion in privacy policies and the provision of distinct terms of use.
Ethical Concerns
In France, the National Consultative Ethics Committee (CCNE) has examined the ethical implications of conversational agents, stressing the risk of users anthropomorphising chatbots and highlighting the need for transparency. This concern correlates with Article 52 of the AI Act, which mandates that chatbot providers disclose their AI nature to users, except for chatbots dedicated to crime prevention.
The CCNE also advocates for ethically designed chatbots with traceable responses, ensuring compliance with data protection laws. Thus, transparency and user awareness are paramount in chatbot usage to mitigate privacy breaches and manipulation risks.
Nudge
Regarding the potential to manipulate consumers, it should be noted that the AI Act classifies the use of conversational agents to influence individual behaviour, particularly through nudging techniques, as a prohibited AI practice. Nudging techniques may also qualify as misleading commercial practices, which are sanctioned under French consumer law insofar as they contribute to the concealment of material information from consumers.
The surge of e-commerce platforms has triggered the widespread adoption of dynamic pricing algorithms, which facilitate real-time adjustments based on demand variations, often leveraging AI technology. Personalised pricing strategies, shaped by individual consumer data, are also gaining traction.
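To make the mechanism concrete, the following is a deliberately simplified, hypothetical sketch of a demand-responsive surge multiplier – not any platform's actual algorithm, which would typically add forecasting, geography and smoothing:

```python
def surge_multiplier(open_requests: int, available_drivers: int,
                     base_rate: float = 1.0, cap: float = 3.0) -> float:
    """Scale the fare with the demand/supply ratio, capped to limit extreme surges."""
    if available_drivers == 0:
        return cap  # no supply: charge the maximum allowed multiplier
    ratio = open_requests / available_drivers
    return min(cap, max(base_rate, base_rate * ratio))

# Example: 30 open requests for 12 available drivers -> a 2.5x multiplier.
print(surge_multiplier(30, 12))
```

As noted below, French consumer law requires companies to disclose the use of such AI-driven pricing mechanisms to consumers.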
While AI aids businesses in forecasting market trends and consumer behaviour, concerns arise regarding potential monopolistic practices when companies control these algorithms, dictating prices to their advantage within sectors.
Furthermore, collusion risks among companies employing pricing algorithms may distort competition and negatively influence prices. The French Competition Authority has been examining algorithm usage, particularly dynamic pricing algorithms, using existing frameworks under Article 101 of the Treaty on the Functioning of the European Union (TFEU) to regulate the use of AI in price determination and anti-competitive practices. However, competition authorities must enhance their understanding of algorithmic operations to better assess instances of abuse of dominant position or collusion.
To that end, in February 2024 the French Competition Authority launched a public consultation on generative AI to evaluate the competitive landscape of this market in France, focusing on data access arrangements (such as exclusivity agreements) and recent investments by major digital players in AI-specialised start-ups.
As the future implementation of the AI Act imposes specific obligations on actors within the AI supply chain, including suppliers, users and importers, transactional contracts between customers and AI suppliers need to clearly define roles and responsibilities to address new and unique risks associated with AI technology. This includes delineating obligations related to data privacy, security, accountability and compliance with regulatory requirements. Businesses must ensure that contractual agreements reflect these considerations to mitigate potential legal and operational challenges arising from AI deployment.
With regards to recruitment procedures, AI brings several benefits, including time savings and enhanced efficiency in application analysis. By automating certain low-value tasks, AI enables recruiters to focus on interacting with candidates. Moreover, the adoption of AI-based technologies holds the potential to assist companies in achieving diversity, equality and inclusion objectives by facilitating the hiring of individuals from minority backgrounds. However, it will be up to the company to ensure that such tools do not lead to discriminatory biases or the standardisation of applications.
Dismissal procedures are governed by strict regulations outlined in the French Labour Code. For instance, an economic dismissal necessitates a prior interview in the presence of an employer representative, thereby preventing the complete delegation of dismissal proceedings to AI systems.
Under the AI Act, tools such as CV analysers or decision-making aids for promotions are categorised as high-risk AI applications (see Annex III of the AI Act). Given their widespread use across various sectors, companies employing such tools must adhere to specific obligations outlined in Article 29 of the AI Act.
In French labour law, the use of tools to evaluate employees or monitor performance is strictly regulated, and the employer must satisfy certain conditions before implementing a monitoring tool within the company. In particular, the use of such a tool must not result in permanent surveillance of the employee unless this is justified by the nature of the task to be performed, and the tool's use must be proportionate to the intended purpose.
The introduction of a monitoring tool also requires prior consultation with the employee representative bodies, where the company has such bodies, as such a tool would affect working conditions.
Monitoring tools incorporating AI are not exempt from these conditions: before implementation, employers must ensure that such uses respect employees' privacy and do not create discriminatory biases against them. In addition, employers must ensure that monitoring tools respect GDPR principles regarding transparency and the overall lawfulness of the data processing.
Pursuant to the French Labour Code, employers have an obligation to ensure the physical and mental safety of their employees. Therefore, the use of AI-powered monitoring and performance evaluation tools must be contemplated with the utmost care, since it could create undue stress on employees, impacting their mental health and triggering the employer's responsibility.
For digital platform companies like car services and food delivery, the use of AI has become commonplace, particularly in pricing strategies such as surge pricing, which is based on demand-supply dynamics and is a prime example of AI's impact on pricing tactics. However, concerns about fairness and transparency have surfaced alongside these innovations.
From a regulatory perspective, laws like Article L.221-5 of the French Consumer Code mandate companies to disclose their use of AI-driven pricing mechanisms, like surge pricing, to consumers. This transparency requirement aims to empower consumers to make informed decisions about their purchases.
The financial services sector is undergoing a significant transformation propelled by the widespread adoption of AI, which is reshaping traditional models and priorities. AI enables institutions to shift certain operations from cost centres to profit centres by optimising tasks through AI-driven automation, leading to cost savings and revenue generation opportunities. Moreover, AI serves as a new differentiator for financial institutions, enhancing customer experiences and market differentiation through AI-powered chatbots and personalised recommendation systems.
Collaborative problem-solving facilitated by AI is also increasing, with financial institutions pooling data resources to develop collective solutions that enhance efficiency, security and performance, particularly in areas like fraud detection and risk management. However, the adoption of AI in financial services brings risks, notably concerning data regulations and biases in repurposed data. Financial services companies must be vigilant in identifying and mitigating these biases to avoid perpetuating discriminatory practices in decision-making processes.
In France, the Prudential Supervisory Authority (ACPR) has recognised the digital revolution within the banking and insurance sectors, establishing a task force to discuss AI implementation projects, associated opportunities, risks and regulatory challenges. This initiative aims to provide a preliminary assessment and gather feedback on key areas for secure technology development.
In France, the integration of AI into healthcare systems presents both opportunities and challenges. Regulatory bodies such as the French National Agency for the Safety of Medicines and Health Products (ANSM) and the French National Authority for Health (HAS) provide guidelines for the use of AI in software as a medical device (SaMD) and related technologies. These regulations address concerns such as the potential treatment risks associated with AI, including hidden bias in training data that can lead to algorithmic biases affecting patient care outcomes. Compliance with strict data protection laws, such as the GDPR, is essential to safeguard patient privacy when using personal health information to train machine learning algorithms.
While AI-powered medical decision support systems (MDSS) offer the promise of improving diagnostic accuracy and treatment selection, concerns about potential risks, including diagnostic errors and breaches of patient privacy, highlight the need for robust regulatory oversight. The liability landscape surrounding MDSS use encompasses fault-based liability, such as diagnostic errors attributed to healthcare professionals, and product defect liability, which may arise from software malfunctions. To address these concerns, rigorous testing, validation and ongoing monitoring of AI systems are essential to ensure compliance with regulations such as the GDPR, the Medical Device Regulation (MDR) and forthcoming legislation like the AI Act, PLD and AILD.
Moreover, the obligation to inform patients about the use of MDSS underscores the importance of transparency and patient autonomy, although questions persist regarding the extent and timing of this obligation.
Despite these challenges, the PLD and AILD aim to streamline the burden of proof in AI-related liability cases, indicating efforts to adapt regulatory frameworks to the evolving landscape of healthcare AI in France.
In France, liability issues arising from the integration of AI into autonomous vehicles are guided primarily by the Law of 5 July 1985, commonly known as the “Badinter Law”. This legislation addresses civil liability in road accidents involving motor vehicles and serves as the cornerstone for determining responsibility and compensating victims. Although there have not yet been any specific cases involving automated vehicles, this law would theoretically apply even in such scenarios, as its application stems primarily from the involvement of a motor vehicle, regardless of its automated nature.
Furthermore, recent regulatory advancements have addressed the criminal liability of the manufacturers of autonomous vehicles in the event of accidents. Order No 2021-443, issued on 14 April 2021, delineates the framework for criminal liability concerning autonomous vehicle manufacturers, particularly in cases where accidents occur while the vehicle operates in automatic mode. This regulation clarifies that manufacturers can be held criminally liable for accidents that happen during automatic operation, underscoring efforts to define legal responsibilities within the realm of autonomous mobility services.
In instances of defective autonomous vehicles, manufacturers' liability may be pursued under the PLD, especially in its updated version adopted in March 2024. This directive provides a mechanism for holding manufacturers accountable for defects in their products, including autonomous vehicles, thereby ensuring that consumer protection and safety standards are upheld.
The integration of AI into manufacturing processes in France presents a multifaceted landscape of opportunities and challenges, deeply intertwined with the country's relocalisation efforts. These initiatives, driven by escalating production costs and a growing technological dependence on China, seek to address the challenges posed by globalisation and strengthen the nation's economic resilience. Government-backed incentives such as the France 2030 plan support these endeavours, emphasising the role of AI in modernising production methods and enhancing industrial competitiveness.
However, AI adoption in manufacturing raises concerns about workforce displacement due to automation. While AI aligns with the strategic objectives of the French government by offering avenues to bolster economic sovereignty, especially for critical products, it simultaneously poses challenges in terms of job security. To address this, comprehensive workforce development strategies are essential to empower workers with skills for effective collaboration with AI technologies, ensuring their relevance in the evolving industrial landscape.
In France, the absence of specific legislation dedicated solely to AI in manufacturing does not imply a regulatory vacuum. Instead, existing legal frameworks address various concerns arising from AI integration in manufacturing processes. Notably, the collection and processing of sensitive information within AI-driven manufacturing environments must align with the GDPR. Furthermore, labour laws can address aspects such as job displacement, workplace safety and equitable treatment in light of the increasing automation facilitated by AI technologies.
Looking ahead, the transposition of the amended PLD at the European level will introduce specific regulations targeting liability issues associated with products integrating AI. This upcoming legislation will hold manufacturers responsible for ensuring the safety and reliability of AI-driven products, emphasising adherence to rigorous safety standards and the implementation of thorough risk assessment procedures.
Regulations governing the use of AI in professional services are currently evolving, with the upcoming implementation of the AI Act projected to take effect around 2026. This legislation imposes obligations on professional users, including those whose employees utilise AI systems. These obligations include the establishment of technical and organisational measures aligned with usage notices, adequate human oversight, impact assessments focusing on fundamental rights, governance and transparency. These requirements are particularly relevant when users employ AI systems that generate or manipulate text, audio or visual content that may be perceived as authentic by the public.
Furthermore, in France, employers can be held civilly liable for damages caused by their employees within the scope of their duties, emphasising the importance of ensuring the accuracy and reliability of AI systems in the workplace. Adequate training and support should be provided to employees regarding AI capabilities, limitations and potential risks. In addition, guidelines, internal codes of conduct and procedures for accountability should be established, with mechanisms in place for human oversight and intervention in AI-driven decision-making processes.
Data security and confidentiality are critical considerations, especially when AI systems rely on sensitive employee data. Employers must implement robust measures to protect against breaches or unauthorised access, ensuring compliance with data protection regulations.
While incidents related to AI in professional services have not been widely reported in France, other jurisdictions have experienced challenges. For instance, in South Korea, Samsung took action following data leakage caused by employees using ChatGPT, highlighting the importance of implementing restrictions and conducting investigations to prevent further breaches.
Inventor/Co-inventor
The French Intellectual Property Office (INPI) has not yet ruled on whether AI can be designated as an inventor in French patent applications. However, such a designation does not seem compliant with the INPI guidelines, which state that the inventor is a “natural person”, or with Article R. 612-10 of the French Intellectual Property Code, which refers to the “surname, first name and domicile of the inventor”.
Furthermore, at the European level, the 2024 EPC Guidelines have been amended to specify that a designated inventor must be a natural person, and that this requirement will be checked by the office. These amendments follow from decision J 8/20 (DABUS), in which the Legal Board of Appeal found that AI cannot be designated as an inventor – a decision in line with many other jurisdictions (including, most recently, the UK Supreme Court in Thaler v Comptroller-General of Patents, Designs and Trade Marks).
Author/Co-author
French courts have not yet ruled on the question of whether AI can be qualified as an author or co-author, but it is likely that their traditional interpretation of the conditions for copyright protection will not go in this direction.
The condition of creation seems to require human intervention in the creative process: the Court of Cassation has ruled that legal entities cannot be authors, which implies that only natural persons can be (Cass. civ., 15 January 2015, No 13-23.566). In addition, the condition of originality requires the work to reflect the personality of its author, which would exclude creations generated by machines, as AI does not make free and creative choices.
See 8.2 IP and Generative AI.
See 3.6 Data, Information or Content Laws.
See 8.2 IP and Generative AI.
When advising corporate boards of directors on mitigating risks in AI adoption, several key issues must be addressed, including compliance with the upcoming AI Act, data protection, intellectual property and liability exposure.
Companies must anticipate the upcoming enforcement of the AI Act, whether they are AI developers or users; it is strongly advised not to wait until the end of the transition period to prepare for compliance.
As best practices, it is recommended to map and document AI use cases within the organisation, establish internal governance rules and codes of conduct, train employees on AI capabilities, limitations and risks, and implement human oversight of AI-driven decision-making.
9, rue Scribe
75009, Paris
France
+33 1 53 30 77 00
+33 1 53 30 77 01
contact@aramis-law.com
www.aramis-law.com

Artificial Intelligence in France: an Overview
As artificial intelligence (AI) continues to reshape industries worldwide, France stands at the forefront of innovation, propelled by ambitious strategies and concerted efforts to bolster its AI ecosystem. With a keen eye on fostering innovation while safeguarding sovereignty, France has embarked on a multifaceted journey to elevate its position in the global AI landscape. This article delves into the evolving trends and developments surrounding AI in France, from pioneering research initiatives to nuanced regulatory frameworks, all while navigating the complexities of data security and emerging cyber-risks.
Fostering innovation and ensuring France’s sovereignty
France has launched several initiatives to support AI, with the stated goal of positioning national players alongside global players. Since 2018, there has been an acceleration in public funding dedicated to innovative AI projects.
Under the national AI strategy launched in 2018, now carried forward within the government’s “France 2030” investment plan, France endeavours to emerge as a global hub of innovation. This multifaceted strategy unfolds across two phases: an initial phase strengthening research capabilities from 2018 to 2022, with significant investment in computing power, followed by a focus on AI talent acquisition and training in subsequent years. France’s selection to host the second European exascale supercomputer, “Jules Verne”, underscores its commitment to advancing computational prowess.
A recent report from the government’s AI commission, titled “AI: Our Ambition for France”, outlines a comprehensive set of 25 recommendations, including plans for substantial AI investment (EUR27 billion over five years) to narrow the gap with the USA. These recommendations also emphasise the importance of achieving strategic autonomy for data centres and augmenting domestic computing capabilities. France aspires to take a leading role in establishing a World Organisation for AI – an international body governed democratically by states, civil society (including researchers, citizens and unions) and businesses – which would set binding standards and share scientific insights on AI, advocating robust international AI governance. Simultaneously, France is committed to achieving semiconductor sovereignty to mitigate the dominance of the USA and China in this critical field.
Legal framework
The future regulation of AI usage in France will be governed by European rules, particularly the recently adopted AI Act, slated to come into full effect by the end of 2026, with certain provisions being implemented as early as the end of 2024.
Under the provisions of this legislation, certification bodies (known as “notified bodies”) are mandated to evaluate and certify high-risk AI systems before they are permitted to enter the European market. The Association Française de Normalisation (AFNOR), in collaboration with its counterparts from other EU member states, has initiated a consultation process involving stakeholders and start-ups to formulate “operational” certification standards tailored specifically to AI. Notably, a range of international standards have already been developed, including ISO/IEC 42001 for AI management systems and ISO/IEC 23894 for AI risk management, offering guidance to various industries and businesses on the development and deployment of AI systems.
In parallel, significant European regulations such as the Digital Services Act (DSA) and the Digital Markets Act (DMA) for major platforms also influence the landscape.
Moreover, recent regulatory developments include the draft Artificial Intelligence Liability Directive (AILD) and the recently adopted revision of the Product Liability Directive (PLD). The draft AILD eases the burden of proof by introducing disclosure and presumption mechanisms for damages caused by AI, while the revised PLD expands the scope of the existing product liability regime to cover defective AI systems.
France opposes overly rigid regulation of AI in general and generative AI in particular, preferring a progressive regulatory approach that supports innovation and advocating a gradual evolution of the rules, particularly as regards the definition of the “systemic risks” associated with general-purpose AI models. Indeed, amidst the finalisation of the AI Act and the turmoil within OpenAI in November 2023, France emerged as a dissenting voice, along with Italy and Germany: despite earlier strides in the negotiations, France opposed the inclusion of foundation models within the legal framework. Foundation models, known for their complexity and potential systemic risks, became a focal point of contention, the disagreement revolving around whether any regulatory measures should extend to these models.
France, Germany and Italy advocated for a stance favouring mandatory self-regulation through codes of conduct. They argued that the primary focus should be on regulating the applications of AI rather than the technology itself. This position was partly driven by pressure from influential AI champions within their borders, such as Mistral AI in France and Aleph Alpha in Germany.
AI and scientific research
In 2019, France took a significant stride in advancing its AI research capabilities by establishing four interdisciplinary institutes and centres of excellence dedicated solely to AI (called “3IA”). These institutes serve as pivotal hubs aimed at fostering collaboration and co-ordination within the AI research community. Their overarching goal is to bolster France's allure as a global leader in cutting-edge research and innovation in the field of AI.
Complementing this institutional framework, France boasts a thriving ecosystem of 500 start-ups specialising in AI, underscoring the country’s commitment to nurturing emerging technologies.
Furthermore, France has made notable strides in academia, ranking tenth worldwide in terms of scientific publications related to AI. The country aspires to ascend even higher in this ranking, signalling its ambition to become a frontrunner in AI research on the global stage. To support this momentum, France has also prioritised the development of comprehensive educational offerings in AI, ensuring that the next generation of researchers and innovators are equipped with the necessary skills and knowledge to tackle complex AI challenges.
March 2024 marked the launch of the Programme for Excellence in AI Research (PEPR), a collaborative initiative co-piloted by esteemed institutions such as CEA, CNRS and INRIA. With a substantial budget of EUR73 million allocated over six years, this programme is a testament to France's commitment to investing in AI research and development. It is funded as part of the broader France 2030 strategy, which underscores the nation's strategic vision and long-term commitment to harnessing the transformative potential of AI for the benefit of society.
AI, intellectual property and the protection of right holders
The advent of AI has brought complex challenges regarding intellectual property (IP) rights, particularly concerning copyright and neighbouring rights. Balancing the increasing need for vast amounts of data with the imperative to uphold IP rights has become a paramount concern. This tension is all the more striking in France, where public policy has historically favoured the protection of IP right-holders.
In response to these challenges, France transposed the European Directive of 17 April 2019 into national law in 2021, introducing exceptions to copyright specifically addressing text and data mining (ie, the automated analysis of digital text and data to extract information). The directive incorporates an “opt-out” provision, allowing right-holders to oppose the use of their works for data mining purposes.
However, the practical implementation of the opt-out mechanism raises concerns regarding transparency, particularly as to the accessibility of lists detailing the content collected and used by AI systems. Even though French collective management organisations such as the SACEM (representing composers and songwriters) have publicly announced that they are opting out on behalf of their members, questions remain as to the effectiveness of this opt-out right and how, in practice, AI providers’ compliance with it can be verified. As a result, there have been calls to reform the European copyright directives to address these practical issues raised by generative AI.
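The directive does not prescribe a single technical mechanism for the opt-out, but machine-readable reservations are commonly expressed through a website’s robots.txt file, which named AI crawlers such as OpenAI’s GPTBot publicly state that they honour. The following minimal Python sketch – the site URL and page path are placeholders, not real addresses – shows how such a reservation can be checked programmatically using only the standard library.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical check: would an AI crawler identifying itself as "GPTBot"
# be allowed to fetch a given page under the site's robots.txt rules?
CRAWLER_USER_AGENT = "GPTBot"  # OpenAI's documented web crawler name
SITE = "https://www.example.com"  # placeholder site

robots = RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()  # fetches and parses the robots.txt file

page = f"{SITE}/articles/some-article.html"  # placeholder page
if robots.can_fetch(CRAWLER_USER_AGENT, page):
    print(f"{CRAWLER_USER_AGENT} may crawl {page}")
else:
    print(f"{CRAWLER_USER_AGENT} is disallowed from {page}")
```

A robots.txt reservation is, of course, only one de facto channel; whether it satisfies the directive’s “machine-readable” standard in every context remains one of the open practical questions discussed above.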
Representatives of the French cultural sector have publicly voiced their concerns regarding the French government’s position on AI and its regulation. During the trilogue negotiations on the AI Act in December 2023, the French government took a position favouring innovation, and notably emerging French AI companies, by calling for a reduction of the transparency and traceability obligations applicable to general-purpose AI models.
Nonetheless, certain actors have gone against the grain of the opt-out approach to data mining and opted instead for contractual negotiation with generative AI publishers. For example, a notable partnership emerged in March 2024 between the renowned French newspaper Le Monde and OpenAI. As part of the agreement, Le Monde has granted OpenAI access to its articles for indexing and analysis by OpenAI’s GPT language models. In return, Le Monde gains access to OpenAI’s technology, enabling innovative AI-driven projects, while also securing additional revenue streams, including neighbouring rights.
Furthermore, Le Monde has implemented a comprehensive AI Charter within its editorial guidelines, reinforcing France's dedication to ethical AI integration in journalism. The charter sets clear boundaries for AI applications, such as prohibiting the use of AI-generated images.
Data access for AI training
The development and design of AI systems imply a massive consumption of data, and the question of access to adequate training data is key for AI developers and designers. Jurisdictions facilitating such access are likely to be favoured by AI players when deciding where to establish their operations.
In its latest report (“AI: Our Ambition for France”, published in March 2024), the AI Commission (a commission initiated by the French Prime Minister, encompassing private and public actors and tasked with shaping the French government’s AI strategy) issued a series of recommendations to adapt that strategy. These include facilitating access to certain data, notably health data, for the purpose of developing AI systems. The goal is to modernise the prior authorisation process before the Commission nationale de l’informatique et des libertés (CNIL – the French regulator in charge of public policy on, and enforcement of, the GDPR), to favour the training of AI models on French and European data and thereby enhance France’s attractiveness.
This recommendation is fully in line with the recent EU Data Act and Data Governance Act, whose goal is to increase trust in data sharing in the EU while respecting privacy and third-party rights such as IP rights. The AI Commission reaches the same conclusion, urging collective data management schemes and new data governance arrangements.
In any case, the constitution of training datasets must comply with GDPR principles and the French Data Protection Act. In the coming years, a balance will have to be found between ensuring fair access to relevant, high-quality training data and protecting third-party rights and the objective of transparency. In this regard, the AI Commission suggests establishing comprehensive standards for the publication of information regarding AI models and for how right-holders should exercise the opt-out in practice.
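By way of illustration only – this is a generic sketch rather than a statement of the CNIL’s requirements, and the field names and keying scheme are hypothetical – one common pre-processing step when constituting a GDPR-minded training dataset is to pseudonymise direct identifiers before records enter the corpus, for example with a keyed hash kept separate from the data.

```python
import hashlib
import hmac

# Hypothetical direct identifiers to pseudonymise before training.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
SECRET_KEY = b"replace-with-a-secret-key"  # stored separately from the dataset

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with a keyed hash; keep other fields as-is."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()
        else:
            out[field] = value
    return out

if __name__ == "__main__":
    sample = {"name": "Jeanne Martin", "email": "j.martin@example.fr", "age": 42}
    print(pseudonymise(sample))
```

Keyed hashing of this kind yields pseudonymised (not anonymised) data, so the GDPR continues to apply; it simply reduces the risk attached to the dataset itself.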
AI and data security
The CNIL has published comprehensive guidelines, including a list of best practices to be implemented regarding data security and AI. In 2024, this AI guidance was incorporated into the CNIL’s comprehensive guide on data security, which makes clear that AI development and deployment must be taken into account across all sectors of activity. Among these practical guidelines, the CNIL recommends building development teams with multidisciplinary skills that are cognisant of AI vulnerabilities. Going forward, businesses will need to strengthen their IT and CIO teams to prevent security risks, whether their business model is to design AI or to use it in their activities.
In November 2023, France, alongside other countries, also approved guidelines for secure AI system development through its national agency for information systems security, the Agence nationale de la sécurité des systèmes d’information (ANSSI – the French cybersecurity agency). These guidelines aim to provide a set of good practices to all stakeholders involved in the design of AI systems, to ensure the deployment of secure-by-design AI systems based on international standards.
AI and cyber-risks
Given the large number of cyber-attacks and attempted online scams in France, the ANSSI and the French Ministry of Home Affairs created an online service in 2017, “cybermalveillance.gouv.fr”, to assist victims, whether natural or legal persons. In its latest annual report, the service presents AI as both a threat and an opportunity.
The rapid development of AI systems, and more specifically of generative AI, increases the risk of cyber-threats for businesses and legal persons. In this regard, the ANSSI anticipates an increase in “zero-day” cyber-attacks in France, notably around the 2024 Olympics. Such attacks can be enhanced by using machine learning to generate malicious code, a technique that cyber-criminals are already beginning to adopt. Instances of CEO fraud and spoofing using generative AI and deepfakes have also been identified in France.
Even though AI may be used to enhance security protocols within businesses and so limit the level of fraud, the technology remains easily accessible to cyber-criminals. The challenge in the coming years will be to anticipate the development of these increasingly sophisticated cyber-threats and to adapt business and governmental responses accordingly.
Promoting ethical AI practices in France
In France, the drive towards ethical AI integration involves a multitude of actors, including governmental initiatives, research programmes and corporate collaborations.
One significant player in this landscape is Confiance.ai, a French technological research programme dedicated to enabling the integration of trustworthy AI into critical systems. As a key pillar of the Grand Challenge “Securing, certifying, and ensuring the reliability of AI-based systems” initiated by the French government under France 2030, Confiance.ai aims to position France as a leader in the industrialisation of trustworthy AI.
Another notable initiative is Positive AI, a collaborative effort launched by BCG GAMMA, L’Oréal, Malakoff Humanis and Orange France in early 2022. Positive AI introduces a label for Responsible AI developed by industry practitioners themselves. Recognising the challenge businesses face in operationalising Responsible AI recommendations, the founding companies of Positive AI strive to contribute by developing a concrete framework that integrates key principles outlined by the European Commission, focusing on three priority dimensions.
From 2023, this framework serves as the basis for obtaining the Positive AI label, awarded following an independent audit.
9, rue Scribe
75009, Paris
France
+33 1 53 30 77 00
+33 1 53 30 77 01
contact@aramis-law.com
www.aramis-law.com