AI is being applied across a broad range of healthcare use cases in France, with varying levels of maturity. The AI Use Observatory of the French National Agency for the Performance of Health and Medico-Social Institutions (ANAP) currently includes around 50 AI-driven solutions implemented in hospitals and medico-social organisations. A May 2024 Senate report examined the national deployment of AI in healthcare.
Key applications include the following.
Adoption of these technologies is as follows:
Primary Benefits of AI in Healthcare
AI is increasingly recognised as a strategic asset in the transformation of the French healthcare system, particularly in addressing systemic pressures such as financial constraints, healthcare workforce shortages, and population ageing.
The main benefits include:
Challenges Arising From a Healthcare-Specific Perspective
Despite its potential, AI deployment in healthcare raises significant concerns, particularly:
France’s Supporting Strategy and Investments for AI in Healthcare
As part of the Summit for Action on Artificial Intelligence in February 2025, the Ministry of Health published a report on the state of AI in health in France (“État des lieux de l’intelligence artificielle (IA) en santé en France”), outlining a comprehensive strategy for AI in healthcare structured around four key themes: prevention, care delivery, access to care and a supportive framework.
According to the report, France’s support for the development of AI-based innovation in healthcare forms part of the “Digital Health” acceleration strategy (SASN) under the “France 2030” plan. France has so far invested EUR500 million in digital health solutions, 50% of which is dedicated to projects involving AI.
Key stakeholders are as follows:
The HAS has recently communicated its intention to publish several AI-related guides, in parallel with a number of ongoing initiatives involving AI applications in the healthcare sector.
Notable collaborations between healthcare institutions and technology developers are as follows:
Healthcare AI systems are classified based on intended use, risk level, and patient-health impact:
The AI Act also introduces risk-based categories, identifying high-risk AI systems, including medical AI, with stricter transparency, robustness, and oversight requirements.
According to the Medical Device Coordination Group (MDCG)’s guidance published in June 2025, the classification of an AI system as high-risk under the AI Act does not automatically result in a higher risk class for the corresponding medical device or in vitro diagnostic under the MDR or IVDR. Rather, it is the device’s classification under the MDR/IVDR that determines whether the AI system is considered high-risk under the AI Act.
Key Legal Frameworks Governing Healthcare AI in France
AI use in the healthcare sector is governed by the following.
Alignment of EU Regulations and Existing Healthcare Laws
In the field of AI and healthcare, European regulations such as the AI Act and the MDR provide a comprehensive framework for AI-driven medical technologies. The MDR sets specific requirements for the safety, performance and conformity assessment of medical devices, including those incorporating AI.
The AI Act introduces additional obligations that focus on the specific risks posed by AI systems, particularly high-risk applications in healthcare.
At national level, the provisions of the French Public Health Code (Article L. 4001-3) also govern the use of AI in healthcare. These national rules are generally aligned with the ethical requirements set out in the AI Act.
In parallel, the revised Product Liability Directive (PLD) aims to modernise the EU liability regime to better address the risks associated with AI systems and connected products. Existing French rules on medical liability remain applicable for the time being, and their interaction with evolving EU provisions is discussed in 4. Liability and Risk in Healthcare AI.
AI systems used for medical purposes must comply with both the EU AI Act and the MDR/IVDR. Under Article 6 of the AI Act, such AI-based medical devices are considered high risk where they undergo third-party conformity assessment under the MDR/IVDR – in practice, devices classified as Class IIa or higher – triggering stringent conformity assessment requirements.
Legal and Reimbursement Framework
CE Marking
To be marketed in the EU, AI-based digital medical devices must first obtain CE marking. This certification confirms that the device meets European regulatory requirements for safety and performance. To obtain CE marking, the manufacturer must compile technical documentation providing evidence to demonstrate the quality and safety of the device. The MDR/IVDR require detailed descriptions of software architecture, data processing, and risk management. The AI Act adds documentation requirements focused on transparency and accountability, including risk assessments, data governance, and performance testing of high-risk medical AI systems.
In France, the ANSM evaluates, in particular, the benefits and risks associated with the use of medical devices.
The ANS issues conformity certificates related to interoperability and cybersecurity requirements.
Coordination of AI Act and MDR/IVDR Conformity Procedures
According to the MDCG’s June 2025 guidance, developers must comply with conformity procedures under both the AI Act and the MDR/IVDR. Where obligations overlap, the guidance provides coordination rules to avoid duplication. Manufacturers of AI systems used for medical purposes are expected to integrate testing, reporting processes, and documentation required under the AI Act into the technical documentation already prepared for MDR/IVDR compliance. In France, notified bodies designated under the MDR/IVDR and the ANSM oversee regulatory processes.
Software with a medical purpose falls within the scope of the MDR. Software as a medical device (SaMD) refers to a software application with a diagnostic, therapeutic, preventive, monitoring or disease management purpose. This includes, in particular, treatment planning tools, medical imaging algorithms and intelligent remote monitoring platforms.
The MDR introduced a specific rule (“Rule 11”) for software, classifying software based on its potential impact on the patient’s health.
According to the AI Act, continuous-learning algorithms must be developed in such a way as to eliminate or reduce, as far as is possible, the risk of potentially biased outputs influencing input for future operations and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures (Article 15 of the AI Act). Per the MDCG’s June 2025 guidance, the post-market monitoring system is key to ensuring continued performance and compliance.
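By way of illustration, the following minimal Python sketch shows the kind of feedback-loop check such a post-market monitoring system might perform. It is purely illustrative – the function names, threshold and data are hypothetical, not drawn from the AI Act or the MDCG guidance – and simply compares the model’s recent positive-output rate against the rate observed during pre-market validation, escalating any drift for human review.

```python
# Illustrative sketch (hypothetical names, threshold and data): a basic
# feedback-loop check in the spirit of Article 15 of the AI Act. A sustained
# shift in output rates may indicate that earlier AI outputs are feeding back
# into the input data, and should trigger human review rather than silent
# retraining.
from dataclasses import dataclass

@dataclass
class DriftReport:
    baseline_rate: float   # positive-output rate during pre-market validation
    recent_rate: float     # positive-output rate observed in deployment
    drifted: bool

def check_feedback_loop(baseline_preds: list[int],
                        recent_preds: list[int],
                        tolerance: float = 0.05) -> DriftReport:
    """Flag when recent outputs diverge from the validated baseline."""
    baseline_rate = sum(baseline_preds) / len(baseline_preds)
    recent_rate = sum(recent_preds) / len(recent_preds)
    return DriftReport(baseline_rate, recent_rate,
                       abs(recent_rate - baseline_rate) > tolerance)

# Example: validation data showed 10% positives; deployment shows ~18%.
report = check_feedback_loop([1] + [0] * 9, [1, 1] + [0] * 9)
if report.drifted:
    print(f"Drift detected ({report.baseline_rate:.0%} -> {report.recent_rate:.0%}): "
          "suspend automatic updates and escalate to the post-market team.")
```

The design point the sketch illustrates is that detected drift triggers human escalation rather than silent model adaptation, which is one way the mitigation-measure requirement of Article 15 can be operationalised.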
France applies GDPR rules – emphasising informed patient consent, proportional data use, and cybersecurity – and the French Data Protection Act (see 6. Data Governance in Healthcare AI).
Applicable Technical Standards for Healthcare AI Systems
ISO and IEC have developed the ISO/IEC 42001 standard to support manufacturers in aligning their AI-enabled products with the requirements of the AI Act. The standard promotes the responsible development of AI, emphasising safety, transparency, and ethics.
It thus provides manufacturers of devices incorporating AI with a normative framework for implementing an AI management system. The implementation of such a system should be aligned with the processes required for ISO 13485 certification, such as management responsibility, risk management, audits, and continuous improvement.
ISO/IEC 42001 introduces some additional requirements concerning:
In practice, manufacturers must comply with both ISO 13485 and ISO/IEC 42001. While the two share overlapping requirements, these must be consolidated into a unified quality management system.
Interoperability and Data Protection Requirements
Since February 2023, an Interoperability and Security Framework for Digital Medical Devices has been applicable to all medical devices reimbursed by the French National Health Insurance that involve the processing of personal data as defined by the GDPR. Certification of compliance is issued by the ANS.
French key bodies are as follows:
Coordination between regulatory and data protection authorities in France occurs through several mechanisms, as follows.
Healthcare AI systems classified as “high risk” under the EU AI Act must undergo comprehensive pre-market validation and certification. Key requirements include:
The HAS “descriptive grid for medical devices with machine learning (AI)”, updated in 2022, provides a structured framework detailing device use, data, algorithms and performance. Used by CNEDiMTS, it ensures transparency and consistency in clinical evaluation and reimbursement decisions covering the following:
These requirements ensure that AI remains assistive, rather than autonomous, and that its results are traceable and controllable by health professionals.
Both the MDR/IVDR and the AI Act require manufacturers to establish post-market monitoring and surveillance systems to track the performance and safety of healthcare AI systems once on the market. This involves systematically collecting and analysing data on device performance, risks, adverse events, and other safety concerns, and taking the necessary corrective and preventive actions. Manufacturers must also maintain a vigilance system, report adverse events to authorities and users, and regularly update their risk, quality management, and compliance processes based on post-market findings and regulatory feedback.
In France, the ANSM monitors the safety and performance of medical devices post-market, ensuring timely detection of risks and incidents. The ANSM conducts continuous and regular re-evaluations of the benefit-risk balance of health products in actual clinical practice.
Withdrawal and Recall Procedures for AI Systems in Healthcare
Under applicable regulations, including the AI Act and MDR, AI systems used in healthcare are subject to stringent withdrawal and recall procedures to safeguard patient safety and ensure regulatory compliance.
If the market surveillance authority – eg, the ANSM in France – determines that an AI system fails to meet the required regulatory standards, it will promptly require the responsible operator to implement all necessary corrective measures. This may include bringing the system into compliance, withdrawing it from the market, or initiating an immediate recall.
Sanctions for Non-Compliance
Failure to comply with regulations governing AI systems for medical purposes can result in severe administrative and legal penalties. These include suspension or withdrawal of the CE marking, substantial fines, and bans on marketing the product.
Notably, the AI Act imposes fines of up to EUR35 million or 7% of global annual turnover, whichever is higher.
Additionally, the CNIL, responsible for monitoring personal data protection, may impose fines for violations of the GDPR.
Measures and sanctions are systematically made public by authorities such as the ANSM and the CNIL, thereby entailing a substantial reputational risk for the companies concerned.
Liability Frameworks
The AI Act does not define the legal regime for AI-related damages. The proposed AI Liability Directive, intended to address this, was removed from the European Commission’s 2025 work programme.
In the absence of a specific instrument, the PLD should therefore cover AI liability questions once Member States have transposed it (the deadline being December 2026). It treats software as a product subject to no-fault liability, with developers and AI providers regarded as manufacturers. The PLD also introduces eased proof requirements for claimants.
In France, liability for healthcare AI currently relies on existing frameworks – product liability (to be updated with the PLD) and traditional medical liability – without a dedicated AI-specific regime.
Allocation of Responsibility
Application of Traditional Medical Liability Standards to AI Systems
In France, traditional medical liability applies to clinicians using AI, which is viewed as a support tool – not a decision-maker. Doctors retain clinical judgment and are liable for errors arising from misuse of AI under Article L. 1142-1 of the French Public Health Code.
Healthcare professionals may be personally liable for mishandling AI (eg, misinterpreting results) unless they can justify their decision to follow or to disregard the AI’s recommendations. Patients must prove both a breach of duty and a causal link to the harm suffered.
Standards of Care With AI
Doctors must provide attentive care consistent with current scientific knowledge (Article R. 4127-32 of the French Public Health Code). When using AI, they must be appropriately trained, avoid undue reliance on AI where better methods exist, be able to justify the use or non-use of AI, obtain informed consent, maintain professional autonomy and guard against automation bias. AI thus increases responsibility without lowering the standard of care.
Though not involving AI specifically, decisions handed down by French courts suggest how liability could be allocated:
Determining Causation in Cases Involving AI Systems
Proving causation is difficult owing to the “black box” nature of some AI systems. French courts allow some flexibility, but the burden remains on patients. If fault cannot be proven, compensation may be sought from the National Office for Compensation of Medical Accidents (ONIAM), subject to strict legal conditions and seriousness thresholds.
Regulatory Risk Management Requirements
Healthcare AI systems used for diagnosis, treatment, or clinical decision-making are classified as high-risk technologies under the EU AI Act. As such, they are subject to stringent risk-management obligations across their entire lifecycle.
These include identifying and assessing foreseeable risks to health, safety, or fundamental rights; implementing effective controls; reducing or eliminating risks; and reporting serious incidents.
Risk Assessment Processes for Developers and Healthcare Institutions
Developers must establish and document a risk-management system for high-risk AI. Healthcare institutions must assess clinical risks before deployment, ensure proper use of AI tools through technical and organisational measures, and provide adequate training for staff to interpret AI results responsibly.
Defences Available to Healthcare Professionals
Healthcare professionals may avoid liability if they can demonstrate that they have used AI tools conscientiously, competently, and in line with accepted medical standards. French law maintains the principle that clinical judgment must prevail. Practitioners must evaluate AI-generated results critically and remain fully responsible for their decisions.
Liability may be avoided if the clinician can show:
Defences and Limitations for AI Developers and Manufacturers
Developers may invoke regulatory compliance – such as CE marking, ISO standards and obligations under the AI Act and PLD – as part of their defence in product liability claims. While not exempting them from liability, such compliance may mitigate responsibility or demonstrate due diligence.
Absence of Safe Harbour Provisions
French and EU law currently provide no explicit safe harbour for healthcare providers or AI developers. Certification or regulatory approval does not exempt them from liability in cases of harm caused by misuse, malfunction or oversight failures. Regulatory compliance does not replace the duty of care.
Challenges Posed by the “Black-Box” Nature of Some AI Systems
The opaque nature of some AI models creates challenges in liability litigation, including:
Because legal frameworks require proof of fault, damage and causality, the lack of “explainability” complicates the patient’s burden of proof.
Ethical Frameworks Governing Healthcare AI in France
Digital ethics in healthcare promotes values such as trust, transparency and fairness in AI use, evolving in a fast-changing environment.
International and European Foundations
At international level:
National Framework
French bioethics law, notably Article L. 4001-3 of the French Public Health Code, reinforces these principles.
Binding vs Voluntary Standards
While the GDPR, the MDR and French law impose mandatory rules, other guidelines remain voluntary but are encouraged in order to foster responsible innovation. They may later be incorporated into formal regulatory standards.
Transparency and Explainability Requirements for Healthcare AI Systems
In France, as in the broader European Union, healthcare AI systems are subject to strict transparency and explainability obligations aimed at protecting patients’ rights and ensuring trust in AI-assisted care.
Disclosure to Patients
Healthcare providers are required to inform patients when AI technologies are used in their diagnosis, treatment or care management (Article L. 4001-3 of the French Public Health Code and the GDPR).
Information Available to Healthcare Providers and Patients
Healthcare professionals must have access to clear, understandable information on how the AI system works – its scope, performance, errors and limitations – so they can maintain clinical judgment, assess the relevance of AI-generated results, and justify medical decisions.
While detailed technical disclosures are not generally required owing to IP and trade secret protections, the level of explainability must be sufficient for both healthcare professionals and patients to understand the AI system’s function and its impact on care decisions.
Addressing Algorithmic Bias in Healthcare AI
The AI Act requires high-risk AI systems to be subject to risk assessments targeting the bias and discrimination which can result from unrepresentative data, clinical assumptions or algorithmic limitations.
Testing, Monitoring, and Mitigation of Bias
The AI Act requires developers to test AI systems with large, diverse, and representative datasets, and to continuously monitor performance to detect bias affecting health and safety and implement mitigation strategies throughout the AI lifecycle. Similarly, under the MDR, medical AI must comply with strict safety and performance standards.
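As a concrete illustration of such testing, the short Python sketch below computes diagnostic sensitivity per demographic subgroup and flags performance gaps. The subgroups, records and the 10% acceptance threshold are invented for the example; neither the AI Act nor the MDR prescribes a specific metric or threshold.

```python
# Illustrative sketch (hypothetical data and threshold): per-subgroup
# sensitivity (true-positive rate) as one simple probe for the bias that
# developers are required to detect and mitigate across the AI lifecycle.
from collections import defaultdict

def sensitivity_by_group(records: list[dict]) -> dict[str, float]:
    """Recall per subgroup, computed on cases where the condition is present."""
    true_pos = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        if r["label"] == 1:                    # condition actually present
            positives[r["group"]] += 1
            true_pos[r["group"]] += r["prediction"]
    return {g: true_pos[g] / positives[g] for g in positives}

records = [
    {"group": "18-40", "label": 1, "prediction": 1},
    {"group": "18-40", "label": 1, "prediction": 1},
    {"group": "75+",   "label": 1, "prediction": 1},
    {"group": "75+",   "label": 1, "prediction": 0},
]
rates = sensitivity_by_group(records)
gap = max(rates.values()) - min(rates.values())
if gap > 0.10:  # hypothetical acceptance threshold
    print(f"Subgroup sensitivity gap of {gap:.0%} -> review training data "
          f"representativeness before release: {rates}")
```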
Protections for Vulnerable Populations and Data Diversity Requirements
French and EU regulations emphasise non-discrimination and require demographic diversity in training data to avoid biased outcomes. French ethical guidelines prioritise justice and inclusivity in dataset development, particularly regarding vulnerable populations.
Health Equity in Regulatory Frameworks
Health equity is increasingly reflected in:
French health authorities/institutions also support research and pilot programmes to validate AI in diverse clinical settings, ensuring fair and effective use.
Human Oversight Requirements for Healthcare AI Systems
French and EU regulations emphasise human oversight as essential, particularly for high-risk healthcare AI.
Limits on Autonomous AI Decision-Making
Current rules generally prohibit fully autonomous AI in healthcare. The AI Act mandates that high-risk systems (eg, diagnosis, monitoring) allow human control at all key stages.
Roles of Healthcare Professionals When Using AI Tools
Clinicians must remain the main decision-makers. When using AI, they are required to:
Level of Human Involvement
The level of human control depends on risk level as follows.
For healthcare AI systems, governance obligations are imposed by both the AI Act for high-risk AI and by the MDR/IVDR, as follows.
In support of these requirements, the European Commission will issue horizontal guidelines, and the European standardisation committee CEN-CENELEC JTC 21 is developing harmonised standards on data and bias.
Rules Governing the Secondary Use of Health Data in France
The secondary use of health data for innovation activities, such as the training of AI algorithms, is primarily governed by Chapter 4 of the EHDS and by the GDPR/French Data Protection Act. The cross-border infrastructure HealthData@EU is being developed to meet the requirements of the EHDS.
Building on these regulations, the French government is expected to publish a national strategy aimed at developing the secondary use of health data and building trustworthy AI.
The CNIL also plays a key role in this area and has issued several recommendations and decisions regarding secondary data use.
Health Data Hub and SNDS
France has established the “Health Data Hub” as a centralised platform for accessing data from the National Health Data System (SNDS) (health data originating from national health sources). This platform enables the secondary use of health data for AI algorithm training/testing, research, etc. However, the use of health data may require prior authorisation from the CNIL.
For instance, in July 2024, the CNIL authorised a hospital to process SNDS data to develop a decision-support algorithm for admission to intensive care of elderly patients with respiratory infections. In this particular case, the CNIL accepted an exception to the principle of patients’ individual information for the development of the algorithm, provided that appropriate measures were implemented (see decision DR-2024-184 of 19 July 2024).
Cross-border transfers of health data within the EU are primarily governed by the GDPR and the EHDS, which entered into force on 26 March 2025. The EHDS sets a unified framework for secure, efficient sharing of electronic health data across Member States to support research, innovation, public health, and patient care. Transfers outside the EU are allowed only under strict conditions.
In France, data from the SNDS are subject to strict data localisation rules: they must be hosted within EU Member States and cannot be transferred outside the EU except under strict and limited conditions (Article R. 1461-1 of the French Public Health Code).
The processing of health data (classified as sensitive data) is only permitted in specific cases (Article 9 of the GDPR and Articles 6 and 44 of the French Data Protection Act). However, anonymised data, which are not considered personal data, are not subject to these provisions.
The CNIL offers guidance on the appropriate anonymisation techniques and on how to assess their effectiveness.
Care must be taken to ensure that the risk of re-identification using reasonable means is negligible (anonymisation vs pseudonymisation). Failing to do so may result in sanctions: in September 2024, the CNIL fined a company that develops and sells management software to healthcare professionals EUR800,000 for processing pseudonymised health data without obtaining the required authorisation, despite the company’s claim that the data were anonymised.
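To make the distinction concrete, the following Python sketch (illustrative only; the records and field names are invented) contrasts pseudonymisation – where a key table survives and re-identification therefore remains possible, so the data remain personal data under the GDPR – with a step towards anonymisation, in which only aggregate counts are retained. Whether a given technique actually achieves anonymisation must still be assessed against the CNIL’s guidance.

```python
# Illustrative sketch (invented records and field names) of the legal
# distinction between pseudonymised and anonymised health data.
import secrets

patients = [
    {"patient_id": "FR-001", "age": 82, "diagnosis": "pneumonia"},
    {"patient_id": "FR-002", "age": 79, "diagnosis": "pneumonia"},
]

# Pseudonymisation: identifiers replaced by random tokens, but a key table
# survives -- whoever holds it can reverse the process, so the GDPR and the
# French authorisation regime continue to apply.
key_table = {p["patient_id"]: secrets.token_hex(8) for p in patients}
pseudonymised = [{**p, "patient_id": key_table[p["patient_id"]]} for p in patients]

# Towards anonymisation: drop individual-level rows and keep only aggregates
# (generalised age band, counts); effectiveness must still be verified, since
# small counts or rare diagnoses can leave re-identification feasible.
aggregate = {"diagnosis": "pneumonia", "age_band": "75+", "n_patients": len(patients)}

print(pseudonymised)  # still personal data: the key table allows re-identification
print(aggregate)      # no individual-level records remain
```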
Verification of Standard Patent Criteria
To be eligible for European or national patent protection, an invention must meet both eligibility and patentability requirements.
In France, patents are governed by the French Intellectual Property Code (Article L. 611-1 et seq. and Article R. 611-1 et seq.). These provisions apply to AI-related inventions, which may be protected in particular where they serve a technical purpose (see below) and where the inventor is a human rather than an AI system.
AI systems frequently rely on mathematical methods, which are excluded from patentability because these methods are not regarded as inventions (Article 52 of the European Patent Convention, Article L. 611-10 of the French Intellectual Property Code).
However, this exclusion is not absolute: an AI-based invention may be eligible for patent protection if the mathematical method contributes to the technical character of an invention by providing a technical solution to a technical problem, and if the invention meets the standard criteria of patent protection.
Recent Guidance from the European Patent Office (EPO)
The EPO recently stated that patents may be granted when AI leaves the abstract realm of mathematical algorithms and computational models and is applied to solve a technical problem in a field of technology.
In its April 2025 edition of the Guidelines for Examination, the EPO further elaborated on its stance by introducing new guidance on AI (G-II, 3.3.1). The EPO clarified that “if a claim of an invention related to artificial intelligence or machine learning is directed either to a method involving the use of technical means (eg, a computer) or to a device, its subject matter has technical character as a whole and is thus not excluded from patentability under Art. 52(2) or (3)”.
Examples of eligible technical contributions made by a mathematical method include using neural networks in medical devices to detect irregular heartbeats or medical diagnosis by an automated system processing physiological measurements.
The French Patent Office (INPI) has also addressed AI patentability. For example, the INPI upheld a patent on medical image analysis after narrowing its scope (see INPI, 4 July 2024, OPP 22-0035).
Copyright: Software – Yes, Algorithms – No
In France, software – including source code, architecture and preparatory design material – is protected under copyright law provided that its originality can be demonstrated (Article L. 112-2, 13° of the French Intellectual Property Code). Algorithms, by contrast, are regarded as ideas or methods rather than protectable forms of expression and are therefore more difficult to protect under copyright law.
Trade Secret Protection Versus Mandatory Transparency
To avoid public disclosure, healthcare innovations can be protected under trade secrets law, provided that the information in question (i) is not generally known or readily accessible to persons within the relevant business sector (ie, is secret); (ii) has actual or potential commercial value due to its secret nature; and (iii) is subject to reasonable protection measures by its lawful holder to maintain its confidentiality (Article L. 151-1 of the French Commercial Code).
The secret nature of such information could, however, be called into question due to transparency obligations imposed at both European and national levels – for instance, the requirement to disclose extensive information to obtain CE marking, or the need for healthcare professionals and patients to understand how the AI system works.
The question of usage rights and ownership of outputs generated by AI is complex and generally depends on the contractual arrangements with the technology provider.
Various types of licence can be used, including proprietary licences and co-licences – eg, in cases involving partnerships between companies and hospitals or universities. Co-licensing agreements must clearly set out rights of use, ownership, revenue-sharing and responsibilities regarding further development and commercialisation.
For instance, in France the PARTAGES project led by a consortium of around 30 partners – including research laboratories, healthcare institutions and deep tech companies – is one of the winners of the France 2030 call for projects on generative AI.
A key consideration in such arrangements is the handling of health data. Since health data are classified as sensitive under the GDPR, their use is subject to strict legal requirements.
Standard Obligations for Clinical Decision Support AI in Medical Devices
Any AI-based medical device (MDAI) providing clinical decision support must comply with the rules applicable to medical devices under the EU MDR/IVDR in order to obtain CE marking. These devices must also comply with national provisions, notably for the device to be reimbursed by the national health insurance (see 2. Legal Framework for Healthcare AI).
Specific Requirements of Transparency and Human Oversight
An MDAI falls within the category of high-risk AI systems where the AI system is a safety component of a medical device or is itself a medical device, and is subject to third-party conformity assessment by a notified body in accordance with the MDR/IVDR.
MDAI must therefore comply with the requirements of transparency and human oversight (see 5.2 Transparency and Explainability and 5.4 Human Oversight).
In addition to the responses above, French authorities/agencies highlight the importance of transparent and responsible use of AI in healthcare by institutions and professionals.
For instance:
See responses above regarding the legal framework, human oversight, clinical decision support and diagnostic applications.
Supervision of Telemedicine by the French Public Health Code
Beyond compliance with EU regulations, telemedicine, including remote monitoring, is governed by the French Public Health Code (Article L. 6316-1 and Article R. 6316-1 et seq.). A key requirement is that the healthcare professional and the patient are clearly identified. Furthermore, the patient’s consent must be obtained, and the healthcare professional has a duty to inform (Article L. 1111-1 et seq.). The HAS has published a “Professional code of best practices” to help guide the practice of telemedicine in France.
AI-Specific Requirements for Teleconsultation Reimbursement
Telemedicine companies now also have ad hoc legal status and must obtain accreditation from the Ministry of Health to bill the national health insurance for teleconsultations performed by employed physicians.
To obtain such accreditation, companies are required, among other things, to certify that their information system complies with the standards established by the ANS. The accreditation framework includes several AI-related requirements, such as ensuring that patients are informed of the use of AI systems, understand the system, and give their consent.
Compliance With European and National Regulations
Drug discovery and development must comply with both European and French regulations, including procedures involving the ANSM when applicable.
Increasing Focus on AI in Drug Development by Health Agencies/Authorities
Health authorities are placing growing emphasis on the integration of AI in drug development processes. For instance, at EU level, the European Medicines Agency (EMA) published a reflection paper in September 2024 on “The use of AI in the medicinal product lifecycle”. This paper notably highlights that, within the context of clinical trials, AI use must comply with Good Clinical Practice (GCP) guidelines. When AI use involves high regulatory impact or patient risk and has not been previously qualified by the EMA for that specific purpose, detailed documentation – including model architecture, development logs, validation and testing results, training data, etc – may also be considered part of the clinical trial data and required for comprehensive assessment.
Furthermore, in May 2025, the EMA and Heads of Medicines Agencies (HMA) released a 2025-2028 work plan focused on “Data and AI in medicines regulation”.
At national level, the ANSM recognises both the opportunities AI presents for accelerating drug development and the challenges it introduces, such as ensuring data confidentiality, addressing potential algorithmic biases, and adapting to complex and diverse biological models.
Several key EU regulations and directives are set to shape the legal landscape in the coming years, including the following.
At EU level, the AI Act requires each Member State to establish an AI regulatory sandbox by 2 August 2026. These sandboxes are designed to support the development, testing and validation of innovative AI systems in a controlled environment while ensuring compliance with applicable legal and ethical requirements.
At national level, the CNIL has already implemented this approach by creating a regulatory sandbox to help innovative actors incorporate GDPR compliance from the early stages of development. For example, in 2023, the CNIL offered guidance to several health-related projects.
France plays a leading role in the development of AI and in promoting the harmonisation of regulatory frameworks at both European and international levels.
For instance, the country hosted and co-chaired the AI Action Summit in February 2025, which gathered representatives from over 100 countries and resulted in the announcement of 100 concrete actions and commitments. One full day of the summit focused specifically on AI in healthcare. In addition, France is a member of the WHO Europe’s Strategic Partners’ Initiative for Data and Digital Health (SPI-DDH), which aims to promote safe and responsible digital health innovation.
However, AI developers may encounter cross-border regulatory challenges, including the transfer of sensitive health data, particularly to non-EU countries. These transfers are subject to strict requirements under the GDPR and other applicable EU/national regulations, which may create operational and legal hurdles.
The French government underscores the need to anticipate technological advancements and emerging applications of AI, while ensuring their relevance, safety and ethical integrity. AI must be developed to support both patients and healthcare professionals.
Several national initiatives have been launched to meet these objectives. These include a pilot programme to evaluate the medico-economic impact of AI-assisted electrocardiogram interpretation, an observatory for monitoring AI usage in healthcare, and ongoing efforts to develop evaluation frameworks for digital medical devices incorporating AI. These initiatives aim to foster responsible and effective integration of AI into the healthcare system.
Healthcare AI providers must adopt a comprehensive compliance strategy, ensuring their solutions meet (notably) regulatory requirements under the GDPR, the AI Act and the MDR/IVDR. This involves implementing transparency and explainability measures, conducting regular internal audits, and establishing robust post-deployment monitoring and risk-management procedures.
Healthcare institutions and professionals, for their part, are expected to use AI tools in a transparent, ethical and accountable manner. This includes informing patients when AI is involved in their care and exercising professional judgment rather than relying solely on AI outputs. Failure to do so may result in liability in the event of errors and/or harm.
Key provisions in contracts involving healthcare AI technologies include the following.
Healthcare AI developers and users must proactively address several key categories of risks associated with the deployment of AI technologies. These include diagnostic errors resulting from incorrect or inappropriate AI outputs, the presence of biases or malfunctions in the algorithm that may lead to unequal or unsafe outcomes, and vulnerabilities to cybersecurity threats that could compromise sensitive medical data or system integrity.
While no dedicated AI insurance regime currently exists, general liability frameworks apply, as follows.
In France, the HAS included AI-related criteria in its 2025 certification framework for healthcare institutions (3.4-05 and 3.4-06).
With regard to digital medical devices incorporating AI for professional use:
In addition to data transfer issues (see 6.3 Data Sharing and Access), it is essential to ensure compliance for MDAI with other EU regulations, such as the AI Act and the MDR/IVDR.
To this end, the Artificial Intelligence Board and the MDCG published in June 2025 the first official guidance document clarifying how these regulations interact. Key areas addressed include data governance; transparency and human oversight; accuracy, robustness and cybersecurity; clinical and performance evaluation; technical documentation; and post-market monitoring.
7 rue Royale
75008
Paris
France
+33 (0) 1 47 23 78 80
contact@freget-glaser.fr
www.freget-glaser.fr

The deployment of artificial intelligence (AI) in healthcare is driving major transformation in France. As innovation accelerates and legal frameworks evolve at both national and European levels, stakeholders must navigate a fast-changing, complex landscape.
AI Deployment: A Strategic Challenge for France’s Healthcare Sector
AI is being integrated into the French healthcare system across a wide range of applications: medical imaging, diagnostic tools, remote patient monitoring, drug discovery, and optimisation of operational and administrative processes. These technologies are at various stages of maturity.
According to the AI Use Observatory of the French National Agency for the Performance of Health and Medico-Social Institutions (ANAP), around 50 AI-powered solutions are already being used in hospitals and medico-social organisations.
This dynamic was further reinforced in February 2025, when the Summit for Action on AI gathered researchers, healthcare professionals, and entrepreneurs at PariSanté Campus to discuss concrete AI-driven solutions in the healthcare sector. On this occasion, the Minister of Health presented a report entitled “The State of AI in healthcare in France”, reaffirming the government’s aim to promote a sovereign, competitive, and trustworthy approach to AI in healthcare.
Key priorities set out in the report include:
Strategic investments are also central to this national objective. France’s support of the development of innovations incorporating AI in healthcare forms part of the Digital Health Acceleration Strategy under the “France 2030” plan. The country has so far invested EUR500 million in digital health solutions, 50% of which is dedicated to projects involving AI.
Major collaborative initiatives – such as PariSanté Campus and the Health Data Hub – bring together healthcare institutions, researchers, and industry leaders. Regulatory bodies such as the French National Agency for the Safety of Medicines and Health Products (ANSM) and the French National Health Authority (HAS) play a crucial role in certifying and guiding AI implementation.
The HAS recently issued a general guide for selecting digital medical devices, including AI systems. A more specific economic evaluation guide is expected soon.
More broadly, AI is now increasingly recognised as a strategic lever for transforming the French healthcare system in response to structural challenges such as workforce shortages, ageing populations and rising costs.
Nonetheless, the implementation of AI in healthcare remains subject to considerable challenges. Key concerns include data quality, algorithmic bias and lack of transparency, as well as ethical risks such as the potential dehumanisation of care and unequal access to innovation. Legal and regulatory hurdles, including liability issues and the absence of dedicated reimbursement models, also hinder broader deployment. From an economic perspective, high development and deployment costs remain a major barrier.
Furthermore, user acceptance is far from assured. According to the 2024 barometer published by PulseLife and Interaction Healthcare, only 58.7% of healthcare professionals trust AI for diagnostic tasks. Among their concerns, algorithmic bias (59%), lack of transparency about data sources (50%), and potential degradation of the professional–patient relationship (49%) are most frequently cited. On the patient side, an OpinionWay survey for the Healthcare Data Institute shows that only 44% of patients believe doctors can safely use AI in medical care.
Lastly, the regulatory environment surrounding AI in healthcare is rapidly evolving. In addition to new EU regulations – some of which are already entering into force – numerous guidelines have been published or are forthcoming from key institutions such as HAS, the Digital Health Delegation, and the CNIL. Stakeholders must therefore remain attentive to legal developments at both national and European levels.
Key Precautions in Developing and Using Health AI in France
In light of the evolving regulatory environment, one of the primary challenges facing developers and users of AI systems in the healthcare sector is to understand how the new European instruments interact with one another and with existing national legal frameworks.
Ensuring compliance with growing EU and national regulations
From a legal standpoint, the deployment of AI in healthcare is now governed by a set of overlapping European and national regulations including:
Under the AI Act, AI-based medical devices are classified as high-risk systems, requiring dual conformity under both the AI Act and the MDR/IVDR. Guidance issued in June 2025 by the Medical Device Coordination Group (MDCG) establishes coordination rules to avoid duplicative procedures.
While the Act’s provisions concerning high-risk AI systems will come into effect in August 2026, the European Commission has not yet published its guidelines on classifying high-risk AI systems and related requirements and obligations. On this subject, two recent developments are worth mentioning, as follows.
Ethical considerations also play a growing role. Digital ethics in healthcare, both at French and European level, seeks to promote principles such as trust, transparency, and fairness in AI use. Transparency and explainability obligations are central to protecting patients’ rights and ensuring responsible use of AI in care.
Notably, the French Digital Health Delegation and the Digital Health Agency have contributed to this effort through the publication of ethical implementation guidelines:
In parallel, a regulatory framework on interoperability and security applies to digital medical devices that process personal data. Since February 2023, such devices must comply with the “Interoperability and Security Framework for Digital Medical Devices”, with certification issued by the Digital Health Agency (ANS) as a condition for reimbursement by the French National Health Insurance.
Lastly, on the matter of liability, the EU has not yet adopted a dedicated regime for AI-related damages.
While a specific AI Liability Directive was initially envisaged, it was ultimately excluded from the Commission’s 2025 work programme. In its place, the new Product Liability Directive (PLD), which Member States must transpose by December 2026, is expected to fill this gap. It introduces no-fault liability for software, including AI systems, treating developers and providers as manufacturers. The directive also lowers the burden of proof for claimants in compensation claims. In France, liability is currently governed by traditional frameworks: product liability (soon to be aligned with the PLD) and medical liability. In this context, AI is regarded as a decision-support tool, and healthcare professionals retain full responsibility for clinical decisions, in accordance with Article L. 1142-1 of the French Public Health Code.
Ensuring proper use of health data
Another critical issue concerns proper access to and use of health data, particularly for training and testing AI-based medical devices.
Access to such data is highly regulated. Healthcare stakeholders may gain access to health data for research purposes – such as innovation activities or algorithm training – through the dedicated framework for the secondary use of health data, but only under specific conditions.
At EU level, the EHDS Regulation establishes a harmonised framework for the access, use, and exchange of electronic health data across Member States. Its goals are to empower individuals with access to and control over their personal health data (primary use), and to enable the secure and reliable reuse of health data for research, innovation, policymaking, and regulatory purposes (secondary use).
It introduces HealthData@EU, a central online platform that connects health datasets from across Europe.
Under this framework, stakeholders may apply to the relevant health data access body (not yet designated) for permission to use health data for secondary purposes (as health data applicants). At the same time, they may be required to share health data (as health data holders).
At national level, France has already established the Health Data Hub as a centralised platform that facilitates access to data from the National Health Data System (SNDS). This enables the secondary use of health data for purposes such as the development and testing of AI algorithms. However, access to this data for secondary use may require prior authorisation from the French Data Protection Authority (the CNIL).
Once access is granted, several other compliance considerations must be addressed.
Key compliance requirements include differentiating anonymised from pseudonymised data, ensuring data robustness for training and testing algorithms, mitigating bias, and ensuring equitable algorithm performance.
Ensuring transparency and human oversight of health AI
In line with European and national regulations, transparency and human oversight are also key requirements for the development and use of high-risk AI systems.
Providers must ensure that their systems are designed and documented in a way that is understandable to healthcare professionals and patients and must allow human intervention and control at all critical stages.
Healthcare institutions and professionals, for their part, must apply AI tools ethically and transparently, retaining professional judgment (ie, not relying solely on AI outputs) and informing patients when AI is involved in a preventive, diagnostic, or therapeutic act, in accordance with Article L. 4001-3 of the Public Health Code.
Ensuring risks are properly identified and addressed
Finally, risk management is an essential component of any compliance strategy. High-risk healthcare AI systems must be subject to a comprehensive risk management process throughout their lifecycle. Developers must identify and mitigate foreseeable risks to health, safety, or fundamental rights, implement appropriate safeguards, and report serious incidents. Healthcare institutions must also carry out clinical risk assessments prior to deployment, adopt technical and organisational safeguards, and provide staff with adequate training to ensure responsible interpretation of AI outputs.
A robust compliance strategy must therefore integrate multiple layers, including transparency, regular internal audits, post-market monitoring, and adherence to data protection standards. On the user side, healthcare institutions and professionals are expected to adopt responsible practices aligned with legal and ethical standards, thereby ensuring the safe, effective, and trustworthy use of AI in medical care.
7 rue Royale
75008
Paris
France
+33 (0) 1 47 23 78 80
contact@freget-glaser.fr
www.freget-glaser.fr