Healthcare AI 2025

Last Updated August 06, 2025

France

Law and Practice

Fréget Glaser et Associés has, since its inception in 2014, continuously grown to become a solidly established and well-reputed high-end practice in the field of competition and regulation, with extensive expertise in the pharmaceuticals/life sciences sectors. The firm’s primary differentiator with respect to other law firms, particularly those driven by their corporate departments, is its ability to propose complex litigation strategies with respect to all civil, commercial and administrative courts, as well as the regulatory authorities, rounding out clients’ overall business strategies. The keys to Fréget Glaser et Associés’ success lie not only in the cutting-edge legal expertise of its team members but also in its ability to combine legal and economic expertise with out-of-the-box thinking.

AI is being applied across a broad range of healthcare use cases in France, with varying levels of maturity. The AI Use Observatory of the French National Agency for the Performance of Health and Medico-Social Institutions (ANAP) currently includes around 50 AI-driven solutions implemented in hospitals and medico-social organisations. A May 2024 Senate report examined the national deployment of AI in healthcare.

Key applications include the following.

  • Medical imaging – a leading domain for AI use, leveraging digitised scans (X-rays, CT, MRI) to enhance image quality and automate anomaly detection. These tools are widely integrated into radiologists’ workflows. The France 2030 “Santé Numérique” acceleration strategy supports this area through a EUR90 million investment.
  • Diagnostic tools, including:
        • ophthalmology – early detection of certain diseases such as glaucoma and macular degeneration;
        • oncology – AI aids screening and diagnosis – eg, the Curie Institute developed AI to identify primary sites of metastatic cancers; the Society for Women’s Imaging (SIFEM) has also highlighted promising research on AI in breast cancer screening;
        • cardiology – improved detection of heart failure from ECGs; and
        • nephrology – an algorithm by Prof. Loupy predicts transplant rejection using complex patient data (Inserm Innovation Prize 2023).
  • Remote patient monitoring – already used in cardiac care (eg, pacemakers, defibrillators), with predictive AI models under development to anticipate cardiac events before symptoms appear.
  • Drug discovery – AI accelerates pharmaceutical R&D by analysing chemical/biological data to identify candidate molecules, predict drug activity and side effects, and model interactions.
  • Operational and administrative optimisation – AI streamlines administrative tasks, such as scheduling, billing, identity checks and coding.

Adoption of these technologies:

  • Deployment varies depending on institutional resources, funding, and trust.
  • According to a 2024 barometer by PulseLife and Interaction Healthcare, more than one in two healthcare professionals (53%) incorporate AI into their daily practice, particularly for assistance with access to medical information (46%), training (37%) and treatment prescription (28%). However, only 58.7% of healthcare professionals trust AI for diagnostics, with algorithmic bias (59%), source transparency (50%) and the deterioration of the healthcare professional–patient relationship (49%) emerging as the main concerns.
  • According to an OpinionWay survey for the Healthcare Data Institute, only 44% of patients believe doctors can safely use AI in care.

Primary Benefits of AI in Healthcare

AI is increasingly recognised as a strategic asset in the transformation of the French healthcare system, particularly in addressing systemic pressures such as financial constraints, healthcare workforce shortages, and population ageing.

The main benefits include:

  • improved clinical outcomes – AI enables earlier and more accurate diagnostics, enhances treatment planning and facilitates personalised medicine;
  • operational efficiency – waiting times can be reduced and patient pathways streamlined, particularly in hospitals; resource allocation and workflow management can also be improved;
  • support for healthcare professionals – routine administrative tasks (eg, documentation, coding, scheduling) are automated and time is freed up for direct patient care, reducing stress and improving job satisfaction;
  • advancement of medical research – drug discovery and clinical trial design are accelerated, and generative AI contributes by making unexpected connections and enhancing innovation capacity; and
  • system-level gains – efficiency is supported across the healthcare system, which can also contribute to making medical careers more attractive by reducing the overall non-clinical burden.

Challenges Arising From a Healthcare-Specific Perspective

Despite its potential, AI deployment in healthcare raises significant concerns, particularly:

  • data-related risks – data quality, bias and representativeness;
  • lack of transparency and “explainability” – “black-box” AI systems;
  • ethical and equity concerns – fears of dehumanised medicine due to excessive automation, and the risk of increased health inequalities if AI tools are not equitably deployed across regions;
  • regulatory and legal complexity – ongoing difficulty in assigning liability in the event of AI-related harm, and the need for coherence across overlapping legal frameworks; and
  • economic hurdles and barriers to deployment – high implementation costs can deter adoption, particularly when return on investment remains uncertain, with limited reimbursement models (while individual devices may be reimbursed, broader system-level AI tools often lack dedicated funding pathways).

France’s Supporting Strategy and Investments for AI in Healthcare

As part of the Summit for Action on Artificial Intelligence in February 2025, the Ministry of Health published a report on the state of AI in health in France (“État des lieux de l’intelligence artificielle (IA) en santé en France”), outlining a comprehensive strategy for AI in healthcare structured around four key themes: prevention, care delivery, access to care and a supportive framework.

According to the report, France’s support of the development of innovation incorporating AI in healthcare is part of the “Digital Health” acceleration strategy (SASN) under the “France 2030” plan. France has so far invested EUR500 million in Digital Health solutions, 50% of which is dedicated to projects involving AI.

Key stakeholders are as follows:

  • healthcare providers – hospitals and clinics adopting AI for clinical decision-making and operational efficiency;
  • technology companies – such companies develop AI tools tailored to healthcare needs, often in collaboration with medical institutions; and
  • regulatory bodies – agencies such as ANSM and HAS assess and certify AI systems and publish guidance to support healthcare stakeholders and companies in implementing AI. 

HAS has recently announced its intention to publish several AI-related guides, in parallel with a number of ongoing initiatives involving AI applications in the healthcare sector.

Notable collaborations between healthcare institutions and technology developers are as follows:

  • PariSanté Campus – a collaborative hub fostering partnerships between public health institutions and private tech companies to accelerate AI innovation in healthcare; and
  • Health Data Hub – this platform is already operational and supports 168 projects, 54% of which are led by hospitals and 28% of which involve industry partners; 40% of these projects use AI methods, such as the DEEP.PISTE project, which aims to optimise organised breast cancer screening through advanced AI models.

Healthcare AI systems are classified based on intended use, risk level, and patient-health impact:

  • diagnostic AI tools – typically classified as medical devices, often high-risk if they affect critical clinical decisions, and subject to the MDR, which requires conformity assessment and CE marking before market entry;
  • therapeutic AI systems – AI applications influencing treatment or delivering therapy (eg, drug dosage calculators, robotic surgery aids) are also medical devices and regulated accordingly; and
  • administrative or operational AI tools – support healthcare operations without direct clinical impact (eg, appointment management), are generally not medical devices but must comply with data protection and cybersecurity laws.

The AI Act also introduces risk-based categories, identifying high-risk AI systems, including medical AI, with stricter transparency, robustness, and oversight requirements.

According to the Medical Device Coordination Group (MDCG)’s guidance published in June 2025, the classification of an AI system as high-risk under the AI Act does not automatically result in a higher risk class for the corresponding medical device or in vitro diagnostic under the MDR or IVDR. Rather, it is the device’s classification under the MDR/IVDR that determines whether the AI system is considered high-risk under the AI Act. 

Key Legal Frameworks Governing Healthcare AI in France

AI use in the healthcare sector is governed by the following.

  • Regulation 2024/1689 of 13 June 2024 on Artificial Intelligence (the AI Act). This entered into force on 1 August 2024 with a phased implementation schedule.
  • Regulation 2017/745 of 5 April 2017 on Medical Devices (MDR).
  • Regulation 2017/746 of 5 April 2017 on In Vitro Diagnostic Medical Devices (IVDR).
  • Regulation 2025/327 of 11 February 2025 on the European Health Data Space (EHDS). This entered into force on 26 March 2025, marking the beginning of the “transition period” (phased implementation schedule).
  • Directive 2024/2853 of 23 October 2024 on Liability for Defective Products. This must be transposed into French law by no later than 9 December 2026.
  • Article L. 4001-3 of the French Public Health Code, a provision stemming from the August 2021 Bioethics Law.
  • Regulation 2016/679 of 27 April 2016 (GDPR) and the French Data Protection Act of 6 July 1978.

Alignment of EU Regulations and Existing Healthcare Laws

In the field of AI and healthcare, European regulations such as the AI Act and the MDR provide a comprehensive framework for AI-driven medical technologies. The MDR sets specific requirements for the safety, performance and conformity assessment of medical devices, including those incorporating AI.

The AI Act introduces additional obligations that focus on the specific risks posed by AI systems, particularly high-risk applications in healthcare.

At national level, the provisions of the French Public Health Code (Article L.4001-3) also govern the use of AI in healthcare. These national rules are generally aligned with the ethical requirements set out in the AI Act.

In parallel, the PLD, or Product Liability Directive, aims to modernise the EU liability regime to better address the risks associated with AI systems and connected products. Existing French rules on medical liability remain applicable for the time being, and their interaction with evolving EU provisions is discussed in 4. Liability and Risk in Healthcare AI.

AI systems used for medical purposes must comply with both the EU AI Act and the MDR/IVDR. Under Article 6 of the AI Act, such AI-based medical devices are considered to be high risk, particularly when classified as Class IIa or higher under the MDR/IVDR, triggering stringent conformity assessment requirements.

Legal and Reimbursement Framework

CE Marking

To be marketed in the EU, AI-based digital medical devices must first obtain CE marking. This certification confirms that the device meets European regulatory requirements for safety and performance. To obtain CE marking, the manufacturer must compile technical documentation providing evidence to demonstrate the quality and safety of the device. The MDR/IVDR require detailed descriptions of software architecture, data processing, and risk management. The AI Act adds documentation requirements focused on transparency and accountability, including risk assessments, data governance, and performance testing of high-risk medical AI systems.

In France, the ANSM evaluates the benefits and risks associated with the use of health products, in particular medical devices.

  • Clinical evaluation by HAS/CNEDiMTS: after CE marking, the device is assessed by CNEDiMTS within HAS to determine eligibility for reimbursement (LATM/LPPR inclusion).
  • Early access and accelerated pathways: France offers accelerated access options for innovative digital medical devices, such as the PECAN scheme (for temporary funding of devices presumed to be innovative) and other pathways to facilitate prompt use of AI tools with strong clinical potential.
  • Reimbursement pathway: after HAS approval, the Economic Committee for Health Products (CEPS) sets reimbursement tariffs for individually prescribed devices.
  • Data protection: AI systems processing personal health data must also comply with the GDPR.

The ANS issues conformity certificates related to interoperability and cybersecurity requirements.

Coordination of AI Act and MDR/IVDR Conformity Procedures

According to the MDCG’s June 2025 guidance, developers must comply with conformity procedures under both the AI Act and the MDR/IVDR. Where obligations overlap, the guidance provides coordination rules to avoid duplication. Manufacturers of AI systems used for medical purposes are expected to integrate testing, reporting processes, and documentation required under the AI Act into the technical documentation already prepared for MDR/IVDR compliance. In France, notified bodies designated under the MDR/IVDR and the ANSM oversee regulatory processes.

Software with a medical purpose is included within the scope of the MDR. Software as a medical device (SaMD) refers to a software application with a diagnostic, therapeutic, preventive, monitoring or disease-management purpose. This includes, in particular, treatment planning tools, medical imaging algorithms and intelligent remote-monitoring platforms.

The MDR introduced a specific rule (“Rule 11”) for software, classifying software based on its potential impact on the patient’s health.

According to the AI Act, continuous-learning algorithms must be developed in such a way as to eliminate or reduce, as far as is possible, the risk of potentially biased outputs influencing input for future operations and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures (Article 15 of the AI Act). Per the MDCG’s June 2025 guidance, the post-market monitoring system is key to ensuring continued performance and compliance.

France applies GDPR rules – emphasising informed patient consent, proportional data use, and cybersecurity – and the French Data Protection Act (see 6. Data Governance in Healthcare AI).

Applicable Technical Standards for Healthcare AI Systems

ISO and IEC have developed the ISO/IEC 42001 standard to support manufacturers in aligning their AI-enabled products with the requirements of the AI Act. The standard promotes the responsible development of AI, emphasising safety, transparency, and ethics.

It thus provides a normative framework enabling manufacturers of devices incorporating AI to implement an AI management system. The implementation of such a system should be aligned with the processes required for ISO 13485 certification, such as management responsibility, risk management, audits and continuous improvement.

ISO/IEC 42001 introduces some additional requirements concerning:

  • the needs and expectations of interested parties;
  • the development and deployment of AI;
  • the AI lifecycle;
  • data management; and
  • technical documentation for AI.

In practice, manufacturers must comply with both ISO 13485 and ISO/IEC 42001. The two standards share overlapping requirements, which should be consolidated into a unified quality management system.

Interoperability and Data Protection Requirements

Since February 2023, an Interoperability and Security Framework for Digital Medical Devices has been applicable to all medical devices reimbursed by the French National Health Insurance that involve the processing of personal data as defined by the GDPR. Certification of compliance is issued by ANS.

French key bodies are as follows:

  • HAS (French National Health Authority)
    1. assesses the clinical utility and added value of medical devices, including AI-based diagnostic and decision-support tools; and
    2. issues guidelines and best practices for evaluating digital health technologies.
  • ANSM (National Agency for the Safety of Medicines and Health Products)
    1. regulates and ensures the safety, performance, and conformity of medical devices, including SaMD; and
    2. authorises clinical investigations for AI-driven devices when required.
  • CNIL (Data Protection Authority)
    1. oversees compliance with data protection rules under the GDPR and French Data Protection Act; and
    2. issues guidance specific to health data processing and the use of AI in healthcare.
  • ANS (Digital Health Agency)
    1. develops digital infrastructure, interoperability standards, and cybersecurity frameworks for health IT systems, including AI tools; and
    2. manages the national health identifier system and supports the Mon espace santé platform.
  • Ministry of Health
    1. defines national strategies and policies for digital health and AI regulation; and
    2. co-ordinates public funding and innovation programmes (eg, the Health Innovation Plan).

Coordination between regulatory and data protection authorities in France occurs through several mechanisms, as follows.

  • Joint guidance and frameworks – regulatory and data protection authorities have already collaborated through working groups, notably to develop guidance documents on AI in healthcare, such as the Implementation Guide for Ethical AI Systems in Healthcare (2025).
  • National strategy alignment – France 2030 Stratégie d’accélération santé numérique aligns the roles of various agencies, fostering coordination in regulatory, technical, and ethical oversight of AI and digital health tools.
  • Shared evaluation processes – AI medical devices must undergo both technical/clinical evaluation (HAS/ANSM) and data protection impact assessments, often reviewed by CNIL. The Authorities may consult each other during these processes to avoid regulatory conflicts and ensure comprehensive oversight.

Healthcare AI systems classified as “high risk” under the EU AI Act must undergo comprehensive pre-market validation and certification. Key requirements include:

  • CE marking via a notified body, based on the device’s risk class; and
  • clinical validation by the HAS and its CNEDiMTS committee.

The HAS “descriptive grid for medical devices with machine learning (AI)”, updated in 2022, provides a structured framework detailing device use, data, algorithms and performance. Used by CNEDiMTS, it ensures transparency and consistency in clinical evaluation and reimbursement decisions, covering the following:

  • documentation on algorithm design, data sets used, update procedures, and limitations;
  • risk assessments addressing cybersecurity, potential bias, and performance;
  • transparency obligations; and
  • bias detection and fairness audits, required for sensitive uses like diagnostics.

These requirements ensure that AI remains assistive, rather than autonomous, and that its results are traceable and controllable by health professionals.

Both the MDR/IVDR and the AI Act require manufacturers to establish post-market monitoring and surveillance systems to track the performance and safety of healthcare AI systems once on the market. This involves systematically collecting and analysing data on device performance, risks, adverse events, and other safety concerns, and taking necessary corrective and preventive actions. Manufacturers must also maintain a vigilance system, report adverse events to authorities and users, and regularly update their risk, quality management, and compliance processes based on post-market findings and regulatory feedback.

In France, the ANSM monitors the safety and performance of medical devices post-market, ensuring timely detection of risks and incidents. The ANSM conducts continuous and regular re-evaluations of the benefit-risk balance of health products in actual clinical practice.

Withdrawal and Recall Procedures for AI Systems in Healthcare

Under applicable regulations, including the AI Act and MDR, AI systems used in healthcare are subject to stringent withdrawal and recall procedures to safeguard patient safety and ensure regulatory compliance.

If the market surveillance authority – eg, the ANSM in France – determines that an AI system fails to meet the required regulatory standards, it will promptly require the responsible operator to implement all necessary corrective measures. This may include bringing the system into compliance, withdrawing it from the market, or initiating an immediate recall.

Sanctions for Non-Compliance

Failure to comply with regulations governing AI systems for medical purposes can result in severe administrative and legal penalties. These include suspension or withdrawal of the CE marking, substantial fines, and bans on marketing the product.

Notably, the AI Act imposes fines of up to EUR35 million or 7% of global annual turnover, whichever is higher.

Additionally, the CNIL, responsible for monitoring personal data protection, may impose fines for violations of the GDPR.

Measures and sanctions are systematically made public by authorities such as the ANSM and the CNIL, thereby entailing a substantial reputational risk for the companies concerned.

Liability Frameworks

The AI Act does not define the legal regime for AI-related damages. The proposed AI Liability Directive, intended to address this, was removed from the European Commission’s 2025 work programme.

In the absence of a specific instrument, the PLD should therefore cover AI liability questions once transposed into French law, which must occur by December 2026. It treats software as a product subject to no-fault liability, with developers and AI providers regarded as manufacturers. The PLD also introduces eased proof requirements for claimants.

In France, liability for healthcare AI currently relies on existing frameworks – product liability (to be updated with the PLD) and traditional medical liability – without a dedicated AI-specific regime.

Allocation of Responsibility

  • Developers and manufacturers can be held liable if harm results from a defective AI system.
  • Healthcare providers remain liable for improper use of AI tools, including misuse, over-reliance, lack of training, or failure to inform patients.
  • Other actors, such as distributors and importers, may be liable when the manufacturer is not established in the EU.
  • The AI Act outlines responsibilities across the entire AI value chain, especially for high-risk AI systems.

Application of Traditional Medical Liability Standards to AI Systems

In France, traditional medical liability applies to clinicians using AI, which is viewed as a support tool – not a decision-maker. Doctors retain clinical judgment and are liable for errors from misuse of AI under Article L1142-1 of the French Public Health Code.

Healthcare professionals may be personally liable for mishandling AI (eg, misinterpreting results) unless they can justify following or disregarding AI recommendations. Patients must prove both breach of duty and causation for harm.

Standards of Care With AI

Doctors must provide attentive care consistent with current science (Article R4127-32 of the French Public Health Code). When using AI, they must be trained, avoid undue reliance if better methods exist, justify AI use or non-use, obtain informed consent, maintain autonomy and avoid automation bias. AI increases responsibility without lowering standards.

Though not involving AI specifically, decisions handed down by French courts suggest how liability could be applied:

  • in 2020, the Paris Court of Appeal held a doctor liable for using a robotic technique instead of a standard one, causing unnecessary risk to the patient (Article L. 1142-1 CSP) (Paris Court of Appeal, 19 November 2020, 17/15960); and
  • in 2022, the Council of State clarified that telemedicine reforms do not exempt doctors from their duties; teleconsultations still require attentive care, informed consent, and respect for the care pathway (French Council of State, 14 October 2022, 461412).

Determining Causation in Cases Involving AI Systems

Proving causation is difficult due to the “black-box” nature of some AI systems. The French courts allow some flexibility, but the burden remains on patients. If fault remains unproven, compensation may be sought via the National Office for Compensation of Medical Accidents (ONIAM), subject to strict legal conditions and seriousness thresholds.

Regulatory Risk Management Requirements

Healthcare AI systems used for diagnosis, treatment, or clinical decision-making are classified as high-risk technologies under the EU AI Act. As such, AI systems are subject to stringent risk-management obligations across their entire lifecycle.

These include identifying and assessing foreseeable risks to health, safety, or fundamental rights; implementing effective controls; reducing or eliminating risks; and reporting serious incidents.

Risk Assessment Processes for Developers and Healthcare Institutions

Developers must establish and document a risk-management system for high-risk AI. Healthcare institutions must assess clinical risks before deployment, ensure proper use of AI tools through technical and organisational measures, and provide adequate training for staff to interpret AI results responsibly.

Defences Available to Healthcare Professionals

Healthcare professionals may avoid liability if they can demonstrate that they have used AI tools conscientiously, competently, and in line with accepted medical standards. French law maintains the principle that clinical judgment must prevail. Practitioners must evaluate AI-generated results critically and remain fully responsible for their decisions.

Liability may be avoided if the clinician can show:

  • appropriate and informed use of the AI tool;
  • acceptance or rejection of AI recommendations based on their clinical judgment;
  • that the patient was informed of AI involvement in their care; and
  • that human oversight and standard care protocols were respected.

Defences and Limitations for AI Developers and Manufacturers

Developers may invoke regulatory compliance – such as CE marking, ISO standards and obligations under the AI Act and PLD – as part of their defence in product liability claims. While not exempting them from liability, such compliance may mitigate responsibility or demonstrate due diligence.

Absence of Safe Harbour Provisions

French and EU law currently provide no explicit safe harbour for healthcare providers or AI developers. Certification or regulatory approval does not exempt them from liability in cases of harm caused by misuse, malfunction or oversight failures. Regulatory compliance does not replace the duty of care.

Challenges Posed by the “Black-Box” Nature of Some AI Systems

The opaque nature of some AI models creates challenges in liability litigation, including:

  • difficulty in tracing how results were generated;
  • unclear attribution of responsibility between AI and user; and
  • obstacles in proving causation or fault when decision-making processes cannot be explained.

Because legal frameworks require proof of fault, damage and causality, the lack of “explainability” complicates the patient’s burden of proof.

Ethical Frameworks Governing Healthcare AI in France

Digital ethics in healthcare promotes values such as trust, transparency and fairness in AI use, evolving in a fast-changing environment.

International and European Foundations

At international level:

  • OECD Principles (2019) support human-centric, safe, fair and accountable AI;
  • UNESCO AI Ethics (2021) set global standards focused on human rights and data protection;
  • GDPR enforces data privacy and user rights;
  • MDR ensures safety, transparency and traceability of AI-based medical devices; and
  • the AI Act imposes risk-based rules and strict obligations for high-risk AI systems.

National Framework

French bioethics law, notably Article L.4001-3 of the French Public Health Code, reinforces these principles.

  • The Ethics by Design Guide (2022) outlines five core principles (non-maleficence, justice, autonomy, transparency and sustainability) and applies them across five AI development stages.
  • The 2025 Implementation Guide for Ethical AI Systems in Healthcare provides detailed, phase-aligned ethical criteria tailored to developers and providers (guide submitted for public consultation from 12 May to 6 June 2025).

Binding vs Voluntary Standards

While GDPR, MDR, and French law impose mandatory rules, other guidelines remain voluntary but are encouraged to foster responsible innovation. They may later be incorporated into formal regulatory standards.

Transparency and Explainability Requirements for Healthcare AI Systems

In France, as in the broader European Union, healthcare AI systems are subject to strict transparency and explainability obligations aimed at protecting patients’ rights and ensuring trust in AI-assisted care.

Disclosure to Patients

Healthcare providers are required to inform patients when AI technologies are used in their diagnosis, treatment or care management (Article L. 4001-3 of the French Public Health Code and GDPR).

Information Available to Healthcare Providers and Patients

Healthcare professionals must have access to clear, understandable information on how the AI system works – its scope, performance, errors and limitations – so they can maintain clinical judgment, assess the relevance of AI-generated results, and justify medical decisions.

While detailed technical disclosures are not generally required due to IP and trade secret protections, the level of explainability must be sufficient for both healthcare professionals and patients to understand how the AI functions and its impact on care decisions.

Addressing Algorithmic Bias in Healthcare AI

The AI Act requires high-risk AI systems to be subject to risk assessments targeting the bias and discrimination which can result from unrepresentative data, clinical assumptions or algorithmic limitations.

Testing, Monitoring, and Mitigation of Bias

The AI Act requires developers to test AI systems with large, diverse and representative datasets, to continuously monitor performance in order to detect bias affecting health and safety, and to implement mitigation strategies throughout the AI lifecycle. Similarly, under the MDR, medical AI must comply with strict safety and performance standards.

Protections for Vulnerable Populations and Data Diversity Requirements

French and EU regulations emphasise non-discrimination and require demographic diversity in training data to avoid biased outcomes. French ethical guidelines prioritise justice and inclusivity in dataset development, particularly regarding vulnerable populations.

Health Equity in Regulatory Frameworks

Health equity is increasingly reflected in:

  • transparency and explainability requirements to help clinicians detect biased AI recommendations;
  • outcome monitoring to prevent worsening of existing disparities; and
  • public efforts to ensure equitable access to AI tools across populations and regions.

French health authorities/institutions also support research and pilot programmes to validate AI in diverse clinical settings, ensuring fair and effective use.

Human Oversight Requirements for Healthcare AI Systems

French and EU regulations emphasise human oversight as essential, particularly for high-risk healthcare AI.

Limits on Autonomous AI Decision-Making

Current rules generally prohibit fully autonomous AI in healthcare. The AI Act mandates that high-risk systems (eg, diagnosis, monitoring) allow human control at all key stages.

Roles of Healthcare Professionals When Using AI Tools

Clinicians must remain the main decision-makers. When using AI, they are required to:

  • understand the capabilities and limitations of the high-risk AI system;
  • be vigilant with respect to automation bias;
  • correctly interpret the outputs of the high-risk AI system; and
  • be able to override, disregard, or opt not to use the AI system when necessary.

Level of Human Involvement

The level of human control depends on risk level as follows.

  • High-risk AI requires strict human control and intervention mechanisms.
  • Low-risk/administrative AI requires transparency and accountability; the CNIL advises enabling users to raise concerns, typically via a data protection officer (DPO).

For healthcare AI systems, governance obligations are imposed by both the AI Act for high-risk AI and by the MDR/IVDR, as follows.

  • Datasets: Datasets used for training, validation and testing must be relevant, sufficiently representative, as error-free as possible, and complete with respect to the intended purpose of the AI system/medical device. The French government emphasises the need for access to national databases that are “high-quality and representative” in order to obtain “robust” data.
  • Bias: Any potential bias in the datasets must be addressed if it is likely to affect people’s health or safety, negatively impact fundamental rights, or result in discrimination prohibited under EU law. This includes managing unintended bias, dataset drift, and identifying subgroups where the model may underperform.
  • Personal data: The transparency obligation concerning the original purpose of data collection must be respected, and stringent data governance practices must be in place to maintain data integrity, and address privacy and security concerns. The CNIL provides guidelines explaining how to process training data legally.

In support of these requirements, the European Commission will issue horizontal guidelines, and a committee is developing harmonised standards on data and bias.

Rules Governing the Secondary Use of Health Data in France

The secondary use of health data – for innovation activities, the training of AI algorithms and similar purposes – is primarily governed by Chapter 4 of the EHDS and the GDPR/French Data Protection Act. The cross-border infrastructure HealthData@EU is being developed to meet the requirements of the EHDS.

Based on these regulations, the French government is expected to publish national strategies aimed at developing the secondary use of health data and building trustworthy AI.

The CNIL also plays a key role in this area and has issued several recommendations and decisions regarding secondary data use.

Health Data Hub and SNDS

France has established the “Health Data Hub” as a centralised platform for accessing data from the National Health Data System (SNDS) (health data originating from national health sources). This platform enables the secondary use of health data for AI algorithm training/testing, research, etc. However, the use of health data may require prior authorisation from the CNIL.

For instance, in July 2024, the CNIL authorised a hospital to process SNDS data to develop a decision-support algorithm for admission to intensive care of elderly patients with respiratory infections. In this particular case, the CNIL accepted an exception to the principle of patients’ individual information for the development of the algorithm, provided that appropriate measures were implemented (see decision DR-2024-184 of 19 July 2024).

Cross-border transfers of health data within the EU are primarily governed by the GDPR and the EHDS, which entered into force on 26 March 2025. The EHDS sets a unified framework for secure, efficient sharing of electronic health data across Member States to support research, innovation, public health, and patient care. Transfers outside the EU are allowed only under strict conditions.

In France, data from the SNDS are subject to strict data localisation rules: they must be hosted within EU Member States and cannot be transferred outside the EU except under strict and limited conditions (Article R. 1461-1 of the French Public Health Code).

The processing of health data (classified as sensitive data) is only permitted in specific cases (Article 9 of the GDPR and Articles 6 and 44 of the French Data Protection Act). However, anonymised data, which are not considered personal data, are not subject to these provisions.

The CNIL offers guidance on the appropriate anonymisation techniques and on how to assess their effectiveness.

Care must be taken to ensure that the risk of re-identification using reasonable means is negligible (anonymisation vs pseudonymisation). Failing to do so may result in sanctions: in September 2024, the CNIL fined a company that develops and sells management software to healthcare professionals EUR800,000 for processing pseudonymised health data without obtaining the required authorisation, despite the company’s claim that the data were anonymised.

Verification of Standard Patent Criteria

To be eligible for European or national patent protection, an invention must meet both eligibility and patentability requirements.

In France, invention patents are governed by the French Intellectual Property Code (Article L. 611-1 et seq. and R. 611-1 et seq.). These provisions apply to AI-related inventions, which may be protected in particular if they serve a technical purpose (see below), and the inventor is human rather than an AI system.

AI systems frequently rely on mathematical methods, which are excluded from patentability because these methods are not regarded as inventions (Article 52 of the European Patent Convention, Article L. 611-10 of the French Intellectual Property Code).

However, this exclusion is not absolute: an AI-based invention may be eligible for patent protection if the mathematical method contributes to the technical character of an invention by providing a technical solution to a technical problem, and if the invention meets the standard criteria of patent protection.

Recent Guidance from the European Patent Office (EPO)

The EPO recently stated that patents may be granted when AI leaves the abstract realm of mathematical algorithms and computational models and is applied to solve a technical problem in a field of technology.

In its April 2025 edition of the Guidelines for Examination, the EPO further elaborated on its stance by introducing new guidance on AI (G-II, 3.3.1). The EPO clarified that “if a claim of an invention related to artificial intelligence or machine learning is directed either to a method involving the use of technical means (eg, a computer) or to a device, its subject matter has technical character as a whole and is thus not excluded from patentability under Art. 52(2) or (3)”.

Examples of eligible technical contributions made by a mathematical method include using neural networks in medical devices to detect irregular heartbeats or medical diagnosis by an automated system processing physiological measurements.

The French Patent Office (INPI) has also addressed AI patentability. For example, the INPI upheld a patent on medical image analysis after narrowing its scope (see INPI, 4 July 2024, OPP 22-0035).

Copyright: Software – Yes, Algorithms – No

In France, software – including source code, architecture and preparatory design material – is protected under copyright law provided that its originality can be demonstrated (Article L. 112-2, 13° of the French Intellectual Property Code). Algorithms are, however, more difficult to protect under copyright law.

Trade Secret Protection Versus Mandatory Transparency

To avoid public disclosure, healthcare innovations can be protected under trade secrets law, provided that the information in question (i) is not generally known or readily accessible to persons within the relevant business sector (ie, is secret); (ii) has actual or potential commercial value due to its secret nature; and (iii) is subject to reasonable protection measures by its lawful holder to maintain its confidentiality (Article L. 151-1 of the French Commercial Code).

The secret nature of such information could, however, be called into question due to transparency obligations imposed at both the European and national levels – for instance, the requirement to disclose extensive information to obtain CE marking, or the need for healthcare professionals and patients to understand how the AI system works.

The question of usage rights and ownership of outputs generated by AI is complex and generally depends on the contractual arrangements with the technology provider.

Various types of licences can be used, including proprietary licences and co-licences – eg, in cases involving partnerships between companies and hospitals or universities. Co-licensing agreements must clearly outline rights of use, ownership, revenue-sharing and responsibilities regarding further development and commercialisation.

For instance, in France, the PARTAGES project, led by a consortium of around 30 partners – including research laboratories, healthcare institutions and deep tech companies – is one of the winners of the France 2030 call for projects on generative AI.

A key consideration in such arrangements is the handling of health data. Since health data are classified as sensitive under the GDPR, their use is subject to strict legal requirements.

Standard Obligations for Clinical Decision Support AI in Medical Devices

Any AI-based medical device (MDAI) providing clinical-decision support must comply with the rules applicable to medical devices under the EU MDR/IVDR to obtain CE marking. These devices must also comply with national provisions, notably for the device to be reimbursed by the national health insurance (see 2. Legal Framework for Healthcare AI).

Specific Requirements of Transparency and Human Oversight

An MDAI falls within the category of high-risk AI systems where it is a safety component of a medical device, or is itself a medical device, and is subject to third-party conformity assessment by a notified body in accordance with the MDR/IVDR.

MDAI must therefore comply with the requirements of transparency and human oversight (see 5.2 Transparency and Explainability and 5.4 Human Oversight).

In addition to the responses above, French authorities/agencies highlight the importance of transparent and responsible use of AI in healthcare by institutions and professionals.

For instance:

  • the ANAP, in its February 2025 guide on “deploying trustworthy AI in healthcare”, emphasises that AI systems must be understandable by developers, healthcare professionals and patients to ensure trust and safe use; and
  • the HAS is developing a “framework of trust” for the use of MDAI; to this end, in addition to its 2023 guide for choosing digital medical devices for professional use, the HAS is preparing further tools – including a forthcoming guide aimed at supporting healthcare institutions and professionals in the responsible deployment and use of AI in care delivery.

See responses above regarding the legal framework, human oversight, clinical decision support and diagnostic applications.

Supervision of Telemedicine by the French Public Health Code

Beyond compliance with the EU regulations, telemedicine, including remote monitoring, is governed by the French Public Health Code (Article L. 6316-1 and R. 6316-1 et seq.). A key requirement is that the healthcare professional and the patient are clearly identified. Furthermore, patient consent must be obtained, and the healthcare professional has a duty to inform (Article L. 1111-1 et seq.). The HAS has published a “Professional code of best practices” to help guide the practice of telemedicine in France.

AI-Specific Requirements for Teleconsultation Reimbursement

Telemedicine companies now also have ad hoc legal status and must obtain accreditation from the Ministry of Health to bill the national health insurance for teleconsultations performed by employed physicians.

To obtain such accreditation, companies are required, among other things, to certify that their information system complies with the standards established by the ANS. The accreditation framework includes several AI-related requirements, such as ensuring that patients are informed of the use of AI systems, understand the system, and give their consent.

Compliance With European and National Regulations

Drug discovery and development must comply with both European and French regulations, including procedures involving the ANSM when applicable.

Increasing Focus on AI in Drug Development by Health Agencies/Authorities

Health authorities are placing growing emphasis on the integration of AI in drug development processes. For instance, at EU level, the European Medicines Agency (EMA) published a reflection paper in September 2024 on “The use of AI in the medicinal product lifecycle”. This paper notably highlights that, within the context of clinical trials, AI use must comply with Good Clinical Practice (GCP) guidelines. When AI use involves high regulatory impact or patient risk and has not been previously qualified by the EMA for that specific purpose, detailed documentation – including model architecture, development logs, validation and testing results, training data, etc – may also be considered part of the clinical trial data and required for comprehensive assessment.

Furthermore, in May 2025, the EMA and Heads of Medicines Agencies (HMA) released a 2025-2028 work plan focused on “Data and AI in medicines regulation”.

At national level, the ANSM recognises both the opportunities AI presents for accelerating drug development and the challenges it introduces, such as ensuring data confidentiality, addressing potential algorithmic biases, and adapting to complex and diverse biological models.

Several key EU regulations and directives are set to shape the legal landscape in the coming years, including the following.

  • The AI Act, which entered into force on 1 August 2024 and will be implemented progressively. The rules for high-risk AI systems are scheduled to apply from 2 August 2026 and, for AI systems embedded in products already regulated at EU level (such as medical devices), from 2 August 2027.
  • The EHDS, which entered into force on 26 March 2025 and will also be implemented progressively. In particular, Chapter IV on secondary use will become applicable four years after entry into force for most data categories, six years after entry into force for certain sensitive categories, and ten years after entry into force for the participation of third countries in HealthData@EU.
  • The Directive of 23 October 2024 on liability for defective products, which modernises the EU’s product liability framework to better account for emerging technologies, including AI-driven systems. Member States are required to transpose this directive into national law by December 2026.

At EU level, the AI Act requires each Member State to establish an AI regulatory sandbox by 2 August 2026. These sandboxes are designed to support the development, testing and validation of innovative AI systems in a controlled environment while ensuring compliance with applicable legal and ethical requirements.

At national level, the CNIL has already implemented this approach by creating a regulatory sandbox to help innovative actors incorporate GDPR compliance from the early stages of development. For example, in 2023, the CNIL offered guidance to several health-related projects.

France plays a leading role in the development of AI and in promoting the harmonisation of regulatory frameworks at both European and international levels.

For instance, the country hosted and co-chaired the AI Action Summit in February 2025, which gathered representatives from over 100 countries and resulted in the announcement of 100 concrete actions and commitments. One full day of the summit focused specifically on AI in healthcare. In addition, France is a member of the WHO Europe’s Strategic Partners’ Initiative for Data and Digital Health (SPI-DDH), which aims to promote safe and responsible digital health innovation.

However, AI developers may encounter cross-border regulatory challenges, including the transfer of sensitive health data, particularly to non-EU countries. These transfers are subject to strict requirements under the GDPR and other applicable EU/national regulations, which may create operational and legal hurdles.

The French government underscores the need to anticipate technological advancements and emerging applications of AI, while ensuring their relevance, safety and ethical integrity. AI must be developed to support both patients and healthcare professionals.

Several national initiatives have been launched to meet these objectives. These include a pilot programme to evaluate the medico-economic impact of AI-assisted electrocardiogram interpretation, an observatory for monitoring AI usage in healthcare, and ongoing efforts to develop evaluation frameworks for digital medical devices incorporating AI. These initiatives aim to foster responsible and effective integration of AI into the healthcare system.

Healthcare AI providers must adopt a comprehensive compliance strategy, ensuring their solutions meet (notably) regulatory requirements under the GDPR, the AI Act and the MDR/IVDR. This involves implementing transparency and explainability measures, conducting regular internal audits, and establishing robust post-deployment monitoring and risk-management procedures.

Healthcare institutions and professionals, for their part, are expected to use AI tools in a transparent, ethical and accountable manner. This includes informing patients when AI is involved in their care and exercising professional judgment rather than relying solely on AI outputs. Failure to do so may result in liability in the event of errors and/or harm.

Key provisions in contracts involving healthcare AI technologies include the following.

  • AI system description and intended use: the AI system’s intended purpose, functionality, limitations, and the required level of human oversight must be clearly defined.
  • Maintenance and updates: obligations for system maintenance, updates, and the implementation of necessary corrections to ensure continued performance and regulatory compliance must be established.
  • Liability allocation: clear terms on liability – such as caps on compensation – and a precise outline of cases where liability is excluded must be provided. Contracts may also allocate shared liability – eg, the provider for system malfunctions and the healthcare institution/professional for clinical decision-making.
  • Compliance and guarantees: the provider must guarantee that the AI complies with applicable legal and regulatory frameworks, and matches the technical documentation provided.
  • Technical documentation: delivery of complete and detailed technical documentation must be ensured.
  • Standard legal clauses: standard provisions such as insurance coverage, force majeure, applicable law, and dispute resolution mechanisms must be included.

Healthcare AI developers and users must proactively address several key categories of risks associated with the deployment of AI technologies. These include diagnostic errors resulting from incorrect or inappropriate AI outputs, the presence of biases or malfunctions in the algorithm that may lead to unequal or unsafe outcomes, and vulnerabilities to cybersecurity threats that could compromise sensitive medical data or system integrity.

While no dedicated AI insurance regime currently exists, general liability frameworks apply, as follows.

  • Healthcare professionals are expected to carry medical malpractice insurance.
  • Developers must hold product liability coverage, particularly in light of the PLD. Insurers typically require robust documentation and risk management measures as prerequisites. Additionally, insurance policies must be tailored to reflect the specific regulatory environment, ethical concerns, and operational risks unique to the healthcare sector, especially when AI systems are integrated into medical devices.

In France, the HAS included AI-related criteria in its 2025 certification framework for healthcare institutions (3.4-05 and 3.4-06).

With regard to digital medical devices incorporating AI for professional use:

  • healthcare institutions are now subject to a range of obligations: mapping the use of such devices, establishing a structured process for their acquisition, ensuring human control of the results, organising training for healthcare professionals, etc; and
  • healthcare professionals have specific responsibilities: when using such devices, particularly for diagnostic or therapeutic purposes, they must ensure that the patient has been informed and is made aware of the interpretation resulting from their use.

In addition to data transfer issues (see 6.3 Data Sharing and Access), it is essential to ensure compliance for MDAI with other EU regulations, such as the AI Act and the MDR/IVDR.

To this end, the Artificial Intelligence Board and the MDCG published in June 2025 the first official guidance document clarifying how these regulations interact. Key areas addressed include data governance; transparency and human oversight; accuracy, robustness and cybersecurity; clinical and performance evaluation; technical documentation; and post-market monitoring.

Fréget Glaser et Associés

7 rue Royale
75008
Paris
France

+33 (0) 1 47 23 78 80

contact@freget-glaser.fr
www.freget-glaser.fr

Trends and Developments


Fréget Glaser et Associés has, since its inception in 2014, continuously grown to become a solidly established and well-reputed high-end practice in the field of competition and regulation, with extensive expertise in the pharmaceuticals/life sciences sectors. The firm’s primary differentiator with respect to other law firms, particularly those driven by their corporate departments, is its ability to propose complex litigation strategies with respect to all civil, commercial and administrative courts, as well as the regulatory authorities, rounding out clients’ overall business strategies. The keys to Fréget Glaser et Associés’ success lie not only in the cutting-edge legal expertise of its team members but also in its ability to combine legal and economic expertise with out-of-the-box thinking.

The deployment of artificial intelligence (AI) in healthcare is driving major transformation in France. As innovation accelerates and legal frameworks evolve at both national and European levels, stakeholders must navigate a fast-changing, complex landscape.

AI Deployment: A Strategic Challenge for France’s Healthcare Sector

AI is being integrated into the French healthcare system across a wide range of applications: medical imaging, diagnostic tools, remote patient monitoring, drug discovery, and optimisation of operational and administrative processes. These technologies are at various stages of maturity.

According to the AI Use Observatory of the French National Agency for the Performance of Health and Medico-Social Institutions (ANAP), around 50 AI-powered solutions are already being used in hospitals and medico-social organisations.

This dynamic was further reinforced in February 2025, when the Summit for Action on AI gathered researchers, healthcare professionals, and entrepreneurs at PariSanté Campus to discuss concrete AI-driven solutions in the healthcare sector. On this occasion, the Minister of Health presented a report entitled “The State of AI in healthcare in France”, reaffirming the government’s aim to promote a sovereign, competitive, and trustworthy approach to AI in healthcare.

Key priorities set out in the report include:

  • accelerating innovation through a sustainable economic model;
  • supporting healthcare professionals with training and upskilling;
  • building a regulatory framework and best practices that foster trust and safety; and
  • adapting health technology assessments to the specific characteristics of AI.

Strategic investments are also central to this national objective. France’s support of the development of innovations incorporating AI in healthcare forms part of the Digital Health Acceleration Strategy under the “France 2030” plan. The country has so far invested EUR500 million in digital health solutions, 50% of which is dedicated to projects involving AI.

Major collaborative initiatives – such as PariSanté Campus and the Health Data Hub – bring together healthcare institutions, researchers, and industry leaders. Regulatory bodies such as the French National Agency for the Safety of Medicines and Health Products (ANSM) and the French National Health Authority (HAS) play a crucial role in certifying and guiding AI implementation.

The HAS recently issued a general guide for selecting digital medical devices, including AI systems. A more specific economic evaluation guide is expected soon.

More broadly, AI is now increasingly recognised as a strategic lever for transforming the French healthcare system in response to structural challenges such as workforce shortages, ageing populations and rising costs.

Nonetheless, the implementation of AI in healthcare remains subject to considerable challenges. Key concerns include data quality, algorithmic bias and lack of transparency, as well as ethical risks such as the potential dehumanisation of care and unequal access to innovation. Legal and regulatory hurdles, including liability issues and the absence of dedicated reimbursement models, also hinder broader deployment. From an economic perspective, high development and deployment costs remain a major barrier.

Furthermore, user acceptance is far from assured. According to the 2024 barometer published by PulseLife and Interaction Healthcare, only 58.7% of healthcare professionals trust AI for diagnostic tasks. Among their concerns, algorithmic bias (59%), lack of transparency about data sources (50%), and potential degradation of the professional–patient relationship (49%) are most frequently cited. On the patient side, an OpinionWay survey for the Healthcare Data Institute shows that only 44% of patients believe doctors can safely use AI in medical care.

Lastly, the regulatory environment surrounding AI in healthcare is rapidly evolving. In addition to new EU regulations – some of which are already entering into force – numerous guidelines have been published or are forthcoming from key institutions such as HAS, the Digital Health Delegation, and the CNIL. Stakeholders must therefore remain attentive to legal developments at both national and European levels.

Key Precautions in Developing and Using Health AI in France

In light of the evolving regulatory environment, one of the primary challenges facing developers and users of AI systems in the healthcare sector is understanding how the new European instruments interact with one another and with existing national legal frameworks.

Ensuring compliance with growing EU and national regulations

From a legal standpoint, the deployment of AI in healthcare is now governed by a set of overlapping European and national regulations including:

  • the AI Act (Reg. 2024/1689);
  • the Medical Devices Regulation (MDR, Reg. 2017/745) and the In Vitro Diagnostic Medical Devices Regulation (IVDR, Reg. 2017/746);
  • the European Health Data Space (EHDS) Regulation;
  • the Product Liability Directive (PLD);
  • the General Data Protection Regulation (GDPR) and the French Data Protection Act; and
  • Article L. 4001-3 of the French Public Health Code.

Under the AI Act, AI-based medical devices requiring third-party conformity assessment under the MDR/IVDR are classified as high-risk systems, triggering dual conformity requirements under both the AI Act and the MDR/IVDR. Guidance issued in June 2025 by the Medical Device Coordination Group (MDCG) establishes coordination rules to avoid duplicative procedures.

While the Act’s requirements for high-risk AI systems will generally apply from August 2026 (with an extended transition to August 2027 for AI systems embedded in products regulated under the MDR/IVDR), the European Commission has not yet published its guidelines on the classification of high-risk AI systems and the related requirements and obligations. Two recent developments are worth mentioning in this respect.

  • In June 2025, the European Commission launched a public consultation on high-risk AI systems under the AI Act, seeking stakeholder input to refine definitions, classifications, and regulatory obligations. According to the European Commission, “this feedback will be taken into account in the upcoming Commission guidelines on classifying high-risk AI systems, and related requirements and obligations. It will also collect input on responsibilities along the AI value chain”.
  • Leading EU companies issued an open letter requesting a two-year delay of certain AI Act obligations (notably for general-purpose and high-risk AI), citing legal uncertainty and competitiveness concerns. The European Commission immediately responded that there would be no postponement and no pause in the roll-out of the AI Act.

Ethical considerations also play a growing role. Digital ethics in healthcare, both at French and European level, seeks to promote principles such as trust, transparency, and fairness in AI use. Transparency and explainability obligations are central to protecting patients’ rights and ensuring responsible use of AI in care.

Notably, the French Digital Health Delegation and the Digital Health Agency have contributed to this effort through the publication of ethical implementation guidelines:

  • the 2022 “Ethics by Design Guide” outlines five core principles (non-maleficence, justice, autonomy, transparency, sustainability) and applies them across five AI development stages; and
  • a new 2025 “Implementation Guide for Ethical AI Systems in Healthcare” has also been submitted for public consultation to provide more practical, phase-specific ethical benchmarks for developers and providers.

In parallel, a regulatory framework on interoperability and security applies to digital medical devices that process personal data. Since February 2023, such devices must comply with the “Interoperability and Security Framework for Digital Medical Devices”, with certification issued by the Digital Health Agency (ANS) as a condition for reimbursement by the French National Health Insurance.

Lastly, on the matter of liability, the EU has not yet adopted a dedicated regime for AI-related damages.

While a specific AI Liability Directive was initially envisaged, it was ultimately withdrawn from the Commission’s 2025 work programme. In its place, the revised Product Liability Directive (PLD), which Member States must transpose by December 2026, is expected to fill this gap. It extends no-fault liability to software, including AI systems, treating their developers and providers as manufacturers, and eases the burden of proof for claimants in compensation claims.

In France, liability currently remains governed by traditional frameworks: product liability (soon to be aligned with the new PLD) and medical liability. In this context, AI is regarded as a decision-support tool, and healthcare professionals retain full responsibility for clinical decisions, in accordance with Article L. 1142-1 of the French Public Health Code.

Ensuring proper use of health data

Another critical issue concerns proper access to and use of health data, particularly for training and testing AI-based medical devices.

Access to such data is highly regulated. Healthcare stakeholders may gain access to health data for research purposes – such as innovation activities or algorithm training – through the dedicated framework for the secondary use of health data, but only under specific conditions.

At EU level, the EHDS Regulation establishes a harmonised framework for the access, use, and exchange of electronic health data across Member States. Its goals are to empower individuals with access to and control over their personal health data (primary use), and to enable the secure and reliable reuse of health data for research, innovation, policymaking, and regulatory purposes (secondary use).

It introduces HealthData@EU, a central online platform that connects health datasets from across Europe.

Under this framework, stakeholders may apply to the relevant health data access body (not yet designated) for permission to use health data for secondary purposes (as health data applicants). At the same time, they may be required to share health data (as health data holders).

At national level, France has already established the Health Data Hub as a centralised platform that facilitates access to data from the National Health Data System (SNDS). This enables the secondary use of health data for purposes such as the development and testing of AI algorithms. However, access to this data for secondary use may require prior authorisation from the French Data Protection Authority (the CNIL).

Once access is granted, several other compliance considerations must be addressed.

Key compliance requirements include:

  • distinguishing between anonymised and pseudonymised data, since pseudonymised data remains personal data subject to the GDPR;
  • ensuring the quality and robustness of the data used to train and test algorithms; and
  • mitigating bias and ensuring equitable algorithm performance across patient groups (illustrated in the sketch below).
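By way of illustration only, the minimal Python sketch below shows one way a developer might compare a diagnostic model’s sensitivity across patient subgroups in order to flag potential performance disparities; the subgroup names, evaluation data and tolerance threshold are hypothetical and do not correspond to any regulatory benchmark.

```python
# Illustrative sketch only: checking whether a diagnostic model performs
# comparably across patient subgroups. All data, group names and the
# tolerance threshold below are hypothetical.
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

def subgroup_sensitivity(records):
    """Return the true-positive rate (sensitivity) for each subgroup."""
    detected = defaultdict(int)   # true positives correctly flagged by the model
    positives = defaultdict(int)  # all true positives per subgroup
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 1:
                detected[group] += 1
    return {g: detected[g] / positives[g] for g in positives if positives[g] > 0}

MAX_GAP = 0.10  # illustrative tolerance for the sensitivity gap between subgroups

rates = subgroup_sensitivity(records)
gap = max(rates.values()) - min(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: sensitivity = {rate:.2f}")
verdict = "review for potential bias" if gap > MAX_GAP else "within tolerance"
print(f"sensitivity gap = {gap:.2f} -> {verdict}")
```

In practice, checks of this kind would form part of the broader data governance and risk management documentation expected for high-risk AI systems, alongside the clinical and performance evaluation required under the MDR/IVDR.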

Ensuring transparency and human oversight of health AI

In line with European and national regulations, transparency and human oversight are also key requirements for the development and use of high-risk AI systems.

Providers must ensure that their systems are designed and documented in a way that is understandable to healthcare professionals and patients and must allow human intervention and control at all critical stages.

Healthcare institutions and professionals, for their part, must apply AI tools ethically and transparently, retaining professional judgment (ie, not relying solely on AI outputs) and informing patients when AI is involved in a preventive, diagnostic, or therapeutic act, in accordance with Article L. 4001-3 of the Public Health Code.

Ensuring risks are properly identified and addressed

Finally, risk management is an essential component of any compliance strategy. High-risk healthcare AI systems must be subject to a comprehensive risk management process throughout their lifecycle. Developers must identify and mitigate foreseeable risks to health, safety, or fundamental rights, implement appropriate safeguards, and report serious incidents. Healthcare institutions must also carry out clinical risk assessments prior to deployment, adopt technical and organisational safeguards, and provide staff with adequate training to ensure responsible interpretation of AI outputs.

A robust compliance strategy must therefore integrate multiple layers, including transparency, regular internal audits, post-market monitoring, and adherence to data protection standards. On the user side, healthcare institutions and professionals are expected to adopt responsible practices aligned with legal and ethical standards, thereby ensuring the safe, effective, and trustworthy use of AI in medical care.

Fréget Glaser et Associés

7 rue Royale
75008
Paris
France

+33 (0) 1 47 23 78 80

contact@freget-glaser.fr www.freget-glaser.fr

