Healthcare AI 2025 Comparisons

Last Updated August 06, 2025

Contributed By Fangda Partners

Law and Practice

Fangda Partners was founded in 1993 and is a leading full-service law firm with approximately 800 lawyers across offices in Beijing, Guangzhou, Hong Kong, Nanjing, Shanghai, Shenzhen and Singapore. The firm adopts a one-firm approach, providing integrated legal services across all practice areas and locations. Recognised as the firm of choice for complex and high-stakes legal matters, Fangda advises major domestic and international companies on both transactions and disputes. Fangda’s team includes lawyers qualified in the PRC, Hong Kong, the United States, the United Kingdom, Australia and Singapore, offering strong cross-border capabilities with a distinct China focus. Fangda has extensive experience in AI and life sciences sectors, such as personalised medicine, digital health and biotechnology, including genomics and cancer diagnostics. The firm has assisted several leading pharmaceutical companies in deploying AI tools to support offline marketing and streamline business operations.

In China, AI is widely used across the healthcare and life science sectors, primarily in relation to medical institutions, pharmaceutical R&D, sales and marketing, medical devices, diagnostics, treatment assistance and consumer health devices. In clinical settings, AI powers diagnostic imaging, virtual consultations and surgery planning tools, and provides clinical decision support for general practitioners. These technologies improve diagnostic accuracy and efficiency, particularly in underserved areas. In the pharmaceutical sector, AI is primarily used for drug discovery, including target identification, virtual screening and clinical trial design, as well as to educate patients and doctors on medical products. Meanwhile, AI-enabled wearable devices support remote patient monitoring and chronic disease management.

In recent years, Chinese authorities have taken active steps to guide and accelerate the development of healthcare AI. In November 2024, the Reference Guide for AI Application Scenarios in the Healthcare Industry (the “AI Application Scenarios Reference Guide”) identified 84 use cases across four categories: medical service management, public health and primary care, health industry innovation (such as robotics and drug development), and medical education and research. This guide provides a national framework to integrate AI into health services more systematically. Further, in early 2025, the Notice on Carrying out the 2025 Innovation Task of AI-based Medical Devices (the “2025 Innovation Task Notice”) was issued, promoting breakthroughs in intelligent diagnostic and therapeutic tools. In March 2025, the Opinions on Comprehensively Deepening the Reform of Drug and Medical Device Regulation to Promote the High-Quality Development of the Pharmaceutical Industry strengthened the regulation of next-generation medical technologies, including AI and medical robotics, by improving national technical standards and regulatory systems.

Healthcare AI adoption in China is driven by both systemic needs and innovation opportunities. On the one hand, medical institutions face growing pressure from limited medical resources, uneven care quality between urban and rural areas in China, and a rising demand for faster and more accurate diagnoses. On the other hand, pharmaceutical companies are under pressure to accelerate drug discovery, reduce R&D costs and improve trial design. These challenges have created strong incentives across the healthcare system to adopt AI solutions that can improve outcomes, enhance efficiency and support clinical decision-making.

AI technologies bring several significant benefits to healthcare delivery.

  • Enhanced diagnostic accuracy and efficiency, especially in radiology and pathology: AI tools used in image interpretation – such as for lung nodule, breast cancer and stroke detection – can match or exceed human performance in screening tasks, leading to earlier and more reliable disease identification.
  • Improved access to care: AI supports primary care healthcare professionals (HCPs) in under-resourced settings, with intelligent triage, symptom checkers and risk prediction tools helping to address the shortage of specialists and improve healthcare accessibility in rural and underserved areas.
  • More personalised and precise treatment decisions: by integrating genomics, pathology and patient-specific health data, AI systems can recommend tailored therapies, particularly in oncology, where AI supports the design of targeted or immune-based treatment strategies.
  • Stronger public health capabilities: AI strengthens early warning and monitoring systems; during the COVID-19 pandemic, AI tools were used for outbreak modelling, contact tracing and real-time policy support, highlighting their value in health emergency management.
  • Greater clinical consistency and equity: by reducing reliance on HCP intuition and minimising human error or bias, AI helps standardise care pathways across medical institutions, promoting more equitable treatment regardless of geography.
  • Accelerated pharmaceutical innovation: AI optimises target identification, virtual compound screening and trial design, and these tools are increasingly being integrated into R&D workflows, shortening timelines and boosting productivity.

Despite these advantages, AI also brings challenges around safety, liability, regulatory oversight and integration into professional workflows.

China’s healthcare AI market is rapidly evolving under strong policy support and industrial demand. The AI Application Scenarios Reference Guide and the 2025 Innovation Task Notice emphasise scenario-based adoption and regulatory readiness. Local governments have issued tailored plans to accelerate integration, such as the Shanghai Work Program for the Development of Artificial Intelligence in Medicine (2025–27) and the Suzhou Accelerates the Promotion of “Artificial Intelligence + Healthcare” Construction and Application Development Implementation Plan.

Pharmaceutical companies and technology developers are the primary drivers of innovation, and big technology firms and AI startups play a key role as enablers in these ecosystems.

Notable collaborations reflect how tech and medical institutions are jointly reshaping healthcare delivery through AI. Zhongshan Hospital partnered with Huawei and iFlytek to build a multi-modal smart hospital integrating AI imaging, voice assistants and clinical decision tools. Jiangsu Provincial People’s Hospital worked with Tencent to launch an AI imaging and virtual doctor system for patient engagement and diagnostic support. The Second Affiliated Hospital of Zhejiang University teamed up with Alibaba Health to deploy AI across triage, follow-up and patient management, while China-Japan Friendship Hospital collaborated with Baidu to develop a multi-modal foundation model for interdisciplinary clinical decision-making.

Current law and regulations lack a statutory definition for “healthcare AI systems”. However, the AI Application Scenarios Reference Guide provides concrete definitions for specific AI healthcare application scenarios. AI-based medical software packages meeting the statutory definition of medical devices are regulated as medical devices. The AI Application Scenarios Reference Guide categorises healthcare AI applications into four domains with specific use cases:

  • AI + Medical Service Management – including medical services, pharmaceutical services, health insurance services, traditional Chinese medicine administration and hospital management;
  • AI + Primary Public Health Services – including health management services, public health services and elderly and childcare services;
  • AI + Health Industry Development – including medical robotics, drug research and development and traditional Chinese medicine industry; and
  • AI + Medical Education & Research – including medical education and medical research.

AI medical software packages qualifying as medical devices are categorised under the Guiding Principles for the Classification and Definition of AI-based Medical Software Products as follows.

  • Low-maturity algorithms – any AI medical software performing assisted decision-making functions is classified as a Class III medical device, and any such software without assisted decision-making capabilities is classified as a Class II medical device.
  • High-maturity algorithms – these are regulated according to the Classified Catalogue of Medical Devices, which also determines the management category (Class II or Class III) and covers the following product types:
    1. treatment planning software;
    2. medical image analysis software;
    3. clinical data mining software;
    4. diagnostic decision support systems;
    5. in vitro diagnostic (IVD) algorithms; and
    6. rehabilitation progress tracking tools.

China lacks unified healthcare AI legislation.

Alongside existing medical device regulations, such as the Regulation on the Supervision and Administration of Medical Devices (revised in 2024; the “Medical Devices Supervision Regulation”) and the Administrative Measures on the Registration and Record-filing of Medical Devices (“Registration and Filing of Medical Devices Measures”), sector-specific rules target technologies like generative AI (GenAI) and deep synthesis algorithms – such as the Interim Measures for the Administration of Generative Artificial Intelligence Services (the “Gen AI Measures”), the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services (the “Deep Synthesis Rules”), the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services (the “Recommendation Rules”), the Cybersecurity Law (CSL), the Data Security Law (DSL) and the Personal Information Protection Law (PIPL). Specialised technical guidelines also form part of the regulatory framework, such as the Guiding Principles for Registration Review of AI-based Medical Devices (the “Guiding Principles for AMD Registration Review”). Moreover, investments in the healthcare AI sector remain subject to general foreign direct investment restrictions.

The regulatory framework comprises three distinct categories: AI medical devices, algorithm-based products, and medical service and medical technology.

Regarding AI medical devices, pursuant to the Guiding Principles for AMD Registration Review, the regulatory process covers several stages:

  • design and development, where each algorithm requires independent life cycle validation and 18 technical aspects, including cybersecurity, must be documented;
  • pre-submission, involving activities to determine classification, conduct testing and compile clinical evaluation data;
  • submission and review, requiring entities to file an application with the National Medical Products Administration (NMPA) for technical/good manufacturing practice (GMP) review; and
  • certification, where the goal is to obtain a medical device registration certificate.

After determining the registration category, AI-based medical software is registered as standalone software. In special circumstances, streamlined pathways may apply, specifically covering two scenarios:

  • combined registration, which is applicable when software functionally depends on other medical software to operate, allowing it to be registered as an integrated component of that host software; and
  • priority review, for products meeting the criteria under the Registration and Filing of Medical Devices Measures (eg, innovative device, priority or emergency registration procedures) that may access the accelerated review process.

The foundational regulatory obligations include three main aspects:

  • general medical device requirements, covering registration/filing and post-market obligations (quality management, adverse event reporting, recalls);
  • AI-specific registration, encompassing the registration requirements of the Guiding Principles for AMD Registration Review and the cybersecurity requirements of the Guiding Principles for Cybersecurity Registration Review of Medical Devices (the “Guiding Principles for Cybersecurity Registration”); and
  • production standards, where for standalone software, the Appendix to Guidelines for Quality Management of Medical Device Production - SaMD (“SaMD Quality Management”) imposes specific controls (eg, separation of development/testing roles, record requirements, quality control).

Regarding AI algorithm requirements, according to the Deep Learning-Assisted Decision-Making SaMD Review, algorithm design shall consider the quality control requirements for the following activities: algorithm selection, algorithm training, cybersecurity protections and algorithm performance evaluation.

As for continuous learning, according to the Guiding Principles for AMD Registration Review, the registration applicant shall verify and validate the safety and effectiveness of self-learning updates under its quality management system, apply for change registration where required and deploy such updates only upon obtaining NMPA approval.

China has not issued specific privacy and data protection rules on the development and deployment of healthcare AI. Healthcare AI is still subject to general data protection legal requirements, including the PIPL, CSL, DSL and Network Data Security Management Regulations, etc.

Where patient data or wearable device users’ personal data is to be used for AI model training and operation, the following data processing activities should be carefully considered.

  • Collection and use: Developers and operators of AI-based medical devices must strictly adhere to the principles of lawfulness, legitimacy, necessity and data minimisation. Medical institutions should inform patients of the nature, function, and potential risks of AI assistance with a clear explanation of the intended purpose and obtain necessary consent prior to treatment.
  • Storage: Patients’ medical record data constitutes sensitive personal information and must therefore be securely stored using encryption, access controls, audit logs and other technical and managerial measures to ensure data security (see the illustrative sketch following this list).
  • Sharing and cross-border data transfer (CBDT): Please see 6.3 Data Sharing and Access for details.
  • Anonymisation: Please see 6.4 De-Identification and Anonymisation for details.
  • Cybersecurity multi-level protection scheme (MLPS): Enterprises or medical institutions deploying and operating on-premises AI diagnostic systems must conduct pre-deployment security risk assessments and meet relevant MLPS obligations. Under China’s updated MLPS 3.0 (2025), healthcare system operators are required to re-assess the grading of their systems based on the new grading standards and fill in the data inventory for systems above level 2.
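
For illustration only, the storage controls described above (encryption at rest, role-based access and audit logging) might be combined roughly as in the following Python sketch. The record fields, role names and key handling are hypothetical simplifications rather than requirements drawn from any specific standard; a production system would rely on managed key storage and a full identity and access management stack.

    # Illustrative sketch only: encrypting a patient record at rest,
    # gating access by role and writing an audit-log entry.
    # Requires the third-party "cryptography" package.
    import json
    import logging
    from datetime import datetime, timezone
    from cryptography.fernet import Fernet

    logging.basicConfig(filename="access_audit.log", level=logging.INFO)

    key = Fernet.generate_key()        # in practice, held in a key management system
    cipher = Fernet(key)

    record = {"patient_id": "P001", "diagnosis": "hypertension"}     # hypothetical fields
    ciphertext = cipher.encrypt(json.dumps(record).encode("utf-8"))  # encryption at rest

    AUTHORISED_ROLES = {"attending_physician", "records_officer"}    # hypothetical roles

    def read_record(user: str, role: str) -> dict:
        """Decrypt the stored record only for authorised roles and log every attempt."""
        allowed = role in AUTHORISED_ROLES
        logging.info("%s user=%s role=%s allowed=%s",
                     datetime.now(timezone.utc).isoformat(), user, role, allowed)
        if not allowed:
            raise PermissionError("role not authorised to access medical records")
        return json.loads(cipher.decrypt(ciphertext))

    print(read_record("dr_li", "attending_physician"))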

As the development of AI in the healthcare industry relies heavily on a large volume of sensitive patient data for training purposes, datasets composed of massive patient data or inferences drawn from comprehensive analysis based on such data may potentially be recognised as “important data” when a catalogue of important data in the healthcare sector is released. Accordingly, this could in turn trigger obligations such as data processing agreement drafting, risk assessments and annual reporting of important data processing activities, and graded classification and protection of important data.

There is currently no technical standard specific to healthcare AI systems. Instead, applicable standards are scattered across national, industry, and group standards. While national standards mainly cover general cybersecurity and data protection, technical guidance pertinent to healthcare AI is largely found in industry and group-level standards.

For GenAI systems that interact directly with patients – such as intelligent triage, virtual consultations or pre-diagnosis assistants – specific standards apply, including the following.

  • Basic security requirements: If healthcare AI systems directly interact with patients, the national standard GB/T 45654-2025 Basic Security Requirements for Generative AI Services requires AI developers to pay close attention to user notification, content moderation and safety requirements for both large language model (LLM) training and outputs, as well as overarching primary security measures.
  • Pre-training and training data: The national standard GB/T 45652-2025 Security Specification for Generative Artificial Intelligence Pre-Training and Fine-Tuning Data sets out requirements for the pre-training and optimisation of training data used in GenAI, as well as the processing activities involved.
  • AI-generated content (AIGC) labelling: Labelling includes both explicit and implicit labelling. Explicit labelling refers to adding visible cues to AIGC to clearly alert the public and prevent confusion or misidentification, while implicit labelling involves metadata-based tagging (see the illustrative sketch below). Specific requirements are further elaborated in the Measures for Labeling AI-Generated or Composed Content and the national standard GB 45438-2025 Labeling Method for Content Generated by Artificial Intelligence.
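
As an illustration of the implicit (metadata-based) labelling referred to above, the following Python sketch attaches machine-readable provenance metadata to a generated reply. The field names are hypothetical placeholders; the actual metadata elements, format and placement are those prescribed by the labelling measures and GB 45438-2025.

    # Illustrative sketch of implicit (metadata-based) AIGC labelling.
    # The metadata fields below are hypothetical placeholders only.
    import json

    def attach_implicit_label(content: str, provider: str, content_id: str) -> dict:
        """Wrap generated output with machine-readable metadata marking it as AI-generated."""
        return {
            "content": content,
            "metadata": {                       # hypothetical placeholder fields
                "ai_generated": True,           # implicit label: machine-readable flag
                "service_provider": provider,   # which service produced the content
                "content_id": content_id,       # identifier supporting traceability
            },
        }

    labelled = attach_implicit_label(
        "Possible causes of your symptoms include ...",
        provider="ExampleHealthGPT",            # hypothetical service name
        content_id="c-20250806-001",
    )
    print(json.dumps(labelled, ensure_ascii=False, indent=2))

    # An explicit label, by contrast, would be a visible notice displayed with the
    # content, eg "This reply was generated by AI and is not medical advice."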

In addition to GenAI services, healthcare AI systems are also subject to the following technical requirements and standards.

  • Full-cycle personal health data processing: Various standards (including but not limited to GB/T 35273-2020 Personal Information Security Specification and GB/T 39725-2020 Health and Medical Data Security Guidelines) specify principles and security requirements for personal information-related activities such as collection, storage, use, sharing, transfer, public disclosure and deletion.
  • Telemedicine platforms, information access and data exchange: GB/T 44792-2024 Information Access and Data Exchange of Telemedicine Platform and certain other standards focus on the architecture of telemedicine platform data access and exchange, including technical requirements for front-end gateway data exchange, personal health device connectivity and audio/video integration.
  • Specific scenario-based applications: Industry and group standards have already been developed for various AI applications in different medical treatment scenarios, such as lung image analysis tools, coronary computed tomography (CT) imaging software, surgical assistance devices and systems using robotic technologies, multicentre medical data collaborative analysis platforms and glaucoma screening systems.

National standards are normally developed by standardisation institutions and sectoral administrations, such as the National Information Security Standardization Technical Committee (TC260) and the NMPA, while group standards are often led by the China Communications Standards Association, with supervision and direction by regulatory agencies like the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT) and the NMPA.

Regulators can be categorised by sectoral administration and supervision mandates as:

  • medical sector regulators, where the NMPA governs medical device registration, technical reviews and post-market surveillance for healthcare AI products, and the National Health Commission supervises medical institutions and their usage of AI products and/or services;
  • technology regulators, where CAC and MIIT govern cybersecurity, AI-related matters and data compliance; and
  • ancillary regulators, where the State Administration for Market Regulation monitors advertising compliance, while the National Development and Reform Commission and Ministry of Commerce supervise foreign investment.

Inter-agency co-ordination occurs through specialised law enforcement campaigns. CAC and MIIT, as technology regulators, lead these efforts, but in practice will defer to sector-specific authorities such as the NMPA for healthcare oversight.

Pre-market requirements for healthcare AI developers in China mainly apply to AI-based medical devices, which are regulated under the Medical Devices Supervision Regulation, the Registration and Filing of Medical Devices Measures and relevant technical guidelines.

Pursuant to the Guiding Principles for AMD Registration Review and the Guiding Principles for SaMD Registration Review, developers must:

  • conduct clinical evaluations – unless explicitly exempt – through clinical trials or literature-based analysis;
  • prepare technical documentation, including algorithm descriptions, software life cycle records, data governance protocols, and quality control for training and testing datasets; and
  • perform risk assessments that demonstrate traceability, reliability and safety throughout the product life cycle, along with defined usage limitations.

Regulators also require disclosure of algorithm structure, training data and performance metrics. To enhance transparency and interpretability – especially for deep learning models – visual tools such as heatmaps are often encouraged. Furthermore, developers must mitigate bias through representative data collection and fairness assessments. For transparency, explainability and bias mitigation, please see 5.2 Transparency and Explainability and 5.3 Bias and Fairness.
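
By way of illustration, one common way to produce such heatmaps is occlusion-based saliency: parts of the input image are masked and the resulting drop in the model’s output score is recorded as the “heat” for that region. The sketch below uses a hypothetical model_score function standing in for a trained classifier; it illustrates the general technique only and is not a method prescribed by the guiding principles.

    # Illustrative occlusion-based saliency heatmap for an image classifier.
    # "model_score" is a hypothetical stand-in for a trained model's probability
    # for the finding of interest (eg, a lung nodule).
    import numpy as np

    def model_score(image: np.ndarray) -> float:
        # Placeholder: a real system would return the trained model's output here.
        return float(image.mean())

    def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
        """Score drop when a patch is blanked out; a larger drop means higher 'heat'."""
        baseline = model_score(image)
        heat = np.zeros_like(image, dtype=float)
        for y in range(0, image.shape[0], patch):
            for x in range(0, image.shape[1], patch):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = 0.0      # blank this region
                heat[y:y + patch, x:x + patch] = baseline - model_score(occluded)
        return heat

    image = np.random.rand(64, 64)          # stand-in for a greyscale medical image
    heatmap = occlusion_heatmap(image)
    print(heatmap.shape, round(float(heatmap.max()), 4))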

Post-market surveillance requirements differ based on application type. For hospital-deployed medical AI, there is currently no overarching legal framework specifically addressing AI-related risks; institutions remain subject to general reporting obligations, such as those under the Administrative Measures for Adverse Drug Reaction Reporting and Monitoring, which require institutions to report adverse drug reactions. AI-based medical devices are subject to the Medical Devices Supervision Regulation, which mandates that registrants and filing holders conduct adverse event monitoring, re-evaluate marketed devices and implement recall mechanisms where necessary.

Concerning algorithm updates, as outlined in 2.4 Software as a Medical Device (SaMD), developers must:

  • verify and validate the safety and effectiveness of any self-learning or updated models; and
  • apply for change registration when such updates materially affect the product’s intended use or safety profile.

For adaptive or continuous learning algorithms, the Guiding Principles for AMD Registration Review require that such features remain disabled or used solely for research purposes unless separately approved. These models, which update based on real-world data, introduce uncertainty in safety and effectiveness. Developers must validate any changes resulting from self-learning and apply for registration modification before such updates can be deployed in clinical settings.

A centralised adverse event reporting system exists for monitoring and reporting adverse events involving medical devices; however, no dedicated monitoring mechanism is in place for AI applications falling outside the scope of medical device regulation.

Regarding enforcement, administrative penalties have been imposed on the use of unregistered AI-based medical software and on health data breaches. However, no publicly reported cases of regulatory intervention, warnings or product recalls specific to healthcare AI have been identified.

Penalties vary by violation type. Under the Medical Devices Supervision Regulation, use of unregistered Class II/III AI medical devices may trigger confiscation, fines or business suspension. Under the DSL, data protection failures may lead to fines of up to CNY2 million, suspension of operations or licence revocation. Although no significant or systematic enforcement against healthcare AI has been seen, in June 2023, a Beijing software company developing human gene exome data analysis systems was fined for failing to implement sufficient data security measures, resulting in 19.1 GB of genetic data being exposed to the risk of leakage.

China has not yet established a dedicated legal framework specifically addressing liability allocation between AI-related stakeholders. Liabilities are allocated under traditional tort law, contract law and administrative regulations governing generic products, medical devices, patients and healthcare providers. The applicable legal framework governing the liability of healthcare AI systems is as follows.

The Civil Code – Generic or Special Product Liability Provisions

If a healthcare AI system causes personal injury or property damage due to defects, the liability depends on the system type.

  • If the healthcare AI system qualifies as a “generic product” (which is highly likely as PRC law defines “products” mainly by their sales purpose, without requiring physical/tangible form), the patients may claim product liability against the system’s producer/manufacturer (developer) and seller. A seller who compensates patients has the right to seek recourse from producers/manufacturers (developers). A medical institution, as the direct user of the AI system, is generally not liable under product liability unless it caused the defect.
  • If the healthcare AI system is further classified as a “medical device”, then the patient can claim medical damage liability not only against the producer/manufacturer (developer) and the seller, but also directly against the medical institution in the first instance. The institution can then seek recourse from the producer/manufacturer (developer).

Steps to determine whether the AI system is defective in judicial practice typically include the following.

  • Verifying that the AI system’s design complies with mandatory or recommended standards; violation of standards indicates defects.
  • If no standards are violated, examining the algorithmic logic to see if obvious improvements could have prevented the harm. If so, a defect might be recognised (currently, AI diagnosis is not yet a “black box”, meaning that the underlying algorithmic logic can always be examined). Additionally, the producer (developer) failing to provide necessary warnings to the user or patient may constitute a warning defect.

Product Quality Law and Consumer Rights Protection Law

If healthcare AI systems are defined as “products”, their safety, suitability and instructions must comply with relevant standards. Developers and sellers bear civil and administrative liability for non-compliance.

Regulations on the Supervision and Administration of Medical Devices (2024 Revision)

If healthcare AI systems qualify as medical devices, manufacturers are responsible for their quality, safety and effectiveness. Regulatory authorities may order recalls or impose penalties for design defects or software update failures.

Medical institutions and HCPs remain subject to traditional medical malpractice standards. Given that most AI systems in clinical practice function as decision-support tools rather than fully autonomous systems, ultimate responsibility typically rests with the human user. Improper reliance on AI-generated recommendations or inadequate supervision of the system’s application can expose medical treatment providers to legal claims.

In such cases, traditional rules on patient harm and malpractice apply, so medical institutions will be held liable only if their personnel are proven to be at fault and to have caused the patient harm during diagnosis or treatment. Specifically, the patient must prove four elements – wrongful act, damage, causation and fault – of which proving the fault of medical personnel is the most challenging.

China has not yet prescribed a unified risk management framework specific to healthcare AI, but medical institutions and developers are subject to some fragmented regulatory and technical requirements.

Medical Institutions

Medical institutions deploying AI systems are generally expected to establish internal oversight mechanisms, including risk identification, adverse event tracking and algorithm performance monitoring. AI is typically treated as an assistive tool, and liability remains with licensed HCPs, reinforcing the need for robust human-in-the-loop safeguards.

Developers

AI-based medical devices shall comply with existing medical device regulations. Local guidance, such as that from the Beijing Medical Products Administration, requires risk documentation covering the full life cycle – risk identification, control measures, residual risk evaluation and traceability – particularly for AI-specific risks like false negatives or model drift.

Risk Assessment and Insurance

There is no mandatory AI-specific insurance, but some policy proposals encourage tailored coverage. In practice, a few insurers and medical institutions have piloted AI-related liability coverage or internal reserve mechanisms to manage emerging risks.

In generic/medical device product liability cases, under general tort law and product liability law the burden of proof lies mainly with the patient rather than the producer/manufacturer (developer), seller or medical institution. There is no reversal of the burden of proof. Consequently, it remains relatively difficult for the patient to hold AI developers or users liable.

In current judicial practice, courts will rely on experts to review AI system algorithms and determine whether there is obvious room for improvement that could have prevented the harm (ie, design defects). To date, no cases have involved “black-box” AI systems that are completely opaque and cannot be reviewed, and no relevant precedents exist.

In medical institution malpractice cases, there are also no specific liability limitations or “safe harbour” provisions available to healthcare users who use AI tools in treatment or diagnosis. The medical institution still needs to independently review and verify the AI system’s conclusions according to the medical standards prevailing at the time.

The ethical framework for healthcare AI consists of various mandatory requirements, recommended guidelines and industry standards. This framework emphasises the ethical review process, ensuring compliance and upholding AI’s human-centred nature.

One of the key milestones is the promulgation of the Measures for Scientific and Technological Ethics Review (Trial) in 2023. Companies engaging in life science, healthcare and AI research that involves sensitive fields of sci-tech ethics must establish an internal ethics review committee to assess compliance with applicable laws, ethical codes and sci-tech ethical principles, such as promoting human well-being, respecting the right to life, adhering to fairness and impartiality, reasonably controlling risks and maintaining openness and transparency. For sci-tech activities that may pose a greater possibility of ethical risks, such as the R&D of automated decision-making systems with a high degree of autonomy (AI models) for scenarios with safety or personal health risks, an additional expert ethical review is required.

Various recommended guidelines have also been developed as sectoral best practice for developers and health institutions. For example, the Code of Ethics for the New Generation Artificial Intelligence, issued in 2021, which emphasises privacy and data security and echoes the ethical principles listed in the foregoing, provides ethical codes from R&D, supply, use and management perspectives. Similar ethical principles are also seen in the Industrial Expert Consensus of Deployment of DeepSeek by Medical Institutions.

In practice, ethical considerations such as human welfare, privacy and data security, and accountability are integrated into regulatory processes through product registration, clinical trials and post-market monitoring of adverse incidents. Ethics committee approval, confirming that the trial has sufficiently considered ethical principles, is mandatory for clinical trials of AI medical devices. In particular, ethics committee approval for an AI medical device to collect data is also required as part of the algorithm research report submitted for medical device registration.

For healthcare AI systems that qualify as medical devices, the product instructions shall comply with transparency and explainability requirements and shall include basic algorithm information. Where an AI system’s software safety classification is severe (such as where a black-box algorithm is used or the system performs auxiliary decision-making), additional algorithm research summaries, use restrictions and necessary precaution information shall also be provided, as required in the Guiding Principles for AMD Registration Review.

HCPs are only explicitly required to disclose to patients when healthcare AI is being used in their care in limited situations. For example, before using AI-assisted diagnostic technology for invasive examinations or performing surgery assisted by an AI surgical system, the purpose of the examination/surgery, risks, precautions, potential complications and preventive measures should be communicated by the HCP to the patient and their family members in advance. An informed consent form should also be signed.

For other healthcare AI systems that may process HCPs’ and patients’ personal information, general transparency requirements under the PIPL will apply, and the purpose and means of data processing shall also be made available to the HCPs and patients concerned.

Maintaining fairness and preventing algorithm bias is one of the key principles in healthcare AI-related regulations and guidelines. For example, the Gen AI Measures (where applicable) require GenAI service providers to take effective measures to prevent bias during algorithm design, the selection of training data, model generation and optimisation, service provision, etc.

For AI medical devices, the Guiding Principles for AMD Registration Review issued by the NMPA provide that, to ensure data quality and control data bias during the training of AI systems, the collection of sample data must consider the compliance, sufficiency and diversity of data sources (such as disease composition, population distribution, the scientific and rational distribution of data, and the sufficiency, effectiveness and accuracy of data quality control). In the registration materials for AI medical devices, it is also required that the NMPA be provided with algorithm risk management information, specifying control measures for risks such as overfitting and underfitting, false negatives and false positives, and data contamination and bias.

For healthcare-related GenAI services (such as GenAI tools for diagnostics and treatment planning or patient consultation), the Gen AI Measures require service providers to carry out a security assessment, under which training data and outputs containing discriminatory, unreliable or imprecise content that does not meet the security requirements for healthcare information services must be strictly managed and controlled during sampling tests. Content monitoring and a user complaint mechanism shall also be adopted during service provision.

Healthcare AI adheres to a human-centred principle, and automatically generating prescriptions, falsely using an HCP’s name or replacing an HCP in providing diagnosis and treatment services is explicitly prohibited. The final diagnosis and treatment must be determined by a qualified HCP.

Healthcare AI systems can only serve as a tool for users (HCPs or patients) to collect medical referential information, or to assist users (HCPs or patients) with auxiliary decision-making. In addition, highly autonomous AI systems that involve safety or health risks are subject to ethical review and expert re-examination.

For healthcare AI systems that qualify as medical devices, the Guiding Principles for AMD Registration Review provide key compliance requirements for the training data.

  • Data quality: It is mandated that the data training process consider compliance and quality control requirements during the key processes of data collection, collation and annotation, as well as the construction of a dataset, particularly in relation to (i) data collection devices and personnel management; (ii) data desensitisation; and (iii) process management, including the establishment of data collection-, cleansing- and annotation-related operational standards, quality assessment processes, etc.
  • Fair representation: The training dataset should, in principle, ensure that the sample distribution is balanced, scientific and rational (taking into account the epidemiological characteristics of the target disease). Data should be collected as extensively as possible based on the intended use and application scenarios of the product. This includes data from representative clinical institutions across multiple hospitals, regions and levels, as well as data from representative collection devices from multiple manufacturers, of multiple types and with multiple parameters.
  • Documentation: The source of training data and quality control processes (including data collection quality assessment results, annotation quality assessment results, etc) shall be traceable, well-documented and structurally managed.

For other generic healthcare-related GenAI services, the Gen AI Measures regulate the data training process, primarily requiring service providers to use training data and models from lawful sources, ensure there is no infringement, take measures to improve data quality and prevent bias, as set out below.

  • Data quality: establish data screening and data quality assessment mechanisms, including keeping illegal and harmful content below 5% of a data source, abandoning data sources with third-party infringement risks, identifying and removing misleading, fake or false content in vertical fields such as healthcare, and assessing data quality through annotation (a simple screening sketch follows this list).
  • Data representation: ensure diverse sources of data of the same format (eg, code, images, audio, video, text in different languages, etc), use both overseas and local training data, etc.
  • Data documentation: sources of training data shall be traceable and documented, including through the provision of relevant authorisation documents (eg, open-source licence agreements, commercial contracts, authorisation records of users, etc) and data collection (from the Internet) records that comply with the limitations of robot protocols and technical restrictions.
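
A minimal sketch of the source-level screening described in the first bullet above is set out below. The 5% ceiling mirrors the threshold referred to in that bullet, while the keyword-based flagging and the FLAGGED_TERMS list are deliberately simplistic, hypothetical stand-ins for real content moderation tooling.

    # Illustrative sketch of source-level training data screening.
    # The keyword check is a stand-in for proper content-moderation tooling.
    FLAGGED_TERMS = {"guaranteed cure", "miracle drug"}     # hypothetical examples only

    def flagged(sample: str) -> bool:
        text = sample.lower()
        return any(term in text for term in FLAGGED_TERMS)

    def screen_source(samples: list[str], max_ratio: float = 0.05) -> bool:
        """Accept a data source only if flagged samples stay within the allowed ratio."""
        if not samples:
            return False
        ratio = sum(flagged(s) for s in samples) / len(samples)
        return ratio <= max_ratio

    corpus = [
        "Patient education text on managing hypertension.",
        "This miracle drug offers a guaranteed cure.",
    ]
    print("source accepted:", screen_source(corpus))        # 1 of 2 flagged, so rejected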

For bias-mitigation measures, please refer to 5.3 Bias and Fairness.

If healthcare data used for training contains personal information, the PIPL and the Measures for the Ethical Review of Life Science and Medical Research Involving Humans require – as a general principle – that data processing activities, including the secondary use of healthcare data for AI training and development, be disclosed to patients in the privacy policy/informed consent form and that consent be obtained.

Although it might not be feasible to obtain consent for secondary use, current laws do not provide consent exemptions for the secondary use of healthcare data, nor do they specifically provide that secondary use is a compatible use. That said, the recommended national standard GB/T 39725-2020 Health and Medical Data Security Guidelines provides a mechanism for requesting the secondary use of healthcare data from medical institutions, albeit limited to non-identifiable data and non-profit purposes.

Current legislation governing data sharing and access remains centred around:

  • the PIPL, if personal information is involved in the training; and
  • the laws governing medical institutions’ responsibility for managing medical records, such as the Regulations for Medical Institutions on Medical Records Management, and the requirement for institutional approval for external data sharing under the Administrative Measures for the Cybersecurity of Medical and Healthcare Institutions. No dedicated healthcare AI legislation has been enacted in this regard.

As required by the PIPL, medical institutions collaborating with enterprises on healthcare AI development must strictly comply with the notification and separate consent requirements before sharing patients’ data. A data sharing agreement must be established to define the scope, purpose and means of data sharing and responsibilities.

Cross-border transfer of personal information and important data for healthcare AI development shall also comply with the CBDT mechanisms required by CAC, such as security assessment and standard contractual clause (SCC) filing. If training data involves human genetic resources, the Regulations on the Administration of Human Genetic Resources also require that mandatory filing and data backup be completed before such data can be lawfully transferred outside of China.

The de-identification and anonymisation of health data are primarily governed by the PIPL – which has established clear definitions for personal information de-identification and anonymisation – and the recommended national standard GB/T 37964-2019 Guidelines for De-identifying Personal Information, which provides detailed guidance on de-identification methods such as aggregation, encryption, suppression, pseudonymisation, generalisation and randomisation.

As raw health and medical data still constitute personal information, many AI system developers are considering the feasibility of de-identifying and anonymising such data for training purposes, to be exempted from the compliance requirements under the PIPL. While there are no legal standards for health and medical data anonymisation as of yet, AI system developers are adopting multiple de-identification measures, aiming to minimise the risk of re-identification to an acceptable level.
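
For illustration, the following Python sketch applies three of the de-identification methods listed in GB/T 37964-2019 – suppression, pseudonymisation and generalisation – to a hypothetical patient record. The field names and salt handling are assumptions for the example only, and output of this kind is de-identified rather than necessarily anonymised within the meaning of the PIPL.

    # Illustrative sketch of de-identification measures named in GB/T 37964-2019:
    # suppression (drop direct identifiers), pseudonymisation (salted hash) and
    # generalisation (coarsen quasi-identifiers). Field names are hypothetical.
    import hashlib

    SALT = b"rotate-and-store-separately"   # placeholder; manage securely in practice

    def pseudonymise(identifier: str) -> str:
        return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

    def de_identify(record: dict) -> dict:
        decade = (record["age"] // 10) * 10
        return {
            "pseudo_id": pseudonymise(record["patient_id"]),   # pseudonymisation
            "age_band": f"{decade}-{decade + 9}",              # generalisation
            "admission_year": record["admission_date"][:4],    # generalisation of a date
            "diagnosis": record["diagnosis"],
            # direct identifiers (name, phone number) are suppressed entirely
        }

    raw = {"patient_id": "P001", "name": "Zhang San", "phone": "13800000000",
           "age": 47, "admission_date": "2025-03-12", "diagnosis": "type 2 diabetes"}
    print(de_identify(raw))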

Under the Chinese Patent Law, an invention must be a novel technical solution that solves technical problems using natural laws and leads to technical effects. Purely abstract algorithms or mental methods, including AI models that do not have practical applications or technical implementability, are not patentable. To form a complete “problem–means–effect” chain, the healthcare AI patent application must clearly show how the technical systems solve specific technical issues (eg, how the algorithm is embedded in image-capturing devices or diagnostic apparatus). In practice, the Guidelines for Patent Applications for AI-related Inventions, published by the National Intellectual Property Administration (CNIPA), further clarify that, due to the “black-box” nature of AI, the patent specifications must include experimental data and parameter relationships so as to meet the implementation requirements.

Further, Article 25.1.3 of the Patent Law prohibits patents for diagnosis and treatment methods for illnesses. This limitation poses a significant barrier to healthcare AI patent applications that directly involve medical diagnoses. In patent examination practice, AI algorithms that directly diagnose diseases from patient data are typically considered “diagnostic methods” and are excluded from patent protection. To navigate this restriction, companies often reframe their inventions to avoid the word “diagnosis” and emphasise systems and devices rather than diagnostic methods.

A notable case illustrating the successful application of healthcare AI in China is Tencent’s MiYing AI for glaucoma diagnosis, which passed the regulatory requirements for medical devices and was approved as an innovative medical instrument. This case demonstrates that AI applications integrated with medical equipment and following specific technical and regulatory guidelines can be successfully patented.

Copyright Protection

In China, Article 3(8) of the Copyright Law expressly lists computer software among the categories of copyrightable works. This statutory protection is further elaborated in Articles 2 and 3 of the Regulations on the Protection of Computer Software, which specify that software – including computer programmes, source code and accompanying documentation – qualifies for copyright protection once the originality requirement is met. Copyright protection arises automatically upon creation without any compulsory registration, although voluntary registration is commonly used for evidentiary purposes in practice.

In the context of healthcare AI, the underlying algorithmic logic and structure of training models generally do not meet the threshold for authorship under copyright law and thus lack direct copyright protection. As noted in the foregoing, patent protection for AI algorithms integrated into concrete technical solutions remains uncertain. As a result, healthcare AI companies tend to rely more on trade secret protection to safeguard core models, parameters and data preprocessing workflows.

Trade Secret Protection

Pursuant to Article 9(4) of the Anti-Unfair Competition Law, technical information may qualify as a trade secret if it is not publicly known, commercially valuable and subject to reasonable confidentiality measures. In practice, healthcare AI companies treat key elements such as model weights, training datasets, algorithm design frameworks and operational processes as trade secrets. Protection mechanisms typically include non-disclosure agreements, information compartmentalisation, encrypted storage and access controls. Companies also implement clear internal policies on employee IP ownership and post-employment non-compete obligations to mitigate the risk of misappropriation or disputes over employee inventions.

Regulatory Disclosure and Confidentiality Mechanisms

In the context of medical device registration, healthcare AI developers shall submit detailed technical documentation to regulatory authorities. Reviewers and external experts are prohibited from disclosing technical information or other trade secrets obtained during the regulatory process without the applicant’s consent. To mitigate the risk of repeated disclosure, a “master file” system has been introduced, enabling companies to file core algorithmic materials separately and authorise their being referenced across multiple product applications.

Meanwhile, the Guiding Principles for AMD Registration Review mandate transparency by requiring companies to disclose key information – such as algorithm performance, data provenance and training processes – to ensure product safety. For clinical decision support tools, product manuals shall include performance evaluations and a summary of training data. For black-box models, additional disclosures regarding usage limitations and risk warnings are required. In practice, companies typically meet these transparency requirements through summary disclosures and performance reports while safeguarding detailed algorithms as internal confidential information.

Health AI outputs (eg, diagnostic findings, treatment suggestions) are often deemed part of medical services and are generally not recognised as independently tradable IP.

  • Diagnostic recommendations, treatment recommendations and other outputs generated by healthcare AI systems are generally regarded as analytical outcomes rather than original expressions and thus are typically not independently protectable by copyright or patent.
  • Healthcare AI outputs are merely the result of using a (potentially patented) tool; the outputs themselves are not novel “technical solutions”, thus failing to meet the patentability criteria.
  • For copyright protection, current law lacks specific provisions regarding AI outputs. As elaborated in 7.2 Copyright and Trade Secrets, Chinese courts have recognised that when the AI outputs created by natural persons reflect original expression, such outputs may obtain copyright protection. In practice, however, healthcare AI outputs normally lack human authorship and sufficient originality of form, and generally do not qualify as “works” under the Copyright Law, thus falling outside copyright protection.

Given the premise that the outputs themselves generally do not involve IP rights, contractual practice is unlikely to specifically allocate such rights. Instead, contracts would primarily treat the outputs as data and assign rights and obligations from the perspective of data usage.

Due to the absence of specific legal provisions, contractual agreements between AI technology providers and healthcare institutions play a decisive role in allocating IP rights and responsibilities. Typically, AI providers retain IP in core technologies, such as algorithms, software and models, while healthcare institutions (eg, hospitals) receive licences to use and deploy the AI outputs as end users. These contracts often address IP as follows:

  • copyrights of AI software and algorithms belong to the provider, and healthcare institutions may not infringe on providers’ technical secrets through means such as reverse engineering and redistribution;
  • healthcare institutions typically assume legal responsibility for the final diagnosis and decision-making, regardless of AI participation; and
  • healthcare institutions may be required to maintain medical liability insurance to cover potential AI-related errors or adverse outcomes.

Commercialisation Models for Healthcare AI

A variety of commercialisation models are employed in the healthcare AI sector, including technology licensing and collaboration, software-as-a-service (SaaS) subscriptions and direct sales of regulated medical devices.

  • Healthcare AI companies often license their technologies to major pharmaceutical or medical device firms to facilitate AI adoption in areas such as drug discovery, imaging and diagnostics, typically through technology licensing at the target identification or early development stage. Additionally, companies frequently engage in joint innovation projects with hospitals or pharmaceutical companies to facilitate the clinical integration of AI applications.
  • AI diagnostic services are also offered via cloud platforms, with hospitals subscribing annually or on a per-use basis. This model enables rapid updates and lowers maintenance costs, but raises issues around cybersecurity, network reliability and reimbursement eligibility.
  • Healthcare AI companies may also obtain Class II or Class III medical device approvals to directly commercialise their AI-assisted diagnostic or therapeutic products to healthcare institutions.

Regulatory and Reimbursement Challenges

Under the current Classified Catalogue of Medical Devices, AI diagnostic software offering only clinical support is regulated as Class II, while software generating autonomous diagnostic outputs requires Class III approval, including additional clinical trials. The longer approval timeline for Class III products often leads companies to frame their tools as assistive. Even after regulatory approval, inclusion in hospital billing systems and insurance coverage remains essential for commercial-scale use, yet no AI healthcare product is currently reimbursed under China’s public healthcare system. Consequently, commercialisation still requires active engagement with healthcare authorities to explore viable reimbursement models.

Academic-Industry Collaboration

To accelerate clinical adoption, many AI companies collaborate with hospitals and universities by forming joint labs or R&D alliances. These partnerships integrate clinical expertise and large-scale medical data, enabling the co-development of AI tools tailored to real-world settings. Notable examples include joint laboratories established by SenseTime and West China Hospital of Sichuan University, and by iFLYTEK and Anhui Provincial Hospital early in 2016. More recently, Baidu formed an AI hospital consortium with Shenzhen South Hospital and other partners to explore multi-agent collaborative AI solutions. These collaborations have produced widely adopted imaging, triage and diagnostics applications, forming replicable models for broader industry advancement.

In China, AI-based clinical decision support systems (CDSS) are regulated under a “general regulation + technical guidance” approach. At the general level, they are subject to the Medical Devices Supervision Regulation and the Registration and Filing of Medical Devices Measures, and are typically classified as Class III medical devices when they involve diagnostic or therapeutic decision-making. At the technical guidance level, several documents apply to CDSS, including those powered by AI. For example, the Good Practices for the Application of the Clinical Decision-making Support System for Medical Institutions (for Trial Implementation) sets out requirements for ethical review, clinical governance, safety and system integration within hospitals.

In practice, regulatory views and recent pilot cases indicate that developers need to disclose training data sources and validate model performance. For example, the Guiding Principles for AMD Registration Review emphasise that AI-based medical devices (including AI-based CDSS) shall undergo performance verification, including in relation to sensitivity, specificity and consistency with clinical standards. Hospitals are also expected to conduct ethical reviews, ensure system traceability and monitor diagnostic performance. Responsibility is shared among developers, institutions and clinicians.

AI-based diagnostic tools are regulated under the “general regulation + technical guidance” approach, applying the same core frameworks as AI-based CDSS. They are typically classified as Class II or III medical devices based on their risk profile. To address domain-specific challenges, regulators and industry bodies have issued supplemental technical guidelines. For example, the Center for Drug Evaluation (CDE) of the NMPA released the Review Guidelines for AI-based Pulmonary Nodule Detection Software via CT Imaging, and the Artificial Intelligence Medical Device Innovation and Cooperation Platform issued the Key Review Points for Deep Learning-Assisted Decision-Making Medical Device Software. These documents clarify regulatory expectations regarding training data, algorithm validation, clinical applicability and risk mitigation.

Under these frameworks, developers are generally required to provide clinical validation data, define algorithm performance metrics (such as sensitivity and specificity), and demonstrate proper data governance and human oversight mechanisms.

AI systems used in treatment planning are also regulated under the “general regulation + technical guidance” approach and are typically classified as Class III medical devices if they directly influence therapeutic decisions. Additionally, the Regulatory Rules for Internet-based Diagnosis and Treatment (Trial) explicitly prohibit AI from replacing licensed HCPs in delivering care or issuing prescriptions. In practice, such systems are treated as assistive tools that support but do not substitute for clinical judgment.

Currently, there are no dedicated technical guidance documents for treatment-planning AI. Nonetheless, oversight principles follow existing frameworks for clinical decision support: licensed HCPs shall validate AI outputs, and medical institutions remain responsible for ethical oversight, system traceability and patient safety.

AI applications and devices used for remote patient monitoring and telemedicine are subject to specific regulatory requirements, including government filing/registration for medical devices, filing/registration for AIGC products, and requirements relating to human oversight, data protection, ethical review and user training (for medical devices), as well as medical record requirements concerning data accuracy, completeness, integrity and traceability.

Remote patient monitoring and AI use in home or non-clinical settings may encompass mobile medical devices and general wearables for consumers. In addition to privacy, data quality and security requirements, clear product handbooks and user training materials are essential to ensure proper use of AI systems, especially for medical devices to be used by patients, as required by the Provisions on the Administration of Instructions and Labels of Medical Devices. If used in decentralised clinical trials, the Technical Guidelines for the Implementation of Patient-Centered Clinical Trials (Trial) mandate proper de-identification and protection of patient data, and careful evaluation of digital health technologies (DHTs) based on disease characteristics and patient attributes (eg, education level, digital literacy). Real-time alerts for potential adverse events are also required.

As discussed in 5.4 Human Oversight, broader telemedicine laws, like the Regulatory Rules for Internet-based Diagnosis and Treatment (Trial), explicitly restrict the use of AI in clinical decision-making and require AI use to be human-centred. These AI-related considerations are closely linked with broader management requirements for medical records.

AI applications in drug discovery and development are subject to general pharmaceutical laws, such as the Drug Administration Law, the Measures for the Administration of Drug Registration and the Measures for the Administration of Drug Standards. Although there are no AI-specific regulations in this area, validation must align with existing technical standards.

Notably, the CDE issued the Guiding Principles for Model-Informed Drug Development, which require that the data used to establish models be derived from credible sources such as clinical trials, non-clinical studies or bibliographic references. When real-world data is used, developers must also comply with the Guiding Principles for Real-World Data regarding data quality, governance and applicability.

Several general legislative and regulatory initiatives in China are underway that may shape the development and use of healthcare AI.

  • The Medical Device Management Law (public consultation draft): This draft proposes a unified national framework for medical device data management, promoting data interoperability and resource sharing. It is expected to accelerate healthcare AI development by improving access to standardised, high-quality data. It also allows the use of qualified foreign clinical trial data in registration under certain circumstances.
  • The Artificial Intelligence Law (draft, included in the State Council’s 2024 legislative plan): Although the draft was removed from the 2025 legislative agenda, liability and risk management in medical AI remain long-term concerns, and developments towards a comprehensive Artificial Intelligence Law are worth monitoring. As laws and regulations governing specific AI-related scenarios are expected to be introduced, the overall compliance burden is unlikely to ease.
  • Model Artificial Intelligence Law 2.0 (expert draft): Drafted by legal scholars, this model law has not formally entered the legislative agenda but may serve as a reference. Its main points include strong support for open-source AI development through community building and clear liability rules, as well as new IP rules addressing the use of training data and personal information and defining the protection of AIGC.

At the national level, MIIT and the NMPA have launched a task-based programme targeting AI medical devices. Selected participants receive regulatory and technical support to accelerate AI product development and deployment. In parallel, the National Data Administration and other regulatory bodies have introduced policies to support enterprise data utilisation, with an emphasis on piloting regulatory sandboxes to create a flexible, innovation-friendly environment for emerging technologies and business models such as AI.

Many local governments have also published their own policies. In Beijing, the AI Data Training Base incorporates a regulatory sandbox that facilitates compliant access to large-scale, high-quality datasets for AI model training. It offers end-to-end services while safeguarding data rights and security. Shanghai and Shenzhen are piloting similar approaches.

Beijing’s Data Foundation System Pilot Zone and AI Data Training Base together provide trusted infrastructure for developing innovative AI data mechanisms. By integrating computing, data and compliance solutions, they offer comprehensive support to LLM developers. This represents China’s first successful implementation of an AI regulatory sandbox model, which may gradually extend nationwide. Beijing’s AI + Healthcare Action Plan (2025–27) further proposes a comprehensive support framework to boost healthcare AI development, including fast-track review channels for innovative AI medical devices, prioritised approvals, and enhanced policy and financial incentives. By 2027, these measures aim to establish an innovative, globally influential healthcare ecosystem covering the entire value chain from R&D to application.

China actively engages in international efforts to harmonise healthcare AI regulation, participating in bodies like the International Medical Device Regulators Forum (IMDRF), World Health Organization (WHO) and International Organization for Standardization (ISO). China contributes to global rulemaking on AI safety, transparency and data governance, and shares agile regulatory approaches through platforms like the Belt and Road Digital Cooperation Network. WHO and IMDRF guidelines have influenced China’s focus on life cycle management, clinical validation and algorithm transparency. ISO standards also inform national and industry-level AI quality and data governance frameworks.

Cross-border challenges remain for healthcare AI developers; please refer to 10.5 Cross-Border Considerations for more details.

Key challenges include assigning liability for automated AI decision-making, clarifying the fair use of de-identified or copyrighted training data, and ensuring algorithm transparency and fairness – especially in critical medical scenarios. Data quality gaps (eg, insufficient data volume for rare diseases, and inadequate data diversity and representativeness) and poor generalisability further complicate oversight.

Regulators are responding by (i) drafting laws and regulations (see 9.1 Pending Legislation and Regulation); and (ii) exploring dynamic supervision for continuously learning systems, requiring regular performance reports and stricter data governance.

Concerning autonomous AI, future laws may define its legal status and clarify responsibilities among developers, users and institutions. Integration with robotics or virtual reality (VR) also gives rise to cross-sector co-ordination needs.

Healthcare AI developers need to implement “compliance by design” from the outset, embedding regulatory considerations into every stage from data sourcing to algorithm explainability. They should also establish dynamic oversight through regular algorithm evaluations and maintain detailed documentation of training data, validation reports and decision paths, building a comprehensive record of the AI model life cycle.

As general practice in AI governance, the following measures could be taken into consideration:

  • establishment of multidisciplinary AI ethics committees within organisations, comprising HCPs, legal experts and IT experts;
  • documentation of the entire AI life cycle, with clear version control and audit trails (illustrated in the sketch after this list);
  • monitoring systems that track technical performance, data usage and clinical outcomes; and
  • compliance measures addressing cross-border data flows, cybersecurity and algorithm validation in line with China’s evolving legal landscape.
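The sketch below, referenced in the list above, is a hypothetical illustration of a versioned life cycle record supporting audit trails and performance monitoring; the field names and values are assumptions, not fields mandated by any Chinese regulation or standard.

```python
# Hypothetical sketch of a versioned model life cycle record supporting
# documentation and audit trails. The fields are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ModelLifecycleRecord:
    model_name: str
    version: str
    training_data_ref: str          # pointer to the documented training dataset
    validation_report_ref: str      # pointer to the validation report
    sensitivity: float
    specificity: float
    approved_by: str                # e.g. internal AI ethics committee
    release_date: date
    changes: list = field(default_factory=list)  # human-readable change log

record = ModelLifecycleRecord(
    model_name="lung-nodule-detector",
    version="2.1.0",
    training_data_ref="datasets/ct-2024-q4",
    validation_report_ref="reports/val-2025-01.pdf",
    sensitivity=0.94,
    specificity=0.91,
    approved_by="AI Ethics Committee",
    release_date=date(2025, 2, 1),
    changes=["Retrained on additional thin-slice CT scans"],
)
print(asdict(record))
```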

As outlined in 9.2 Regulatory Sandboxes and Innovation Programs, regulatory sandboxes can facilitate a more effective balance between fostering innovation and ensuring compliance.

Healthcare AI contracts typically address the following key areas:

  • IP – core algorithms are usually retained by technology providers, and customised models or outputs may be co-owned;
  • regulatory compliance – providers must ensure their products meet applicable medical device and AI-specific regulations;
  • data and privacy – contracts define data sources, anonymisation standards and compliance with the PIPL and DSL;
  • liability allocation – AI is used as a clinical support tool, and liability for medical decisions remains with HCPs; and
  • indemnity and warranties – liability caps and exclusions are common, and warranties may cover performance, updates and technical support.

Healthcare AI developers should prioritise insurance coverage that protects them against risks associated with algorithm performance and data processing. One of the most critical types is errors and omissions insurance, which provides protection where an AI system malfunctions, delivers incorrect outputs or fails to perform as expected. If their AI product is classified as a medical device, developers should also secure product liability insurance. Healthcare users should evaluate whether their existing medical malpractice or professional liability coverage extends to the use of AI-assisted tools. In addition, organisations adopting healthcare AI should consider cyber liability coverage to address cybersecurity incidents and breaches of patient data security.

Currently, there is neither a mandatory requirement nor a dominant market practice for healthcare AI insurance in China. To address this market gap, the People’s Insurance Company (Group) of China (PICC) has introduced “Affirmative AI Cover” insurance, a liability product that primarily provides dedicated protection against infringement risks arising from content generated by LLMs, including copyright, portrait and reputational infringements.

The risk assessment varies significantly between traditional insurers and those offering affirmative AI coverage:

  • traditional insurers tend to assess AI-related risks in healthcare by relying on established actuarial models and regulatory benchmarks – their focus is on how AI affects clinical workflows and liability exposure, rather than the AI’s technical design; and
  • insurers providing Affirmative AI Cover take a more technical and adaptive approach – they assess and measure risk based on the AI’s healthcare application scenarios, upstream and downstream business chains, actual model performance and training data sources, and adjust coverage scope and premiums accordingly.

In China, medical institutions are required to follow the best practices in the Management Specifications for Artificial Intelligence-Assisted Diagnosis Technology (Trial) and Management Specifications for Artificial Intelligence-Assisted Treatment Technology (Trial) for implementing healthcare AI systems that qualify as medical devices.

Organisation and Governance Structure

Healthcare organisations should involve ethics committees in the AI system deployment process; a committee should review clinical applicability, patient safety and data usage compliance. Clinical departments and IT teams should co-ordinate implementation, ensuring that systems align with medical workflows and institutional values.

Training Requirements

HCPs shall meet the requirements outlined in the Management Specifications for Artificial Intelligence-Assisted Diagnosis Technology (Trial) and the Management Specifications for Artificial Intelligence-Assisted Treatment Technology (Trial), including at least six months of structured training at a certified provincial base, more than 20 hours of theoretical study and supervised involvement in over 20 AI-assisted diagnosis cases. Post-training assessment should be conducted to ensure clinical competence in the use of AI systems.

Change Management

Effective integration of AI in healthcare requires adapting clinical workflows and ensuring HCP buy-in. AI vendors could support this through:

  • workflow mapping to align AI with clinical practice;
  • pilot testing to gather feedback and refine usability; and
  • ongoing monitoring to ensure safety, compliance and performance.

Deploying healthcare AI across jurisdictions presents complex legal and regulatory challenges. Key issues include divergent requirements for data privacy and protection, medical device governance (including differing standards for healthcare AI systems that qualify as medical devices and for algorithm-related issues) and AI regulatory frameworks.

To navigate the different regulatory requirements, it is advisable to:

  • implement data localisation and modular deployment – verify CBDT limits for each major jurisdiction, store sensitive data on local servers and design modular AI architectures that process data locally;
  • create a global compliance framework with local nuances – develop a unified internal standard for AI ethics, quality and compliance, while mapping and adapting to local legal differences (eg, benchmark General Data Protection Regulation (GDPR) for data privacy and protection issues, and then include jurisdiction-specific compliance add-ons);
  • use tech-enhanced compliance measures – leverage technologies like differential privacy and federated learning to protect data while enabling cross-border AI scalability and minimising reliance on centralised datasets (see the sketch after this list); and
  • design flexible contracts and liability frameworks – draft jurisdiction-specific agreements with local partners to clearly define responsibilities, data control, algorithm update protocols and audit rights.
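To illustrate the tech-enhanced compliance bullet above, the following hypothetical sketch shows a differentially private aggregate: a site shares only a noise-protected count rather than patient-level records, reducing what needs to cross a border. The epsilon value, threshold and data are illustrative assumptions, not regulatory guidance.

```python
# Hypothetical sketch of a differentially private aggregate: each hospital site
# computes this locally and shares only the noisy statistic, not patient data.
import numpy as np

def dp_count(values, threshold, epsilon=0.5, sensitivity=1.0):
    """Laplace-noised count of readings above a clinical threshold."""
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: heart-rate readings held locally at one site (hypothetical data)
readings = [72, 95, 110, 88, 130, 64, 101]
print(round(dp_count(readings, threshold=100), 2))  # e.g. 3.42; the true count is 3
```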
Fangda Partners

24/F, HKRI Centre Two
HKRI Taikoo Hui
288 Shi Men Yi Road
Shanghai 200041
China

+86 21 2208 1166

+86 21 5298 5599

email@fangdalaw.com
www.fangdalaw.com