In China, AI is widely used across the healthcare and life science sectors, primarily in relation to medical institutions, pharmaceutical R&D, sales and marketing, medical devices, diagnostics, treatment assistance and consumer health devices. In clinical settings, AI powers diagnostic imaging, virtual consultations and surgery planning tools, and provides clinical decision support for general practitioners. These technologies improve diagnostic accuracy and efficiency, particularly in underserved areas. In the pharmaceutical sector, AI is primarily used for drug discovery, including target identification, virtual screening and clinical trial design, as well as to educate patients and doctors on medical products. Meanwhile, AI-enabled wearable devices support remote patient monitoring and chronic disease management.
In recent years, Chinese authorities have taken active steps to guide and accelerate the development of healthcare AI. In November 2024, the Reference Guide for AI Application Scenarios in the Healthcare Industry (the “AI Application Scenarios Reference Guide”) identified 84 use cases across four categories: medical service management, public health and primary care, health industry innovation (such as robotics and drug development), and medical education and research. This guide provides a national framework to integrate AI into health services more systematically. Further, in early 2025, the Notice on Carrying out the 2025 Innovation Task of AI-based Medical Devices (the “2025 Innovation Task Notice”) was issued, promoting breakthroughs in intelligent diagnostic and therapeutic tools. In March 2025, the Opinions on Comprehensively Deepening the Reform of Drug and Medical Device Regulation to Promote the High-Quality Development of the Pharmaceutical Industry strengthened the regulation of next-generation medical technologies, including AI and medical robotics, by improving national technical standards and regulatory systems.
Healthcare AI adoption in China is driven by both systemic needs and innovation opportunities. On the one hand, medical institutions face growing pressure from limited medical resources, uneven care quality between urban and rural areas in China, and a rising demand for faster and more accurate diagnoses. On the other hand, pharmaceutical companies are under pressure to accelerate drug discovery, reduce R&D costs and improve trial design. These challenges have created strong incentives across the healthcare system to adopt AI solutions that can improve outcomes, enhance efficiency and support clinical decision-making.
AI technologies bring several significant benefits to healthcare delivery:
- Enhanced diagnostic accuracy and efficiency, especially in radiology and pathology. AI tools used in image interpretation – such as for lung nodule, breast cancer and stroke detection – can match or exceed human performance in screening tasks, leading to earlier and more reliable disease identification.
- Improved access to care, through support for primary care healthcare professionals (HCPs) in under-resourced settings. Intelligent triage, symptom checkers and risk prediction tools help address the shortage of specialists and improve healthcare accessibility in rural and underserved areas.
- More personalised and precise treatment decisions. By integrating genomics, pathology and patient-specific health data, AI systems can recommend tailored therapies, particularly in oncology, where AI supports the design of targeted or immune-based treatment strategies.
- Stronger public health capabilities, via early warning and monitoring systems. During the COVID-19 pandemic, AI tools were used for outbreak modelling, contact tracing and real-time policy support, highlighting their value in health emergency management.
- Greater clinical consistency and equity, by reducing reliance on HCP intuition and minimising human error or bias. This helps standardise care pathways across medical institutions, promoting more equitable treatment regardless of geography.
- Accelerated pharmaceutical innovation, by optimising target identification, virtual compound screening and trial design. These tools are increasingly being integrated into R&D workflows, shortening timelines and boosting productivity.
Despite these advantages, AI also brings challenges around safety, liability, regulatory oversight and integration into professional workflows.
China’s healthcare AI market is rapidly evolving under strong policy support and industrial demand. The AI Application Scenarios Reference Guide and the 2025 Innovation Task Notice emphasise scenario-based adoption and regulatory readiness. Local governments have issued tailored plans to accelerate integration, such as the Shanghai Work Program for the Development of Artificial Intelligence in Medicine (2025–27) and the Suzhou Accelerates the Promotion of “Artificial Intelligence + Healthcare” Construction and Application Development Implementation Plan.
Pharmaceutical companies and technology developers are the primary drivers of innovation, and big technology firms and AI startups play a key role as enablers in these ecosystems.
Notable collaborations reflect how tech and medical institutions are jointly reshaping healthcare delivery through AI. Zhongshan Hospital partnered with Huawei and iFlytek to build a multi-modal smart hospital integrating AI imaging, voice assistants and clinical decision tools. Jiangsu Provincial People’s Hospital worked with Tencent to launch an AI imaging and virtual doctor system for patient engagement and diagnostic support. The Second Affiliated Hospital of Zhejiang University teamed up with Alibaba Health to deploy AI across triage, follow-up and patient management, while China-Japan Friendship Hospital collaborated with Baidu to develop a multi-modal foundation model for interdisciplinary clinical decision-making.
Current laws and regulations lack a statutory definition of “healthcare AI systems”. However, the AI Application Scenarios Reference Guide provides concrete definitions for specific AI healthcare application scenarios. AI-based medical software packages meeting the statutory definition of a medical device are regulated as medical devices. The AI Application Scenarios Reference Guide categorises healthcare AI applications, each with specific use cases, into the four domains noted above: medical service management, public health and primary care, health industry innovation, and medical education and research.
AI medical software packages qualifying as medical devices are categorised under the Guiding Principles for the Classification and Definition of AI-based Medical Software Products as follows.
China lacks unified healthcare AI legislation.
Alongside existing medical device regulations, such as the Regulation on the Supervision and Administration of Medical Devices (revised in 2024; the “Medical Devices Supervision Regulation”) and the Administrative Measures on the Registration and Record-filing of Medical Devices (“Registration and Filing of Medical Devices Measures”), sector-specific rules target technologies like generative AI (GenAI) and deep synthesis algorithms – such as the Interim Measures for the Administration of Generative Artificial Intelligence Services (the “Gen AI Measures”), the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services (the “Deep Synthesis Rules”), the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services (the “Recommendation Rules”), the Cybersecurity Law (CSL), the Data Security Law (DSL) and the Personal Information Protection Law (PIPL). Specialised technical guidelines also form part of the regulatory framework, such as the Guiding Principles for Registration Review of AI-based Medical Devices (the “Guiding Principles for AMD Registration Review”). Moreover, investments in the healthcare AI sector remain subject to general foreign direct investment restrictions.
The regulatory framework comprises three distinct categories: AI medical devices, algorithm-based products, and medical service and medical technology.
Regarding AI medical devices, pursuant to the Guiding Principles for AMD Registration Review, the regulatory process covers several stages. It starts with design and development, where each algorithm requires independent life cycle validation and 18 technical aspects, including cybersecurity, must be documented. This is followed by pre-submission (determining classification, conducting testing and compiling clinical evaluation data), then submission and review (filing an application with the National Medical Products Administration (NMPA) for technical/good manufacturing practice (GMP) review), and finally certification, the goal of which is to obtain a medical device registration certificate.
After determining the registration category, AI-based medical software is registered as standalone software. In special circumstances, streamlined pathways may apply, specifically covering two scenarios:
The foundational regulatory obligations include three main aspects:
Regarding AI algorithm requirements, according to the Key Review Points for Deep Learning-Assisted Decision-Making Medical Device Software, algorithm design shall consider the quality control requirements for the following activities: algorithm selection, algorithm training, cybersecurity protections and algorithm performance evaluation.
As for continuous learning, according to the Guiding Principles for AMD Registration Review, the registration applicant shall verify and validate the safety and effectiveness of self-learning updates under its quality management system, apply for change registration where required and deploy such updates only upon obtaining NMPA approval.
China has not issued specific privacy and data protection rules on the development and deployment of healthcare AI. Healthcare AI is still subject to general data protection legal requirements, including the PIPL, CSL, DSL and Network Data Security Management Regulations, etc.
Where patient data or wearable device users’ personal data is to be used for AI model training and operation, the following data processing activities should be carefully considered.
As the development of AI in the healthcare industry relies heavily on a large volume of sensitive patient data for training purposes, datasets composed of massive patient data or inferences drawn from comprehensive analysis based on such data may potentially be recognised as “important data” when a catalogue of important data in the healthcare sector is released. Accordingly, this could in turn trigger obligations such as data processing agreement drafting, risk assessments and annual reporting of important data processing activities, and graded classification and protection of important data.
There is currently no technical standard specific to healthcare AI systems. Instead, applicable standards are scattered across national, industry and group standards. While national standards mainly cover general cybersecurity and data protection, technical guidance pertinent to healthcare AI is largely found in industry and group-level standards.
For GenAI systems that interact directly with patients – such as intelligent triage, virtual consultations or pre-diagnosis assistants – specific standards apply, including the following.
In addition to GenAI services, healthcare AI systems are also subject to the following technical requirements and standards.
National standards are normally developed by standardisation institutions and sectoral administrations, such as the National Information Security Standardization Technical Committee (TC260) and the NMPA, while group standards are often led by the China Communications Standards Association, with supervision and direction by regulatory agencies like the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT) and the NMPA.
Regulators can be categorised by sectoral administration and supervision mandates as:
Inter-agency co-ordination occurs through specialised law enforcement campaigns. CAC and MIIT, as technology regulators, lead these efforts, but in practice will defer to sector-specific authorities such as the NMPA for healthcare oversight.
Pre-market requirements for healthcare AI developers in China mainly apply to AI-based medical devices, which are regulated under the Medical Devices Supervision Regulation, the Registration and Filing of Medical Devices Measures, and relevant technical guidelines.
Pursuant to the Guiding Principles for AMD Registration Review and the Guiding Principles for SaMD Registration Review, developers must:
Regulators also require disclosure of algorithm structure, training data and performance metrics. To enhance transparency and interpretability – especially for deep learning models – visual tools such as heatmaps are often encouraged. Furthermore, developers must mitigate bias through representative data collection and fairness assessments. For transparency, explainability and bias mitigation, please see 5.2 Transparency and Explainability and 5.3 Bias and Fairness.
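Purely as an illustration of how such an interpretability heatmap can be produced, the following minimal sketch implements occlusion sensitivity: patches of the input image are masked one by one, and the drop in the model's predicted probability is recorded. This is not a method prescribed by Chinese regulators, and the `model_predict` callable is a hypothetical stand-in for a trained imaging classifier.

```python
import numpy as np

def occlusion_heatmap(image, model_predict, patch=16, stride=16, baseline=0.0):
    """Coarse saliency map: mask each patch of a 2D greyscale image and
    record how much the model's predicted probability drops.

    model_predict is a hypothetical callable returning a probability in
    [0, 1] for the finding of interest (eg, a lung nodule)."""
    h, w = image.shape
    base_score = model_predict(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = baseline  # occlude one patch
            # A large drop means this region drove the prediction.
            heat[i, j] = base_score - model_predict(masked)
    return heat
```

High-valued regions in the returned map can then be overlaid on the original image, giving reviewers a visual account of which areas influenced the model's output.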
Post-market surveillance requirements differ based on application type. For hospital-deployed medical AI, there is currently no overarching legal framework specifically addressing AI-related risks; institutions are instead subject to general rules such as the Administrative Measures for Adverse Drug Reaction Reporting and Monitoring, which require institutions to report adverse drug reactions. AI-based medical devices are subject to the Medical Devices Supervision Regulation, which mandates that registrants and filing holders conduct adverse event monitoring, re-evaluate marketed devices and implement recall mechanisms where necessary.
Concerning algorithm updates, as outlined in 2.4 Software as a Medical Device (SaMD), developers must:
For adaptive or continuous learning algorithms, the Guiding Principles for AMD Registration Review require that such features remain disabled or used solely for research purposes unless separately approved. These models, which update based on real-world data, introduce uncertainty in safety and effectiveness. Developers must validate any changes resulting from self-learning and apply for registration modification before such updates can be deployed in clinical settings.
A centralised system exists for monitoring and reporting medical device adverse events; however, no dedicated monitoring mechanism is in place for AI applications outside the scope of medical device regulation.
Regarding enforcement, administrative penalties have been imposed on the use of unregistered AI-based medical software and on health data breaches. However, no publicly reported cases of regulatory intervention, warnings or product recalls specific to healthcare AI have been identified.
Penalties vary by violation type. Under the Medical Devices Supervision Regulation, use of unregistered Class II/III AI medical devices may trigger confiscation, fines or business suspension. Under the DSL, data protection failures may lead to fines of up to CNY2 million, suspension of operations or licence revocation. Although no significant or systematic enforcement against healthcare AI has been seen, in June 2023, a Beijing software company developing human gene exome data analysis systems was fined for failing to implement sufficient data security measures, resulting in 19.1 GB of genetic data being exposed to the risk of leakage.
China has not yet established a dedicated legal framework specifically addressing liability allocation among AI-related stakeholders. Liabilities are allocated under traditional tort law, contract law and administrative regulations governing generic products, medical devices, patients and healthcare providers. The applicable legal framework governing liability for healthcare AI systems is as follows.
The Civil Code – Generic or Special Product Liability Provisions
If a healthcare AI system causes personal injury or property damage due to defects, the liability depends on the system type.
Steps to determine whether the AI system is defective in judicial practice typically include the following.
Product Quality Law and Consumer Rights Protection Law
If healthcare AI systems are defined as “products”, their safety, suitability and instructions must comply with relevant standards. Developers and sellers bear civil and administrative liability for non-compliance.
Regulations on the Supervision and Administration of Medical Devices (2024 Revision)
If healthcare AI systems qualify as medical devices, manufacturers are responsible for their quality, safety and effectiveness. Regulatory authorities may order recalls or impose penalties for design defects or software update failures.
Medical institutions and HCPs remain subject to traditional medical malpractice standards. Given that most AI systems in clinical practice function as decision-support tools rather than fully autonomous systems, ultimate responsibility typically rests with the human user. Improper reliance on AI-generated recommendations or inadequate supervision of the system’s application can expose medical treatment providers to legal claims.
In such cases, traditional rules on patient harm and malpractice apply, so medical institutions will be held liable only if their personnel are proven to be at fault and to have caused the patient harm during diagnosis or treatment. Specifically, the patient must prove four elements – wrongful act, damage, causation and fault – of which proving the fault of medical personnel is the most challenging.
China has not yet prescribed a unified risk management framework specific to healthcare AI, but medical institutions and developers are subject to some fragmented regulatory and technical requirements.
Medical Institutions
Medical institutions deploying AI systems are generally expected to establish internal oversight mechanisms, including risk identification, adverse event tracking and algorithm performance monitoring. AI is typically treated as an assistive tool, and liability remains with licensed HCPs, reinforcing the need for robust human-in-the-loop safeguards.
Developers
AI-based medical devices shall comply with existing medical device regulations. Local guidance, such as that from the Beijing Medical Products Administration, requires risk documentation covering the full life cycle – risk identification, control measures, residual risk evaluation and traceability – particularly for AI-specific risks like false negatives or model drift.
Risk Assessment and Insurance
There is no mandatory AI-specific insurance, but some policy proposals encourage tailored coverage. In practice, a few insurers and medical institutions have piloted AI-related liability coverage or internal reserve mechanisms to manage emerging risks.
In generic/medical device product liability cases, under general tort law and product liability law, the burden of proof lies mainly with the patient rather than the producer/manufacturer (developer), seller or medical institution. There is no reversal of the burden of proof. Consequently, it remains relatively difficult for the patient to hold AI developers or users liable.
In current judicial practice, courts will rely on experts to review AI system algorithms and determine whether there is obvious room for improvement that could have prevented the harm (ie, design defects). To date, no cases have involved “black-box” AI systems that are completely opaque and cannot be reviewed, and no relevant precedents exist.
In medical malpractice cases, there are likewise no specific liability limitations or “safe harbour” provisions available to healthcare users who use AI tools in treatment or diagnosis. The medical institution still needs to independently review and verify the AI system’s conclusions according to the medical standards prevailing at the time.
The ethical framework for healthcare AI consists of various mandatory requirements, recommended guidelines and industrial standards. It emphasises ethical review processes to ensure compliance and the human-centred nature of AI.
One of the key milestones is the promulgation of the Measures for Scientific and Technological Ethics Review (Trial) in 2023. Companies engaging in life science, healthcare and AI research involving sensitive fields of sci-tech ethics must establish an internal ethics review committee to assess compliance with applicable laws, ethical codes and sci-tech ethical principles – promoting human well-being, respecting the right to life, adhering to fairness and impartiality, reasonably controlling risks, and maintaining openness and transparency. For sci-tech activities that may pose a greater possibility of ethical risks – such as the R&D of highly autonomous automated decision-making systems (AI models) for scenarios with safety or personal health risks – an additional expert ethical review is required.
Various recommended guidelines have also been developed as sectoral best practice for developers and health institutions. For example, the Code of Ethics for the New Generation Artificial Intelligence, issued in 2021, which emphasises privacy and data security and echoes the ethical principles listed in the foregoing, provides ethical codes from R&D, supply, use and management perspectives. Similar ethical principles are also seen in the Industrial Expert Consensus of Deployment of DeepSeek by Medical Institutions.
In practice, ethical considerations – such as human welfare, privacy and data security, and accountability – are integrated into regulatory processes through product registration, clinical trials and post-market monitoring of adverse incidents. Ethics committee approval of clinical trials for AI medical devices, confirming that the trial has sufficiently considered ethical principles, is mandatory. In particular, ethics committee approval of an AI medical device’s data collection is also required as part of the algorithm research report submitted for medical device registration.
For healthcare AI systems that qualify as medical devices, the product instructions shall comply with transparency and explainability requirements and shall include basic algorithm information. Where an AI system presents a higher safety risk level (for example, where a black-box algorithm is used or the system performs auxiliary decision-making), an additional algorithm research summary, use restrictions and necessary precaution information shall also be provided, as required by the Guiding Principles for AMD Registration Review.
HCPs are only explicitly required to disclose to patients when healthcare AI is being used in their care in limited situations. For example, before using AI-assisted diagnostic technology for invasive examinations or performing surgery assisted by an AI surgical system, the purpose of the examination/surgery, risks, precautions, potential complications and preventive measures should be communicated by the HCP to the patient and their family members in advance. An informed consent form should also be signed.
For other healthcare AI systems that may process HCPs’ and patients’ personal information, general transparency requirements under the PIPL will apply, and the purpose and means of data processing shall also be made available to the HCPs and patients concerned.
Maintaining fairness and preventing algorithm bias is one of the key principles in healthcare AI-related regulations and guidelines. For example, the Gen AI Measures (where applicable) require GenAI service providers to take effective measures to prevent bias during algorithm design, the selection of training data, model generation and optimisation, service provision, etc.
For AI medical devices, the Guiding Principles for AMD Registration Review issued by the NMPA provide that, to ensure data quality and control data bias during the training of AI systems, the collection of sample data must consider the compliance, sufficiency and diversity of data sources (such as disease composition, population distribution, the scientific and rational distribution of data, and the sufficiency, effectiveness and accuracy of data quality control). In the registration materials for AI medical devices, it is also required that the NMPA be provided with algorithm risk management information, specifying control measures for risks such as overfitting and underfitting, false negatives and false positives, and data contamination and bias.
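As a purely illustrative sketch (the Guiding Principles do not prescribe any particular check), the code below shows one simple way a registrant might document subgroup-level performance when assessing the data bias described above; the subgroup labels and sample records are invented.

```python
from collections import defaultdict

def subgroup_sensitivity(records):
    """records: iterable of (subgroup, y_true, y_pred) with binary labels.
    Returns the sensitivity (true-positive rate) per subgroup, so that
    large gaps between subgroups can be flagged as potential data bias."""
    tp, fn = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:  # only positive cases matter for sensitivity
            (tp if y_pred == 1 else fn)[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Invented validation records: a sensitivity gap between urban and rural
# cohorts may signal insufficient diversity in the training data.
records = [("urban", 1, 1), ("urban", 1, 1), ("rural", 1, 0), ("rural", 1, 1)]
print(subgroup_sensitivity(records))  # {'urban': 1.0, 'rural': 0.5}
```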
For healthcare-related GenAI services (such as GenAI tools for diagnostics and treatment planning or patient consultation), the Gen AI Measures require service providers to carry out a security assessment, under which training data and outputs that contain discriminative, unreliable or imprecise content that does not meet the security requirements in healthcare information services must be strictly managed and controlled during sampling tests. Content monitoring and a user complaint mechanism shall also be adopted during service provision.
Healthcare AI adheres to a human-centred principle, and automatically generating prescriptions, falsely using an HCP’s name or replacing an HCP in providing diagnosis and treatment services is explicitly prohibited. The final diagnosis and treatment must be determined by a qualified HCP.
Healthcare AI systems can only serve as a tool for users (HCPs or patients) to collect medical referential information, or to assist users (HCPs or patients) with auxiliary decision-making. In addition, highly autonomous AI systems that involve safety or health risks are subject to ethical review and expert re-examination.
For healthcare AI systems that qualify as medical devices, the Guiding Principles for AMD Registration Review provide key compliance requirements for the training data.
For other generic healthcare-related GenAI services, the Gen AI Measures regulate the data training process, primarily requiring service providers to use training data and models from legal sources, ensure there is no infringement, take measures to improve data quality and prevent bias, etc.
For bias-mitigation measures, please refer to 5.3 Bias and Fairness.
If healthcare data used for training contains personal information, the PIPL and the Measures for the Ethical Review of Life Science and Medical Research Involving Humans require – as a general principle – that data processing activities, including the secondary use of healthcare data for AI training and development, be disclosed to patients in the privacy policy/informed consent form, and that consent be obtained.
Although obtaining consent for secondary use may not be feasible in practice, current laws neither provide consent exemptions for the secondary use of healthcare data nor specifically deem secondary use a compatible use. That said, the recommended national standard GB/T 39375-2020 Health and Medical Data Security Guidelines provides a mechanism for requesting secondary use of healthcare data from medical institutions, albeit limited to non-identifiable data used for non-profit purposes.
Current legislation governing data sharing and access remains centred around:
As required by the PIPL, medical institutions collaborating with enterprises on healthcare AI development must strictly comply with the notification and separate consent requirements before sharing patients’ data. A data sharing agreement must be established to define the scope, purpose and means of data sharing and responsibilities.
Cross-border transfer of personal information and important data for healthcare AI development shall also comply with the cross-border data transfer (CBDT) mechanisms required by the CAC, such as security assessment and standard contractual clause (SCC) filing. If training data involves human genetic resources, the Regulations on the Administration of Human Genetic Resources also require that mandatory filing and data backup be completed before such data can be lawfully transferred outside of China.
The de-identification and anonymisation of health data are primarily governed by the PIPL – which has established clear definitions for personal information de-identification and anonymisation – and the recommended national standard GB/T 37964-2019 Guidelines for De-identifying Personal Information, which provides detailed guidance on de-identification methods such as aggregation, encryption, suppression, pseudonymisation, generalisation and randomisation.
As raw health and medical data still constitute personal information, many AI system developers are considering the feasibility of de-identifying and anonymising such data for training purposes, to be exempted from the compliance requirements under the PIPL. While there are no legal standards for health and medical data anonymisation as of yet, AI system developers are adopting multiple de-identification measures, aiming to minimise the risk of re-identification to an acceptable level.
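As a toy illustration of three of the de-identification methods named in GB/T 37964-2019 (pseudonymisation, generalisation and suppression), the sketch below transforms a hypothetical patient record. The field names, salting scheme and truncation levels are assumptions made for illustration, not requirements of the standard or the PIPL, and real projects would also need a documented assessment of residual re-identification risk.

```python
import hashlib

SECRET_SALT = b"rotate-me"  # assumption: stored separately from the dataset

def deidentify(record):
    """Toy de-identification of one patient record:
    - pseudonymise the identifier with a salted hash
    - generalise date of birth to the year and postcode to a coarser area
    - suppress the free-text name entirely"""
    out = dict(record)
    digest = hashlib.sha256(SECRET_SALT + record["patient_id"].encode())
    out["patient_id"] = digest.hexdigest()[:16]       # pseudonymisation
    out["birth_date"] = record["birth_date"][:4]      # generalisation: year only
    out["postcode"] = record["postcode"][:3] + "***"  # generalisation: coarse area
    out.pop("name", None)                             # suppression
    return out

print(deidentify({"patient_id": "P001", "name": "Zhang San",
                  "birth_date": "1980-05-17", "postcode": "200041"}))
```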
Under the Chinese Patent Law, an invention must be a novel technical solution that solves a technical problem using natural laws and produces a technical effect. Purely abstract algorithms or mental methods, including AI models with no practical application or technical implementability, are not patentable. To form a complete “problem–means–effect” chain, a healthcare AI patent application must clearly show how the technical system solves specific technical issues (eg, how the algorithm is embedded in image-capturing devices or diagnostic apparatus). In practice, the Guidelines for Patent Applications for AI-related Inventions, published by the China National Intellectual Property Administration (CNIPA), further clarify that, due to the “black-box” nature of AI, patent specifications must include experimental data and parameter relationships so as to meet the enablement requirement.
Further, Article 25.1.3 of the Patent Law prohibits patents on methods for the diagnosis or treatment of diseases. This limitation poses a significant barrier to healthcare AI patent applications that directly involve medical diagnosis. In patent examination practice, AI algorithms that directly diagnose diseases from patient data are typically considered “diagnostic methods” and are excluded from patent protection. To navigate this restriction, companies often reframe their inventions to avoid the word “diagnosis” and to emphasise systems and devices rather than diagnostic methods.
A notable case illustrating the successful application of healthcare AI in China is Tencent’s MiYing AI for glaucoma diagnosis, which met the regulatory requirements for medical devices and was approved as an innovative medical device. This case demonstrates that AI applications integrated with medical equipment and following specific technical and regulatory guidelines can be successfully patented.
Copyright Protection
In China, Article 3(8) of the Copyright Law expressly lists computer software among the categories of copyrightable works. This statutory protection is further elaborated in Articles 2 and 3 of the Regulations on the Protection of Computer Software, which specify that software – including computer programs, source code and accompanying documentation – qualifies for copyright protection once the originality requirement is met. Copyright protection arises automatically upon creation without any compulsory registration, although voluntary registration is commonly used for evidentiary purposes in practice.
In the context of healthcare AI, the underlying algorithmic logic and structure of training models generally do not meet the threshold for authorship under copyright law and thus lack direct copyright protection. As noted in the foregoing, patent protection for AI algorithms integrated into concrete technical solutions remains uncertain. As a result, healthcare AI companies tend to rely more on trade secret protection to safeguard core models, parameters and data preprocessing workflows.
Trade Secret Protection
Pursuant to Article 9(4) of the Anti-Unfair Competition Law, technical information may qualify as a trade secret if it is not publicly known, commercially valuable and subject to reasonable confidentiality measures. In practice, healthcare AI companies treat key elements such as model weights, training datasets, algorithm design frameworks and operational processes as trade secrets. Protection mechanisms typically include non-disclosure agreements, information compartmentalisation, encrypted storage and access controls. Companies also implement clear internal policies on employee IP ownership and post-employment non-compete obligations to mitigate the risk of misappropriation or disputes over employee inventions.
Regulatory Disclosure and Confidentiality Mechanisms
In the context of medical device registration, healthcare AI developers shall submit detailed technical documentation to regulatory authorities. Reviewers and external experts are prohibited from disclosing technical information or other trade secrets obtained during the regulatory process without the applicant’s consent. To mitigate the risk of repeated disclosure, a “master file” system has been introduced, enabling companies to file core algorithmic materials separately and authorise their being referenced across multiple product applications.
Meanwhile, the Guiding Principles for AMD Registration Review mandate transparency by requiring companies to disclose key information – such as algorithm performance, data provenance and training processes – to ensure product safety. For clinical decision support tools, product manuals shall include performance evaluations and a summary of training data. For black-box models, additional disclosures regarding usage limitations and risk warnings are required. In practice, companies typically meet these transparency requirements through summary disclosures and performance reports while safeguarding detailed algorithms as internal confidential information.
Health AI outputs (eg, diagnostic findings, treatment suggestions) are often deemed part of medical services and are generally not recognised as independently tradable IP.
Given the premise that the outputs themselves generally do not involve IP rights, contractual practice is unlikely to specifically allocate such rights. Instead, contracts would primarily treat the outputs as data and assign rights and obligations from the perspective of data usage.
Due to the absence of specific legal provisions, contractual agreements between AI technology providers and healthcare institutions play a decisive role in allocating IP rights and responsibilities. Typically, AI providers retain IP in core technologies, such as algorithms, software and models, while healthcare institutions (eg, hospitals) receive licences to use and deploy the AI outputs as end users. These contracts often address IP as follows:
Commercialisation Models for Healthcare AI
A variety of commercialisation models are employed in the healthcare AI sector, including technology licensing and collaboration, software-as-a-service (SaaS) subscriptions and direct sales of regulated medical devices.
Regulatory and Reimbursement Challenges
Under the current Classified Catalogue of Medical Devices, AI diagnostic software offering only clinical support is regulated as Class II, while software generating autonomous diagnostic outputs requires Class III approval, including additional clinical trials. The longer approval timeline for Class III products often leads companies to frame their tools as assistive. Even after regulatory approval, inclusion in hospital billing systems and insurance coverage remains essential for commercial-scale use, yet no AI healthcare product is currently reimbursed under China’s public healthcare system. Consequently, commercialisation still requires active engagement with healthcare authorities to explore viable reimbursement models.
Academic-Industry Collaboration
To accelerate clinical adoption, many AI companies collaborate with hospitals and universities by forming joint labs or R&D alliances. These partnerships integrate clinical expertise and large-scale medical data, enabling the co-development of AI tools tailored to real-world settings. Notable examples include joint laboratories established by SenseTime and West China Hospital of Sichuan University, and by iFLYTEK and Anhui Provincial Hospital early in 2016. More recently, Baidu formed an AI hospital consortium with Shenzhen South Hospital and other partners to explore multi-agent collaborative AI solutions. These collaborations have produced widely adopted imaging, triage and diagnostics applications, forming replicable models for broader industry advancement.
In China, AI-based clinical decision support systems (CDSS) are regulated under a “general regulation + technical guidance” approach. At the general level, they are subject to the Medical Devices Supervision Regulation and the Registration and Filing of Medical Devices Measures, and are typically classified as Class III medical devices when they involve diagnostic or therapeutic decision-making. At the technical guidance level, several documents apply to CDSS, including those powered by AI. For example, the Good Practices for the Application of the Clinical Decision-making Support System for Medical Institutions (for Trial Implementation) sets out requirements for ethical review, clinical governance, safety and system integration within hospitals.
In practice, regulatory views and recent pilot cases indicate that developers need to disclose training data sources and validate model performance. For example, the Guiding Principles for AMD Registration Review emphasise that AI-based medical devices (including AI-based CDSS) shall undergo performance verification, including in relation to sensitivity, specificity and consistency with clinical standards. Hospitals are also expected to conduct ethical reviews, ensure system traceability and monitor diagnostic performance. Responsibility is shared among developers, institutions and clinicians.
AI-based diagnostic tools are regulated under the “general regulation + technical guidance” approach, applying the same core frameworks as AI-based CDSS. They are typically classified as Class II or III medical devices based on their risk profile. To address domain-specific challenges, regulators and industry bodies have issued supplemental technical guidelines. For example, the Center for Drug Evaluation (CDE) of the NMPA released the Review Guidelines for AI-based Pulmonary Nodule Detection Software via CT Imaging, and the Artificial Intelligence Medical Device Innovation and Cooperation Platform issued the Key Review Points for Deep Learning-Assisted Decision-Making Medical Device Software. These documents clarify regulatory expectations regarding training data, algorithm validation, clinical applicability and risk mitigation.
Under these frameworks, developers are generally required to provide clinical validation data, define algorithm performance metrics (such as sensitivity and specificity), and demonstrate proper data governance and human oversight mechanisms.
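For readers less familiar with these metrics, the short sketch below shows how sensitivity and specificity are derived from a binary confusion matrix; the validation counts are invented for illustration.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): share of true positives detected.
    Specificity = TN / (TN + FP): share of true negatives correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented counts for a hypothetical nodule-detection validation set:
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=880, fp=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.92, 0.98
```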
AI systems used in treatment planning are also regulated under the “general regulation + technical guidance” approach and are typically classified as Class III medical devices if they directly influence therapeutic decisions. Additionally, the Regulatory Rules for Internet-based Diagnosis and Treatment (Trial) explicitly prohibit AI from replacing licensed HCPs in delivering care or issuing prescriptions. In practice, such systems are treated as assistive tools that support but do not substitute for clinical judgment.
Currently, there are no dedicated technical guidance documents for treatment-planning AI. Nonetheless, oversight principles follow existing frameworks for clinical decision support: licensed HCPs shall validate AI outputs, and medical institutions remain responsible for ethical oversight, system traceability and patient safety.
AI applications and devices used for remote patient monitoring and telemedicine are subject to specific regulatory requirements, including government filing/registration for medical devices, filing/registration for AI-generated content (AIGC) products, and requirements related to human oversight, data protection, ethical review and user training (for medical devices), as well as medical record requirements concerning data accuracy, completeness, integrity and traceability.
Remote patient monitoring and AI use in home or non-clinical settings may encompass mobile medical devices and general wearables for consumers. In addition to privacy, data quality and security requirements, clear product handbooks and user training materials are essential to ensure proper use of AI systems, especially for medical devices to be used by patients, as required by the Provisions on the Administration of Instructions and Labels of Medical Devices. If used in decentralised clinical trials, the Technical Guidelines for the Implementation of Patient-Centered Clinical Trials (Trial) mandate proper de-identification and protection of patient data, and careful evaluation of digital health technologies (DHTs) based on disease characteristics and patient attributes (eg, education level, digital literacy). Real-time alerts for potential adverse events are also required.
As discussed in 5.4 Human Oversight, broader telemedicine laws, like the Regulatory Rules for Internet-based Diagnosis and Treatment (Trial), explicitly restrict the use of AI in clinical decision-making and require AI use to be human-centred. These AI-related considerations are closely linked with broader management requirements for medical records.
AI applications in drug discovery and development are subject to general pharmaceutical laws, such as the Drug Administration Law, the Measures for the Administration of Drug Registration and the Measures for the Administration of Drug Standards. Although there are no AI-specific regulations in this area, validation must align with existing technical standards.
Notably, the CDE issued the Guiding Principles for Model-Informed Drug Development, which require that the data used to establish models be derived from credible sources such as clinical trials, non-clinical studies or bibliographic references. When real-world data is used, developers must also comply with the Guiding Principles for Real-World Data regarding data quality, governance and applicability.
Several general legislative and regulatory initiatives in China are underway that may shape the development and use of healthcare AI.
At the national level, MIIT and the NMPA have launched a task-based programme targeting AI medical devices. Selected participants receive regulatory and technical support to accelerate AI product development and deployment. In parallel, the National Data Administration and other regulatory bodies have introduced policies to support enterprise data utilisation, with an emphasis on piloting regulatory sandboxes to create a flexible, innovation-friendly environment for emerging technologies and business models such as AI.
Many local governments have also published their own policies. In Beijing, the AI Data Training Base incorporates a regulatory sandbox that facilitates compliant access to large-scale, high-quality datasets for AI model training. It offers end-to-end services while safeguarding data rights and security. Shanghai and Shenzhen are piloting similar approaches.
Beijing’s Data Foundation System Pilot Zone and AI Data Training Base together provide trusted infrastructure for developing innovative AI data mechanisms. By integrating computing, data and compliance solutions, they offer comprehensive support to large language model (LLM) developers. This represents China’s first successful implementation of an AI regulatory sandbox model, which may gradually extend nationwide. Beijing’s AI + Healthcare Action Plan (2025–27) further proposes a comprehensive support framework to boost healthcare AI development, including fast-track review channels for innovative AI medical devices, prioritised approvals, and enhanced policy and financial incentives. By 2027, these measures aim to establish an innovative, globally influential healthcare ecosystem covering the entire value chain from R&D to application.
China actively engages in international efforts to harmonise healthcare AI regulation, participating in bodies like the International Medical Device Regulators Forum (IMDRF), World Health Organization (WHO) and International Organization for Standardization (ISO). China contributes to global rulemaking on AI safety, transparency and data governance, and shares agile regulatory approaches through platforms like the Belt and Road Digital Cooperation Network. WHO and IMDRF guidelines have influenced China’s focus on life cycle management, clinical validation and algorithm transparency. ISO standards also inform national and industry-level AI quality and data governance frameworks.
Cross-border challenges remain for healthcare AI developers; please refer to 10.5 Cross-Border Considerations for more details.
Key challenges include assigning liability for automated AI decision-making, clarifying the fair use of de-identified or copyrighted training data, and ensuring algorithm transparency and fairness – especially in critical medical scenarios. Data quality gaps (eg, insufficient data volume for rare diseases, and inadequate data diversity and representativeness) and poor generalisability further complicate oversight.
Regulators are responding by (i) drafting laws and regulations (see 9.1 Pending Legislation and Regulation); and (ii) exploring dynamic supervision for continuously learning systems, requiring regular performance reports and stricter data governance.
Concerning autonomous AI, future laws may define its legal status and clarify responsibilities among developers, users and institutions. Integration with robotics or virtual reality (VR) also gives rise to cross-sector co-ordination needs.
Healthcare AI developers need to implement “compliance by design” from the outset: embedding regulatory considerations into every stage, from data sourcing to algorithm explainability; establishing dynamic oversight through regular algorithm evaluations; and maintaining detailed documentation of training data, validation reports and decision paths, so as to form a comprehensive AI model life cycle record.
As general practice in AI governance, the following measures could be taken into consideration:
As outlined in 9.2 Regulatory Sandboxes and Innovation Programs, regulatory sandboxes can facilitate a more effective balance between fostering innovation and ensuring compliance.
Healthcare AI contracts typically address the following key areas:
Healthcare AI developers should prioritise insurance coverage that protects them from risks associated with algorithm performance and data processing. One of the most critical types is errors and omissions insurance, which provides protection where an AI system malfunctions, delivers incorrect outputs or fails to perform as expected. If their AI product is classified as a medical device, developers should also secure product liability insurance. Healthcare users should evaluate whether their existing medical malpractice insurance or professional liability coverage extends to the use of AI-assisted tools. In addition, organisations adopting healthcare AI should consider cyber liability coverage to address cybersecurity incidents and patient data breaches.
Currently, there is neither a mandatory requirement for nor a dominant market practice of healthcare AI insurance in China. To address the market gap, the People’s Insurance Company (Group) of China (PICC) has introduced “Affirmative AI Cover” insurance. This liability insurance primarily provides exclusive protection against infringement risks arising from content generated by LLMs, including copyright, portrait and reputational infringements.
The risk assessment varies significantly between traditional insurers and those offering affirmative AI coverage:
In China, medical institutions are required to follow the best practices in the Management Specifications for Artificial Intelligence-Assisted Diagnosis Technology (Trial) and Management Specifications for Artificial Intelligence-Assisted Treatment Technology (Trial) for implementing healthcare AI systems that qualify as medical devices.
Organisation and Governance Structure
Healthcare organisations should involve ethics committees in the AI system deployment process; a committee should review clinical applicability, patient safety and data usage compliance. Clinical departments and IT teams should co-ordinate implementation, ensuring that systems align with medical workflows and institutional values.
Training Requirements
HCPs shall meet the requirements outlined in the Management Specifications for Artificial Intelligence-Assisted Diagnosis Technology (Trial) and the Management Specifications for Artificial Intelligence-Assisted Treatment Technology (Trial), including at least six months of structured training at a certified provincial base, 20+ hours of theoretical study and supervised involvement in over 20 AI-assisted diagnosis cases. A post-training assessment should be conducted to ensure clinical competence in AI system use.
Change Management
Effective integration of AI in healthcare requires adapting clinical workflows and ensuring HCP buy-in. AI vendors could support this through:
Deploying healthcare AI across jurisdictions presents complex legal and regulatory challenges. Key issues include the diverse requirements for data privacy and protection, medical device governance (such as the different standards for healthcare AI systems that qualify as medical devices versus algorithm-related issues) and AI regulatory frameworks, etc.
To navigate the different regulatory requirements, it is advisable to:
Overall Industry and Market Trends
National policies to help develop the Chinese AI sector
China aims to become the world’s major AI innovation centre by 2030. A development plan issued by the State Council outlines the goal of realising an AI core industry exceeding CNY1 trillion (~USD 140.9 billion) in value, with related industries surpassing CNY10 trillion (~USD 1.4 trillion). Since 2016, China has achieved remarkable progress in the field of AI through strategic planning and initiatives. The government has introduced a series of comprehensive blueprints and public policies designed to support AI companies across multiple dimensions, including talent cultivation, start-up incubation, computing power and infrastructure procurement, investment schemes and incentives, taxation, product marketisation and other related aspects. Key policies include the “Internet+ AI” Three-Year Implementation Plan (2016), the New Generation Artificial Intelligence Development Plan (2017), National AI Open Innovation Platforms (2019), the AI Standardization Strategy (2020), AI Pilot Zones (2022) and the “AI+” Initiative (2024).
China’s adoption of AI across sectors is also growing rapidly, with the country’s AI ecosystem thriving under a wave of supportive policies. For instance, major shopping platforms, e-commerce sites and short-video apps are all deploying AI algorithms for content feeds, payments and user services. The government has also been pushing for “smart retail” upgrades, with retailers adopting AI for inventory management, cashier-free shopping and even the generative design of products. In addition, major Chinese tech companies have launched numerous large language models (LLMs). As of March 2025, approximately 350 LLMs have been filed with the Cyberspace Administration of China (CAC). These models cover a wide range of applications, including fintech, medical and healthcare, education, intelligent manufacturing, content creation and enterprise services.
The medical sector: from departmental tools to institutional AI integration
AI technology is deconstructing traditional healthcare systems, enabling a transition from department-specific empowerment to hospital-wide intelligent ecosystems. AI in hospitals is evolving from isolated applications into a catalyst for systemic digital transformation. The emergence of “agent hospital” models – AI-driven systems capable of managing end-to-end clinical workflows – marks a strategic departure from department-level tools to full institutional integration. These AI agents are now capable of supporting diagnosis, clinical documentation, education and scientific research.
In the first half of 2025, Tsinghua University’s Institute for AI Industry Research (AIR) established the world’s first agent hospital, covering 21 departments and over 300 disease diagnoses. The AI system, trained using tens of thousands of virtual patient cases, achieved a diagnostic accuracy exceeding 93%. Compared to traditional digital tools, this model improves workflow efficiency by more than 100-fold, while maintaining interpretability and traceability. By integrating medical care, education and research, it creates a closed-loop AI healthcare ecosystem.
AI implementation at the hospital level is also accelerating. Notable examples include the following.
These developments demonstrate AI’s unique value in optimising limited healthcare resources in China. By enabling “AI-collaborative physicians” in frontline settings, medical institutions can enhance service accessibility while supporting the training of AI-literate medical professionals.
Local policies: regional governments as regulators and enablers
Across China, local governments are accelerating the development of AI healthcare ecosystems through co-ordinated policies, physical infrastructure and service platforms. Regional strategies increasingly emphasise cross-disciplinary innovation, industrial clustering and public-service application.
Beijing: precision funding for vertical innovation
Beijing’s Chaoyang District offers generous funding for AI healthcare innovation:
Additionally, the government supports dedicated industrial parks offering rental subsidies and professional services, including start-up incubation, scenario-based testing, investment matchmaking and IP management. These parks function as launchpads for commercialising AI solutions in real-world clinical settings.
Shenzhen: incubation and application platforms
In Shenzhen’s Longgang District, the local government launched the city’s first “AI + Life Health” industrial park, offering rent-free spaces for small and micro-enterprises. The district also introduced a Health Innovation and Transformation Center, fostering partnerships between hospitals, universities and enterprises. A new digital platform, “Longgang Family Doctor”, applies AI to personal health monitoring and chronic disease management, providing accessible, on-demand health services for residents.
Other regional initiatives
Hainan focuses on AI-enabled clinical trials using real-world data, capitalising on its free trade port policy. Chengdu and Suzhou have invested in AI biomedical parks that integrate device R&D, testing labs and regulatory support.
These local ecosystems are driving the transformation of AI healthcare from concept to scale, with regions acting as both regulators and incubators.
Foreign investment: opening of value-added telecom services catalyses strategic transformation
Foreign medical technology companies are leveraging China’s evolving regulatory landscape to achieve strategic transformation, shifting from traditional hardware vendors to core participants in the medical data ecosystem. A pivotal driver of this shift has been China’s gradual liberalisation of value-added telecom services (VATS), particularly the operation of internet data centres (IDCs), which were historically closed to foreign entities. Recent policy breakthroughs have enabled select international firms to operate wholly owned IDCs and offer AI computing and cloud-based services directly in China, substantially reshaping the medical value chain.
In early 2025, Siemens Healthineers obtained pilot approval for VATS from the Ministry of Industry and Information Technology (MIIT), becoming the first foreign medical technology company in China to secure such qualification. This landmark approval grants the company full operational rights to establish and manage an IDC under a wholly foreign-owned structure. Through this, Siemens launched its Virtual Medical Imaging Center (VMIC), a platform that enables real-time collaboration between top-tier radiologists and primary-level hospitals nationwide, promoting equitable access to diagnostic expertise.
This breakthrough stems from the implementation of the 2025 Action Plan for Stabilizing Foreign Investment, which explicitly supports foreign participation in pilot openings for VATS, biotechnology and wholly foreign-owned hospitals. Shanghai, as one of the pilot regions, has maintained an average annual growth rate of 15% in foreign investment in its telecom services market over the past three years. On 28 February 2025, MIIT approved 13 foreign-invested enterprises, including four headquartered in Shanghai, for pilot participation in VATS across Beijing, Shanghai, Hainan and Shenzhen.
As foreign companies gain deeper access to China’s AI healthcare infrastructure, the domestic market is poised for greater international collaboration in areas such as cross-border data processing, real-world evidence generation and AI model localisation.
Capital markets and investment landscape: from concept to commercialisation
The enormous popularity of DeepSeek and Unitree Robotics has triggered a surge of interest in investing in AI and robotics in China. Following rounds of investment and financing by various Chinese tech giants and the so-called AI Six Little Dragons, DeepSeek’s rapid public uptake has prompted investors to seek out sector-specific AI application businesses. China’s AI healthcare sector, in particular, has shifted from concept-driven valuation to a more clinically validated, investment-ready ecosystem.
According to industry tracking, more than CNY30 billion in new funding entered the Chinese “AI+Healthcare” sector in the past year. Key investment trends include the following.
Investors are also paying closer attention to regulatory readiness and compliance strategy – particularly with regard to algorithm updates, data governance and medical device registration. This evolving capital environment underscores China’s broader shift towards a “validation-first” model, where AI healthcare companies are expected to deliver not just visionary technology, but also demonstrable medical value and regulatory robustness.
Intellectual Property Protection in Healthcare AI
Market trends
AI is bringing about a profound change in the global healthcare landscape. Innovations such as surgical robots, remote diagnosis, smart diagnostics and wearable devices are accelerating in both development and deployment. Meanwhile, China has emerged in recent years as the global leader in healthcare AI IP rights, especially patents. According to the 2025 AI Index Report published by Stanford HAI, Chinese entities accounted for more than 60% of global healthcare AI patents in both 2022 and 2023, ahead of all other countries.
The development of China’s healthcare AI field is driven by co-operation between enterprises and academic institutions. As noted in the IP Press’s White Paper, Ping An Health is the leading patent applicant globally, with 4,176 healthcare AI patents; Tencent comes next, with 1,707 patents nationwide, concentrated in medical imaging and disease detection. Academic institutions are also essential players, exemplified by Tsinghua University’s “Agent Hospital” project, which deploys 42 AI physicians trained to manage over 300 diseases. Co-developed by Alibaba’s DAMO Academy and Zhejiang University, the DAMO GRAPE model represents a major milestone as the world’s first AI model for gastric cancer screening.
Unfolding critical issues
Healthcare AI innovations in China are eligible for patent protection if they meet the technical criteria set out in the Patent Law; to satisfy those criteria, applicants should describe technical features in detail, highlight concrete technical effects and provide substantial supporting data. Challenges nevertheless persist, particularly around relatively vague examination standards and the tension between AI model transparency and confidentiality requirements. Meanwhile, the underlying algorithmic logic and structure of training models generally do not meet the threshold for authorship under copyright law and thus lack direct copyright protection. As a result, healthcare AI companies tend to rely more heavily on trade secret protection to safeguard core models, parameters and data preprocessing workflows.
Concerning copyright protection for AI-generated content (AIGC), the Chinese judiciary has heard a series of landmark cases. Taken together, these cases illustrate Chinese courts’ proactive yet inconsistent efforts to adapt traditional IP frameworks to AI governance.
Separately, in a May 2025 ruling, the Beijing Haidian District Court found that a healthcare company’s unauthorised replication of patient review data and doctor-authored medical educational articles from a competing internet healthcare platform constituted unfair competition, awarding CNY2.3 million in damages. The court held that the defendant’s conduct of crawling data from the plaintiff’s platform and displaying it on its own violated the principle of good faith and disrupted fair market competition, thereby breaching the general clause in Article 2 of the Anti-Unfair Competition Law. Although this case did not directly address AI training scenarios, its legal principles carry significant implications for the healthcare AI sector. As medical AI development increasingly depends on clinical data resources, courts are placing greater emphasis on protecting platform data assets. This creates legal risk for healthcare AI companies that use competitors’ data without authorisation during model training, as such use may be deemed improper appropriation of commercial resources and market disruption. The trend underscores the growing importance for healthcare AI companies of drawing clear compliance boundaries between technological innovation and fair competition.
Regulatory Developments and Trends
Existing regulatory landscape
China currently lacks a unified legal framework specifically tailored to healthcare AI. Instead, any AI-based medical software that meets the statutory definition of a “medical device” is regulated as such. The sectoral landscape is composed of existing rules such as the Regulation on the Supervision and Administration of Medical Devices (Revised in 2024) and the Administrative Measures on the Registration and Record-filing of Medical Devices. Accordingly, AI medical software qualifying as a medical device is categorised under the Guiding Principles for the Classification and Definition of AI-based Medical Software Products as a Class II or Class III medical device, depending on its algorithm maturity and specific functionality. This sectoral framework is further supplemented by specialised technical guidelines, such as the Guiding Principles for Registration Review of AI-based Medical Devices and the Key Review Points for Deep Learning-Assisted Decision-Making Medical Device Software.
Likewise, rules of general application governing generative AI (GenAI), algorithms, cybersecurity and data protection apply to the corresponding issues, including the Interim Measures for the Administration of Generative Artificial Intelligence Services (the “GenAI Measures”), the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services (the “Algorithm Provisions”), the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law. Aside from general data and privacy protection requirements, the following compliance action items are especially worthy of attention from companies operating healthcare AI.
Echoing the legislative landscape, there is no single regulatory authority in China responsible for supervising healthcare AI. Instead, several authorities exercise regulatory responsibilities within the scope of their respective duties, as follows.
Upcoming legislation and enforcement trends
Several critical legislative and regulatory initiatives moving forward in China are likely to define how healthcare AI is developed and deployed.
MIIT and the NMPA run a fast-track programme that gives selected firms early regulatory and technical guidance to shorten the path to market for AI medical devices, while the National Data Administration promotes “regulatory sandboxes” so that companies can lawfully access clinical data. Locally, Beijing has launched the country’s first “AI data training base and pilot zone”, integrating curated datasets and compliance tools into a one-stop sandbox for LLM developers; similar pilots are under way in Shanghai and Shenzhen. Beijing’s 2025–27 AI+Healthcare Action Plan adds expedited reviews, priority approvals and extra funding, aiming to create a globally influential, end-to-end innovation ecosystem by 2027.
Administrative penalties have been levied for the use of unregistered AI-based medical software and for health data breaches, although there have been no publicly reported cases of regulatory intervention, warnings or product recalls specifically targeting healthcare AI. Nevertheless, the CAC has stepped up its enforcement of AI regulations since 2025. For example, in April 2025 the CAC launched a three-month action plan titled “Clear and Bright Crackdown on AI Technology Abuse”, which is being implemented in two phases. This enforcement campaign suggests that sweeping, proactive enforcement actions and penalties can be anticipated in the coming months.
24/F, HKRI Centre Two
HKRI Taikoo Hui
288 Shi Men Yi Road
Shanghai 200041
China
+86 21 2208 1166
+86 21 5298 5599
email@fangdalaw.com
www.fangdalaw.com