Artificial Intelligence 2025

Last Updated May 22, 2025

China

Law and Practice

Authors



King & Wood Mallesons (KWM) is an international law firm headquartered in Asia with a global network of 27 international offices. KWM’s cybersecurity team is one of the first legal service teams to provide professional services concerning cybersecurity, data compliance, and algorithm governance in China; it consists of more than ten lawyers with solid interdisciplinary backgrounds, located in Beijing, Shanghai and Shenzhen, while further specialisms are found within KWM’s global network. The team has expertise in assisting clients with responding to cybersecurity inspections and network emergencies, establishing network information compliance systems, self-assessments, algorithm registration and other related matters. The team is a member of the Chinese Association for Artificial Intelligence. The team has published multiple papers and articles in recent years, including the Law and Practice chapters and Trends and Developments articles in the Chambers Artificial Intelligence 2022-2024 Global Practice Guides.

China has adopted a comprehensive approach to regulating artificial intelligence (AI) by enacting various laws and regulations. These regulations address AI-related issues from diverse angles, encompassing data privacy, network security, algorithms, and ethical considerations. The following section provides a breakdown of this regulatory framework.

  • Data:
    1. Data Security Law of the People’s Republic of China (DSL);
    2. Personal Information Protection Law of the People’s Republic of China (PIPL); and
    3. Regulation on Network Data Security Management (NDSMR).
  • Network security:
    1. Cybersecurity Law of the People’s Republic of China (CSL).
  • Algorithms:
    1. Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services (CAC Algorithm Recommendation Rules);
    2. Provisions on the Administration of Deep Synthesis of Internet Information Services (CAC Deep Synthesis Rules);
    3. Interim Measures for the Administration of Generative AI Services (AIGC Measures); and
    4. Notice on Promulgation of the Measures for Labelling of AI-Generated Synthetic Content (AIGC Labelling Measures).
  • Ethics:
    1. Measures for Scientific and Technological Ethics Review (for Trial Implementation) (Ethics Review Measures).

Under the three foundational laws, namely, the DSL, PIPL, and CSL, the State Council, the Cyberspace Administration of China (CAC) and other authorities responsible for cybersecurity and data protection within the scope of their respective duties are tasked with developing and enforcing specific regulations. Specifically, the CAC has issued the four AI-specific regulations as set out above. Additionally, the Ministry of Science and Technology (MOST) and relevant authorities have promulgated the Ethics Review Measures, which are designed to set out the basic rules and principles for conducting scientific research, technology development, and other scientific and technological activities.

Apart from general cybersecurity and data protection laws, laws and regulations of other legal sectors also apply to AI if the application of AI involves specific issues regulated in these other legal sectors, including but not limited to tort law, consumer protection law, antitrust law and criminal law. Additionally, there are also many regulations and guidance related to algorithm governance focusing on specific industry fields such as e-commerce and healthcare; eg, the E-Commerce Law and the Guidelines for Registration Review of AI Medical Devices.

  • Manufacturing: AI is revolutionising the manufacturing industry in China through industrial internet and automated manufacturing systems. These technologies enhance efficiency, reduce production costs, and improve product quality by optimising production lines, predictive maintenance, and quality control using machine vision.
  • Agriculture: The applications of AI in agriculture cover intelligent agricultural machinery and automated farming, crop management and optimisation, as well as the intelligentisation of animal husbandry. The use of AI in agriculture can change the traditional ways of agricultural production, boost the efficiency of agricultural production, reduce the waste of resources, and ensure the quality and safety of agricultural products.
  • E-Commerce: AI applications in e-commerce include personalised recommendations, customer service chatbots, and demand forecasting. These innovations improve user experience, increase sales, and optimise inventory management.
  • Finance: In the financial sector, AI is used for credit scoring, fraud detection, and automated trading systems. Machine learning models analyse vast amounts of data to assess creditworthiness and detect fraudulent activities, thereby reducing risks and improving operational efficiency.
  • Healthcare: AI is transforming healthcare with applications in medical imaging, diagnostics, and personalised treatment plans. Machine learning algorithms can assist in identifying diseases, suggesting treatments, and predicting patient outcomes, leading to improved patient care. Collaborations between healthcare providers and AI developers are aimed at creating integrated health management solutions.
  • Transportation: Autonomous vehicles and smart traffic management systems are being developed and tested in China. AI helps in optimising traffic flow, reducing congestion, and enhancing road safety. Technology companies, automotive manufacturers, and government entities are working together on autonomous vehicle projects. China currently aims to realise an L4 autonomous driving network by 2025.
  • Legal Services: AI is transforming China’s legal industry. Currently, vertical AI tools in the legal field are mainly used in areas such as legal research, similar case retrieval, document drafting, contract review, contract drafting, and assisted reading. Legal practitioners can use general-purpose AI tools to generate simple documents, extract information from materials to aid reading, and complete preliminary contract review tasks by “issuing commands + uploading files.”

The Chinese government has been actively involved in promoting the adoption and advancement of AI for industry use through a variety of investment strategies, policies, and incentives.

  • Policy Formulation and Strategic Planning: The government has released several strategic documents, including the New Generation AI Development Plan, which emphasises the importance of AI in various sectors, such as manufacturing, agriculture, logistics, finance, and healthcare, and sets clear goals for the development of the AI industry.
  • Investment in AI Infrastructure: The government has made significant investments in building AI infrastructure, such as public computing centres and data platforms. These investments are designed to provide companies with the necessary resources to develop and deploy AI applications, thereby fostering innovation and industry adoption.
  • Incentives for Talent Attraction and Development: To address the talent shortage in AI, the government has implemented various measures to attract and retain AI experts from overseas and to cultivate domestic talent, such as supporting education and training programmes to develop a skilled local workforce.
  • Promotion of AI Applications in Industry: The government promotes the integration of AI in various industries through initiatives like the “AI+” action plan. This plan aims to integrate AI with traditional industries to enhance efficiency, reduce costs, and drive innovation. Incentives for industries that adopt AI include support for the establishment of AI innovation zones.
  • AI Special Subsidies: The government is increasing the supply of inclusive AI service resources and is subsidising enterprises to use AI service resources such as computing power, models, and corpora at a low cost. For example, the Shenzhen municipal government has proposed to issue up to CNY500 million of “training vouchers” annually to reduce the R&D and training costs of AI models, and up to CNY100 million of “model vouchers” annually to reduce the application costs of AI models.

China is the first country in the world to promulgate a law regulating generative AI; ie, the AIGC Measures. Notably, compared with the previously released draft of the AIGC Measures, the final version provides more flexibility and feasibility for relevant entities to fulfil generative AI-related compliance obligations. For example, on the point of the “authenticity, accuracy, objectivity, and diversity” of training data, the final AIGC Measures ease the obligations of generative AI service providers. Instead of requiring them to “ensure” the quality of the data, the measures now call for “improving” the quality of training data.

Currently, in addition to the three foundational laws mentioned in 1.1 General Legal Background, AI-specific legislation in China mainly includes the following.

Information Content Management

In December 2021, the CAC issued the CAC Algorithm Recommendation Rules, focusing on managing algorithmic discriminatory decision-making. The CAC Algorithm Recommendation Rules mark the CAC’s first attempt to regulate the use of algorithms: internet information service providers are required to use algorithms in a way that respects social morality and ethics, and are prohibited from setting up any algorithm model that could induce user addiction or excessive consumption.

In November 2022, the CAC issued the CAC Deep Synthesis Rules, regulating the provision of deep synthesis services and technologies. For example, deep synthesis service providers are required to take technical measures to add labels alerting users that content has been generated via deep synthesis technologies; such labels must not affect users’ use of the information generated or edited using their services.

In July 2023, the CAC issued the AIGC Measures, which put forward basic and general compliance requirements for the application of generative AI in specific scenarios. For example, providers of generative AI should bear the responsibility of producers of internet information content. On the other hand, the AIGC Measures also reserve a certain space for relevant organisations to use generative AI services to engage in specific activities in special fields such as news publishing, film and television production, etc.

In March 2025, the CAC issued the AIGC Labelling Measures, requiring that the labelling of AI-generated synthetic content include both explicit and implicit labels. The AIGC Labelling Measures provide a more executable guide for the implementation of the labelling obligations of all responsible entities and establish a clearer legal liability system. Moreover, the labels enable users to intuitively identify whether content is AI-generated, helping them avoid the negative impacts of AI-generated content.

Ethical Considerations

In September 2023, the MOST issued the Ethics Review Measures, which clarify that units engaged in scientific and technological activities, including AI, whose research content involves sensitive areas of scientific and technological ethics, should establish a science and technology ethics (review) committee.

Several AI-specific directives are described below.

In January 2021, the National Information Security Standardisation Technical Committee (TC260) issued the Cybersecurity Standard Practice Guide – AI Ethical Security Risk Prevention Guidelines, which addresses the ethical security risks that may arise from AI and provides normative guidelines for the safe conduct of AI research and development, design, manufacturing, deployment, application and other related activities. It applies to relevant organisations and individuals carrying out such activities.

In March 2024, the TC260 issued a guideline document on the safe development of generative AI services, namely the Basic Requirements for Security of Generative AI Services. The guideline document refines the relevant compliance requirements of the AIGC Measures in terms of enforcement rules, such as the legality of data sources and content security, and provides an effective path for generative AI service providers to conduct security assessments in practice.

In February 2025, the TC260 issued the Cybersecurity Technology – Labelling Method for Content Generated by AI, which was launched in tandem with the AIGC Labelling Measures. Formulated and implemented as a mandatory national standard, this method sets forth detailed requirements for content labelling methods for providers of AI-generated synthetic content services and internet information dissemination services.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

See 3.2 Jurisdictional Law.

China has been actively working on AI-specific legislation and regulations to govern the development and application of AI technologies. In addition to the promulgated laws and regulations, comprehensive AI legislation has been included in the State Council’s 2024 legislative work plan. China’s legislative efforts seek to create a supportive environment for AI that aligns with societal values and legal norms.

In addition, in March 2025, the National People’s Congress (NPC) explicitly included the revision of the CSL in its legislative plan for the year. The third session of the 14th NPC in 2025 pointed out that future efforts should focus on strengthening legislative research in emerging fields such as AI, the digital economy, and big data, and on initiating the clean-up of regulations, rules, and other normative documents.

On the point of deep synthesis, in 2022, it was held in a case that enterprises shall not use information technology like deep synthesis to infringe on the portrait rights of others. In 2023, the defendant in a criminal case was held criminally liable for using deep synthesis technology to generate and disseminate illegal videos for profit, receiving a sentence of over one year in prison.

In addition, a case related to virtual humans generated by AI (the virtual human case) specified that enterprises using AI technology to provide services must not infringe on the legitimate rights and interests of others, such as personality rights. In this case, the respondent provided services that enabled users to engage in virtual emotional interactions, such as “intimate” conversations with AI-generated virtual images of celebrities, and was held liable for the infringement.

For more judicial decisions on intellectual property rights related to generative AI, see 15.4 AI-Generated Works of Art and Works of Authorship.

In 2024, China adjudicated its first copyright infringement case involving AI-generated video content – the case of the “Du Jia” AI creation tool’s text-to-video generation infringing the copyright of “Joy of Life” (Qing Yu Nian). The court ruled that the internet company’s “AI One-Click Video Production” tool directly infringed copyright by distributing unauthorised clips from a popular TV series. The defendant was ordered to pay CNY800,000 in damages and cease all infringing activities. The case centred on two key legal questions: whether the AI service provider directly violated the right of communication through information networks, and whether it indirectly induced infringement by failing to implement safeguards. Applying the principle of fault-based liability, the court found the company liable for breaching its duty of care and neglecting to establish effective infringement prevention mechanisms.

In 2024, China adjudicated its first case involving the infringement of the right of communication through information networks by a generative AI platform – the Ultraman case in Hangzhou. The court held that, in determining whether a provider of generative AI services has committed an infringement, different application scenarios and specific alleged infringing acts should be distinguished, and liability should be defined separately and hierarchically for each category. Moreover, the court noted that technology itself is neutral: where users create content in accordance with the platform’s service agreement and respect the intellectual property rights of others, such use will not infringe upon the rights of copyright holders or the public interest.

In China, the CAC is responsible for the overall planning and co-ordination of cybersecurity, personal information (PI) protection and network data security, and has issued a number of regulations concerning the application of AI technology in internet information services, including the AIGC Measures.

There are also many other departments – such as departments in the industrial sector, telecommunications, transportation, finance, natural resources, health, education, science and technology – that undertake to ensure cybersecurity and data protection (including those relevant to AI) in their respective industries and fields. Public security authorities and national security authorities also play an important role in network and data security within their respective remits.

China has introduced several AI-specific directives, primarily through non-binding guidelines, to foster ethical and secure AI development from a general perspective. Key frameworks include:

  • The New Generation AI Governance Principles, issued by the National Governance Professional Committee for the New Generation of AI in 2019, promote “responsible AI” development, emphasising eight key principles: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, safety and controllability, shared responsibility, openness and collaboration, and agile governance. They target general AI applications but lack enforcement mechanisms.
  • The Ethical Norms for New Generation AI, issued by the National Governance Professional Committee for the New Generation of AI in 2021, propose six fundamental ethical requirements: enhancing human well-being, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. Additionally, the norms outline 18 specific ethical requirements for various AI-related activities, including governance, research and development, supply, and use. The norms encourage ethical AI but are not legally binding.
  • The Generative AI Industry Self-Discipline Initiative, a non-binding guideline issued by China’s AI Security Governance Committee in 2024, urges adherence to principles of secure and compliant data and algorithm practices, a healthy content ecosystem, ethical and value-aligned development, and industry collaboration. This voluntary measure complements state regulations, reflecting China’s dual approach of industry self-governance and government oversight in AI development.

In 2021, the State Administration for Market Regulation (SAMR) imposed penalties on Alibaba on the grounds that Alibaba’s use of data, algorithms and other technologies had restricted competition in the market for e-tailing platform services within China. The fine totalled CNY18.228 billion, which included a fine for the misuse of data. Subsequently, in order to ensure the reasonable use of algorithm recommendation technology, the CAC published the CAC Algorithm Recommendation Rules, which provide that algorithm recommendation service providers shall not engage in monopolistic conduct or unfair competition by taking advantage of algorithms; AI and algorithm enforcement activities, as well as the relevant legislation in China, are all aimed at safeguarding the legitimate rights and interests of users. On 12 November 2024, the CAC, along with three other regulatory departments, launched a special campaign titled “Qing Lang – Governance of Typical Algorithmic Issues on Online Platforms”. This comprehensive initiative aims to rectify prominent algorithmic problems that infringe upon users’ legitimate rights and interests.

In addition, regulatory authorities also pay attention to issues such as domestic entities introducing overseas generative AI services without due process and companies using customer PI for algorithm training without authorisation. In the first half of 2024, Chongqing’s local CAC rigorously investigated various illegal online activities, taking enforcement actions in accordance with laws and regulations, such as shutting down 142 non-compliant websites, conducting rectification interviews with 101 platforms, closing 29 infringing accounts, removing 21 mobile applications from app stores, and initiating 11 administrative penalty cases.

The State Standardisation Administration (SSA) is responsible for approving the release of national standards, and TC260 is one of the most important standard-setting bodies on AI technology. So far, TC260 has issued a series of recommended national standards and practical guidelines containing provisions regarding the use of AI-related technology.

In summary, the SSA has released standards including, without limitation:

  • Information Technology – AI – Terminology (GB/T 41867-2022), which defines common terms used in the field of information technology related to AI;
  • AI – Deep Learning Algorithms Evaluation (GB/T 45225-2025), which defines the evaluation framework for AI deep learning algorithms, including assessment methodologies; it serves as a guideline for developers, users, and third-party organisations to evaluate deep learning algorithms and their trained models; and
  • Cybersecurity Technology – Labelling Method for Content Generated by AI (GB 45438-2025), which outlines the labelling methods for AI-generated synthetic content, applicable to both generative service providers and content dissemination service providers when conducting labelling activities for such content.

Overall, the TC260 has released standards including but not limited to: Information Security Technology – Security Specification and Assessment Methods for Machine Learning Algorithms (GB/T 42888-2023), which specifies the security requirements and verification methods of machine learning algorithms during most of their life cycle. For other standards issued by the TC260, see 3.3 Jurisdictional Directives.

In addition, there are standard-setting bodies to formulate AI-related standards in specific industries. The People’s Bank of China (PBOC), along with the Financial Standardisation Technical Committee of China, plays a leading role in writing AI-related standards in the financial field. Specifically, the PBOC also issued the Evaluation Specification of AI Algorithm in Financial Application in 2021, providing AI algorithm evaluation methods in terms of security, interpretability, accuracy and performance.

In automated driving, the recommended national standard Taxonomy of Driving Automation for Vehicles sets forth six classes of automated driving (from L0 to L5) and the respective technical requirements and roles of the automated systems at each level. The TC260 released the Security Guidelines for Processing Vehicle Collected Data, which specify the security requirements for automobile manufacturers’ data processing activities.

In the healthcare sector, the General Office of the National Health Commission and other departments issued the Reference Guide for AI Application Scenarios in the Health Industry on 24 November 2024. The document aims to promote the innovative development of “AI + Healthcare” applications within the health industry. Specifically, it provides clear guidance on the application of AI in traditional Chinese medicine (TCM) management services and the TCM industry, outlining practical directions for implementation.

Countries may conclude international treaties that contain international standards for AI regulation or AI technology. Any AI-related international treaties that China concludes in the future will generally come into force in China’s territory by way of transposition or direct application, so AI-related international standards generally do not conflict with China’s laws. For example, on 1 July 2024, the 78th United Nations General Assembly unanimously adopted a resolution on strengthening international co-operation in AI capacity-building, proposed by China, with more than 140 countries co-sponsoring it. The resolution emphasises that the development of AI should adhere to the principles of being people-centred, beneficial, and aimed at promoting human well-being. It encourages international co-operation and concrete actions to help all countries, especially developing nations, enhance their AI capacity-building.

In recent years, China has been adapting to internet development trends and widely applying digital technologies such as big data, cloud computing and AI to the process of government administration in accordance with the law, in order to integrate information technology and the rule of law in government.

For example, in smart city applications, big data analysis carried out with the help of AI is used to determine traffic control measures in a given city. Smart city applications can design and promote smart transport strategies, with data analysis providing a clearer picture of traffic conditions, such as potential infractions committed by pedestrians and the range of transportation options accessible to residents. On 6 June 2022, the State Council issued the Guidelines on Strengthening the Development of Digital Government, aiming to adapt to the trends of the new round of scientific and technological revolution and industrial transformation, lead and drive the development of the digital economy and the construction of a digital society, foster a sound digital ecosystem, and accelerate digital transformation.

In April 2023, the Chengdu Railway Transport Intermediate Court held an online hearing in the PI protection dispute between an individual and China Railway Chengdu Bureau Group Co., Ltd. This was the country’s first PI dispute arising from the use of facial recognition technology in public transportation. The court held that the defendant’s processing of facial recognition information met the exemption condition of maintaining public security, and therefore the individual’s separate consent was not required. However, the court noted that the railway company still needed to fulfil its notification obligations in relation to its PI processing activities.

In April 2024, the Beijing Internet Court delivered a first-instance judgment in China’s first-ever case concerning infringement of personality rights through AI-generated voices. The defendant had used the plaintiff’s voice to create an AI-generated version without permission and sold it in its app. The court ruled that AI-processed voices do not automatically lose their connection to a natural person. Even if a voice has weak distinctiveness, as long as the general public or relevant audiences can associate it with the individual based on timbre, intonation, and speech patterns, the AI-modified voice remains identifiable. The defendant’s actions constituted an infringement of the plaintiff’s voice rights, and the court ordered compensation.

It is a common issue for AI operators that they may collect a large amount of data to feed their AI system. Since China’s laws and regulations on data processing have a clear concern for national security, AI companies are also advised to be aware of related legislative requirements.

Critical Information Infrastructure (CII)

The Regulation on Protecting the Security of Critical Information Infrastructure defines CII as network facilities and information systems in important industries and fields that may seriously endanger national security, the national economy and people’s livelihoods, and the public interest in the event of being damaged or losing functionality. CII operators (CIIOs) are required to take protective measures to ensure the security of CII. Furthermore, the CSL imposes data localisation and security assessment requirements on CIIOs’ cross-border transfers of PI and important data.

Important Data

The DSL defines important data as data whose divulgence may directly affect national security, public interests or the legitimate interests of citizens or organisations, and certain rules impose various restrictions on its processing. The DSL contemplates security assessment and reporting requirements for the processing of important data in general.

Cybersecurity Review

On 28 December 2021, the CAC, together with certain other national departments, promulgated the revised Cybersecurity Review Measures, aimed at ensuring the security of the CII supply chain, cybersecurity and data security, and safeguarding national security. The regulation provides that CIIOs that procure network products and services, and internet platform operators engaging in data processing activities, shall be subject to cybersecurity review if their activities affect or may affect national security, and that internet platform operators holding the PI of more than one million users shall apply to the Cybersecurity Review Office for a cybersecurity review before listing abroad.

Cross-Border Data Transfer

Cross-border data transfers must comply with laws including the PIPL, the Measures for Security Assessment of Cross-Border Data Transfers, and the Regulations on Promoting and Regulating Cross-Border Data Flow. Key requirements are as follows: a security assessment is mandatory for transferring important data or PI reaching certain volume thresholds; CIIOs must declare all outbound PI transfers for security assessment; and non-CIIOs with smaller data volumes must either file standard contracts or obtain PI protection certification. Violations may incur fines of up to 5% of annual revenue and corrective orders under the PIPL.
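
As a purely illustrative sketch (not legal advice) of how these compliance paths fit together, the triage below encodes the volume thresholds commonly cited under the 2024 cross-border data flow rules; the thresholds, names and structure are assumptions for illustration and should be verified against the current text.

```python
# Illustrative triage only, not legal advice: maps a proposed outbound
# transfer onto the three compliance paths described above. The volume
# thresholds are assumptions based on commonly cited figures under the
# 2024 rules and should be re-verified against the current text.
from dataclasses import dataclass

@dataclass
class Transfer:
    is_ciio: bool                  # transferor is a CII operator
    has_important_data: bool       # transfer includes important data
    pi_individuals: int            # non-sensitive PI subjects since 1 January
    sensitive_pi_individuals: int  # sensitive PI subjects since 1 January

def compliance_path(t: Transfer) -> str:
    # CIIOs must declare any outbound PI transfer for security assessment,
    # and important data triggers a security assessment for everyone.
    if t.is_ciio or t.has_important_data:
        return "CAC security assessment"
    # Assumed non-CIIO volume triggers for a mandatory security assessment.
    if t.pi_individuals > 1_000_000 or t.sensitive_pi_individuals > 10_000:
        return "CAC security assessment"
    # Assumed mid-volume band: standard contract filing or certification.
    if t.pi_individuals >= 100_000 or t.sensitive_pi_individuals > 0:
        return "standard contract filing or PI protection certification"
    return "no filing required (other PIPL obligations still apply)"

print(compliance_path(Transfer(False, False, 250_000, 0)))
# -> standard contract filing or PI protection certification
```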

Generative AI continues to raise legal issues related to PI protection, intellectual property rights and the means of governing generative AI. In order to stimulate the standardised application of generative AI, the CAC issued the AIGC Measures to specify the obligations of the responsible entities and set corresponding administrative penalties. In addition, the AIGC Measures clarify that intellectual property rights must not be infringed upon during use of generative AI.

In terms of copyright, at the training stage of algorithms and models, AI training data may present infringement liability issues, while at the content generation stage, whether the output falls within the scope of copyright protection remains highly controversial. In the absence of clear legal and regulatory guidance, intellectual property disputes over generative AI services have gradually clarified the copyright ownership rules for AI-generated products through judicial adjudication. In December 2023, the Beijing Internet Court issued a judgment in the first copyright case concerning an AI-generated picture (based on a text prompt). The court decided that, considering that copyright law does not place excessive requirements on the originality of a work, the pictures generated by the plaintiff through the use of generative AI should be recognised as works and thus enjoy copyright protection. However, the relevant rules remain largely uncertain: with the further iteration and in-depth application of generative AI technology, the judicial rules for determining the ownership of AI-generated output may change in line with the latest practice.

In June 2024, the Beijing Internet Court conducted online hearings in four copyright infringement cases filed by illustrators against the developer/operator of an AI painting software. The plaintiffs alleged that the AI software developer had unlawfully used their original artworks as training data without authorisation, thereby infringing their legitimate rights and interests. The cases remain under judicial review.

In China, the rights of data subjects, including the right to rectification and deletion, are addressed under the PIPL and other relevant legal frameworks. The PIPL also provides the principles of purpose limitation and data minimisation.

Right to Rectification

PI subjects have the right to request the correction or completion of their PI if it is found to be inaccurate or incomplete. This is in line with the principle of ensuring that PI is accurate and up to date. In principle, if an AI-generated output contains false factual claims about an individual, the individual may exercise their right to rectification to have the incorrect information corrected. The PIPL (Article 46) explicitly grants PI subjects this right and requires PI handlers to take necessary actions to address such requests.

Right to Deletion

The right to deletion allows individuals to request the deletion of their PI under certain conditions. According to the PIPL (Article 47), PI handlers are required to delete PI in cases such as when the processing purpose has been achieved, the PI is no longer necessary for the original purpose, or the individual withdraws the consent on which the processing was based. However, deletion of the entire AI model is not explicitly required by law and would depend on the specific circumstances, such as whether the AI model developer may rely on the reasonable use of publicly available PI.

Right to Withdraw Consent

Under the PIPL, when PI is processed based on individual consent, individuals are granted the right to withdraw such consent, and the PIPL requires PI handlers to delete the relevant PI upon withdrawal. To comply effectively with these requirements, AI model operators should prominently display withdrawal options with intuitive operation paths, accompanied by clear guidance.

Purpose Limitation and Data Minimisation

Purpose limitation and data minimisation are fundamental principles of China’s data protection regime. The PIPL mandates that PI be collected and used only for specific, explicit, and legitimate purposes (Article 6). The principle of purpose limitation requires that any further processing of PI should not be incompatible with the original purpose for which it was collected. In practice, it remains highly controversial to use PI that was collected for business purposes to train AI models.

Data minimisation, while not explicitly mentioned in the PIPL, is implicitly supported by the requirement that PI collection should be limited to what is necessary to achieve the stated purpose (Article 6). This means that PI handlers should only collect the minimum amount of PI required for the intended purpose, avoiding excessive or unnecessary data collection.

Uses of AI in the Practice of Law

AI technology has a wide range of applications in the judicial field, ranging from transactional support work, such as information backfilling, intelligent cataloguing and document error correction, to substantive assistance. While AI is not yet authorised to issue judicial rulings directly, it has already been employed to assist in factual clarification, case analysis, and even the formulation of adjudicative recommendations.

On 29 March 2025, the Fa Xin Legal Foundation Model was officially launched in Beijing. Developed under the guidance of the Supreme People’s Court of the PRC (SPC), this large-scale AI model is applicable to judicial proceedings, public security mediation, arbitration, enterprise risk management, and public legal education.

The Chinese 2024 Pioneering AI Case Collection includes a legal large model named Hua Yuan. The Hua Yuan Legal Large Model has the following four core capabilities in the judicial field: legal inquiry and response; case analysis; reasoning and decision-making; and legal document generation.

Ethical Considerations

In December 2021, the CAC issued the CAC Algorithm Recommendation Rules to provide special management regulations on algorithmic recommendation technology. Internet information service providers are required to use algorithms in a way that respects social morality and ethics, and are prohibited from setting up any algorithm model that could induce user addiction or excessive consumption.

In 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Governance over Ethics in Science and Technology. These opinions call for stringent investigation of unethical practices in science and technology, intending to enhance the management of scientific and technological ethics and improve ethical governance capabilities.

In 2021, China also issued the Cybersecurity Standard Practice Guide – AI Ethical Security Risk Prevention Guidelines, which provides guidelines for organisations or individuals to carry out AI R&D, design and manufacturing, deployment and application, and other related activities, in an ethical manner.

In addition, the AIGC Measures emphasised that in processes including algorithm design, selection of training data, model generation and model optimisation and service provision, measures should be taken to prevent discrimination on the basis of race, ethnicity, religious belief, nationality, region, sex, age or profession.

Last but not least, in the Opinions on Regulating and Strengthening the Applications of AI in the Judicial Fields by the SPC, it has been noted that the courts shall adopt ethical reviews, compliance reviews and security assessments to prevent and mitigate cybersecurity risks in judicial AI applications through mechanisms such as the Judicial AI Ethics Council.

From a tort law perspective, the owner of AI-enabled technology that harms the interest of others should be directly liable. However, the application of AI technology usually involves a number of roles, such as the AI developer, the product/service manufacturer, the seller and even the user. Thus, careful consideration must be given when defining who the “owner”, and consequently the liable party, truly is.

Currently, the assignment of liability in AI scenarios depends on the role each party plays in the provision of the AI service; for example, a provider of API access may only bear after-the-fact obligations (eg, deleting the relevant content upon a copyright holder’s request). In addition, the AIGC Measures stipulate several obligations for AI providers, such as content governance, the adoption of security measures, and ensuring the AI model has passed the record-filing process. A provider that exercises its duty of reasonable scrutiny may be entitled to claim exemption from, or mitigation of, liability.

Allocation of Liability

For example, the AIGC Measures stipulate that providers of AI services shall use data and foundation models from lawful sources. Besides verifying the lawfulness of a data source itself, a provider of AI services could require the data source provider to guarantee that the data does not infringe on others’ rights and that further development of the model does not materially change the security measures of the original model.

Assigning responsibility in AI scenarios should involve careful deliberation and a clear definition of the duty of care expected from different parties. This consideration should take into account the state of the art and objective factors that might affect the computing process of the AI technology.

Role of Insurance and Contract

While AI has been widely adopted across industries, China currently lacks specific legislative provisions governing AI-related liability. In practice, insurance and contractual arrangements are commonly used as interim measures to address liability allocation in disputes. However, these mechanisms have inherent limitations and cannot serve as primary solutions for liability determination. Moreover, when implementing contractual approaches, safeguards must be established to prevent dominant parties from unfairly transferring liability through one-sided agreements.

The current legislation does not provide clear provisions on the imposition and assignment of liability; further clarification of relevant laws and regulations is awaited.

For instance, Article 155 of the Road Traffic Safety Law (Revised Draft), published by the Ministry of Public Security (MOPC) in April 2021, provides special provisions for automated vehicles. In terms of the assignment of responsibility, it states that “in the event of road traffic safety violations or accidents, the responsibility of the driver and the development unit of the automated driving system shall be determined in accordance with the law. The liability for damages shall be determined in accordance with the relevant laws and regulations. If a crime is committed, criminal liability shall be investigated in accordance with the law.”

The CAC Algorithm Recommendation Rules, the CAC Deep Synthesis Rules, the AIGC Measures and the Provisions on the Ecological Governance of Network Information Contents all impose different obligations and liabilities on various market players, with the main focus on the liability of service providers. For example, the CAC Deep Synthesis Rules provide different obligations for technical supporters and service providers; eg, service providers must carry out a security assessment if the service to be launched can influence public opinion or mobilise the public, whereas technical supporters do not bear such obligations.

The CAC Algorithm Recommendation Rules and the AIGC Measures address the issues of algorithm bias and discrimination. Service providers are required to take effective measures to prevent discrimination in terms of nationality, religion, country, region, gender, occupation, health, etc, in the process of algorithm design, training data selection, model generation and optimisation, and service provision. The Basic Requirements for Security of Generative AI Services, issued on 29 February 2024 (see 3.3 Jurisdictional Directives), also require that (i) the diversity of the language and type of the corpus be increased, and (ii) service providers source the corpus from different origins, drawing reasonably on both domestic and foreign sources. Anti-discrimination mechanisms are also encouraged to further prevent algorithmic bias.

From a technical perspective, algorithms may be biased due to a number of reasons. The accuracy of an algorithm may be affected by the data used to train it. Data that lacks representativeness or, in essence, reflects certain inequalities may result in biases in the algorithms. The algorithm may also cause bias due to the cognitive deficits/bias of the R&D personnel. Besides, due to the inability to recognise and filter bias in human activities, algorithms may indiscriminately acquire human ethical preferences during human-computer interaction, increasing the risk of bias in the output results.

For these reasons, algorithmic bias can cause societal harm, such as the infringement of consumer rights. A typical example is big data-enabled price discrimination, where consumers are charged significantly different prices for the same goods.

To regulate this issue, on the one hand, legislation specifically addresses the problem of algorithmic discrimination. For example, the Implementing Regulation for the Law of the People’s Republic of China on the Protection of Consumer Rights and Interests (CPL) stipulates that business operators must not, without the consumer’s knowledge, set different prices or charging criteria for the same goods or services under the same transaction conditions. Violation of the foregoing provision may result in civil liability, an administrative fine, suspension of operations or revocation of a business licence.

Meanwhile, government departments also continuously carry out law enforcement activities to tackle algorithmic discrimination. For example, on 12 November 2024, the CAC, the Ministry of Industry and Information Technology (MIIT), the MOPC, and the SAMR jointly issued a notice on enforcement activities for the governance of typical algorithm-related issues on online platforms. Algorithmic discrimination is identified as a key focus of these enforcement activities.

Under the PIPL, facial recognition and biometric information are recognised as sensitive personal information (SPI). Separate consent is needed when processing SPI, unless another legal basis for the processing exists; for example, the railroad department may collect people’s facial images at the train station for the sake of public security. Further, the processing of such information shall be only for specific purposes and with sufficient necessity. In addition, the PIPL requires that the data handler inform the PI subject of the necessity of processing their SPI and the impact on their rights and interests.

Recent legislative developments and highlights:

  • Cybersecurity Standards Practice Guide – Security Requirements for PI Protection in Face Recognition Payment Scenarios (issued on 26 January 2025): This focuses more on the scenarios where facial data is being collected and the relevant security measures.
  • Administrative Provisions on the Application Security of Facial Recognition Technology (issued on 13 March 2025): Any user of facial recognition technology that stores the facial information of more than 100,000 individuals must file a record with the local cyberspace authority.

This gives rise to concerns for intelligent shopping malls and the smart retail industry, where facial characteristics and body movements of consumers are processed for purposes beyond security, such as recognising VIP members and identifying consumer preferences to provide personalised recommendations. Under the PIPL, companies must consider the necessity of such commercial processing and find feasible ways to obtain effective “separate consent”.

In the automobile industry, images and videos containing pedestrians are usually collected by cameras installed on cars, and videos and images containing facial information are considered important data. Processors that have difficulty obtaining consent for collecting PI from outside the vehicle for the purpose of ensuring driving safety are expected to anonymise such information, including by deleting images or videos that can identify individuals or by partially blurring the contours of facial information. Companies failing to perform their obligations under the PIPL and related regulations also face administrative penalties and even criminal liability (ie, for infringing citizens’ PI).

Firstly, automated decision-making using PI shall be subject to transparency requirements; processors are required to ensure the fairness and impartiality of the decision, and shall not give unreasonable differential treatment to individuals in terms of trading price or other trading conditions.

Where information feeds or commercial marketing to individuals is carried out by means of automated decision-making, options not specific to individuals’ characteristics shall be provided simultaneously, or convenient opt-out channels shall be available. Individuals whose interests are materially impacted by the automated decision are entitled to request the relevant service provider/processor to provide explanations and to refuse to be subjected to decisions solely by automated means.
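
As a minimal, purely hypothetical sketch of the opt-out requirement described above (the function and field names are invented, not drawn from any regulation or library), a service might branch between personalised and non-personalised paths as follows.

```python
# Hypothetical sketch of the opt-out rule: when a user declines personalised
# recommendation, fall back to options not tailored to their characteristics.
# All names here are invented for illustration.
def recommend(user_profile: dict | None, catalogue: list[str],
              personalised_opt_in: bool) -> list[str]:
    if personalised_opt_in and user_profile is not None:
        # Personalised path: rank items matching inferred interests first
        # (placeholder ranking logic).
        interests = set(user_profile.get("interests", []))
        return sorted(catalogue, key=lambda item: item not in interests)
    # Opt-out path: a generic ranking not specific to the individual
    # (here, simple alphabetical order stands in for a popularity list).
    return sorted(catalogue)

print(recommend({"interests": ["finance"]}, ["travel", "finance"], False))
# -> ['finance', 'travel'] (generic alphabetical order, not interest-based)
```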

Risks of undisclosed automated decision-making technology include:

  • Misleading Individuals: Individuals may not expect their PI to be used in this way or understand how the process works, preventing them from taking remedial measures when significant adverse effects arise.
  • Impact on Ethical and Social Values: The use of undisclosed automated decision-making technology can affect the ethical and social values and norms of the stakeholders involved.

In China, chatbots are usually deployed by e-commerce platforms or online sellers to provide consulting or aftersales services for consumers. PIPL and the CPL typically govern the use of chatbots. Furthermore, chatbots providing (personalised) content recommendations may also need to comply with regulations on algorithm recommendations, etc.

There are also transparency requirements for automated decision-making (see 11.3 Automated Decision-Making). Users of internet information services involving AI technology are also entitled to be informed, in a conspicuous manner, that algorithm-recommended services are being provided. Relevant service providers are required to appropriately publish the basic principles, purposes and main mechanisms of algorithm-recommended services (see 11.1 Algorithmic Bias). The labelling obligation is also an embodiment of the transparency requirements. The AIGC Labelling Measures (effective from 1 September 2025) stipulate that providers of AI services, dissemination platforms, application distribution platforms, and users who publish AI-generated synthetic content via online information dissemination services are all required to fulfil labelling obligations. This means that AI-generated synthetic content must be marked with an identifier distinguishing it from other types of content.
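
As a hypothetical sketch of the explicit/implicit labelling distinction described above – the metadata field names and label wording below are invented for illustration, while the actual label content and placement are prescribed by the AIGC Labelling Measures and GB 45438-2025:

```python
# Hypothetical sketch of the two label types: an explicit label is visible to
# the user, an implicit label travels in machine-readable metadata. The field
# names below are invented; GB 45438-2025 prescribes the actual requirements.
import json

EXPLICIT_NOTICE = "AI-generated content"  # assumed user-visible wording

def label_output(text: str, provider_id: str, content_id: str) -> dict:
    """Attach explicit and implicit labels to a piece of AI output."""
    return {
        # Explicit label: displayed to the end user alongside the content.
        "display_text": f"[{EXPLICIT_NOTICE}] {text}",
        # Implicit label: metadata accompanying the content for traceability.
        "metadata": json.dumps({
            "aigc": True,                     # flags content as AI-generated
            "service_provider": provider_id,  # who generated it
            "content_id": content_id,         # traceability identifier
        }),
    }

print(label_output("Sample answer", "provider-001", "c-42")["display_text"])
# -> [AI-generated content] Sample answer
```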

To avoid disputes and infringements, written agreements should cover crucial matters such as the ownership of intellectual property rights for the input content, ensuring that data sources do not infringe upon the rights and interests of others, and clarifying whether it is permitted to use the relevant content for data training. These agreements should also address liabilities related to the authenticity, legality, and completeness of the output content, as well as the division of responsibilities among the involved parties.

Service providers may also consider giving advance notices and disclaimers to customers, indicating that the output contents are not professional opinions and are based on public information. They should advise customers to seek professional opinions when necessary to avoid potential liabilities.

Common adoption of AI technology in HR practice includes automated assessments, digital interviews and data analytics to screen CVs and candidates.

This technology offers benefits such as the ability to quickly organise candidate CVs for employers, significantly reducing the time required to review applications. However, it also carries potential harm, such as biased hiring practices.

Compliance requirements include:

  • PI protection;
  • transparency; and
  • fairness and rationality of the decision-making process.

Benefits:

  • promote efficiency;
  • reduce mistakes;
  • provide personalised services; and
  • increase the quality of HR management.

Potential harm:

  • incomplete or biased data may harm, or even infringe, employees’ rights and interests.

Compliance practice:

  • regular review and correction mechanism for the AI technology used for evaluation and monitoring to mitigate the risk of unfair and unreasonable decision-making;
  • human participation in the entire recruitment process; and
  • privacy, ethics and data security for monitoring employees’ work.

AI-powered digital platforms for car services can use customers’ historical travel data and real-time traffic conditions to offer personalised journey plans and recommend the most efficient routes. However, these AI systems may leverage user data – such as location, spending habits, and other behavioural patterns – to implement dynamic pricing or priority dispatching, potentially leading to price discrimination or unequal access to services. Meanwhile, AI-based delivery systems continuously and automatically adjust parameters and route recommendations for delivery drivers, thereby reducing expected delivery times. This may compel delivery drivers to take unsafe measures, which could lead to traffic accidents, directly endangering drivers’ personal rights and the public interest (see 11.1 Algorithmic Bias).

AI is used in various ways in financial services in China. For example, AI is used for credit scoring, fraud detection and customer service. China has updated its regulatory guidelines on the use of AI in financial services, mandating that banks and insurance institutions must ensure transparency and implement risk mitigation measures when deploying AI-driven solutions.

Potential risks:

  • Biases in repurposed data can lead to discriminatory practices, whether intentional or unintentional. For instance, unintentional bias can occur when AI systems are trained on biased data or when AI systems lack transparency and explainability.
  • The uncontrollable risks inherent in AI systems also pose hidden dangers for transaction models, transaction trend prediction and other business applications.

Mitigation measures include:

  • developing alternative mechanisms for exiting AI applications;
  • making contingency plans for security threats and conducting drills;
  • conducting regular audits of their AI systems;
  • ensuring AI systems are transparent and explainable;
  • establishing internal evaluation and algorithm reporting mechanisms by reference to financial algorithm evaluation, algorithm record-filing and other compliance requirements; and
  • improving the internal control mechanisms at the algorithm level based on the dimensional standards of internal evaluation.

To accelerate the responsible adoption of AI in healthcare, Chinese authorities have published the Reference Guide for AI Application Scenarios in the Health Industry, covering areas including pharmaceutical regulation and public health services.

The use of AI in healthcare requires the assimilation and evaluation of large amounts of complex healthcare data. Machine learning can be used to predict and make preventive recommendations to assist sports rehabilitation. However, non-objective parameters, insufficient data sources, and inadequate sample sizes may lead to discrimination and bias in the output results.

The integration of AI and IoT technologies enables smarter healthcare equipment management through real-time monitoring, intelligent analytics and predictive maintenance.

Robotic surgery enhances a surgeon’s capabilities through high-precision instrumentation, intelligent navigation systems, sensor technology, real-time imaging feedback and advanced algorithms. However, this heavy reliance on technology introduces significant risks during procedures. A key challenge remains the legal attribution of liability in cases of medical disputes arising from robotic surgery.

Using medical data requires the processing of large amounts of SPI, and the requirements under PIPL for the processing and sharing of SPI may not be fully implemented in practice. For example, the right to deletion of PI is difficult to realise once such PI has been used for data training or machine learning.

AI plays a pivotal role across all aspects of digital healthcare, from assisting with medical consultations and diagnosis to treatment planning, insurance billing, and post-hospitalisation care management.

The strengths of utilising centralised electronic health record systems include improved integration of healthcare resources and enhanced efficiency. The risks include increased vulnerability to cyberattacks and data breaches.

At the national level, China has yet to introduce mandatory legislation governing AI-powered autonomous vehicles. Current legal principles regarding AI applications in autonomous vehicles are primarily reflected in existing regulations such as the DSL and the Several Provisions on Automotive Data Security Management (for Trial Implementation). However, several cities, including Shenzhen, Suzhou and Wuhan, have pioneered local regulatory frameworks. These policies actively encourage AI technology R&D and standard-setting initiatives for autonomous vehicles.

When autonomous vehicles are involved in accidents causing personal injury or property damage, liability allocation remains a matter of tort law. Determining responsibility requires careful analysis of key factors such as the vehicle’s automation level, the nature of the liable party, and the root cause of the accident.

Under the Several Provisions on Automotive Data Security Management (for Trial Implementation), PI involving more than 100,000 individuals is deemed important data and is subject to stricter security measures. In addition, automated driving technology normally requires large amounts of data for model training, and the processing of such data might involve data reflecting economic activities such as vehicle flow and logistics. Therefore, classifying and grading this data is crucial for companies to ensure compliance.

For AI algorithm governance, the CAC Algorithm Recommendation Rules require the classification and grading of algorithms based on their potential impact on public opinion or capacity for social mobilisation. Service providers using such algorithms must file records with the relevant authorities. Enterprises are encouraged to tag data assets according to their classification and grade and to adopt corresponding security measures, such as limiting data access, deciding whether important data or SPI may be uploaded to the cloud, and anonymising facial images of individuals outside the vehicle when processing such data is unnecessary, as illustrated in the sketch below.
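By way of illustration only, the following Python sketch shows one way an enterprise might tag vehicle data assets by grade and apply handling rules before any cloud upload. All names (DataRecord, grade, anonymise_faces) are hypothetical, and the grading logic simply mirrors the thresholds described above rather than any official classification schema.

```python
# Hypothetical sketch: grade automotive data assets and apply handling
# rules before cloud upload. Not an official classification scheme.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataRecord:
    category: str                 # eg "personal_info", "traffic_flow", "cabin_video"
    subject_count: int = 0        # number of individuals the record relates to
    contains_external_faces: bool = False
    payload: bytes = b""

def grade(record: DataRecord) -> str:
    # PI covering more than 100,000 individuals is treated as important
    # data, as is data reflecting economic activities such as vehicle flow.
    if record.category == "personal_info" and record.subject_count > 100_000:
        return "important"
    if record.category in ("traffic_flow", "logistics"):
        return "important"
    return "general"

def anonymise_faces(payload: bytes) -> bytes:
    # Placeholder for an actual face-blurring step applied to imagery of
    # individuals outside the vehicle.
    return payload

def prepare_for_upload(record: DataRecord) -> Optional[DataRecord]:
    # Anonymise out-of-vehicle faces where processing them is unnecessary,
    # and keep important data on-premises rather than uploading it.
    if record.contains_external_faces:
        record.payload = anonymise_faces(record.payload)
    return None if grade(record) == "important" else record
```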

In addition, enterprises developing AI technologies related to scientific research may fall under the scope of “research on the synthesis of new species that has a significant impact on human life and health, value concepts, and the ecological environment”. Such scientific and technological activities are subject to ethical review. If the research involves sensitive fields of science and technology ethics, enterprises engaged in life sciences, medicine, AI, and other related fields must set up a science and technology ethics (review) committee.

China’s State Council has once again included the draft AI Law in its 2024 legislative work plan as a preparatory review item, indicating the proposed legislation may soon be tabled for formal deliberation.

In 2025, China participated in the AI Action Summit in Paris, joining some 60 nations in signing the summit’s declaration on inclusive and sustainable AI. This landmark agreement calls for strengthened international co-ordination on AI governance, laying the groundwork for developing shared standards in the field.

China’s MIIT, the CAC and two other government bodies have jointly issued guidelines prioritising standards development for smart manufacturing. The initiative aims to accelerate the establishment of comprehensive intelligent standards covering the entire industrial chain – from R&D and pilot testing to production, marketing and operations management – to drive the country’s new industrialisation strategy.

Common applications of AI in manufacturing include:

  • smart production, which uses automation chains for order management, vendor/supplier scheduling, product defect and return monitoring, and production forecasting; and
  • AI smart cameras, which are commonly used to detect chemical or gas leaks and trigger emergency plans, protecting both products and personnel.

Common risks include:

  • data security and integration;
  • data use and sharing with different parties; and
  • balancing the need for effective monitoring and supervision of factory operations without intruding on the privacy of employees.

AI is used by consulting firms and judicial authorities primarily for statistical purposes.

Compliance requirements include:

  • ensuring that the technology is reliable and accurate and complies with professional standards;
  • protecting confidential client information; and
  • obtaining explicit and separate consent when necessary.

Furthermore, in the digital advertising sector, publishers are encouraged to clearly label ads created using AI or deep synthesis technologies.

According to the AIGC Measures, AIGC service providers must carry out training data processing activities, such as pre-training and optimisation training, in accordance with the law, and must use data and foundation models from lawful sources. Where intellectual property is involved, they must not infringe the intellectual property rights lawfully enjoyed by others. AI algorithm model developers therefore need to meet intellectual property compliance requirements during the model training stage and respect the copyrighted works of others. A simplified illustration of such provenance screening follows.
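Purely as an illustration of this obligation, the sketch below screens a training corpus so that only records with a documented lawful source and licence enter pre-training. The metadata fields (“source”, “licence”) and the set of acceptable licences are assumptions made for the example, not terms drawn from the AIGC Measures.

```python
# Hypothetical provenance screen for training data; field names and
# licence categories are illustrative only.
ALLOWED_LICENCES = {"owned", "licensed", "public_domain"}

def screen_corpus(records: list) -> tuple:
    """Split records into (usable, quarantined) based on provenance."""
    usable, quarantined = [], []
    for rec in records:
        has_source = bool(rec.get("source"))            # documented origin
        licence_ok = rec.get("licence") in ALLOWED_LICENCES
        (usable if has_source and licence_ok else quarantined).append(rec)
    return usable, quarantined

corpus = [
    {"text": "...", "source": "partner-agreement-42", "licence": "licensed"},
    {"text": "...", "source": "", "licence": "unknown"},   # fails the screen
]
train_set, review_queue = screen_corpus(corpus)
```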

AI tool providers typically define rights allocation for input/output content through standard user agreements. The prevailing models include full user ownership of rights, user ownership with provider licensing rights, and tiered ownership based on subscription levels. Beyond establishing content rights, providers implement additional safeguards such as indemnification clauses, disclaimer provisions, and liability caps to mitigate mass infringement risks.

Regarding the output of AI models, please see the case discussed in 8.1 Specific Issues in Generative AI.

China’s National Intellectual Property Administration has issued clear guidelines stating that only natural persons can be named as inventors in patent applications for AI-related inventions. The directive explicitly prohibits listing AI technologies or other non-human entities as inventors. When multiple inventors are involved, each must be an individual human being.

Under China’s Copyright Law (CL), authorship is strictly limited to natural persons, legal entities or unincorporated organisations. As AI technologies lack legal personality, they cannot qualify as “authors” under the current legal framework.

When AI-enabled technology or algorithms are expressed in the form of computer software, the software code, whether as a whole or as a specific module, can be protected in China under the Regulations on Computer Software Protection.

From a data protection perspective, concepts such as “data resources ownership rights” may offer a viable legal framework for safeguarding datasets used in LLMs. In Data Tang’s lawsuit against Yin Mu for alleged data IP infringement, heard at the Beijing Intellectual Property Court, the court ruled that when companies formally register proprietary rights to such data and demonstrate substantial investment in processing and enhancing its commercial value, they acquire legally protectable competitive interests under China’s Anti-Unfair Competition Law (AUCL).

If the development and use of an algorithm are kept strictly confidential, the algorithm might be protected as a trade secret or technical know-how. According to the relevant announcement of the SPC, courts may treat information on structures, raw materials, components, formulas and other technology-related matters as technical information under the AUCL. There is therefore a legislative basis for protecting AI technologies as technical secrets.

In February 2024, the Guangzhou Internet Court issued a landmark ruling on copyright infringement by generative AI service providers. The court determined that text-to-image platforms must implement measures to prevent generating images substantially similar to copyrighted works, while affirming these providers’ obligation to exercise reasonable due diligence.

In June 2024, the Beijing Internet Court conducted online hearings in four copyright infringement cases filed by illustrators against the developer/operator of an AI painting software. These landmark proceedings represent China’s first copyright infringement cases involving the training of AI image-generation models.

First, it remains unclear whether content created using generative AI constitutes a “work” under the CL. Current judicial practice shows diverging rulings on whether AI-generated content qualifies for copyright protection, although courts in Shenzhen, Beijing and Jiangsu have established a trend of recognising such content as copyright-protected works under existing law. Second, it remains unsettled to whom the rights in such works belong. AI technology itself is not regarded as a legal person. In judicial decisions, some courts tend to regard the person behind the AI technology as the owner of the copyright in AI-generated content. However, in the case discussed in 15.4 AI-Generated Works of Art and Works of Authorship, the Beijing Internet Court did not rule out the possibility that the user of the AI technology could be the author of the AI-generated content. Decisions appear to turn on the level of the individual’s intellectual contribution.

The concept of “big data-enabled price discrimination” refers to the collection of customer information for algorithmic analysis to pinpoint consumer characteristics and thus implement personalised pricing. Although there is no clear legal definition of this activity, relevant regulations include the Civil Code, the PIPL, the CPL, the Electronic Commerce Law, the Anti-Monopoly Law and the Price Law.

With the development of AI, market participants may deploy algorithms designed to restrict competition (AI or algorithmic collusion), for example through price fixing, synchronised advertising or the sharing of sensitive commercial information.

The PIPL provides that the use of PI for automated decision-making must not result in unreasonable differential treatment in transaction prices. Service providers must not use algorithms to apply unreasonable differential treatment, or commit other illegal acts, in respect of prices and other transaction conditions based on customers’ preferences, transaction habits or other characteristics. A simple fairness check along these lines is sketched below.
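As a minimal sketch of how such a rule might be tested internally, the code below quotes the same order under different user profiles and flags any divergence. quote_price is a hypothetical stand-in for the pricing algorithm under audit, and the tolerance is an assumption made for the example, not a statutory threshold.

```python
# Hypothetical fairness probe for profile-based price differentiation.
def quote_price(order: dict, profile: dict) -> float:
    # Stand-in for the pricing algorithm under audit; a real check would
    # call the production pricing service here.
    return order["base_price"]

def check_price_fairness(order: dict, profiles: list,
                         tolerance: float = 0.01) -> bool:
    """True if all profiles receive (near-)identical quotes for an
    identical order, ie no profile-based differential pricing."""
    quotes = [quote_price(order, p) for p in profiles]
    return max(quotes) - min(quotes) <= tolerance

order = {"base_price": 100.0}
profiles = [{"loyalty": "new"}, {"loyalty": "frequent"}, {"device": "ios"}]
assert check_price_fairness(order, profiles)
```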

China’s AI cybersecurity regulatory framework operates as an integrated system, combining overarching foundational laws with specialised sector-specific regulations. The framework comprises:

  • General legislation applicable to all digital activities:
    1. the CSL;
    2. the DSL;
    3. the PIPL; and
    4. the NDSMR.
  • AI-specific regulations providing detailed implementation:
    1. the AIGC Measures;
    2. the CAC Algorithm Recommendation Rules; and
    3. the CAC Deep Synthesis Rules.

China’s regulators have acknowledged AI’s transformative impact on cybersecurity threats and harmful online content. The CSL establishes fundamental obligations for all network operators, including implementing tiered protection systems, maintaining internal security protocols and conducting real-time monitoring. For generative AI specifically, the NDSMR mandates enhanced oversight of training data processing and risk prevention measures. Additional provisions in the AIGC Measures and the CAC Algorithm Recommendation Rules introduce targeted requirements, such as mandatory security assessments and algorithm registration for systems capable of influencing public opinion or mobilising the public, creating a multi-layered defence against emerging AI-related threats.

Currently, ESG reporting requirements for AI are primarily stipulated in stock exchange regulations governing listed companies. For instance, the Shanghai Stock Exchange requires entities with disclosure obligations – particularly those involved in AI or other ethically sensitive technology sectors – to report on their compliance with technological ethics standards during the reporting period.

From an ESG perspective, AI presents both opportunities and challenges for climate action. For instance, AI enables more accurate climate predictions and optimised solutions, supporting broader applications that can reduce global emissions while enhancing climate resilience. However, the rapid growth of AI also brings significant resource demands. The expansion of data centres required to power AI systems leads to increased electricity and water consumption – a critical sustainability consideration that must be addressed alongside AI’s environmental benefits.

Key issues in AI governance include data monopolies, algorithmic discrimination, deep synthesis, privacy protection, and ethical concerns. In addressing these challenges, AI governance must comply with relevant regulations, policies and standards, while continuously improving governance mechanisms and approaches to fully respect and safeguard the privacy, freedom, dignity, security and other legitimate rights and interests of affected parties.

In general, the PIPL, CSL, DSL and the AIGC Measures set out the baseline compliance requirements for AI service providers and users. The main concerns can be divided into the protection of PI, data processing and training, algorithm compliance, and the cross-border provision of data. To ensure compliance with China’s AI regulations, enterprises should implement a comprehensive strategy that includes:

  • conducting algorithmic impact assessments for public-facing AI systems;
  • establishing robust data governance frameworks to ensure lawful data collection and usage;
  • maintaining transparency by disclosing AI applications and providing user opt-out mechanisms where required;
  • performing regular ethical audits to verify fairness and accountability; and
  • completing mandatory algorithm filing with the competent authorities.

Nevertheless, enterprises are advised to closely monitor legislative trends and update their business practices accordingly to maintain compliance.

King & Wood Mallesons

18th Floor
East Tower
World Financial Center 1
Dongsanhuan Zhonglu
Chaoyang District
Beijing 100020
PRC

+86 10 5878 5588

kwm@cn.kwm.com www.kwm.com

Trends and Developments



Trends in AI Governance: China’s Approach

An “inclusive and prudent” approach

Following the unveiling in July 2023 of the Interim Administrative Measures for Generative Artificial Intelligence Services (“Generative AI Measures”), China has been carving its own path in shaping global AI governance through a set of regulations collectively known as the “Trio”. Alongside the Generative AI Measures, the Trio comprises two earlier measures: (i) the Internet Information Service Algorithm Recommendation Administrative Measures (“Algorithm Recommendation Measures”), addressing recommendation algorithms; and (ii) the Internet Information Service Deep Synthesis Administrative Measures (“Deep Synthesis Measures”), focusing on deep synthesis algorithms.

As early as April 2023, the draft version of the Generative AI Measures prompted a debate over clear rules versus flexibility. The debate persists, as the finalised regulation remains ambiguous in various respects, such as in delineating the responsibilities of different players in AI services. On a literal reading of the regulation, the focus is primarily on service providers as the key accountable entities. However, given the complexity of the AI supply chain, determining and distinguishing the responsibilities of actors such as technology developers and downstream app providers remains an ongoing challenge in practice. This uncertainty leaves little room for a definitive answer, especially as the discourse resonates within the broader context of international AI governance, notably the recent EU AI Act.

In this sense, and given the potential trade-off between overarching rules and flexibility, which could hinder innovation in a rapidly evolving AI landscape, China has opted for an alternative approach by adopting regulatory approvals, namely the Algorithm Filing (算法备案) and the Generative AI Services Filing (生成式人工智能服务备案, also known as 大模型备案).

In 2024, China strengthened its regulatory oversight of algorithms and artificial intelligence through co-ordinated legislative, judicial and law enforcement efforts, deepening regulatory intensity and broadening compliance supervision.

Administrative enforcement

In terms of administrative enforcement, China has continued to process Algorithm Filings and Generative AI Services Filings/Registrations. According to publicly available information, by the end of December 2024 China had approved ten batches of algorithm filing applications, comprising a total of 2,841 deep synthesis service filings, of which filings by service providers accounted for as much as three-quarters. By the same date, 302 generative AI services had been filed and approved, with 105 registered; the cyberspace administrations of Beijing and Shanghai approved the largest numbers of Generative AI Services Filings/Registrations.

In addition, central and local cyberspace administrations have strengthened supervision and inspection of algorithm service enterprises’ compliance with current laws and regulations such as the Generative AI Measures, and have imposed administrative penalties on non-compliant enterprises within their authority. On 24 November 2024, the Cyberspace Administration of China and three other ministries jointly issued the Notice on Launching the “Clear and Bright: Governance of Typical Algorithm Problems on Online Platforms” Special Campaign, organising enterprises to conduct self-inspections and corrections and verifying the effectiveness of governance on the basis of each enterprise’s self-inspection. Local cyberspace administrations in Beijing, Shanghai and other regions responded actively, holding research symposiums and classified guidance meetings for relevant enterprises within their jurisdictions. Meanwhile, for violations such as providing generative AI services to the public without completing security assessment filings or failing to strictly fulfil content review obligations, local cyberspace administrations have taken administrative measures, such as regulatory interviews and orders to suspend services, against non-compliant enterprises within their authority.

Against this backdrop, the cyberspace administrations and other competent authorities have progressively initiated algorithm compliance supervision actions. In addition to completing procedures such as Algorithm Filing, filing modifications and Generative AI Services Filing in accordance with the law, relevant enterprises should actively carry out internal algorithm compliance work, particularly concerning user rights protection, and retain supporting materials so as to be prepared for regulatory inspections at any time.

Challenges in the AI Market and the Call for Regulation

Appropriate legal basis for AI training

Gathering meticulously curated datasets is the cornerstone of supercharging generative AI models. For tech giants, these datasets often consist largely of their own operational business data. Start-ups and entities in traditional industrial sectors, by contrast, often lack such proprietary data and therefore rely heavily on extensive, distributed datasets from the internet, such as open-source datasets or data collected by web crawlers from webpages, social media or even personal blogs.

Regardless of the data’s origin, it is universally understood that lawfully using the personal information contained within is one of the trickiest issues throughout the AI development and operation life cycle, from pre-training to market deployment.

According to the Personal Information Protection Law (PIPL), processing personal information requires a valid legal basis. While Article 13 of the PIPL provides seven legal bases in total, only two appear relevant and applicable here: (i) prior consent from the individuals concerned; and (ii) the reasonable processing of personal information that has been disclosed publicly by the individuals themselves or otherwise lawfully disclosed (“Public Information”).

Practically speaking, since use cases vary dramatically, there may be no one-size-fits-all legal basis for AI training. Moreover, upon weighing all the elements required to rely on consent or Public Information, neither is a comfortable ground for AI training.

Consent is often assumed to be the go-to solution for almost all scenarios but, as practice has shown, it frequently falls short, undermining the genuine voluntariness and freedom of the individuals involved, especially those in vulnerable situations. At the same time, developers may struggle to identify, and obtain consent from, a large number of individuals, leading to considerable costs.

Moreover, the inherent opacity of machine learning complicates the ability to fully grasp how AI processes personal information and makes it challenging to keep individuals “fully informed” as required by the PIPL. Even when consent is obtained, doubts persist about whether it is given voluntarily and explicitly, especially when granted to developers with dominant market positions.

When relying on Public Information as a legal basis, the interpretation of “reasonable processing” is disputed both in theoretical discourse and judicial practice. Such ambiguity poses challenges to its effective application and may inadvertently allow certain AI service providers to obfuscate their handling of personal information.

Therefore, current market practice and the legal framework remain somewhat ambiguous, and the legal basis for AI training is a pressing issue that needs to be addressed.

Generative AI and copyright infringement

In terms of judicial adjudication, infringement issues arising from the use of generative AI services increasingly came before the courts in 2024, including whether using copyrighted works for AI model training constitutes infringement and whether AI-generated content (such as images and audio) is protected under China’s copyright law.

The first case of generative artificial intelligence platform infringement

In early 2024, the Guangzhou Internet Court issued a legally effective judgment in a case in which generative AI services infringed others’ copyrights, also the first judgment worldwide on the infringement liability of a generative AI platform. The plaintiff, authorised by the copyright owner, held exclusive rights to, and the right to enforce, the Ultraman series of images. The defendant was a company providing AI painting services, through which entering the text “generate an Ultraman” would produce images similar to Ultraman. The court ruled that the defendant’s actions infringed the plaintiff’s reproduction and adaptation rights in the Ultraman works in question, ordering the immediate cessation of the infringement and the implementation of technical measures to prevent recurrence.

The first “AI voice infringement case”

On 23 April 2024, the Beijing Internet Court issued its judgment in the first “AI voice infringement case”. One of the defendants had used artificial intelligence to process works voiced by the plaintiff, generating text-to-speech works for sale. The plaintiff argued that the defendants’ actions infringed their voice rights and demanded that the defendants immediately cease the infringement, apologise, and compensate for economic and emotional damages. The court held that where an AI-synthesised voice enables the general public, or a relevant group, to associate it with a specific natural person based on its timbre, tone and pronunciation style, the voice is identifiable. Upon in-court inspection, the AI product in question could identify the plaintiff, and the relevant defendant had no right to process the plaintiff’s voiced works using AI, thereby constituting infringement.

Current judicial cases show that rights holders actively seek judicial remedies when their rights are infringed by AI-related activities. First, this serves as a warning to AI enterprises to prioritise the legality and compliance of training data sources when conducting business, so as to avoid downstream infringement issues, such as content infringement caused by authorisation defects in training data. Second, enterprises should emphasise the protection of users’ rights, including intellectual property, clearly defining the scope of each party’s rights and obligations to avoid new infringement issues arising from the use of AI tools.

Content moderation

“Hallucination” is considered one of the potential drawbacks of generative AI: a generative AI may provide information that sounds plausible but is laden with inaccuracies, bias, or in some cases, has no relevance to the given context whatsoever. Furthermore, because AI learns from human data, there is always the risk of “garbage in, garbage out” – meaning the quality of the model’s output is directly dependent on the quality and completeness of the data it was trained on.

In China, preventing hallucinations and flawed output is regarded as a legal obligation. Under the Provisions on the Ecological Governance of Network Information Content, published in 2019, generative AI service providers must prevent and combat the dissemination of illegal and harmful content, including rumours, obscenities, improper comments on disasters, and other content that adversely affects the network ecology.

Experts argue that hallucinations are here to stay, and it is uncertain whether fixing them will ultimately be beneficial or detrimental. Nonetheless, reducing their occurrence is possible by adopting proper measures. In this regard, generative AI service providers serve not only as key enforcers of regulatory requirements but also as frontline solution providers for addressing these risks.

In March 2024, the National Cybersecurity Standardisation Technical Committee (TC260) released the Basic Security Requirements for Generative Artificial Intelligence Service (“Security Requirements”).

The Security Requirements set out specific content moderation requirements applicable to generative AI service providers throughout the development of their AI products. These cover aspects such as monitoring and properly labelling training datasets, implementing model security measures to defend against evasion attacks, and establishing mechanisms to filter out or flag potentially illegal and harmful input and output, as sketched below.
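The following is a minimal sketch of such input/output filtering, assuming a simple blocklist; a production system would combine keyword matching with trained classifiers and human review. The terms and refusal messages are placeholders, not content drawn from the Security Requirements.

```python
# Hypothetical input/output content filter wrapped around a generation
# function; blocklist terms and messages are illustrative placeholders.
BLOCKLIST = {"example-banned-term", "another-banned-term"}

def screen(text: str):
    """Return (allowed, hits) for a piece of text."""
    hits = [term for term in BLOCKLIST if term in text.lower()]
    return (not hits), hits

def moderated_generate(prompt: str, generate) -> str:
    ok_in, _ = screen(prompt)
    if not ok_in:
        return "This request cannot be processed."      # refuse flagged input
    output = generate(prompt)
    ok_out, _ = screen(output)
    # Flagged output is withheld and would be routed to human review.
    return output if ok_out else "[content withheld pending review]"

print(moderated_generate("a harmless prompt", lambda p: "a harmless answer"))
```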

While the Security Requirements are not legally binding in themselves, generative AI service providers are heavily incentivised to follow them: the Security Requirements elaborate in detail on the Generative AI Measures and, most importantly, serve as the practical guideline for completing applications for the Generative AI Services Filing.

AI governing AI

“AI governing AI” refers to a paradigm shift underway in AI governance, in which AI itself takes on a regulator-like role, moving beyond the conventional perspectives of providers and authorities. This approach seeks to harness AI’s own capabilities to identify weaknesses in AI systems and enhance their defensive and security capabilities accordingly.

Leveraging adversarial attack and defence techniques, together with the notion of integrating attack and defence, is nothing new and has been part of AI discourse for around a decade. The rise of generative AI has further propelled advances in AI security technologies.

For example, generative AI models have now evolved into intelligent security advisers, offering defence strategies in response to straightforward natural-language descriptions, without complex programming. They can even simulate adversarial scenarios to assess robustness, marking a departure from human-led testing processes. This shift is a direct result of the era of large models, in which enhanced computing power has revolutionised security technology.

In 2022, the China Academy of Information and Communications Technology, Tsinghua University, and Ant Group jointly released the AI security detection platform “YiJian”, considered the first of its kind in the industry. Its successor, “YiJian 2.0”, unveiled in 2023, offers advanced capabilities for detecting risks associated with generative AI models across domains including data security, content moderation and ethics. It conducts adversarial detection across multiple dimensions, such as privacy, ideology, criminal activity, bias and discrimination, and generates comprehensive reports to facilitate targeted evolution and improvement. This approach markedly streamlines testing by mitigating the limitations of manual testing; moreover, the testing process itself stimulates the refinement of both the testing AI and the AI being tested. The sketch below illustrates the general shape of such adversarial probing.
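In the same spirit, and purely as an assumed illustration (the prompts, model and safety judge below are hypothetical and are not YiJian’s actual interfaces), adversarial testing of this kind can be pictured as probing a model dimension by dimension and tallying unsafe completions.

```python
# Hypothetical red-teaming harness: probe a model with adversarial prompts
# grouped by risk dimension and report the unsafe-response rate per dimension.
PROBES = {
    "privacy": ["..."],               # eg prompts coaxing out personal data
    "bias_discrimination": ["..."],
    "criminal_activity": ["..."],
}

def run_red_team(model_under_test, is_unsafe):
    """Return the unsafe-response rate for each risk dimension."""
    rates = {}
    for dimension, prompts in PROBES.items():
        failures = sum(
            1 for prompt in prompts
            if is_unsafe(dimension, model_under_test(prompt))
        )
        rates[dimension] = failures / max(len(prompts), 1)
    return rates

# Toy usage: a model that always refuses, and a judge that never flags it.
report = run_red_team(lambda p: "I cannot help with that.",
                      lambda dim, out: False)
```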

Beyond the foregoing, current efforts to broaden the use of AI for self-regulation remain largely exploratory, and this experimentation brings new challenges. Beyond ensuring that AI is technically sound and safe, there is the lingering question of how to determine, from a legal perspective, whether an AI behaves ethically. How can the AI used for testing be kept unbiased and accurate? And, most fundamentally, will AI regulating other AI lead to better or worse outcomes? These questions all require careful consideration when regulating generative AI.

Anticipating AI risks and the impetus for regulation

In China, it is apparent that lawmakers are actively promoting and steering China’s own AI development in more positive directions, with a strong emphasis on security. However, given the evolving and dynamic landscape, with AI regulations still in flux and the comprehensive Artificial Intelligence Law yet to be enacted, companies find themselves treading uncertain waters and navigating grey areas amidst unclear enforcement rules.

In such a climate, where regulatory arbitrage is a tempting prospect, savvy companies may seek ways to operate just outside the bounds of regulation. The rapid growth of generative AI technologies across sectors has revealed a spectrum of risks, spanning from data breaches to ethical dilemmas. Without robust regulation, the likelihood of such security incidents occurring becomes more pronounced. This recognition of emerging risks underscores the imperative for proactive regulatory action. Consequently, there may be growing pressure on regulators to bolster their supervision of AI.

Legislative Trend

Artificial intelligence legislation

In addition to advancing administrative law enforcement and judicial adjudication related to artificial intelligence, in 2024, China also actively promoted refined legislative actions in the field of algorithms and artificial intelligence. These included but were not limited to drafting the “Basic Security Requirements for Generative Artificial Intelligence Services (TC260-003)”, “Cybersecurity Technology - Basic Security Requirements for Generative Artificial Intelligence Services (Draft for Comments)”, “Cybersecurity Technology - Security Specifications for Pre-training and Optimization Training Data of Generative Artificial Intelligence (Draft for Comments)”, “Cybersecurity Technology - Security Specifications for Data Annotation of Generative Artificial Intelligence (Draft for Comments)”, “Identification Methods for Artificial Intelligence-Generated Synthetic Content (Draft for Comments)”, and the “Cybersecurity Standards Practice Guide - Emergency Response Guide for Security of Generative Artificial Intelligence Services”.

Through the formulation and issuance of departmental rules, practice guides, national standards, and other documents, detailed requirements were proposed for the security assessment, training data compliance, data annotation, and emergency management of generative artificial intelligence. This ensures comprehensive control over generative AI technology from development and training to service provision, guaranteeing the provision of safe and compliant generative artificial intelligence services. Enterprises should also promptly monitor the latest developments in relevant legislation, improve and revise their internal security management systems based on the latest legislative developments, and implement them while adopting technical safeguards that meet the latest legislative requirements.

Furthermore, to implement the “Global Artificial Intelligence Governance Initiative”, in September 2024 the National Cybersecurity Standardisation Technical Committee released version 1.0 of the “Artificial Intelligence Security Governance Framework” at the main forum of the National Cybersecurity Awareness Week. The framework clarified AI governance principles such as inclusive prudence and ensuring security; outlined the endogenous risks, application risks, cognitive-domain risks and ethical risks that AI may currently involve; and proposed technical measures to address each category of risk. It also provided direction for co-ordinating AI compliance planning in 2025.

An Outlook Into the Future

Throughout the development of human society, every major technological revolution has been accompanied by the innovation and reconstruction of governance paths. Artificial intelligence has become an important driving force in the new round of technological revolution and industrial transformation, with the European Union, the United States, China and others all placing AI legislation on their agendas. On 21 May 2024, the world’s first comprehensive law dedicated to artificial intelligence, the EU’s Artificial Intelligence Act, was formally approved by the Council of the European Union and entered into force on 1 August of the same year. In China, the legislative plans of both the 13th and 14th National People’s Congress included AI legislation, and the State Council listed the draft AI law in its annual legislative plans for both 2023 and 2024. As mentioned above, in 2024 China actively advanced refined legislative work in the field of algorithms and artificial intelligence; relevant national standard documents proposed detailed requirements for AI regulation, covering generative AI security assessment, training data compliance, data annotation and emergency management, and were opened for public comment.

In this regard, we understand that China’s legislative work in the field of artificial intelligence is steadily advancing. The model of “industry regulation + general legislation” may become the mainstream approach to AI governance, with the “Artificial Intelligence Law” and AI industry legislation jointly constructing China’s AI governance solution. Against this backdrop, it is recommended that relevant enterprises closely monitor the overall direction of China’s AI legislation, thoroughly study existing AI governance policy documents and standards, and establish an internally coherent and effective AI compliance system from dimensions such as risk management organisation, ethical standards, development management, data security management, emergency response, algorithm correction, and reporting. This will help prepare for potential future regulatory measures.

King & Wood Mallesons

18th Floor
East Tower
World Financial Center 1
Dongsanhuan Zhonglu
Chaoyang District
Beijing 100020
PRC

+86 10 5878 5588

kwm@cn.kwm.com www.kwm.com
