Artificial Intelligence 2024

Last Updated May 28, 2024

China

Law and Practice

Authors



King & Wood Mallesons (KWM) is an international law firm headquartered in Asia with a global network of 29 international offices. KWM’s cybersecurity team is one of the first legal service teams to provide professional services concerning cybersecurity, data compliance, and algorithm governance in China; it consists of more than ten lawyers with solid interdisciplinary backgrounds, located in Beijing and Shanghai, while further specialisms are found within KWM’s global network. The team has expertise in assisting clients with responding to cybersecurity inspections and network emergencies, establishing network information compliance systems, self-assessment, algorithm registration and other related matters. The team is a member of the Chinese Association for Artificial Intelligence. The team has published multiple papers in recent years, including “Algorithm Governance – Internet Information Service Recommendation Algorithm Management”, published in China Law Insights in 2022.

China has adopted a comprehensive approach to regulating artificial intelligence (AI) by enacting various laws and regulations. These regulations address AI-related issues from diverse angles, encompassing data privacy, network security, algorithms, and ethical considerations. The following section provides a breakdown of this regulatory framework.

  • Data:
    1. Data Security Law of the People’s Republic of China (DSL); and
    2. Personal Information Protection Law of the People’s Republic of China (PIPL).
  • Network security:
    1. Cybersecurity Law of the People’s Republic of China (CSL).
  • Algorithms:
    1. Administration of Algorithm-generated Recommendations for Internet Information Services (the “CAC Algorithm Recommendation Rules”);
    2. Provisions on the Administration of Deep Synthesis of Internet Information Services (the “CAC Deep Synthesis Rules”); and
    3. Interim Measures for the Administration of Generative AI Services (the “AIGC Measures”).
  • Ethics:
    1. Measures for Scientific and Technological Ethics Review (for Trial Implementation) (the “Ethics Review Measures”).

Under the three foundational laws, namely, the DSL, PIPL, and CSL, the State Council, the Cyberspace Administration of China (CAC) and other authorities responsible for cybersecurity and data protection within the scope of their respective duties are tasked with developing and enforcing specific regulations. Specifically, the CAC has issued the three AI-specific regulations as set out above. Additionally, the Ministry of Science and Technology and relevant authorities have promulgated the Ethics Review Measures, which are designed to set out the basic rules and principles for conducting scientific research, technology development, and other scientific and technological activities.

Apart from general cybersecurity and data protection laws, laws and regulations of other legal sectors also apply to AI if the application of AI involves specific issues regulated in these other legal sectors, including but not limited to tort law, consumer protection law, antitrust law and criminal law. Additionally, there are also many regulations and guidance related to algorithm governance focusing on specific industry fields such as e-commerce and healthcare; eg, the E-Commerce Law and the Guidelines for Registration Review of AI Medical Devices.

  • Manufacturing: AI is revolutionising the manufacturing industry in China through industrial internet and automated manufacturing systems. These technologies enhance efficiency, reduce production costs, and improve product quality by optimising production lines, predictive maintenance, and quality control using machine vision.
  • E-commerce: AI applications in e-commerce include personalised recommendations, customer service chatbots, and demand forecasting. These innovations improve user experience, increase sales, and optimise inventory management.
  • Finance: In the financial sector, AI is used for credit scoring, fraud detection, and automated trading systems. Machine learning models analyse vast amounts of data to assess creditworthiness and detect fraudulent activities, thereby reducing risks and improving operational efficiency.
  • Healthcare: AI is transforming healthcare with applications in medical imaging, diagnostics, and personalised treatment plans. Machine learning algorithms can assist in identifying diseases, suggesting treatments, and predicting patient outcomes, leading to improved patient care. Collaborations between healthcare providers and AI developers are aimed at creating integrated health management solutions.
  • Transportation: Autonomous vehicles and smart traffic management systems are being developed and tested in China. AI helps in optimising traffic flow, reducing congestion, and enhancing road safety. Technology companies, automotive manufacturers, and government entities are working together on autonomous vehicle projects. China is currently aiming to realise an L4 autonomous driving network by 2025.

The Chinese government has been actively involved in promoting the adoption and advancement of AI for industry use through a variety of investment strategies, policies, and incentives.

  • Policy formulation and strategic planning: The government has released several strategic documents, including the “New Generation Artificial Intelligence Development Plan”, which emphasises the importance of AI in various sectors, such as manufacturing, agriculture, logistics, finance, and healthcare, and sets clear goals for the development of the AI industry.
  • Investment in AI infrastructure: The government has made significant investments in building AI infrastructure, such as public computing centres and data platforms. These investments are designed to provide companies with the necessary resources to develop and deploy AI applications, thereby fostering innovation and industry adoption.
  • Incentives for talent attraction and development: To address the talent shortage in AI, the government has implemented various measures to attract and retain AI experts from overseas and to cultivate domestic talent, such as supporting education and training programmes to develop a skilled local workforce.
  • Promotion of AI applications in industry: The government promotes the integration of AI in various industries through initiatives like the “AI+” action plan. This plan aims to integrate AI with traditional industries to enhance efficiency, reduce costs, and drive innovation. Incentives for industries that adopt AI include support for the establishment of AI innovation zones.

China is the first country in the world to promulgate a law regulating generative AI; ie, the AIGC Measures. Notably, compared with the previously released draft of the AIGC Measures, the final version provides more flexibility and feasibility for relevant entities to fulfil generative AI-related compliance obligations. For example, on the point of the “authenticity, accuracy, objectivity, and diversity” of training data, the final AIGC Measures ease the obligations of generative AI service providers. Instead of requiring them to “ensure” the quality of the data, the measures now call for “improving” the quality of training data.

Currently, in addition to the three foundational laws mentioned in 1.1 General Legal Background, AI-specific legislation in China mainly includes the following.

Information Content Management

In December 2021, the CAC issued the CAC Algorithm Recommendation Rules, focusing on managing algorithmic discriminatory decision-making. The CAC Algorithm Recommendation Rules mark the CAC’s first attempt to regulate the use of algorithms, under which information service providers are required to use algorithms in a way that respects social morality and ethics, and are prohibited from setting up any algorithm model that could induce user addiction or excessive consumption.

In November 2022, the CAC issued the CAC Deep Synthesis Rules, regulating the provision of deep synthesis services and technologies. For example, deep synthesis service providers are required to take technical measures to add marks alerting users that the content was generated via deep synthesis technologies, and such marks must not affect users’ use of the information generated or edited using their services.

In July 2023, the CAC issued the AIGC Measures, which put forward basic and general compliance requirements for the application of generative AI in specific scenarios. For example, providers of generative AI should bear the responsibility of producers of internet information content. At the same time, the AIGC Measures also leave room for relevant organisations to use generative AI services to engage in specific activities in special fields such as news publishing and film and television production.

Ethical Considerations

In September 2023, the Ministry of Science and Technology issued the Ethics Review Measures, which clarify that units engaged in scientific and technological activities, including AI, whose research content involves sensitive areas of scientific and technological ethics, should establish a science and technology ethics (review) committee.

Several AI-specific directives are described below.

In January 2021, the National Information Security Standardisation Technical Committee (TC260) issued the Cybersecurity Standard Practice Guide – AI Ethical Security Risk Prevention Guidelines, which addresses the ethical safety risks that may arise from AI and provides normative guidelines for the safe conduct of AI research and development, design, manufacturing, deployment, application and other related activities. It applies to organisations and individuals that carry out such activities.

In May 2023, the TC260 issued the AI-Code of Practice for Data Labelling of Machine Learning, which specifies the data annotation framework and process for machine learning in the field of AI. This document serves as guidance for data annotation for machine learning in the field of AI and related research, development and application activities.

In August 2023, the TC260 issued the Cybersecurity Standards Practice Guide – Generative AI Service Content Label Method, which provides label methods for four types of generated content: text, pictures, audio and video. It applies to providers who use generative AI technology to provide content services to the public, whether the content is displayed directly or output in the form of files.

In March 2024, the TC260 issued a guideline document on the safe development of generative AI services, namely the Basic Requirements for Security of Generative AI Services. The guideline document refines the relevant compliance requirements of the AIGC Measures in terms of enforcement rules, such as the legality of data sources and content security, and provides an effective path for generative AI service providers to conduct security assessments in practice.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

See 3.2 Jurisdictional Law.

China has been actively working on AI-specific legislation and regulations to govern the development and application of AI technologies. In addition to the promulgated laws and regulations, comprehensive AI legislation has been included in the State Council’s 2023 legislative work plan. These proposed regulations and ethical guidelines aim to strike a balance between fostering innovation in AI and ensuring the responsible development and application of these technologies. China’s legislative efforts seek to create a supportive environment for AI that aligns with societal values and legal norms.

On the point of deepfakes, in 2022, it was held in a case that enterprises shall not use information technology like deepfakes to infringe on the portrait rights of others. In 2023, the defendant in a criminal case was held criminally liable for using deepfake technology to generate and disseminate illegal videos for profit, receiving a sentence of over one year in prison.

In addition, a case related to virtual humans generated by AI (the “virtual human case”) specified that if enterprises use AI technology to provide services, they cannot infringe the legitimate rights and interests of others, such as personality rights. In this case, the respondent provided users with services that enabled them to engage in virtual emotional interaction, such as “intimate” conversations with virtual images of celebrities created using AI technology, and the respondent was held liable for the infringement.

For more judicial decisions on intellectual property rights related to generative AI, see 15.3 AI-Generated Works of Art and Works of Authorship.

Most cases involving AI technology do not specifically clarify AI-related technical definitions. Most AI technology tends to be regarded as a tool, and the court typically does not delve into the intricacies of the AI technology itself, but rather focuses on the damage or impact caused by the AI technology. In addition, AI is often discussed in relation to specific legal issues, such as intellectual property rights, liability for actions taken by AI systems, and data privacy, and the characteristics of AI are considered in relation to the aforementioned aspects. For example, in the virtual human case, the court held that the software in question was an AI system, and that this AI system realised its learning and recommendation capabilities through the operation of software (a computer program); such software can essentially be regarded as a manifestation of the algorithm.

The lack of a uniform definition may lead to calls for clearer legal definitions of AI. Though the AIGC Measures, the CAC Algorithm Recommendation Rules and the CAC Deep Synthesis Rules provide definitions for generative AI, recommendation algorithms and deep synthesis respectively, a uniform definition of AI has yet to be clarified in legislation. For generative AI, if courts recognise AI’s creative capabilities, this could impact the attribution of copyright and other intellectual property rights for AI-generated works.

In China, the CAC is responsible for the overall planning and co-ordination of cybersecurity, personal information protection and network data security, and has issued a number of regulations concerning the application of AI technology in terms of internet information services as well as the AIGC Measures.

There are also many other departments – such as departments in the industrial sector, telecommunications, transportation, finance, natural resources, health, education, science and technology – that undertake to ensure cybersecurity and data protection (including those relevant to AI) in their respective industries and fields. Public security authorities and national security authorities also play an important role in network and data security within their respective remits.

According to the CAC Algorithm Recommendation Rules, algorithm recommendation technology refers to the use of algorithm technologies such as generation and synthesis, personalised push, sorting and selection, retrieval and filtering, and scheduling and decision-making to provide information to users. Meanwhile, the CAC Deep Synthesis Rules provide that deep synthesis technology refers to technology that uses generative synthesis algorithms such as deep learning and virtual reality to produce internet information such as text, images, audio, video, and virtual scenes. The AIGC Measures provide a definition for generative AI, which refers to models and related technologies with the ability to generate text, pictures, audio, video and other content.

Based on the above definitions, most generative AI service providers need to complete both the algorithm filing under the CAC Algorithm Recommendation Rules and the CAC Deep Synthesis Rules, and the registration of generative AI (large language models) under the AIGC Measures. In practice, a dual registration regulatory framework for generative AI services, consisting of algorithm filing and large model filing, has been formed.

It is normal practice for the CAC and other departments to co-operate in rule-making and enforcing the laws. Most of the data protection-related rules are jointly issued by multiple regulatory agencies including the CAC, the Ministry of Industry and Information Technology (MIIT), public security authorities and other related departments.

These laws and regulations have played a key role in ensuring network and data security and the protection of personal information. As for those mainly focusing on AI regulation, the corresponding authorities have focused on risk control, especially personal information protection, prevention of bias, protection of intellectual property and trade secrets, and protection of individuals’ rights to portrait, reputation and honour. Meanwhile, the agencies seek to promote the innovative application of generative AI technology in different industries and fields, innovation in foundational technologies such as generative AI algorithms, frameworks, chips and supporting software platforms, and the construction of generative AI infrastructure and public training data resource platforms.

In 2021, the State Administration for Market Regulation imposed penalties on Alibaba on the grounds that Alibaba’s use of data, algorithms and other technologies had restricted competition in the market for e-tailing platform services within China. The fine totalled CNY18.228 billion, which included a fine for misuse of data. Following this, in order to ensure the reasonable use of algorithm recommendation technology, the CAC published the CAC Algorithm Recommendation Rules, which state that algorithm recommendation service providers shall not engage in monopolistic conduct or unfair competition by taking advantage of algorithms. AI and algorithm enforcement activities, as well as the relevant legislation in China, are all aimed at safeguarding the legitimate rights and interests of users.

In addition, regulatory authorities also pay attention to issues such as domestic entities introducing overseas generative AI services without due process and companies using customer personal information for algorithm training without authorisation. Specifically, in June 2023, the National Financial Regulatory Administration announced a case involving the illegal obtaining of training data for algorithm training: a service provider had secretly used more than six million archived session records from several banks for its model training. The announcement did not specify the size of the fine imposed.

The State Standardisation Administration (SSA) is responsible for approving the release of national standards, and TC260 is one of the most important standard-setting bodies on AI technology. So far, TC260 has issued a series of recommended national standards and practical guidelines containing provisions regarding the use of AI-related technology.

In summary, the SSA has released standards including, without limitation:

  • Artificial Intelligence-Affective Computing User Interface-Model (GB/T 40691-2021), providing guidance related to the design, development and application of the affective computing user interface;
  • Information Technology-Artificial Intelligence-Terminology (GB/T 41867-2022), which defines common terms used in the field of information technology related to AI;
  • Information Technology-Artificial Intelligence-Platform Computing Resource Specification (GB/T 42018-2022), which provides a standard basis for the construction of AI platforms; and
  • Artificial Intelligence-Technical Framework of Knowledge Graph (GB/T 42131-2022), which provides the conceptual model and technical framework for knowledge graphs.

Overall, the TC260 has released standards including but not limited to: Information Security Technology – Security Specification and Assessment Methods for Machine Learning Algorithms (GB/T 42888-2023), which specifies the security requirements and verification methods of machine learning algorithms during most of their life cycle. For other standards issued by the TC260, see 3.3 Jurisdictional Directives.

In addition, there are standard-setting bodies to formulate AI-related standards in specific industries. The People’s Bank of China (PBOC), along with the Financial Standardisation Technical Committee of China (TC 180), plays a leading role in writing AI-related standards in the financial field. Specifically, the PBOC also issued the Evaluation Specification of Artificial Intelligence Algorithm in Financial Application in 2021, providing AI algorithm evaluation methods in terms of security, interpretability, accuracy and performance.

In automated driving, the recommended national standard Taxonomy of Driving Automation for Vehicles sets forth six classes of automated driving (from L0 to L5) and the respective technical requirements and roles of the automated systems at each level. The TC260 released the Security Guidelines for Processing Vehicle Collected Data, which specify the security requirements for automobile manufacturers’ data processing activities such as transmission, storage and export of automobile data, and provides data protection implementation specifications for automobile manufacturers to carry out the design, production, sales, use, operation and maintenance of automobiles.

Countries may conclude international treaties that contain international standards for AI regulation or AI technology. With regard to AI-related international treaties that China may conclude in the future, these treaties will generally take effect within China’s territory by way of transposition or direct application; in this way, AI-related international standards generally do not conflict with China’s laws. For example, in December 2021, China called on countries to develop and use AI technology in the military domain in a prudent and responsible manner. If international standards are successfully concluded with China’s participation, China and Chinese enterprises shall follow these AI-related international standards.

On the other hand, with regard to AI-related international treaties that China does not conclude, Chinese AI-related enterprises still need to consider the relevant AI-related international standards if they intend to provide AI products or services within those jurisdictions.

In recent years, China has been adapting to internet development trends and widely applying digital technologies such as big data, cloud computing and AI to the process of government administration in accordance with the law, in order to integrate information technology and the rule of law in government.

For example, in smart city applications, big data analysis carried out with the help of AI is used to determine traffic control measures in a given city. Smart city applications can design and promote smart transport strategies, with data analysis providing a clearer picture of traffic policies in terms of potential infractions committed by pedestrians and the range of transportation options accessible to residents.

In April 2023, the Chengdu Railway Transport Intermediate Court held an online hearing in a personal information protection dispute between an individual and China Railway Chengdu Bureau Group Co., Ltd. This was the first personal information dispute in the country arising from the use of facial recognition technology in public transportation. The court held that the defendant’s processing of facial information met the exemption condition of maintaining public safety, and therefore the separate consent of the individual was not required. However, the court noted that the railway company still needs to fulfil its notification obligations in relation to its personal information processing activities.

It is a common issue for AI operators that they may collect a large amount of data to feed their AI system. Since China’s laws and regulations on data processing have a clear concern for national security, AI companies are also advised to be aware of related legislative requirements.

Critical Information Infrastructure (CII)

The Regulation on Protecting the Security of Critical Information Infrastructure defines CII as network facilities and information systems in important industries and fields whose damage or loss of functionality may seriously endanger national security, the national economy and people’s livelihoods, and the public interest. CII operators (CIIOs) are required to take protective measures to ensure the security of CIIs. Furthermore, the CSL imposes data localisation and security assessment requirements on the cross-border transfer of personal information and important data by CIIOs.

Important Data

The DSL defines important data as data the divulging of which may directly affect national security, public interests and the legitimate interests of citizens or organisations, and certain rules impose various restrictions on its processing. The DSL contemplates security assessment and reporting requirements for the processing of important data in general.

Cybersecurity Review

On 28 December 2021, the CAC, together with certain other national departments, promulgated the revised Cybersecurity Review Measures, aimed at ensuring the security of the CII supply chain, cybersecurity and data security and safeguarding national security. The regulation provides that CIIOs that procure internet products and services, and internet platform operators engaging in data processing activities, shall be subject to the cybersecurity review if their activities affect or may affect national security, and that internet platform operators holding more than one million users’ personal information shall apply to the Cybersecurity Review Office for a cybersecurity review before listing abroad.

Generative AI continues to raise legal issues related to personal information protection, intellectual property rights and the means of governing generative AI. In order to stimulate the standardised application of generative AI, the CAC issued the AIGC Measures, which serve as a governance strategy, aiming to promptly address the potential risks and impacts from current AI-generated content, particularly in terms of providing early warnings and managing potential ethical risks.

The AIGC Measures specify the obligations the responsible entities should fulfil and set corresponding administrative penalties. According to the AIGC Measures, the responsible party is the generative AI service provider, which refers to organisations and individuals that use generative AI technology to provide generative AI services (including organisations and individuals that provide generative AI services by providing programmable interfaces and other methods). In addition, the AIGC Measures clarify that intellectual property rights must not be infringed upon during use of generative AI.

In terms of copyright, at the training stage of algorithms and models, AI training data may present infringement liability issues, while at the content generation stage, whether the output products fall within the scope of copyright protection remains highly controversial. In the absence of clear legal and regulatory guidance, intellectual property disputes based on generative AI services have gradually clarified the corresponding copyright ownership rules for AI-generated products through judicial adjudication. In December 2023, the Beijing Internet Court handed down a judgment in the first copyright case concerning an AI-generated picture (based on a text prompt). The court decided that, considering that copyright law does not place excessive requirements on the originality of works, the picture generated by the plaintiff through the use of generative AI should be recognised as a work and thus enjoy copyright protection. However, relevant rules remain largely uncertain; with the further iterative upgrading and in-depth application of generative AI technology, the rules applied in judicial practice to determine the ownership of AI-generated products may change.

According to the AIGC Measures, AIGC service providers shall carry out training data processing activities such as pre-training and optimisation training in accordance with the law, and shall use data and basic models from legal sources. If intellectual property rights are involved, they shall not infringe upon the intellectual property rights enjoyed by others according to law. Therefore, AI algorithm model developers need to comply with intellectual property compliance requirements during the model training stage and respect the copyrighted works of others. Regarding whether the use of other people’s copyrighted works in the input stage of the AI algorithm model constitutes infringement, there are no relevant cases in domestic judicial practice, but scholars hold different views on this. Some believe that the use of relevant data by generative AI for training models without the copyright owner’s authorisation may infringe copyright. Others believe that the issue remains uncertain, as the reasonable use provision may apply.

Regarding the output of AI models, though uncertainties remain, a case decided in February 2024 by the Guangzhou Internet Court resulted in a verdict against the defendant, a text-to-image AIGC provider, finding it liable for infringing the copyright in the Ultraman IP. The court concluded that the defendant had failed to exercise a reasonable duty of care in generating its AIGC output, thus violating the AIGC Measures. For the output of generative AI, see also the case discussed in 8.1 Emerging Issues in Generative AI.

In China, the rights of data subjects, including the right to rectification and deletion, are addressed under the PIPL and other relevant legal frameworks. The PIPL also provides the principles of purpose limitation and data minimisation.

Right to Rectification

Personal information subjects have the right to request the correction or completion of their personal information if it is found to be inaccurate or incomplete. This is in line with the principle of ensuring that personal information is accurate and up to date. In principle, if an AI-generated output contains false factual claims about an individual, the individual may exercise their right to rectification to have the incorrect information corrected. The PIPL (Article 46) explicitly grants personal information subjects this right, which requires personal information handlers to take necessary actions to address such requests.

Right to Deletion

The right to deletion allows individuals to request the deletion of their personal information under certain conditions. According to the PIPL (Article 47), personal information handlers are required to delete personal information in cases such as where the processing purpose has been achieved, the personal information is no longer necessary for the original purpose, or the individual withdraws the consent on which the processing was based. However, the deletion of the entire AI model is not explicitly required by law and would depend on the specific circumstances, such as whether the AI model developer may rely on the reasonable use of personal information in the public domain.

Purpose Limitation and Data Minimisation

Purpose limitation and data minimisation are fundamental principles in Chinese data protection law. The PIPL mandates that personal information should only be collected and used for specific, explicit, and legitimate purposes (Article 6). This principle of purpose limitation requires that any further processing of personal information should not be incompatible with the original purpose for which it was collected. In practice, it remains highly controversial whether personal information collected for business purposes may be processed to train AI models.

Data minimisation, while not explicitly mentioned in the PIPL, is implicitly supported by the requirement that personal information collection should be limited to what is necessary to achieve the stated purpose (Article 6). This means that personal information handlers should only collect the minimum amount of personal information required for the intended purpose, avoiding excessive or unnecessary data collection.

Uses of AI in the Practice of Law

AI technology has a wide range of applications in the judicial field, including transactional support work such as information backfilling, intelligent cataloguing and the error correction of documents. It should be noted that AI technology does not participate in the making of judicial decisions.

Shanghai’s “206” system incorporates unified statutory evidence standards into its database, providing assistance to public prosecution and law enforcement agencies. This system, backed by expert knowledge, model algorithms, and vast amounts of data, is expected to have 20 functions in the future. These functions include evidence standards guidelines, single evidence verification, arrest conditions review, social dangerousness assessment, case pushing, knowledge indexing, and sentencing reference and document generation.

By integrating multiple types of data resources for big data mining and analysis, “Judge Rui” of the Beijing High People’s Court automatically pushes information such as case analysis, legal provisions, similar cases, and judgment references to provide judges with trial specifications and guidelines for case handling.

Ethical Considerations

In December 2021, the CAC issued the CAC Algorithm Recommendation Rules to provide special management regulations on algorithmic recommendation technology. The CAC Algorithm Recommendation Rules mark the CAC’s first attempt to regulate the use of algorithms, in which internet information service providers are required to use algorithms in a way that respects social morality and ethics, and are prohibited from setting up any algorithm model that could induce user addiction or excessive consumption.

In 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Governance over Ethics in Science and Technology. These opinions call for stringent investigation of unethical practices in science and technology, intending to enhance the management of scientific and technological ethics and improve ethical governance capabilities.

In 2023, China also issued the “Guideline on Cybersecurity Standard Practices – Guidelines on Ethical Security Risk Prevention for AI”, which provides guidelines for organisations or individuals to carry out AI research and development, design and manufacturing, deployment and application, and other related activities, in an ethical manner.

In addition, the AIGC Measures emphasised that in processes including algorithm design, selection of training data, model generation and model optimisation and service provision, measures should be taken to prevent discrimination on the basis of race, ethnicity, religious belief, nationality, region, sex, age or profession.

Last but not least, in the Opinions on Regulating and Strengthening the Applications of Artificial Intelligence in the Judicial Fields by the Supreme People’s Court, it has been noted that the courts shall adopt ethical reviews, compliance reviews and security assessments to prevent and mitigate cybersecurity risks in judicial AI applications through mechanisms such as the Judicial AI Ethics Council.

From a tort law perspective, the owner of AI-enabled technology that harms the interests of others should be directly liable. However, the application of AI technology usually involves a number of roles, such as the AI developer, the product/service manufacturer, the seller and even the user. Thus, careful consideration must be given when determining who the “owner”, and consequently the liable party, truly is.

Currently, the assignment of liability in AI scenarios depends on the role each party plays in the provision of the AI service; for example, the provider of API access may only be responsible for after-the-fact liability (eg, deleting the relevant content based on a copyright holder’s request). In addition, the AIGC Measures stipulate several obligations for AI providers, such as content governance, the adoption of security measures and ensuring the AI model has passed the record-filing process; fulfilling these obligations can effectively reduce the likelihood of a provider being found at fault. AI providers are therefore advised to exercise a duty of reasonable scrutiny, on the basis of which they may claim no liability or diminished liability.

Role of Insurance

With the rapid development of AI technology across industries, traditional insurance may not be enough to cover the innovative and unique risks that AI services trigger, such as cybersecurity incidents, data breaches and IP infringement.

Allocation of Liability

For example, the AIGC Measures stipulate that providers of AI services shall use data and basic models from lawful sources. Besides verifying the lawfulness of the data source itself, the provider of an AI service could require the provider of the data source to guarantee that the data does not infringe others’ rights, and that further development of the model does not materially change the security measures of the original model.

Assigning responsibility in AI scenarios should involve careful deliberation and a clear definition of the duty of care expected from different parties. This consideration should take into account the state of the art and objective factors that might affect the computing process of the AI technology.

The current legislation does not provide clear provisions on the imposition and assignment of liability; further clarification of relevant laws and regulations is awaited.

For instance, Article 155 of the Road Traffic Safety Law (Revised Draft), published by the Ministry of Public Security in April 2021, contains special provisions for vehicles equipped with automated driving systems. In terms of the assignment of responsibility, it states that “in the event of road traffic safety violations or accidents, the responsibility of the driver and the development unit of the automated driving system shall be determined in accordance with the law. The liability for damages shall be determined in accordance with the relevant laws and regulations. If a crime is committed, criminal liability shall be investigated in accordance with the law.”

The CAC Algorithm Recommendation Rules, the CAC Deep Synthesis Rules, the AIGC Measures and the Provisions on the Ecological Governance of Network Information Contents all impose different obligations and liabilities on various market players, with the main focus on the liability of service providers. For example, the CAC Deep Synthesis Rules provide different obligations for technical supporters and service providers: service providers must carry out a security assessment if the service to be launched can influence public opinion or mobilise the public, whereas technical supporters do not bear such an obligation.

The CAC Algorithm Recommendation Rules and the AIGC Measures address the issues of algorithm bias and discrimination. Service providers are required to take effective measures to prevent discrimination in terms of nationality, religion, country, region, gender, occupation, health and other grounds in the process of algorithm design, training data selection, model generation and optimisation, and service provision. The recently issued Basic Requirements for Security of Generative AI Services (see 3.3 Jurisdictional Directives) also require that (i) the diversity of the languages and types of the corpora be increased, and that (ii) service providers combine corpora from different sources, and reasonably combine corpora from both domestic and foreign sources.

Anti-discrimination mechanisms are encouraged to further prevent issues of algorithmic bias.

From a technical perspective, algorithms may be biased for a number of reasons. The accuracy of an algorithm may be affected by the data used to train it: data that lacks representativeness or, in essence, reflects existing inequalities may result in biased algorithms. An algorithm may also produce bias due to the cognitive deficits or biases of the R&D personnel. In addition, due to the inability to recognise and filter bias in human activities, algorithms may indiscriminately acquire human ethical preferences during human-computer interaction, increasing the risk of bias in the output results.

Another example is big data-enabled price discrimination, where different consumers are charged significantly different prices for the same goods. According to the Implementing Regulation for the Law of the People's Republic of China on the Protection of Consumer Rights and Interests (effective on 1 July 2024), without the knowledge of consumers, business operators shall not set different prices or charging criteria for the same goods or services under the same transaction conditions; violation of the foregoing provision may result in civil liability, an administrative fine, suspension of operations or revocation of the business licence.

Recognising the societal harm and the impact on consumer interests caused by algorithm bias, regulations emphasise the proper application of algorithms. This focus extends both to industry-specific practices and general data protection measures (see 11.4 Automated Decision-Making).

With the popularisation of various smart devices (eg, smart bracelets, smart speakers) and smart systems (eg, biometric identification systems, smart medical systems), AI devices and systems have become more comprehensive in their collection of personal data. Moreover, biometric information with strong personal attributes, such as users’ faces, fingerprints, voiceprints, irises and genes, is unique and invariable; once it is leaked or misused, such incidents may have a serious impact on citizens’ rights and interests. In February 2019, a facial recognition company suffered a data leak in which over 2.5 million people’s data and 6.8 million records were exposed, including ID card information, facial recognition images and GPS location records.

The potential risks include:

  • Inconsistent or inaccurate personal attributes: Automated decisions based on incorrect data can infringe on personal rights and interests, particularly if the decision-making mechanism is not disclosed to the user, prejudicing their right to information, choice, and refusal.
  • Impact on minors and human resource management: Automated decision-making in these areas can have significant and sensitive consequences.
  • Data breaches and misuse: These incidents erode trust and can lead to the wrongful use of data.
  • Challenges in user rights: Users may find it difficult to exercise their right to intervene in the decision-making process or to request the deletion of their personal information.

Under the PIPL, facial recognition and biometric information are recognised as sensitive personal information. Separate consent is needed when processing sensitive personal information, unless another legal basis for processing exists; for example, the railway department may collect people’s facial images at the train station for the sake of public security. Further, the processing of such information shall only be for specific purposes and with sufficient necessity. In addition, the PIPL requires that the data handler also inform the personal information subject of the necessity of processing their sensitive personal information and the impact on their rights and interests.

Recent legislative development and highlights:

  • Administrative Provisions on the Application Security of Facial Recognition Technology (for Trial Implementation) (Exposure Draft): Users of facial recognition technology who use the technology in public places or store the facial information of more than 10,000 people shall file for record with the local cyberspace authority.
  • Practice Guide on Cybersecurity Standards – Security Requirements for Personal Information Protection in Face Recognition Payment Scenarios (Exposure Draft): This focuses more on the scenarios where facial data is being collected and the relevant security measures.

This gives rise to concerns in intelligent shopping malls and the smart retail industry, where the facial characteristics and body movements of consumers are processed for purposes beyond security, such as recognising VIP members and identifying consumers’ preferences so as to provide personalised recommendations. Under the PIPL, companies must consider the necessity of such commercialised processing and find feasible ways to obtain effective “separate consent”.

In the automobile industry, images or videos containing pedestrians are usually collected by cameras installed on cars, and videos and images containing facial information are considered important data. Processors that have difficulty obtaining consent for their collection of personal information from outside the vehicle for the purpose of ensuring driving safety shall anonymise such information, including by deleting the images or videos that can identify people or conducting partial contour processing of facial information. Companies failing to perform their obligations under the PIPL and related regulations also face administrative penalties and even criminal liability (ie, for infringing citizens’ personal information).

Firstly, automated decision-making using personal information shall be subject to transparency requirements; processors are required to ensure the fairness and impartiality of the decision, and shall not give unreasonable differential treatment to individuals in terms of trading price or other trading conditions.

Where information feeds or commercial marketing to individuals is carried out by means of automated decision-making, options not specific to individuals’ characteristics shall be provided simultaneously, or convenient opt-out channels shall be available. Individuals whose interests are materially impacted by the automated decision are entitled to request the relevant service provider/processor to provide explanations and to refuse to be subjected to decisions solely by automated means.

Risk of undisclosed automated decision-making technology:

  • Misleading individuals: Individuals may not expect their personal information to be used in this way or understand how the process works, preventing them from taking remedial measures when significant adverse effects arise.
  • Impact on ethical and social values: The use of undisclosed automated decision-making technology can affect the ethical and social values and norms of the stakeholders involved.

In China, chatbots are usually deployed by e-commerce platforms or online sellers to provide consulting or aftersale services for consumers. PIPL and the Law on Protection of Consumer Rights and Interests typically govern the use of chatbots. Furthermore, chatbots providing (personalised) content recommendations may also need to comply with regulations on algorithm recommendations, etc.

There are also transparency requirements for automated decision-making (see 11.4 Automated Decision-Making). Users of internet information services involving AI technology are also entitled to be informed of the provision of algorithm-recommended services in a conspicuous manner. Relevant service providers are required to appropriately publish the basic principles, purposes and main mechanics of algorithm-recommended services (see 11.1 Algorithmic Bias). Without transparency, users may not understand how algorithms make decisions, which affects their ability to make informed choices. Technologies using historical purchase information to push recommended services and products can lead to the excessive use of personal information. The recommended products might not be the best for consumers, and their purchasing patterns and habits may be unduly influenced by undisclosed algorithmic decisions.

The concept of “big data-enabled price discrimination” refers to the collection of customer information for algorithmic analysis to pinpoint consumer characteristics and thus implement personalised pricing. Although there is no clear legal definition of this activity, relevant regulations include the Civil Code, the PIPL, the Law on Protection of Consumer Rights and Interests, the Electronic Commerce Law, the Anti-Monopoly Law and the Price Law.

With the development of AI, market participants may use algorithms designed to limit competition (AI or algorithm collusion), such as price fixing, synchronised advertising and the sharing of insider information.

The PIPL provides that the use of personal information in automated decision-making must not result in unreasonable differential treatment in terms of transaction prices. Service providers shall not use algorithms to impose unreasonable differential treatment or other illegal practices on prices and other transaction conditions based on customers’ preferences, transaction habits and other characteristics.

To avoid disputes and infringements, written agreements should cover crucial matters such as the ownership of intellectual property rights for the input content, ensuring that data sources do not infringe upon the rights and interests of others, and clarifying whether it is permitted to use the relevant content for data training. These agreements should also address liabilities related to the authenticity, legality, and completeness of the output content, as well as the division of responsibilities among the involved parties.

Service providers may also consider giving advance notices and disclaimers to customers, indicating that the output contents are not professional opinions and are based on public information. They should advise customers to seek professional opinions when necessary to avoid potential liabilities.

Common adoption of AI technology in HR practice includes automated assessments, digital interviews and data analytics to screen CVs and candidates.

This technology offers benefits such as the ability to quickly organise candidate CVs for employers, significantly reducing the time required to review applications. However, it also carries potential harm, such as biased hiring practices.

Compliance requirements include:

  • personal information protection;
  • transparency; and
  • fairness and rationality of the decision-making process.

Benefits:

  • promote efficiency;
  • reduce mistakes;
  • provide personalised services; and
  • increase the quality of HR management.

Potential harm:

  • incomplete or biased data may harm employees’ benefits or even infringe their rights and interests.

Compliance practice:

  • regular review and correction mechanism for the AI technology used for evaluation and monitoring to mitigate the risk of unfair and unreasonable decision-making;
  • human participation in the entire recruitment process; and
  • privacy, ethics and data security for monitoring employees’ work.

AI-based delivery systems continuously and automatically adjust parameters and road recommendations for delivery drivers, thereby reducing expected delivery times. This can compel delivery drivers to take unsafe measures that could lead to traffic accidents, directly endangering the drivers’ personal rights and the public interest (see 11.1 Algorithmic Bias).

AI is used in various ways in financial services in China. For example, AI is used for credit scoring, fraud detection and customer service. China has implemented a number of regulations related to the use of AI in financial services. The PBOC has issued guidance on the use of AI in financial services, including guidance on how to manage risks associated with AI.

Potential risks:

  • Biases in repurposed data can lead to discriminatory practices, whether intentional or unintentional. For instance, unintentional bias can occur when AI systems used in finance in China are trained on biased data or are not transparent and explainable.
  • The uncontrollable risks inherent in AI systems also pose hidden dangers for transaction models, transaction trend prediction and other business functions.

Mitigation measures include:

  • developing policies and procedures for managing AI risks;
  • conducting regular audits of their AI systems;
  • ensuring AI systems are transparent and explainable;
  • establishing internal evaluation and algorithm reporting mechanisms by reference to financial algorithm evaluation, algorithm record-filing and other compliance requirements; and
  • improving the internal control mechanisms at the algorithm level based on the dimensional standards of internal evaluation.

The use of AI in healthcare requires the assimilation and evaluation of large amounts of complex healthcare data. Machine learning can be used to predict and make preventive recommendations to assist sports rehabilitation. However, non-objective parameters, insufficient data sources, and inadequate sample sizes may lead to discrimination and bias in the output results. Additionally, defects in data retrieval mechanisms and lax review protocols for sharing medical data among different institutions and parties pose significant challenges. Insufficient data sources can result in different institutions generating varying or incorrect conclusions for treating the same disease or symptoms.

Using medical data requires the processing of large amounts of sensitive personal information, and the requirements under the PIPL for the processing and sharing of sensitive personal information may not be fully implemented in practice. For example, the right to deletion of personal information is difficult to realise once such personal information has been used for data training or machine learning. Repurposed medical data could also be used for medical insurance, leading to risks such as data synergy and biased recommendations for insurance policies and coverage.

The strengths of utilising centralised electronic health record systems include improved integration of healthcare resources and enhanced efficiency. The risks include increased vulnerability to cyberattacks and data breaches.

The intelligent cockpit system:

  • utilises a natural language processing model to enable the vehicle’s speech system to understand and respond to voice commands with natural language and even gestures;
  • ensures accurate processing and comprehension of conversations, considering the context of the language used; and
  • can personalise the user experience by keeping track of the driver's preferences and habits.

The autopilot system primarily focuses on perceiving the driving environment. This involves training models using data collected by cameras and radar sensors.

Under the Several Provisions on Automotive Data Security Management (for Trial Implementation), personal information involving more than 100,000 individuals is deemed important data and is subject to stricter security measures. In addition, autopilot technology normally requires large amounts of data for model training, and the processing of such data might involve data reflecting economic activities such as vehicle flow and logistics. Therefore, classifying and grading this data is crucial for companies to ensure compliance.

For AI algorithm governance, the CAC Algorithm Recommendation Rules require the classification and grading of algorithms based on their potential impact on public opinion or social mobilisation. Service providers using such algorithms must file records with relevant authorities. Enterprises are encouraged to tag data assets based on their classification and grading and adopt relevant security measures, such as limiting data access, deciding whether to upload important data or sensitive personal information to the cloud, and anonymising facial images of individuals outside the vehicle when processing such data is unnecessary.

In addition, enterprises developing AI technologies related to scientific research may fall under the scope of “research on the synthesis of new species that has a significant impact on human life and health, value concepts, and the ecological environment”. Such scientific and technological activities are subject to ethical review. If the research involves sensitive fields of science and technology ethics, enterprises engaged in life sciences, medicine, AI, and other related fields must set up a science and technology ethics (review) committee.

Currently, there are no major regulations governing AI in manufacturing.

Common applications of AI in manufacturing include:

  • smart production, which utilises the automation chain for order management, vendor/supplier scheduling, monitoring product defects and returns, and production prediction, etc; and
  • the use of AI smart cameras for detecting chemical or gas leaks and activating emergency plans to ensure both product and personnel safety.

Common risks include:

  • data security and integration;
  • data use and sharing with different parties; and
  • balancing the need for effective monitoring and supervision of factory operations without intruding on the privacy of employees.

AI is used by consulting firms and judicial authorities primarily for statistical purposes.

Compliance requirements include:

  • ensuring the technology is reliable, accurate, and complies with professional standards;
  • protecting confidential client information; and
  • obtaining explicit and separate consent when necessary.

The Shenzhen Nanshan District People’s Court determined in a copyright infringement case in 2020 that articles automatically generated by an AI software assistant are copyrightable and constitute works of the legal entity that owns the software.

In another case, the Beijing Internet Court addressed a dispute over infringement of the right of authorship and the right of dissemination through information networks in respect of AI-generated works. The court clarified the legal attributes of pictures generated by AI and identified the AI user as the “author”. The court found that, since the plaintiff made intellectual investments in the design process and in the final selection of the AI-created picture, the picture had the elements of an “intellectual achievement”, and the AI user therefore gained authorship of the picture.

When AI-enabled technology or algorithms are expressed in the form of computer software, the software code, whether as a whole or as a particular module, can be protected in China under the Regulation on Computer Software Protection. Where AI-enabled technology or algorithms are expressed through a technical scheme, they can be protected as a process patent. The Patent Examination Guidelines (2020) specifically added provisions for the examination of invention applications that include algorithmic features.

If the development and use of an algorithm are kept highly confidential, the algorithm might be protected as a trade secret or technical know-how. According to the Announcement of the Supreme People’s Court of the People’s Republic of China, courts may classify information on structure, raw materials, components, formulas, etc, related to technology as technical information under the Anti-Unfair Competition Law. Protecting AI technologies as technical secrets therefore has a basis in legislation.

In a case decided by the Beijing Internet Court, the key issue was whether the text in a big data analysis report generated by AI constituted a work. The court ultimately held that, even though the report embodied original creation to some extent, this was not sufficient for the report to qualify as a work under the Copyright Law, which requires that a written work be created by a natural person. Furthermore, the report did not convey the original expression of the feelings and thoughts of either the AI developer or the user. Because neither the AI developer nor the user could be regarded as the author of the report, the Beijing Internet Court rejected the argument that the report constituted a written work.

The Shenzhen Nanshan District People’s Court, by contrast, held that the software does not run automatically without cause or possess self-awareness; rather, the way the software runs reflects the developer’s choices and is determined by the nature of the AI technology. The court found that the specific expression of the article in question and the process of its creation both stemmed from the choices of the creator and sufficiently qualified the article as a written work. It therefore ruled that the article was a written work protected by the Copyright Law (see 15.1 Applicability of Patent and Copyright Law).

First, it remains unclear whether content created through the use of AI tools, such as those offered by OpenAI, constitutes a “work” under the Copyright Law. Second, it is still unsettled to whom the rights in such a work belong. AI technology itself has not yet been regarded as a legal entity. In judicial decisions, some courts tend to regard the person behind the AI technology as the owner of the copyright in AI-generated content. However, in the case discussed in 15.3 AI-Generated Works of Art and Works of Authorship, the Beijing Internet Court did not rule out the possibility that the user of the AI technology could be the author of the AI-generated content. Decisions appear to turn on the level of intellectual contribution made by the individual.

Automated decision-making directly and frequently affects shareholders’ vested interests and the operation of the business as a whole. It is important to determine whether automated decisions are to be regarded as decisions made by the board of directors or by the shareholders’ meeting.

Risk mitigation measures include:

  • prioritising the traceability of automated decision-making outcomes;
  • conducting a comprehensive risk assessment to identify potential risks associated with AI implementation;
  • limiting the applicable scope of the system in the event of a material adverse impact;
  • setting up a manual review mechanism to check and ensure the accountability of final decisions; and
  • setting up an AI ethics committee to supervise the internal use of AI.

In general, the PIPL, CSL, DSL and the AIGC Measures set out the baseline compliance requirements for AI service providers and users. The main concerns can be divided into the protection of personal information, data processing and training, algorithm compliance, and the cross-border provision of data.

Enterprises are advised to follow the specific requirements under each applicable law and regulation to ensure compliant business operations. For those engaged in highly regulated industries such as finance, healthcare, and the automotive sector, industry-specific regulations require special attention as well.

Recent regulations for AI governance in China are rapidly evolving. Enterprises are advised to closely monitor legislative trends and update their business practices accordingly to maintain compliance.

King & Wood Mallesons

18th Floor
East Tower
World Financial Center 1
Dongsanhuan Zhonglu
Chaoyang District
Beijing 100020 PRC

+86 10 5878 5588

kwm@cn.kwm.com www.kwm.com

Trends and Developments


Authors



King & Wood Mallesons (KWM) is an international law firm headquartered in Asia with a global network of 29 international offices. KWM’s cybersecurity team is one of the first legal service teams to provide professional services concerning cybersecurity, data compliance, and algorithm governance in China; it consists of more than ten lawyers with solid interdisciplinary backgrounds, located in Beijing and Shanghai, while further specialisms are found within KWM’s global network. The team has expertise in assisting clients in responding to cybersecurity inspections and network emergencies, the establishment of network information compliance systems, self-assessment, algorithm registration and other related matters. The team is a member of the Chinese Association for Artificial Intelligence. The team has published multiple papers in recent few years, including “Algorithm Governance – Internet Information Service Recommendation Algorithm Management, China Law Insights”, published in China Law Insights in 2022.

Trends in AI Governance: China’s Approach

An “inclusive and prudent” approach

Following the unveiling of the Interim Administrative Measures for Generative Artificial Intelligence Services (“Generative AI Measures”) in July 2023, China has been carving its path in shaping global AI governance through a set of regulations collectively known as the “Trio”. These regulations also include two earlier measures: (i) the Internet Information Service Algorithm Recommendation Administrative Measures (“Algorithm Recommendation Measures”), addressing recommendation algorithms; and (ii) the Internet Information Service Deep Synthesis Administrative Measures (“Deep Synthesis Measures”), focusing on deep synthesis algorithms.

As early as April 2023, the draft version of the Generative AI Measures prompted a debate over the balance between clear rules and flexibility. The debate persists, as the finalised regulation remains ambiguous in various respects, such as in outlining the responsibilities of different players in AI services. On a literal reading of the regulation, the focus is primarily on service providers as the key accountable entities. However, given the complexity of the AI supply chain, determining and distinguishing the responsibilities of actors such as technology developers and downstream app providers remains an ongoing challenge in practice. This uncertainty leaves little room for a definitive answer, especially as the discourse resonates within the broader context of international AI governance, notably the recent EU AI Act.

In this sense, and given the potential trade-off between overarching rules and flexibility that could hinder innovation in a rapidly evolving AI landscape, China has opted for an alternative approach by adopting regulatory approvals, namely the Algorithm Filing (算法备案) and the Generative AI Services Filing (生成式人工智能服务备案, also known as 大模型备案).

Regulatory approvals

As highlighted in the statement from the Cyberspace Administration of China (CAC), China is poised to adopt an “inclusive and prudent” approach towards generative AI services. These two filings signal a prudent ex-ante governance strategy, aiming to gather information and impose requirements on generative AI service providers, without necessarily subjecting them to a substantial review by the CAC.

The Generative AI Services Filing is often characterised as a gatekeeper of sorts, requiring models to undergo security tests initiated by local CAC authorities at the municipal level. However, such security tests, which focus primarily on content moderation, align with established requirements outlined in the Provisions on Network Information Content Moderation Governance and are not inherently concerning.

The “inclusive and prudent” approach is further evidenced by the statistics on the two filings. According to public data from the State CAC, as of March 2024, 117 generative AI services filings have been approved. As for algorithm filings, the numbers speak for themselves: a total of five batches comprising 940 algorithms have been approved.

In the wake of the increasing availability of AI services and products to the public, there has been a discernible uptick in momentum towards AI governance. In this dynamic environment, the practical implications of the new regulations are currently undergoing intense scrutiny across the market. Over the past year since the implementation of the Generative AI Measures, AI companies of all sizes, from start-ups to larger enterprises, have felt the pressure.

This pressure arises from adapting to these vague and evolving regulations while maintaining innovation and competitiveness. While the Generative AI Measures brought some clarity, several key issues remain unresolved. The following sections offer insights into these concerns and what they could mean for the industry.

Challenges in the AI Market and the Call for Regulation

Appropriate legal basis for AI training

Gathering meticulously curated datasets serves as the cornerstone of supercharging generative AI models. For tech giants, these datasets often consist largely of their own business operational data. Start-ups and entities in traditional industrial sectors, however, may have fewer such resources and thus rely heavily on extensive, distributed datasets from the internet, such as open-source datasets or data gathered by web crawlers from webpages, social media or even personal blogs.

Regardless of the data’s origin, it is universally understood that lawfully using the personal information contained within is one of the trickiest issues throughout the AI development and operation lifecycle, from pre-training to market deployment.

According to the Personal Information Protection Law (PIPL), processing personal information requires a valid legal basis. While Article 13 of the PIPL provides seven legal bases in total, only two appear to be relevant and applicable here: (i) prior consent from the individuals concerned; and (ii) reasonable processing of personal information that has been disclosed publicly by the individuals themselves or otherwise legally disclosed (“Public Information”).

Practically speaking, since use cases vary dramatically, there may not be a one-size-fits-all legal basis for AI training. Moreover, upon weighing all the elements required to rely on consent or Public Information, neither appears to be an entirely appropriate and suitable ground for AI training.

Consent is commonly believed to be the go-to solution for almost all scenarios, but, as practice has shown, it frequently falls short, undermining the genuine voluntariness and freedom of the individuals involved, especially those in vulnerable situations. At the same time, developers may struggle to identify and obtain consent from a large number of individuals, leading to considerable costs.

Moreover, the inherent opacity of machine learning complicates the ability to fully grasp how AI processes personal information and makes it challenging to keep individuals “fully informed” as required by the PIPL. Even when consent is obtained, doubts persist about whether it is given voluntarily and explicitly, especially when granted to developers with dominant market positions.

When Public Information is relied on as a legal basis, the interpretation of “reasonable processing” is disputed both in theoretical discourse and in judicial practice. This ambiguity poses challenges to its effective application and may inadvertently allow certain AI service providers to obfuscate their handling of personal information.

Therefore, current market practice and the legal framework remain somewhat ambiguous, and the appropriate legal basis for AI training is a pressing legal issue that needs to be addressed.

Generative AI and copyright infringement

In November 2023, the Beijing Internet Court delivered a landmark first-instance decision in a civil case, marking what is widely regarded as China’s first legal case involving copyright infringement of AI-generated images. The main issue at hand was whether these images, generated by AI with human inputs, could be recognised as a “work” eligible for copyright protection.

The plaintiff was an individual who used a generative AI model, Stable Diffusion, to generate the picture in question and posted it on a social media platform, RED. Subsequently, the defendant, a blogger on another social media platform, used the plaintiff’s image without permission in an article and removed the watermark associated with the plaintiff’s user ID on RED. As a result, the plaintiff sued for infringement of intellectual property rights.

In its ruling, the Beijing Internet Court recognised that AI-generated pictures displayed elements of originality, indicating the original intellectual input of a human, and granted copyright protection.

Specifically, the court’s decision presents rulings that offer insight into future judgments on similar matters:

  • The determination of “intellectual achievements”: The court held that, from the conception to the selection of the picture in question, the plaintiff had made certain intellectual inputs, such as designing the presentation of the characters, choosing the prompt words, arranging the order of the prompts, setting the parameters, and selecting the picture that met expectations. It can be inferred that the court affirmed human involvement in the process of using AI to generate the picture and held that this resulted in a picture containing an element of intellectual achievement.
  • The determination of “originality”: The judgment pointed out that the plaintiff had designed the characters, the way they were presented and other elements of the picture through prompts, and had set the parameters for the layout and composition of the picture, reflecting the plaintiff’s choices and arrangement. Furthermore, after obtaining the first picture, the plaintiff continued to add prompt words, modify the parameters and make adjustments and corrections, ultimately obtaining the picture in question. This process of adjustment and correction also reflects the plaintiff’s aesthetic choices and personal judgement. The court concluded that the picture was not a “mechanical intellectual achievement”, but rather a reflection of the plaintiff’s personalised expression.
  • The determination of the attributes of “work” and “work of art” for AI-generated images: The judgment emphasised human creative activity as the most critical factor. The court reasoned that it was the human, not the AI, who made the intellectual investment in the entire creative process. This meets the requirements for a work under copyright law and is in line with the core purpose of the copyright system: to encourage creation. The judgment takes the view that AI-generated pictures, as long as they reflect the original intellectual input of a human being, should be recognised as works and protected by copyright law.

Based on these principles, the plaintiff, who contributed intellectual input and personalised expression to the image, was recognised as its author and granted copyright protection.

That said, questions remain as to how much human creativity is needed for AI-generated images to be protected by copyright. Additionally, this ruling has prompted wider discussions about the legal framework surrounding AI-generated content, with the industry awaiting further legal precedents and legislative developments for guidance.

Content moderation

“Hallucination” is considered one of the potential drawbacks of generative AI: a generative AI model may provide information that sounds plausible but is laden with inaccuracies or bias, or that in some cases has no relevance to the given context whatsoever. Furthermore, because AI learns from human data, there is always the risk of “garbage in, garbage out”, meaning that the quality of the model’s output is directly dependent on the quality and completeness of the data it was trained on.

In China, preventing hallucinations and flawed output is regarded as a legal obligation. According to the Provisions on Network Information Content Moderation Governance published in 2019, generative AI service providers should prevent and combat the dissemination of illegal and harmful content, including content containing rumours, obscenities, improper comments on disasters, or other content that adversely affects the network ecology.

Experts argue that hallucinations are here to stay, and it is uncertain whether fixing them will ultimately be beneficial or detrimental. Nonetheless, reducing their occurrence is possible by adopting proper measures. In this regard, generative AI service providers serve not only as key enforcers of regulatory requirements but also as frontline solution providers for addressing these risks.

In March 2024, the National Cybersecurity Standardisation Technical Committee of the People’s Republic of China released the Basic Security Requirements for Generative Artificial Intelligence Service (“Security Requirements”).

The Security Requirements set out specific content moderation requirements applicable to generative AI service providers throughout the development of their AI products. These guidelines cover aspects such as monitoring and properly labelling training datasets, implementing model security measures to defend against evasion attacks, and establishing mechanisms to filter out or flag potentially illegal and harmful input and output.

While the Security Requirements are not legally binding in themselves, generative AI service providers will be heavily incentivised to follow them. This is because the Security Requirements provide detailed elaboration on the Generative AI Measures and, most importantly, serve as the practical guideline for completing the application for the Generative AI Services Filing.

AI governing AI

“AI governing AI” refers to a paradigm shift underway in AI governance, in which AI itself takes on the role of regulator, moving beyond the conventional perspectives of providers and authorities. This approach aims to harness AI’s own capabilities to identify weaknesses in AI systems and enhance their defensive and security capabilities accordingly.

Leveraging adversarial attack and defence techniques, together with the notion of integrating attack and defence, is nothing new and has been part of AI discourse for around a decade. The rise of generative AI has further propelled advancements in AI security technologies.

For example, generative AI models have now evolved into intelligent security advisers, offering defence strategies through straightforward natural language descriptions without the need for complex programming. They can even simulate adversarial scenarios to assess robustness, marking a departure from human-led testing processes. This shift is a direct result of the era of large models, in which enhanced computing power has revolutionised security technology.

In 2022, the China Academy of Information and Communications Technology, Tsinghua University, and Ant Group jointly released the AI security detection platform “YiJian”, which is considered to be the first of its kind in the industry. Its successor, “YiJian2.0”, unveiled in 2023, offers advanced capabilities to detect risks associated with generative AI models across various domains, including data security, content moderation, and ethical considerations. It conducts adversarial detection across multiple dimensions, such as privacy, ideology, criminal activities, bias, and discrimination, and generates comprehensive reports to facilitate targeted evolution and improvement. This approach remarkably streamlines testing processes by mitigating the limitations of manual testing. Moreover, the testing process itself stimulates the refinement and advancement of both the testing AI and the AI being tested.

In addition to the examples above, current efforts to broaden the application of AI for self-regulation have primarily been exploratory, and this experimentation brings its own challenges. Besides ensuring that AI is technically sound and safe, there is the lingering question of how to determine, from a legal perspective, whether AI behaves ethically. How can it be ensured that the AI used for testing remains unbiased and accurate? And, most fundamentally, will AI regulating other AI lead to better or worse outcomes? These are all important questions that require careful consideration when regulating generative AI.

Anticipating AI risks and the impetus for regulation

In China, it is apparent that lawmakers are actively promoting and steering China’s own AI development in more positive directions, with a strong emphasis on security. However, given the evolving and dynamic landscape, with AI regulations still in flux and the comprehensive Artificial Intelligence Law yet to be enacted, companies find themselves treading uncertain waters and navigating grey areas amidst unclear enforcement rules.

In such a climate, where regulatory arbitrage is a tempting prospect, savvy companies may seek ways to operate just outside the bounds of regulation. The rapid growth of generative AI technologies across sectors has revealed a spectrum of risks, spanning from data breaches to ethical dilemmas. Without robust regulation, the likelihood of such security incidents occurring becomes more pronounced. This recognition of emerging risks underscores the imperative for proactive regulatory action. Consequently, there may be growing pressure on regulators to bolster their supervision of AI.

King & Wood Mallesons

18th Floor, East Tower, World Financial Center 1 Dongsanhuan Zhonglu, Chaoyang District, Beijing 100020 PRC

+86 10 5878 5588

kwm@cn.kwm.com www.kwm.com