Artificial Intelligence 2023

Last Updated May 30, 2023

China

Law and Practice

Authors



King & Wood Mallesons (KWM) is an international law firm headquartered in Asia with a global network of 27 international offices. KWM’s cybersecurity team is one of the first legal service teams to provide professional services concerning cybersecurity and data compliance in China; it consists of more than ten lawyers with solid interdisciplinary backgrounds, mainly located in Beijing, while further specialisms are found within KWM’s global network. The team has expertise in assisting clients with responding to cybersecurity inspections and network emergencies, the establishment of network information compliance systems, self-assessment, algorithm registration and other related matters. The team is a member of the Chinese Association for Artificial Intelligence and is also involved in the development of the AI industry. The team has published multiple papers in recent years, including “Algorithm Governance – Internet Information Service Recommendation Algorithm Management”, published in China Law Insights in 2022.

China has developed a large number of laws and regulations that systematically address AI-related issues, as well as rules regulating particular AI-related subject matters.

At the level of national laws, AI – as a technology that relies heavily on the use of the internet and data – is subject to the three basic laws in the information technology field, namely:

  • the Cybersecurity Law of the People’s Republic of China (CSL);
  • the Data Security Law of the People’s Republic of China (DSL); and
  • the Personal Information Protection Law of the People’s Republic of China (PIPL).

These three laws were enacted to safeguard cybersecurity and to regulate data (including personal information) processing activities.

Under these basic laws, the State Council, the Cyberspace Administration of China (CAC) and other authorities responsible for cybersecurity and data protection within the scope of their respective duties are tasked with developing and enforcing specific regulations. For example, the CAC has issued a number of rules concerning internet information services, notably covering the use of AI technologies in fields such as algorithm-generated recommendations and deep synthesis. Recently, the CAC also issued a draft policy – the “Measures on the Management of Generative Artificial Intelligence Services” (the “Draft AI Policy”) – soliciting feedback from the general public on the regulation and management of generative AI services. Moreover, the authority is seeking comments on the draft Method for Science and Technology Ethics Review, which is designed to set out the basic rules and principles for conducting science and technology activities.

At the regional level, local governments have enacted relevant cybersecurity and data regulations; for example, Shanghai and Beijing have enacted regional data regulations. Furthermore, Shanghai and Shenzhen have enacted regulations directly related to AI in order to promote its development.

Apart from general cybersecurity and data protection laws, laws and regulations of other legal sectors also apply to AI where its application involves issues regulated by those sectors, including tort law, consumer protection law, antitrust law and industry-specific laws.

AI and machine learning have become a key force in promoting the development of the healthcare industry. According to a report issued by the China Academy of Information and Communications Technology (CAICT), by the close of 2021 there were approximately 740 manufacturers of AI medical devices in China. Comprising predominantly small and medium-sized enterprises, these businesses span various sectors, including diagnosis, treatment, monitoring, rehabilitation and traditional Chinese medicine, with diagnosis and treatment accounting for approximately 66% of the industry’s focus. Moreover, the pace of obtaining registration certificates for AI medical devices has accelerated. As of October 2022, 62 AI medical devices had been approved, covering cardiovascular, brain, eye, lung, orthopaedic, oncology and other disease areas, and are expected to be used to assist with triage and assessment, quantitative calculation, lesion detection, target area outlining, etc.

AI-generated content (AIGC) is making significant strides in various industries. According to a report on the AIGC industry, China’s AIGC market is projected to reach 7.93 billion by 2023 and could reach 2,767.4 billion by 2028. AIGC technology is already used in many fields, such as writing, drawing, conversational robots, e-commerce and media.

Currently, legislation regulating particular AI-related subject matters in China includes the following.

Data Protection

The CSL and DSL directly address the national strategy for enhancing cybersecurity and data protection. As for personal information protection, there are three overarching statutes setting forth general principles:

  • the PIPL, enacted on 1 November 2021;
  • the Civil Code, introduced in May 2020; and
  • the CSL, articulating requirements for personal information protection.

The PIPL expands the legal bases for processing personal information, as compared with the Civil Code and the CSL, in order to adapt to the complexities of economic and social activities. Since 2019, when multiple departments in China jointly issued the Announcement on Special Treatment of Illegal Collection and Use of Personal Information by Apps, there has been a growing trend towards greater enforcement of personal information protection in apps. Regulation particularly focuses on mini-programs, third-party SDK (software development kit) data sharing and algorithmic recommendations.

The CAC issued the Measures on Security Assessment of Cross-border Data Transfer, which came into effect in September 2022 and provides a more comprehensive framework for cross-border data security review. The Measures for the Standard Contract for Outbound Transfer of Personal Information, another important piece of legislation for cross-border data transfer activities, will come into effect in June 2023 and aims to specify how to transfer data internationally using standard contracts.

AI Industry Development

As of April 2023, Shenzhen and Shanghai had respectively issued the Regulation of Shenzhen Special Economic Zone on Promoting the Artificial Intelligence Industry and the Regulation of Shanghai on Promoting the Development of the Artificial Intelligence Industry. Although these two regulations focus more on the high-quality development of the AI industry, and aim to strengthen the function of new-generation AI as a source of scientific and technological innovation, they also set forth requirements for regulating AI activities. For example, the Regulation of Shanghai on Promoting the Development of the Artificial Intelligence Industry specifies a regulatory approach based on classification, subjecting high-risk AI products and services to list-based management.

Antitrust

The Antitrust Guidelines for the Platform Economy state that concerted practice may result from coordination through data, algorithms, platform rules, or other means, without an agreement necessarily being entered into. AI operators must comply with the Anti-Monopoly Law, which prohibits monopolistic agreements such as price fixing, production or sales restrictions, market division, boycotting, or other restrictive practices. Moreover, dominant market players are also prohibited from conducting discriminatory activities against their counterparties by means of algorithms.

Consumer Protection

Business operators providing products/services to consumers by means of algorithms shall be subject to the Law on Protection of Consumer Rights and Interests, which acts as the basic consumer protection legislation. As for e-commerce businesses, they should further comply with the E-commerce Law, in which specific rules deal with personalised recommendations.

Information Content Management

In December 2021, the CAC issued the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services (the “CAC Algorithm Recommendation Rules”) to provide special management regulations on algorithmic recommendation technology. The CAC Algorithm Recommendation Rules mark the CAC’s first attempt to regulate the use of algorithms: internet information service providers (ISPs) are required to use algorithms in a way that respects social morality and ethics, and are prohibited from setting up any algorithm model that could induce user addiction or excessive consumption.

In November 2022, the CAC issued the Provisions on the Administration of Deep Synthesis of Internet-based Information Services (the “CAC Deep Synthesis Rules”) to provide specific rules for providers of deep synthesis technologies in the context of information content management. Specifically, deep synthesis service providers are required to take technical measures to add labels alerting users that content was generated via deep synthesis technologies, and such labels must not impair users’ use of the information generated or edited using their services.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

There are already several judicial decisions relating to AI, ranging from deepfakes to virtual humans, which may shed some light on how AI-related enterprises use AI technology.

Deepfakes are one of the AI technologies that have been in the public eye recently. In response, the CAC enacted the CAC Algorithm Recommendation Rules and the CAC Deep Synthesis Rules. A case in 2022 held that enterprises shall not use information technology such as deepfakes to infringe the portrait rights of others. There was even a criminal case in 2023, in which the defendant was held criminally liable for using deepfake technology to generate and disseminate illegal videos for profit, receiving a sentence of over one year in prison.

There was another case related to virtual humans generated by AI (the virtual human case), which specified that enterprises using AI technology to provide services cannot infringe the legitimate rights and interests of others, such as personality rights. In this case, the respondent provided services that enabled users to engage in virtual emotional interaction, such as “intimate” conversations with virtual images of celebrities formed using AI technology, and the respondent was held liable for the infringement.

A court is an institution that adjudicates disputes based on the law; technical definitions are therefore usually derived from legislation and from the technical experts engaged by the court. For example, in the virtual human case, the court held that the software in question was an AI system, and that this AI system accomplished its learning and recommendation capabilities through the operation of software (a computer program). Such software can essentially be regarded as a manifestation of the underlying algorithm.

Moreover, in China most cases involving AI technology do not specifically clarify AI-related technical definitions. On the one hand, China currently lacks basic legislation for AI regulation; on the other hand, the CAC is still focusing on regulating AI services and applications. Furthermore, most AI technology tends to be regarded as a tool, and courts typically do not delve into the intricacies of the AI technology itself, but rather focus on the damage caused by it.

In China, the CAC is responsible for the overall planning and co-ordination of cybersecurity, personal information protection and network data security, and has issued a number of regulations concerning the application of AI technology in terms of internet information services.

There are also many other departments – such as departments in the industrial sector, telecommunications, transportation, finance, natural resources, health, education, science and technology – that undertake to ensure cybersecurity and data protection (including relating to AI) in their respective industries and fields. Public security authorities and national security authorities also play an important role in network and data security within their respective remits.

The practice guidance issued by the National Information Security Standardisation Technical Committee (TC260) – the Practice Guide for Network Security Standards – Guidelines for Prevention of Ethical Security Risks in Artificial Intelligence – defines AI as the simulation, extension or expansion of human intelligence using a computer or computer-controlled equipment, through methods of perceiving the environment, acquiring knowledge and deducing. The two local AI regulations provide similar technical definitions of AI.

The Draft AI Policy, published in April 2023, provides a definition of generative AI, which refers to technologies that generate text, images, audio, video, code or other such content based on algorithms, models or rules.

Another draft standard, the Information Security Technology – Security Specification and Assessment Methods for Machine Learning Algorithms, also released by TC260, defines machine learning algorithms as algorithms that solve problems by using a limited and ordered set of rules to generate classifications, reason and predict based on input data.
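To make that definition concrete, the following is a minimal, purely illustrative Python sketch – not drawn from the draft standard – of a classification algorithm that predicts a label from input data, here using a simple k-nearest-neighbours rule:

```python
# Illustrative only: a machine learning classification algorithm in the
# sense defined above - a finite, ordered set of rules that classifies
# and predicts based on input data.
from collections import Counter

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train_x, train_y, query, k=3):
    """Predict a label for `query` from the k nearest training examples."""
    neighbours = sorted(
        zip(train_x, train_y), key=lambda pair: euclidean(pair[0], query)
    )[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Toy training data: feature vectors and (hypothetical) class labels.
train_x = [(1.0, 1.2), (0.9, 1.0), (4.8, 5.1), (5.2, 4.9)]
train_y = ["low-risk", "low-risk", "high-risk", "high-risk"]

print(knn_predict(train_x, train_y, (5.0, 5.0)))  # -> "high-risk"
```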

It is normal practice for the CAC and other departments to co-operate in rule-making and enforcing the laws. Most of the data protection-related rules are jointly issued by multiple regulatory agencies including the CAC, the Ministry of Industry and Information Technology (MIIT), public security authorities and other related departments.

These laws and regulations have played a key role in ensuring network and data security and the protection of personal information. As for those mainly focused on AI regulation, the corresponding authorities concentrate on risk control, especially the mitigation of pre-existing risks; for instance, these laws and regulations protect the public interests and national security involved in the network and data fields from being endangered.

On the other hand, the laws and regulations are aimed at facilitating the development and evolution of AI in China. Hence, the CAC has successively issued a series of rules and drafts on the application of AI technology aiming to promote the positive application of algorithms. In particular, these rules regulate providers of internet information services that apply algorithm recommendation technology and deep synthesis technology. In addition, as mentioned in 5.2 Technology Definitions, the CAC recently issued the Draft AI Policy, which aims to stimulate the healthy development and standardised application of generative AI, while preventing unfair competition and discrimination and ensuring the safety of AI-generated content.

A representative enforcement action related to algorithms occurred in 2021, when the State Administration for Market Regulation imposed penalties on Alibaba on the grounds that Alibaba’s use of data, algorithms and other technologies had restricted competition in the market for e-tailing platform services within China. The fine totalled CNY18.228 billion, which included a fine for misuse of data. Thereafter, in order to ensure the reasonable use of algorithm recommendation technology, the CAC published the CAC Algorithm Recommendation Rules, which state that algorithm recommendation service providers shall not engage in monopolistic conduct or unfair competition by taking advantage of algorithms. AI and algorithm enforcement activities, as well as the relevant legislation in China, are all aimed at safeguarding the legitimate rights and interests of users.

During the past two years, the data protection authorities in China have issued a large number of draft regulations aimed at providing detailed implementation guidance for national legislation to regulate data processing activities and AI-related issues.

As for AI-related rules, the CAC published the Draft AI Policy in April 2023. This draft aims to promote the standardised application of generative AI services.

In addition, the Ministry of Science and Technology issued its draft Method for Science and Technology Ethics Review in April 2023. This draft seeks to establish principles governing science and technology activities.

The Standardisation Administration of China (SAC) is responsible for approving the release of national standards, and TC260 (see 5.2 Technology Definitions) is one of the most important standard-setting bodies on AI technology. So far, TC260 has issued a series of recommended national standards and practice guidelines containing provisions regarding the use of AI-related technology. For example, the national standard “Information Security Technology – Personal Information Security Specification” provides rules on automated decision-making similar to the PIPL: controllers adopting automated decision-making that may influence data subjects’ interests should conduct preliminary and periodic security assessments of personal information, and should allow data subjects to opt out of such automated decision-making.

In May 2022, the Artificial Intelligence – Affective Computing User Interface – Model (GB/T 40691-2021), published by the SAC, came into force. An affective computing user interface is an interface through which a user interacts emotionally with an information system. GB/T 40691-2021 applies to the design, development and application of affective computing user interfaces, and provides standards for processing activities such as affective representation and affective data collection.

In October 2022, the SAC published the Information Technology – Artificial Intelligence – Terminology (GB/T 41867-2022), which comes into effect in May 2023. GB/T 41867-2022 defines common terms used in the field of information technology related to AI. Meanwhile, the Information Technology – Artificial Intelligence – Platform Computing Resource Specification (GB/T 42018-2022) was also published and will come into effect in May 2023. GB/T 42018-2022 applies to the design and testing of machine learning-oriented AI platforms, and provides a standard basis for the construction of AI platforms.

In December 2022, the SAC released the Artificial Intelligence – Technical Framework of Knowledge Graph (GB/T 42131-2022), which will come into effect in July 2023. A knowledge graph is a collection of interlinked entities and their relationships – a way of representing knowledge about the world in a structured form. GB/T 42131-2022 provides the conceptual model and technical framework for knowledge graphs.
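As a purely illustrative sketch – not derived from GB/T 42131-2022 – a knowledge graph can be modelled in code as a set of subject–predicate–object triples linking entities to one another:

```python
# Illustrative only: a knowledge graph as subject-predicate-object triples,
# i.e. interlinked entities and their relationships in structured form.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self._by_subject = defaultdict(set)

    def add(self, subject, predicate, obj):
        """Record one relationship (edge) between two entities."""
        self._by_subject[subject].add((predicate, obj))

    def facts_about(self, subject):
        """Return all (predicate, object) pairs linked to an entity."""
        return sorted(self._by_subject[subject])

kg = KnowledgeGraph()
kg.add("PIPL", "enacted_on", "2021-11-01")
kg.add("PIPL", "regulates", "personal information processing")
kg.add("CAC", "enforces", "PIPL")

print(kg.facts_about("PIPL"))
```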

The draft standard “Information Security Technology – Security Specification and Assessment Methods for Machine Learning Algorithms” specifies the security requirements and verification methods for machine learning algorithms throughout most of their life cycle, including design and development, verification testing, deployment and operation, maintenance and upgrading, and decommissioning.

In addition, there are standard-setting bodies that formulate AI-related standards in specific industries. The People’s Bank of China (PBOC), along with the Financial Standardisation Technical Committee of China (TC180) – the SAC-authorised institution engaged in national standardisation in the financial field – plays a leading role in writing AI-related standards for finance. The recommended industry standard “Personal Financial Information Protection Technical Specification”, issued in the name of the PBOC, sets forth requirements for financial institutions to regularly assess the safety of external automated tools (such as algorithm models and software development kits) adopted in the sharing, transferring and entrusting of personal financial information. The PBOC also issued the “Evaluation Specification of Artificial Intelligence Algorithm in Financial Application” in 2021, providing AI algorithm evaluation methods in terms of security, interpretability, accuracy and performance.

In automated driving, the recommended national standard “Taxonomy of Driving Automation for Vehicles” sets forth six levels of automated driving (from L0 to L5) and the respective technical requirements and roles of the automated systems at each level. TC260 released the “Security Guidelines for Processing Vehicle Collected Data”, which specify the security requirements for automobile manufacturers’ data processing activities – such as the transmission, storage and export of automobile data – and provide data protection implementation specifications for manufacturers carrying out the design, production, sale, use, operation and maintenance of automobiles.

Countries may conclude international treaties that contain international standards for AI regulation or AI technology. AI-related international treaties that China concludes in the future will generally come into force in China’s territory by way of transposition or direct application; in this way, AI-related international standards generally do not conflict with China’s laws. For example, in December 2021, China called on countries to develop and use AI technology in the military domain in a prudent and responsible manner. If international standards are successfully concluded with China’s participation, China and Chinese enterprises shall follow these AI-related international standards.

On the other hand, with regard to AI-related international treaties that China does not conclude, China’s AI-related enterprises still need to consider the relevant AI-related international standards if they intend to provide AI products or services within those jurisdictions.

In recent years, China has been adapting to internet development trends and widely applying digital technologies such as big data, cloud computing and artificial intelligence to the process of government administration in accordance with the law, in order to integrate information technology and the rule of law in government.

For example, in smart city applications, big data analysis carried out with the help of artificial intelligence is used to determine traffic control measures in a given city. Smart city applications can design and promote smart transport strategies in which data analysis provides a clearer picture for traffic policy, covering matters such as potential infractions committed by pedestrians and the range of transportation options accessible to residents.

China has also been vigorously promoting the application of AI technology in criminal justice in recent years; in 2016, the construction of smart courts was incorporated into the “Outline of the National Informatisation Development Strategy” and the “13th Five-Year Plan”, thereby being formally elevated to a national strategy.

Currently, judicial bodies across China have adopted a broad spectrum of AI applications within criminal justice scenarios. These involve instances such as reviewing arrest conditions, automating document generation, case pushing, and providing early warnings of deviations from verdicts. Noteworthy examples include Shanghai’s “206” system (Shanghai’s intelligent auxiliary case handling system for criminal cases) and the Beijing High People’s Court’s “Judge Rui.”

Some local authorities have also launched their own sentencing systems based on the “Little Baogong” AI legal technology platform, which uses AI, judicial big data and other technological means to serve the judiciary in accurate sentencing and other operations.

It is a common issue for AI operators that they may collect a large amount of data to feed their AI system. Since China’s laws and regulations on data processing have a clear concern for national security, AI companies are also advised to be aware of related legislative requirements.

Critical Information Infrastructure (CII)

The Regulation on Protecting the Security of Critical Information Infrastructure defines CII as network facilities and information systems in important industries and fields that may seriously endanger national security, the national economy and people’s livelihoods, and the public interest in the event of being damaged or losing functionality. CII operators (CIIOs) are required to take protective measures to ensure the security of CIIs. Furthermore, the CSL imposes data localisation and security assessment requirements on CIIOs’ cross-border transfers of personal information and important data.

Important Data

The DSL defines important data as data the divulging of which may directly affect national security, public interests and the legitimate interests of citizens or organisations, and certain rules impose various restrictions on its processing. The DSL contemplates security assessment and reporting requirements for the processing of important data in general.

Cybersecurity Review

On 28 December 2021, the CAC, together with certain other national departments, promulgated the revised Cybersecurity Review Measures, aimed at ensuring the security of the CII supply chain, cybersecurity and data security and safeguarding national security. The regulation provides that CIIOs that procure internet products and services, and internet platform operators engaging in data processing activities, shall be subject to the cybersecurity review if their activities affect or may affect national security, and that internet platform operators holding more than one million users’ personal information shall apply to the Cybersecurity Review Office for a cybersecurity review before listing abroad.

Recently, generative AI services such as ChatGPT have raised numerous legal issues, including personal information protection, intellectual property rights and the means of governing generative AI. In order to stimulate the standardised application of generative AI, the CAC issued the Draft AI Policy, which serves as a governance strategy aiming to promptly address the potential risks and impacts of AI-generated content, particularly in terms of providing early warnings and managing potential ethical risks.

The Draft AI Policy firstly clarifies the foundational laws and regulations it relies on, its scope of applicability, and the government’s stance towards generative AI services. Moreover, the Draft AI Policy specifies the obligations the responsible entities should fulfil and sets corresponding administrative penalties. Further, Article 2 of the Draft AI Policy defines generative AI as a technology that generates text, images, sound, videos, code and other content based on algorithms, models and rules. Based on this definition, the Draft AI Policy clarifies its extraterritorial reach, applying to activities involving the development and use of generative AI products and the delivery of generative AI services to the public within the territory of the PRC.

According to Article 5 of the Draft AI Policy, the responsible party under the Draft AI Policy is the “Provider”, which refers to organisations and individuals who provide services such as chatbots and the generation of text, images and sound, using generative AI products. From a direct reading of the Draft AI Policy, the term “Providers” appears to encompass both organisations and individuals, but may not extend to those who are exclusively engaged in research and development activities.

Besides setting forth obligations covering algorithms, content, users, regulatory mechanisms and other aspects for the responsible parties, the Draft AI Policy emphasises the close connection of the algorithmic governance framework with higher-level laws such as the CSL, the CAC Algorithm Recommendation Rules and the CAC Deep Synthesis Rules, forming a comprehensive and multidimensional regulatory framework for generative AI services.

In sum, the Draft AI Policy aims to regulate the application of generative AI. Although it is not a legally binding document at the current stage, it may shed some light on the government’s viewpoint concerning the regulation of generative AI by imposing compliance obligations on the providers of generative AI applications. Its effectiveness and feasibility, however, remain subjects for further discussion.

AI technology has a wide range of applications in the judicial field, including transactional support work such as information backfilling, intelligent cataloguing and the error correction of documents.

Shanghai’s “206” system incorporates unified statutory evidence standards into its database, providing assistance to public prosecution and law enforcement agencies. This system, backed by expert knowledge, model algorithms and vast amounts of data, is expected to have 20 functions in the future. These functions include evidence standards guidelines, single evidence verification, arrest conditions review, social dangerousness assessment, case pushing, knowledge indexing, and sentencing reference and document generation.

By integrating multiple types of data resources for big data mining and analysis, “Judge Rui” of the Beijing High People’s Court automatically pushes information such as case analysis, legal provisions, similar cases, and judgment references to provide judges with trial specifications and guidelines for case handling.

In December 2021, the CAC issued the CAC Algorithm Recommendation Rules to provide special management regulations on algorithmic recommendation technology. The CAC Algorithm Recommendation Rules mark the CAC’s first attempt to regulate the use of algorithms, in which internet information service providers are required to use algorithms in a way that respects social morality and ethics, and are prohibited from setting up any algorithm model that could induce user addiction or excessive consumption.

In 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Governance over Ethics in Science and Technology. These opinions call for stringent investigation of unethical practices in science and technology, intending to enhance the management of scientific and technological ethics and improve ethical governance capabilities.

In 2023, China also issued the “Guideline on Cybersecurity Standard Practices – Guidelines on Ethical Security Risk Prevention for AI”, which provides guidelines for organisations and individuals to carry out AI research and development, design and manufacturing, deployment and application, and other related activities in an ethical manner.

In addition, the Draft AI Policy emphasises that, in processes including algorithm design, the selection of training data, model generation and optimisation, and service provision, measures should be taken to prevent discrimination on the basis of race, ethnicity, religious belief, nationality, region, sex, age or profession. Furthermore, the Draft AI Policy places emphasis on respecting intellectual property rights and commercial ethics, so that advantages in algorithms, data, platforms, etc, cannot be used to engage in unfair competition. The Draft AI Policy also requires respect for the lawful rights and interests of others, including preventing harm to the physical and mental health of others and infringement of their portrait rights, reputation rights and personal privacy.

Last but not least, in the Opinions on Regulating and Strengthening the Applications of Artificial Intelligence in the Judicial Fields by the Supreme People’s Court, it has been noted that the courts shall adopt ethical reviews, compliance reviews and security assessments to prevent and mitigate cybersecurity risks in judicial AI applications through mechanisms such as the Judicial AI Ethics Council.

The assignment of liability in AI scenarios has sparked heated discussions. In a traditional view, civil law – including tort law – deals with legal relationships among civil subjects such as natural persons, companies or other organisations; thus, it seems difficult to treat AI, which is developed by humans through computer programming, as a liability subject. However, this consensus might be challenged given the increased capacity for self-learning and independent decision-making inherent in AI technology, both now and in the foreseeable future.

From a tort law perspective, the owner of AI-enabled technology that harms the interest of others should be directly liable. However, the application of AI technology usually involves a number of roles, such as the AI developer, the product/service manufacturer, the seller and even the user. Thus, careful consideration must be given when defining who the “owner”, and consequently the liable party, truly is.

Additionally, liability is typically predicated on the notion that the offending party is at fault. This presumption becomes problematic when decisions that harm others are made by AI technology outside the control of the user – a common example being the driver of a self-driving car. Further complicating matters is the potential liability of the AI technology’s developer or provider. The plaintiff faces a significant hurdle in proving at a technical level that there is an internal design defect in the AI technology, especially given AI’s autonomous deep learning capability and the complexity of the external environment that may interfere with the AI’s decision-making during the interaction.

Therefore, assigning responsibility in AI scenarios should involve careful deliberation and a clear definition of the duty of care expected from different parties. This consideration should take into account the state of the art and objective factors that might affect the computing process of the AI technology.

The current legislation does not provide clear provisions on the imposition and assignment of liability; further clarification of the relevant laws and regulations is awaited.

For instance, Article 155 of the Road Traffic Safety Law (Revised Draft), published by the Ministry of Public Security in April 2021, provides special provisions for automated driving cars. In terms of the assignment of responsibility, it states that “in the event of road traffic safety violations or accidents, the responsibility of the driver and the development unit of the automated driving system shall be determined in accordance with the law. The liability for damages shall be determined in accordance with the relevant laws and regulations. If a crime is committed, criminal liability shall be investigated in accordance with the law.”

From a technical perspective, algorithms may be biased for a number of reasons. The accuracy of an algorithm may be affected by the data used to train it: data that lacks representativeness or, in essence, reflects existing inequalities may result in biased algorithms. An algorithm may also exhibit bias owing to the cognitive deficits or biases of the R&D personnel. In addition, because algorithms cannot recognise and filter out bias in human activities, they may indiscriminately acquire human ethical preferences during human-computer interaction, increasing the risk of bias in the output results.
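The following toy sketch (illustrative only; the data and field names are invented) shows how skew in historical training data is reproduced directly in a naive model’s outputs:

```python
# Illustrative only: a "model" trained on skewed historical data reproduces
# the skew. Group "B" is under-represented and mostly rejected in the past
# records, so a naive per-group approval-rate rule learns to disadvantage it.
historical = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},  # group B rarely appears in the data
]

def learn_approval_rates(records):
    rates = {}
    for group in {r["group"] for r in records}:
        subset = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in subset) / len(subset)
    return rates

model = learn_approval_rates(historical)
print(model)  # A is approved ~67% of the time, B 0% - the data bias becomes the rule
```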

For example, the provision of personalised content by digital media has raised serious concerns about the so-called “information cocoon” – a phenomenon whereby the information people receive becomes increasingly narrow, selected on the basis of automated analysis of their previous content preferences. Another example is the concern over “big data killing”, where different consumers are charged significantly different prices for the same goods. According to the China Consumers Association, certain companies use algorithms to engage in price discrimination between different groups of consumers.

Aware of the harm to society and consumers’ interests caused by algorithmic bias, the Chinese government is trying to regulate the proper application of algorithms both on an industry-specific basis and through general data protection rules. According to the E-commerce Law, where an e-commerce business operator provides consumers with search results for goods or services based on their preferences or consumption habits, it shall, in parallel, provide consumers with options that are not targeted at their personal characteristics. Similar rules are set out in the PIPL regarding automated decision-making, where transparency and fairness requirements are explicitly stipulated (see 12.4 Automated Decision-Making).

AI applications can lead to the risk of over-collection of personal data. With the popularisation of various smart devices (eg, smart bracelets, smart speakers) and smart systems (eg, biometric identification systems, smart medical systems), AI devices and systems have become more comprehensive in their collection of personal data. Moreover, biometric information with strong personal attributes – such as faces, fingerprints, voiceprints, irises and genes – is unique and invariable; once leaked or misused, it may seriously affect citizens’ rights and interests. In February 2019, a facial recognition company was found to have suffered a data leak in which the data of over 2.5 million people, comprising 6.8 million records, was exposed, including ID card information, facial recognition images and GPS location records.

Under the PIPL, facial recognition and biometric information are recognised as sensitive personal information. Separate consent is needed for processing such information and the processing shall be only for specific purposes and with sufficient necessity. Facial information collected by image collection or personal identification equipment in public places shall only be used for maintaining public security, unless separate consent has been obtained.

This gives rise to concerns for intelligent shopping malls and the smart retail industry, where consumers’ facial characteristics and body movements are processed for purposes beyond security, such as recognising VIP members and identifying consumers’ preferences so as to provide personalised recommendations. Under the PIPL, companies must consider the necessity of such commercialised processing and find feasible ways to obtain effective “separate consent”.

In the automobile industry, images and videos containing pedestrians are usually collected by cameras installed on cars – a typical data source for automobile companies engaged in autonomous driving or providing internet-of-vehicles services. While training their algorithms and providing relevant services, automobile data processors must consider the mandatory requirements both in the PIPL and in the recently issued Several Provisions on the Management of Automobile Data Security (for Trial Implementation), under which videos and images containing facial information are considered important data. Processors that have difficulty obtaining consent for the collection of personal information from outside a vehicle for the purpose of ensuring driving safety shall anonymise such information, including by deleting the images or videos that can identify a natural person, or by conducting partial contour processing of facial information.
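The provisions do not prescribe a particular anonymisation technique, but one common approach is to detect and irreversibly blur faces. The sketch below is illustrative only; it assumes the opencv-python package is installed, and the file names are hypothetical:

```python
# Illustrative sketch of one anonymisation technique consistent with the
# provisions: detect faces in an image and irreversibly blur them.
import cv2

def anonymise_faces(input_path: str, output_path: str) -> None:
    image = cv2.imread(input_path)
    if image is None:
        raise FileNotFoundError(input_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy Gaussian blur.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0
        )
    cv2.imwrite(output_path, image)

anonymise_faces("dashcam_frame.jpg", "dashcam_frame_anonymised.jpg")
```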

The Supreme People’s Court also provides its judicial view regarding the processing of facial information and clarifies scenarios that may cause civil liabilities, such as:

  • failing to comply with laws when conducting facial verification, recognition, or analysis in business premises and public places;
  • failing to disclose rules on the processing of facial information or failing to explicitly state the purposes, methods and scope of such processing;
  • failing to obtain the separate consent; and
  • failing to take proper measures for ensuring the security of facial information which results in leaks, distortion or loss of facial information.

Companies failing to perform obligations under the PIPL and related regulations are also faced with administrative penalties and even criminal liabilities (ie, for infringing citizens’ personal information).

There are specific rules for automated decision-making in the PIPL. Firstly, automated decision-making using personal information shall be subject to transparency requirements; processors are required to ensure the fairness and impartiality of the decision, and shall not give unreasonable differential treatment to individuals in terms of trading price or other trading conditions.

Where information feeds or commercial marketing to individuals are carried out by means of automated decision-making, options not specific to individuals’ characteristics shall be provided simultaneously, or convenient ways to refuse shall be provided to individuals. Individuals whose interests are materially impacted by decisions made by automated means are entitled to request that the relevant service provider/processor provide explanations, and to refuse to be subject to decisions made solely by automated means.
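By way of a hypothetical illustration only – the structure and names below are assumptions, not drawn from the PIPL – these requirements might surface in an application roughly as follows:

```python
# Hypothetical sketch of a feed service honouring the PIPL's automated
# decision-making rules: a convenient opt-out, plus a non-personalised
# alternative that is always available.
from dataclasses import dataclass

@dataclass
class UserPrefs:
    user_id: str
    personalised_feed_opt_out: bool = False  # the convenient way to refuse

GENERIC_FEED = ["top story 1", "top story 2", "top story 3"]

def personalised_ranking(user_id: str) -> list:
    # Placeholder for an automated decision-making model, which would be
    # subject to the PIPL's transparency and fairness requirements.
    return [f"recommended for {user_id}: item {i}" for i in range(3)]

def build_feed(prefs: UserPrefs) -> list:
    if prefs.personalised_feed_opt_out:
        # Options not targeted at the individual's characteristics.
        return GENERIC_FEED
    return personalised_ranking(prefs.user_id)

print(build_feed(UserPrefs("u1", personalised_feed_opt_out=True)))
```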

In China, chatbots are usually deployed by e-commerce platforms or online sellers to provide consulting or after-sale services for consumers. While there is not yet a special regulation targeting the compliant use of chatbots or similar technologies, this does not mean that such use escapes the scrutiny of currently effective laws. For example, under the consumer protection law regime, companies using chatbots to address consumers’ questions or requests must ensure that the rights and interests of consumers are properly protected; where chatbots are enabled to make decisions based on a user’s personal information, the PIPL shall apply.

Furthermore, chatbots providing (personalised) content recommendations may also need to comply with the CAC Algorithm Recommendation Rules; companies should pay special attention to these rules if their chatbots are equipped with automated content push functions.

Relevant laws have set out transparency requirements for the use of AI-related technology. If such technology involves the processing of personal information, processors are required to notify individuals of such processing. There are also transparency requirements for automated decision-making (see 12.4 Automated Decision-Making). Users of internet information services involving AI technology are also entitled to be informed, in a conspicuous manner, of the provision of algorithm-recommended services. According to the CAC Algorithm Recommendation Rules, relevant service providers are required to appropriately publish the basic principles, purposes and main operating mechanisms of their algorithm-recommended services.

The term “Big Data Killing” emerged from the rapid development of the platform economy, and refers to the collection of customer information for algorithmic analysis to pinpoint consumer characteristics and thus implement personalised pricing. At present, there is no clear legal definition of “Big Data Killing”, and according to its characteristics and the rights and objects it infringes, the laws that may be involved in regulating this issue are the Civil Code, the PIPL, the Law on Protection of Consumer Rights and Interests, the Electronic Commerce Law, the Anti-Monopoly Law and the Price Law.

For instance, Article 24(1) of the PIPL provides that the use of personal information for automated decision-making must not discriminate unreasonably in terms of transaction prices. The CAC Algorithm Recommendation Rules, the newly issued regulation concerning internet information services, further provide that an algorithm-recommended service provider that sells goods or provides services to consumers must protect their right to fair transactions. In other words, such providers shall not use algorithms to commit unreasonable differential treatment and other illegal acts in respect of transaction prices and other transaction conditions based on clients’ preferences, transaction practices and other characteristics.

However, the use of personal data or personal characteristics for differential pricing is not necessarily illegal in itself; a number of factors need to be taken into account – including its impact on market competition, consumer welfare and social well-being – to determine whether it is reasonable or justifiable. At the same time, the pricing process and pricing strategy using personal data should ensure considerable transparency and human intervention in order to safeguard the rights of personal information subjects.

As climate change becomes a global issue, more and more people are becoming aware of the negative impact of humans on the environment. AI has emerged to provide new solutions to combat climate change and help the world mitigate the adverse effects of climate change.

  • AI is able to analyse large-scale data sets more efficiently than humans. It can also quickly summarise trends and accurately model the factors influencing environmental change. This assists researchers in critical areas such as reducing carbon emissions and improving renewable energy sources.
  • AI can also help to protect and restore ecosystems. By analysing the behaviour of the land and animals in a given environment, AI can guide environmental protection efforts. In addition to this, AI technology could track and predict some natural disasters and give advance warning to people who live in vulnerable areas.
  • The use of AI in agriculture can also help combat climate change. Precision farming and smart irrigation systems can increase crop yields, reduce water waste and reduce the carbon footprint.

Recently, the Shanghai AI Lab released the first AI model (“Feng Wu”) based on multimodal and multitask deep learning methods to achieve effective forecasting of core atmospheric variables for more than ten days at high resolution, surpassing DeepMind’s GraphCast model in 80% of evaluation metrics.

As businesses turn to automated assessments, digital interviews and data analytics to screen CVs and candidates, the use of AI technology in recruiting has been increasing.

One of the main benefits of AI recruiting is its ability to quickly organise candidate CVs for employers. AI is able to sift through hundreds of CVs, scan candidates for relevant past experience or other qualities that might be of interest to employers, and ensure the best candidates are screened within minutes. This greatly reduces the time required to review applications.

On the other hand, without a broadly representative dataset, it might be difficult for AI systems to discover and evaluate suitable candidates fairly. For example, if positions in a company have been dominated by male employees over the years, the historical data on which the AI recruitment system is based may introduce a gender bias, causing women who would otherwise be qualified for the job to be excluded from the candidate list.
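One illustrative way to surface such skew is a simple selection-rate comparison across groups, as sketched below; the 0.8 threshold mirrors the informal “four-fifths rule” used in some fairness audits and is not a Chinese legal standard:

```python
# Illustrative fairness audit: compare selection rates between groups of
# screened candidates and flag a large gap. Data and threshold are invented.
def selection_rates(candidates):
    totals, selected = {}, {}
    for c in candidates:
        g = c["gender"]
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + (1 if c["selected"] else 0)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

screened = [
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": False},
    {"gender": "female", "selected": False},
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
]

rates = selection_rates(screened)
print(rates, adverse_impact_ratio(rates))  # ratio 0.5 < 0.8 -> review the system
```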

At present, many international technology companies employ AI or algorithm-based evaluations as grounds for employment termination, a practice not yet prevalent in China. Employers using AI technology to process employees’ information shall be subject to the transparency and related requirements under the PIPL, and shall ensure the fairness and rationality of the decision-making process.

Using AI in employee evaluation and monitoring can promote efficiency, reduce mistakes, provide personalised services and increase the quality of human resources management. However, there can be downsides. For instance, the quality of an AI application relies heavily on the quality and integrity of its data; incomplete or biased data may undermine these benefits or even infringe employees’ interests.

To best avoid bias, employers are advised to establish a regular review and correction mechanism for the AI technology used for evaluation and monitoring, to mitigate the risk of unfair and unreasonable decision-making. Further, human participation throughout the recruitment process should be guaranteed, so that interviews, evaluations and decision-making in human resources management remain primarily human-led.

As AI is increasingly used by digital platform companies such as car-hailing and food delivery services, those companies can improve the performance of an AI matching system to a level that satisfies platform workers by collecting more diverse data, such as cooking wait times for each restaurant, road congestion, weather and motorcycle-free zones.

However, large digital platforms continuously and unilaterally monitor individual users, and use subtle methods, such as scoring behaviour and setting up reward and punishment systems, to establish comprehensive evaluation systems, predict and adjust individual behaviour trajectories, in order to maximise profits or gain a competitive advantage in the market. 

Moreover, since digital platforms are mostly profit-making legal persons in nature, their algorithmic goals are to maximise the achievement of commercial objectives. Therefore, even if the original intention of an algorithm’s application is in line with the spirit or purpose of the law, the algorithm’s design may, for the sake of market economic interests, more or less ignore legal values and the goal of rights protection, creating risks of infringing the legal interests of the public. Take Meituan, Ele.me and other food delivery platforms in China as examples: some research indicates that, by using AI and deep learning technology, the delivery systems developed by these platforms require delivery riders to continuously optimise their routes. In these systems, the harder the riders work, the more likely the system is to automatically adjust its parameters, thereby reducing the expected delivery time. This compels riders to take unsafe measures – such as driving in the wrong direction, running red lights and speeding – to evade penalties for delays, which could lead to traffic accidents, directly endangering the riders’ personal rights and the public interest.

As a result, some scholars in China believe that the Government should view the public characteristics of large-scale digital platforms through the lens of public law, incorporate the principles and values of public law into the governance framework of these platforms, and moderately intervene in the platform’s exercise of private power.

AI is used in various ways in financial services in China – for example, for credit scoring, fraud detection and customer service. China has implemented a number of regulations relating to the use of AI in financial services; the People’s Bank of China (PBOC) has issued guidance on the use of AI in financial services, including guidance on how to manage the associated risks.

One of the risks associated with the use of AI in financial services in China is the risk of bias in repurposed data – data that was collected for one purpose but is being used for another. Bias in repurposed data can lead to discriminatory practices, whether intentional or unintentional. For instance, unintentional bias can occur when AI systems are trained on biased data or when AI systems are not transparent and explainable; if an AI system is trained on data that reflects historical discrimination, it may learn to discriminate against certain groups of people.

To mitigate these risks, financial services companies can take steps such as developing policies and procedures for managing AI risks, conducting regular audits of their AI systems, and ensuring that their AI systems are transparent and explainable.

The use of AI in healthcare requires the assimilation and evaluation of large amounts of complex healthcare data. However, to effectively use machine learning tools in healthcare, several limitations must be addressed and the key issues, such as its clinical implementation and ethics in healthcare delivery, must also be taken into consideration. Utilising centralised electronic health record systems, or their equivalents, can offer advantages such as improved integration of healthcare resources and enhanced efficiency and effectiveness in healthcare service delivery. This contributes to better industry practice. However, concerns persist around data security, particularly given that the centralised nature of these systems can leave them vulnerable to cyberattacks.

Nevertheless, the study of AI medical devices has been developing rapidly in China. AI software as a medical device is regulated by the National Medical Products Administration (NMPA). In 2020, the NMPA approved the first Class III AI medical device, DeepVessel FFR, developed by Beijing Keya Medical, to perform non-invasive physiological functional assessment of the coronary arteries. DeepVessel FFR received CE certification as a medical device in the EU in 2018, and was approved by the FDA in the United States in 2022.

Whether machines can hold intellectual property rights is hotly debated. In China, one of the well-known local courts, the Shenzhen Nanshan District People’s Court, determined in a copyright infringement case in 2020 that articles automatically generated by an AI software assistant shall be copyrightable and constitute a work of the legal entity that owns the software. Although recognised as one of the top ten cases in 2020 by People’s Court Daily, the court’s opinion on whether automatically generated content is copyrightable still remains controversial, especially after the Beijing Internet Court took the opposite position in a similar case.

Where AI-enabled technology or algorithms are expressed in the form of computer software, the software code of the whole set, or of a certain module, can be protected in China under the Regulation on Computers Software Protection. Where AI-enabled technology or algorithms are expressed through a technical scheme, they can be protected as a process patent. The latest revision of the Patent Examination Guidelines in 2020 specifically adds provisions for the examination of invention applications that include algorithmic features.

If the development and use of an algorithm is highly confidential, the algorithm might be protected as a trade secret or technical know-how. According to the Announcement of the Supreme People’s Court of the People’s Republic of China, a people’s court may determine that information relating to technology – on structure, raw materials, components, formulas, materials, samples, styles, propagation materials of new plant varieties, processes, methods or their steps, algorithms, data, computer programs and their relevant documents, among others – constitutes technical information as set forth in the Anti-Unfair Competition Law. Hence, protecting AI technologies as technical secrets is justified by the legislation.

As for datasets, the law is unclear on whether companies or persons could successfully establish ownership over such intangible assets. Recent judicial cases have affirmed the competitive rights of platform operators in the user data they hold from the perspective of the Anti-Unfair Competition Law, and regulations made by certain local governments have tried to formulate a right/interest system for data that involves individuals and enterprises. However, given that different types of data (personal information, important data, state secrets, etc) are subject to restrictions in different legal regimes, challenges still exist for ownership protection over data, from both a legislative and practical perspective.

Whether works generated by AI can be protected as works of art or authorship is still being assessed on a case-by-case basis. In a case decided by the Beijing Internet Court, the key issue was whether the text in a big data analysis report generated by AI constituted a work. The Beijing Internet Court ultimately decided that, even though the report embodied original creation to some extent, this was not sufficient for it to be viewed as a work under the Copyright Law. Specifically, under the current Copyright Law, a written work must be created by a natural person, and the report did not convey the original expression of the feelings and thoughts of either the AI developer or the user. Because neither the AI developer nor the user was the author of the report, the Beijing Internet Court rejected the argument that the report constituted a written work.

By contrast, in a similar case, the Shenzhen Nanshan District People’s Court took the view that the software does not run automatically without cause or possess self-awareness; rather, the way the software runs reflects the developer’s choices and is determined by the nature of the AI technology. The court accordingly ruled that the article in question was a written work protected by the Copyright Law, reasoning that both the specific expression of the article and the process of its creation stemmed from the choices of its creators, which sufficed to qualify the article as a written work.

Firstly, it remains unclear whether content created using generative AI tools such as those offered by OpenAI constitutes a “work” under the Copyright Law. The Copyright Law defines “works” as original intellectual achievements that can be presented in a certain form. Some believe that, because the author of such content is not a natural person, the content cannot be regarded as a work, while others argue that the creator of the technology itself is the author of the content it generates. Furthermore, uncertainty remains as to whether the “original creation” element is satisfied. In one of the first cases on the applicability of copyright law to AI-generated content, the court considered the content at issue to be reasonable in structure, clear in expression and logical, and therefore to possess a certain degree of originality.

Secondly, it remains unsettled to whom the rights in such works belong. AI technology itself has not yet been recognised as a legal entity. In judicial decisions, some courts tend to regard the person behind the AI technology as the owner of the copyright in AI-generated content. However, in the case discussed in 16.3 AI-Generated Works of Art and Works of Authorship, the Beijing Internet Court did not rule out the possibility that the user of the AI technology could be the author of the AI-generated content. Decisions appear to turn on the level of the individual’s intellectual contribution.

Firstly, if a company intends to use personal information to conduct automated decision-making, in-house attorneys should note that the company will bear additional duties under the Personal Information Protection Law of China and related regulations, including but not limited to conducting a personal information protection impact assessment in advance, notifying users, and obtaining separate consent from users. If the company operates apps, it also needs to pay attention to how those apps are configured, such as giving users opt-out options for services involving automated decision-making.
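By way of illustration only, the following minimal Python sketch shows one way such an opt-out might be honoured at the point of service. All names here (UserPreferences, serve_recommendations and the in-memory preference store) are hypothetical and are not drawn from any regulation or real system.

```python
# Illustrative sketch only: honouring a user's opt-out from
# automated decision-making before serving personalised content.
# All identifiers are hypothetical.

from dataclasses import dataclass, field


@dataclass
class UserPreferences:
    """In-memory stand-in for a real preference store."""
    optout_personalisation: dict = field(default_factory=dict)

    def has_opted_out(self, user_id: str) -> bool:
        return self.optout_personalisation.get(user_id, False)


PREFS = UserPreferences()


def serve_recommendations(user_id: str, candidates: list[str]) -> list[str]:
    """Return personalised results only if the user has not opted out.

    Users who opted out receive a non-personalised ranking instead,
    mirroring the opt-out option described above.
    """
    if PREFS.has_opted_out(user_id):
        # Fall back to a ranking not based on personal characteristics.
        return sorted(candidates)  # placeholder for a popularity ranking
    # Placeholder for the personalised, automated-decision path.
    return candidates


PREFS.optout_personalisation["user-42"] = True
print(serve_recommendations("user-42", ["item-b", "item-a", "item-c"]))
```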

Secondly, in-house attorneys should monitor whether the company complies with the algorithm regulation framework. For example, deep synthesis service providers should fulfil the principal responsibilities for information security, and establish and improve management systems for, among other things, algorithm mechanism review, science and technology ethics review, user registration, information release review, data security and personal information protection. Notably, an algorithm-recommendation service provider with public opinion attributes or social mobilisation capabilities should file the required information with the Internet Information Service Algorithm Filing System within ten working days of the date on which it begins providing the service, and should conduct security assessments in accordance with the relevant provisions issued by the state.

Thirdly, in-house attorneys should assist the company in establishing an internal control system for ensuring confidentiality and trade secret protection.

In the context of corporate governance, automated decision-making may more directly and frequently affect shareholders’ vested interests and the operation of the business as a whole. It is important to determine whether automated decisions are regarded as those made by the board of directors or shareholders' meeting. As automated decision-making systems are generally adopted by companies on the basis of decisions taken by the board, the consensus view is that such decisions should be considered as decisions of the board or shareholders’ meeting. Therefore, if there is any adverse impact on shareholders or the business operation as a whole, the board or the shareholders’ meeting should be held responsible.

In order to mitigate the associated risks, from a technical standpoint, prioritising the traceability of automated decision-making outcomes is crucial. From a managerial perspective, companies are advised to assess potential business risks before implementing an automated decision-making system, to limit the scope of application of such a system in the event of a material adverse impact, and to set up a manual review mechanism to check and ensure the accountability of final decisions. Furthermore, to neutralise potential biases introduced by the algorithm, it is also advisable for companies to set up an AI ethics committee to oversee the internal use of AI.
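As a purely illustrative sketch of the traceability and manual-review points above, the Python snippet below logs every automated decision together with its inputs and model version, and routes low-confidence outcomes to a human reviewer before they take effect. The threshold, field names and review queue are hypothetical choices, not requirements drawn from any Chinese regulation.

```python
# Illustrative sketch: recording automated decisions for traceability
# and routing uncertain ones to manual review. All names hypothetical.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision-audit")

REVIEW_THRESHOLD = 0.8    # hypothetical confidence cut-off
manual_review_queue = []  # stand-in for a real review workflow


def record_decision(user_id, features, outcome, confidence, model_version):
    """Persist an auditable record of one automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "features": features,
        "outcome": outcome,
        "confidence": confidence,
        "model_version": model_version,
    }
    log.info(json.dumps(record, ensure_ascii=False))
    if confidence < REVIEW_THRESHOLD:
        # Low-confidence decisions go to a human before taking effect.
        manual_review_queue.append(record)
        return "pending_manual_review"
    return outcome


result = record_decision("user-42", {"limit_requested": 10000},
                         outcome="approve", confidence=0.65,
                         model_version="credit-model-v3")
print(result)  # -> "pending_manual_review"
```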

King & Wood Mallesons

18th Floor, East Tower
World Financial Center
1 Dongsanhuan Zhonglu
Chaoyang District
Beijing 100020
PRC

Tel: +86 10 5878 5588
Fax: +86 10 5878 5566
Email: kwm@cn.kwm.com
Web: www.kwm.com

Trends and Developments



Overview

The surge of interest in ChatGPT has further fuelled the global AI craze. On the one hand, AI-generated content (AIGC) has found its way into an increasing number of applications, effectively advancing economic and social development worldwide. On the other hand, the breakthroughs associated with AIGC have also sparked concerns in various areas, ranging from individual privacy breaches and trade secret protection to broader issues such as potential job losses, cultural biases and the measures needed to protect human subjectivity.

On 11 April 2023, the Cyberspace Administration of China (CAC) issued a draft policy titled Measures on the Management of Generative Artificial Intelligence Services (the “Draft AI Policy”), soliciting public feedback on the regulation and management of generative AI services. The draft represents a governance effort to promptly address the risks and impacts that may arise from the current use of AIGC, with a focus on issuing early warnings and managing possible ethical risks.

The Draft AI Policy maintains a regulatory stance similar to that of the Regulations on Algorithm Recommendation and the Regulations on Deep Synthesis, focusing on service providers engaged in algorithm recommendation and deep synthesis. It also continues previous supervisory methods, demonstrating consistency and continuity in China’s approach to algorithmic governance. As China’s AI industry benefits from various market advantages, including vast amounts of data available for machine learning, diverse and substantial market demand, and robust policy support, several controversial cases have emerged in certain booming areas in China in the past few years. These have challenged the applicability of the traditional framework of laws and regulations relating to AI.

As the application of AI expands into various areas, China has been gradually formulating its approach to AI governance in the past year. We have highlighted three of these areas for further discussion on AI governance below.

Automated Driving

In relation to automated driving, determining the liable party has become increasingly challenging in China, primarily in two respects: the allocation of product liability, and the attribution of liability in accidents involving intelligent-connected vehicles (ICVs).

In terms of product liability, Article 46 of the Product Quality Law of the People’s Republic of China provides that “[d]efects mentioned in this law refer to the unreasonable risks existing in the products that threaten the safety of person or properties, or products that do not conform to the standards set by the Government or the specific trade, where applicable.” Chinese law thus applies two standards – unreasonable risks and non-conformance with government or industry-specific standards – to determine the presence of a “defect” in product liability cases. Since there is as yet no mandatory national standard applicable specifically to autonomous driving, courts must rely on the “unreasonable risk” criterion when judging defects in autonomous driving cases. In practice, courts have developed an approach that resolves most of these cases by classifying defects into three categories: design defects, manufacturing defects and warning defects.

A design defect, for instance, refers to a defect in the design of the autonomous driving system that causes all products adopting that system to pose unreasonable risks endangering personal and property safety. Courts normally apply the “reasonable expectation of consumers” test to determine whether an autonomous driving system presents unreasonable risks. In practice, what consumers reasonably expect from an autonomous system is that it has the same or even greater capability than a human driver in order to achieve driving safety. In that regard, when an automated driving car is involved in an accident due to malicious intrusion into its autonomous driving system by a third party, the system should be deemed to have a design defect, and the provider of the autonomous driving system should bear product liability.

As regards accident liability, the Ministry of Industry and Information Technology (MIIT) and other ministries jointly issued the Trial Administrative Provisions on Road Tests of Intelligent Connected Vehicles, effective since May 2018, to regulate the qualification, application and procedural requirements for automated driving road tests and the liabilities arising from road test accidents. In addition, many provincial governments in China have issued their own administrative regulations concerning the management of ICVs. For instance, the Regulations on the Management of Intelligent-Connected Vehicles, released by the Standing Committee of the Shenzhen Municipal People’s Congress and effective since August 2022, provide that where an ICV with a driver causes damage in a traffic accident and the fault is attributable to the ICV, the driver of the ICV shall bear the liability. Moreover, under the same Regulations, if an accident is caused by quality defects of an ICV, the driver, after assuming compensation liability, is entitled to claim compensation from the manufacturer and seller of the vehicle.

Online AI Face Swap

“AI face swapping” and “AI facial replacement” are two common ways to describe the practice of using AI technology to swap or replace faces in images or videos. This technology uses deep learning and neural networks to analyse and synthesise facial features for a natural-looking face replacement effect. It is often used in film production, video editing, virtual reality and entertainment, among other fields. However, it can also be used for fraud, false advertising and other unethical activities.
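To make the mechanics concrete, the following is a heavily simplified, illustrative Python sketch of classical face compositing with OpenCV: detect a face in each image, then blend the source face into the target region. Real deepfake-style systems instead train neural encoder-decoder models on large datasets; this sketch only illustrates the general “replace the face region” idea, and the file names are hypothetical.

```python
# Simplified illustration of face replacement with classical tools.
# Real "AI face swap" apps use learned generative models instead.

import cv2
import numpy as np

# Haar cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def find_face(image):
    """Return the (x, y, w, h) box of the largest detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    return max(faces, key=lambda f: f[2] * f[3])


source = cv2.imread("user_photo.jpg")    # hypothetical input files
target = cv2.imread("video_frame.jpg")

sx, sy, sw, sh = find_face(source)
tx, ty, tw, th = find_face(target)

# Resize the source face to the target face box, then blend it in so
# that lighting and skin tones roughly match (Poisson blending).
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
mask = np.full(face_patch.shape[:2], 255, dtype=np.uint8)
centre = (tx + tw // 2, ty + th // 2)

swapped = cv2.seamlessClone(face_patch, target, mask, centre, cv2.NORMAL_CLONE)
cv2.imwrite("swapped_frame.jpg", swapped)
```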

This technology surged in popularity in China as deepfake techniques advanced further in 2022. App operators and individual sellers have been providing AI face swap services online, with the most popular ones allowing users to replace the original face in a video with a celebrity’s face. As an increasing number of controversial cases surfaced, celebrities began to exercise their rights and take legal action, and most commercial AI face swap activities involving celebrities have since been either banned or pushed into more covert practices. The courts also issued rulings in 2022 against several applications and service providers in favour of ordinary individuals whose rights had been infringed by these commercial AI face swap activities.

For instance, in December 2022, the Hangzhou Internet Court tried a case in which an AI face swap app, using deep synthesis algorithms, infringed on the portrait rights of others. The developer of the app was ultimately ordered to apologise and provide compensation totalling CNY5,000.

In that case, the plaintiff, Lou, is a Chinese model who often publishes photos and videos of herself in ancient Chinese clothing. In March 2022, Lou posted a short video on an online platform in which she wore ancient Chinese make-up and clothing and appeared in several different outfits. The defendant, a Shanghai-based company, operates an AI face swap app, which advertises that users can become the protagonist of a video by changing its face using just one photo. Users can access all face swap video templates by paying a membership fee ranging from CNY68 to CNY198. Lou discovered that the app had a video template featuring her in ancient Chinese clothing: users could upload their own photos and replace the face in the template with their own, while, apart from the facial features, the rest of the content in the template remained unchanged. The app then generates a face-swapped video of the user wearing ancient Chinese clothing, which the user can save and share on other platforms. Lou argued that the company’s conduct infringed her portrait rights, and asked the court to order the company to stop the infringement, apologise and compensate her for her losses.

The court found that the Civil Code explicitly identifies the use of information technology to forge or fabricate someone else’s portrait as a typical form of infringement of portrait rights. “Using information technology to forge” refers to the use of information technology to fabricate or fake someone else’s portrait in order to deceive others and obtain illegal benefits. In this case, Lou held the portrait rights in the video template and the corresponding character images both before and after the face swapping, since ordinary people could still easily recognise Lou from the image and physical features notwithstanding the replacement of her facial features.

There were more cases like this in 2022, and the courts began to examine more thoroughly how to distinguish between lawful and unlawful uses of online AI face swap technology.

Application of AI in the Judicial Field

In recent years, the deployment of AI in the judicial sector has presented both opportunities and challenges. Consequently, devising a reasonable approach to implement AI technologies in judicial work has emerged as a critical issue warranting research within the field.

In China, AI technologies are already used to assist judicial work in several ways. Firstly, they have greatly improved the efficiency of legal research and information processing. Although it remains challenging for AI to generate answers to complicated legal issues, the information obtained by searching questions in a fixed pattern can generally meet the needs of more straightforward cases. Furthermore, perceptual AI technologies such as OCR, voice recognition and evidence recognition are far more accurate and convenient than traditional scanning and recording methods; the application of such technologies has substantially reduced court time and improved the completeness of trial transcripts.
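As a small illustration of the OCR step mentioned above, the snippet below uses the open-source Tesseract engine (via the pytesseract wrapper) to turn a scanned filing into searchable text. Court systems in practice use their own engines; the file name and language pack here are hypothetical.

```python
# Illustrative OCR of a scanned court filing using Tesseract.
# Requires the tesseract binary plus the chi_sim language pack.

from PIL import Image
import pytesseract

# Hypothetical scan of a filed document.
scan = Image.open("complaint_page1.png")

# Extract simplified-Chinese text from the page image.
text = pytesseract.image_to_string(scan, lang="chi_sim")

print(text[:200])  # first 200 characters of the recognised text
```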

Secondly, AI technologies have also assisted judges in drafting documents and in pushing similar cases for judges’ reference. For most basic cases, such as small loan disputes, government information disclosure and other cases that can use standard judicial documents, automatic judicial document generation systems can identify and extract key content – including party information, litigation requests and case facts – through OCR, semantic analysis and other technologies, and generate standard judicial documents according to a template. In less than a year, one such system assisted the courts in one province in processing about 110,000 cases and generating 780,000 documents. Meanwhile, by establishing case databases with classification marks, the systems can automatically filter and push similar cases to judges for reference when dealing with new cases, thereby promoting the unification of adjudication standards.
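The extract-then-fill workflow described above can be illustrated with a toy Python sketch: pull a few key fields out of the recognised text with regular expressions, then drop them into a document template. The patterns, field names and template wording are invented for illustration and bear no relation to any court’s actual system.

```python
# Toy illustration of template-based judicial document generation:
# extract key fields from recognised text, then fill a template.
# Patterns and template text are invented for illustration only.

import re

recognised_text = (
    "Plaintiff: Zhang San. Defendant: Li Si. "
    "The plaintiff claims repayment of CNY50,000 under a loan contract."
)

FIELD_PATTERNS = {
    "plaintiff": r"Plaintiff:\s*([^.]+)\.",
    "defendant": r"Defendant:\s*([^.]+)\.",
    "amount": r"CNY([\d,]+)",
}

fields = {}
for name, pattern in FIELD_PATTERNS.items():
    match = re.search(pattern, recognised_text)
    fields[name] = match.group(1).strip() if match else "[missing]"

TEMPLATE = (
    "CIVIL JUDGMENT (small loan dispute)\n"
    "Plaintiff: {plaintiff}\n"
    "Defendant: {defendant}\n"
    "The defendant shall repay the plaintiff CNY{amount} "
    "within ten days of this judgment taking effect."
)

print(TEMPLATE.format(**fields))
```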

Thirdly, at the primary stage of case analysis, AI technologies can calculate the workload of, and resources needed for, each case based on the categories and weighting coefficients of the cases, enabling courts to allocate judicial resources reasonably. In terms of in-depth case analysis and adjudication assistance, the system used by the Beijing courts can automatically sort out the facts, generate trial outlines and feed those outlines into the trial system, while the system used by the Shanghai courts offers evidence-rule guidance, enabling the intelligent review of evidentiary materials and providing standardised guidelines for case handlers.
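A minimal sketch of the weighted workload idea, assuming hypothetical case categories and coefficients: each court’s expected workload is simply the sum of case counts multiplied by their category weights.

```python
# Minimal illustration of workload estimation from case categories
# and weighting coefficients. All numbers are hypothetical.

CATEGORY_WEIGHTS = {          # hypothetical weighting coefficients
    "small_loan_dispute": 1.0,
    "info_disclosure": 1.2,
    "ip_infringement": 2.5,
}


def estimated_workload(caseload: dict[str, int]) -> float:
    """Workload = sum over categories of (case count x weight)."""
    return sum(CATEGORY_WEIGHTS[cat] * n for cat, n in caseload.items())


pending = {"small_loan_dispute": 40, "info_disclosure": 10, "ip_infringement": 4}
print(estimated_workload(pending))  # -> 62.0
```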

Despite the rapid growth in the application of AI technologies in the judicial field, their use still carries certain limitations. Currently, AI is used to assist in standardised procedural work and to process cases involving simple, model-based reasoning. General-purpose AI technologies, such as voice recognition, are relatively mature and have improved the efficiency of judicial activities; however, they cannot satisfy the needs of determining facts in complicated cases, evaluating the strength of evidence or interpreting the law, among other things. AI technologies tailored specifically for the judicial field are still under development.

At the end of 2022, the Supreme People’s Court issued its Opinions on Regulating and Strengthening the Applications of Artificial Intelligence in the Judicial Fields, reflecting the government’s goal of promoting the in-depth integration of AI within judicial work. More specifically, the opinions outlined an aim to build an application and theoretical system for the utilisation of AI in the judicial field, complete with rules and guidelines, by 2030.

King & Wood Mallesons

18th Floor, East Tower
World Financial Center
1 Dongsanhuan Zhonglu
Chaoyang District
Beijing 100020
PRC

Tel: +86 10 5878 5588
Fax: +86 10 5878 5566
Email: kwm@cn.kwm.com
Web: www.kwm.com
