Artificial Intelligence 2023

Last Updated May 30, 2023

Japan

Law and Practice

Authors



Nagashima Ohno & Tsunematsu is the first integrated full-service law firm in Japan and one of the foremost providers of international and commercial legal services, based in Tokyo. The firm’s overseas network includes offices in New York, Singapore, Bangkok, Ho Chi Minh City, Hanoi and Shanghai, and collaborative relationships with prominent local law firms throughout Asia and other regions. The TMT Practice Group comprises about 50 lawyers and legal professionals and has represented major Japanese telecom carriers, key TV networks, and many domestic and international internet, social media and gaming companies, not only in transactions but also in disputes, regulatory matters and general corporate matters. A further strength of the TMT Practice Group is that, in view of the firm’s robust client base, it is well positioned to consistently meet client requests for advice on many different matters, from business strategies to daily compliance and corporate matters.

Under Japanese law, the generally applicable laws relating to AI liability are the Civil Code (ie, tort liability) and the Product Liability Act.

Civil Code (Tort Liability)

Under the Civil Code of Japan, a person who wilfully or negligently infringes the rights or legally protected interests of another is liable in tort for damages arising out of or in connection with such infringement (Article 709). In this context, the term “negligence” refers to the failure to take the necessary measures to avoid the occurrence of a specific result, although the occurrence of such a result was foreseeable. For example, if users cause an unexpected result through the use of AI that causes a third party to incur damage, they can be held liable in tort for their “negligence”. AI developers and manufacturers can also be held liable in tort.

However, whether AI users, developers, or manufacturers can be considered to have “foreseen” the occurrence of such a result or “taken necessary measures to avoid it” will be determined based on the specific circumstances of the case, including the functions and risks of the AI.

The Product Liability Act

Under the Product Liability Act of Japan, the manufacturer of a “defective product” that “infringes the life, body, or property of another” is liable for damages, regardless of whether the manufacturer was negligent (Article 3).

Although an AI program or software itself does not constitute a “product”, if the AI is installed on a particular device, the entire device, including the AI, constitutes a “product”. The term “defect” under the Act refers to a lack of “safety that the product ordinarily should provide”. However, determining what safety an AI-equipped product “ordinarily should provide”, and how a plaintiff (victim) can prove that the product lacks such safety, remains highly problematic.

It should be noted that even if an AI is found to be “defective”, the manufacturer of the AI device is exempted from liability for damages if it can be established that the manufacturer could not have detected such defect in the AI based on its scientific or technical knowledge at the time the manufacturer delivered the AI device (development risk defence) (Article 4, item 1).

AI is being introduced and utilised in a wide range of industries. For example, the 2022 AI White Paper, published by the Information-technology Promotion Agency of Japan, lists the following industries in which AI is used and examples of its application.

  • Manufacturing: automated product inspection by image analysis, efficient work supervision, detection of abnormalities and preventive diagnosis of production equipment failures, design support and production planning support.
  • Automotive: automated driving (the amended Road Traffic Act that came into effect on 1 April 2023 allows for approval-based SAE Level 4 automated driving services) as well as streamlining operations such as vehicle visual inspection and design.
  • Infrastructure: abnormality detection and maintenance work.
  • Agriculture: forecasting crop damage due to disease, crop growth management and harvest timing forecasting, optimising use of fertilisers and pesticides, as well as automated crop sorting and harvesting using robots.
  • Health, medicine and nursing care: image diagnosis support, automation of medical consultations and pharmaceutical development, nursing care support by robots.

Another example of cross-industry collaboration is the Super City Initiative, which was launched in 2020 based on the amended National Strategic Special Zone Act. This initiative aims to create a unified data linkage infrastructure for specific local governments and to use this infrastructure to provide advanced AI-enabled services in a wide range of fields, including administrative procedures, transportation, medical care, disaster prevention and education, in order to improve the convenience of daily life. The initiative also includes reforming regulations to enable the introduction of the AI technologies on which such services depend. AI technologies in various fields are being used cooperatively to improve residents’ lives, and currently 31 local governments are engaged in various efforts to propose Super City-type National Strategic Special Zones.

Future developments are also expected, as an AI White Paper issued in April 2023 by the ruling Liberal Democratic Party’s Headquarters for the Promotion of Digital Society also recommends the promotion and improvement of efforts related to this super city concept.

There is currently no cross-sectional legislation in the area of AI. However, there are relevant rules in individual legal areas that presuppose the use of AI. For example, the amended Road Traffic Act that came into effect in April 2023 defines “specified automated driving” as the operation of automobiles under certain conditions without a driver present. The Road Traffic Act has thus established certain rules to ensure that AI-based automated driving (Level 4) is safe. Persons or entities that wish to conduct specified automated driving must obtain the permission of the Public Safety Commission with jurisdiction over the intended location of the automated driving. The Pharmaceutical and Medical Device Act also establishes a “prior notification system for confirmation of plans for change regarding medical devices and changes implemented according to the plans for medical devices” (commonly known as IDATEN – Improvement Design within Approval for Timely Evaluation and Notice) for AI-based medical device programs, which aims to provide flexibility for medical devices that are expected to be continuously improved, such as AI medical devices.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

In June 2022, a court found the operator of Tabelog, a well-known Japanese restaurant ratings site, liable for damages under the Anti-monopoly Act for “abuse of a superior bargaining position”, as the operator had changed its algorithm to the disadvantage of some users and continued to use the changed algorithm. The case is currently on appeal. Thus far, the Japan Fair Trade Commission has indicated that a restaurant ratings site may hold a superior bargaining position, and that acts such as unilaterally changing the algorithm and forcing restaurants to conclude contracts favourable to the site may constitute an abuse of that position. The judgment is considered highly influential, since an abuse of a superior bargaining position was found based solely on the fact that the algorithm was changed to the disadvantage of the affected parties. It is also noteworthy that the site operator initially refused to disclose the algorithm itself as highly confidential information, which became an issue in the course of the proceedings, but eventually agreed to disclose it. In this regard, the lawsuit is also notable from the perspective of the principle of transparency, which is an aspect of AI governance.

There are no precedents in Japan where the definition of AI was particularly at issue and a specific ruling was made. As stated in 5.2 Technology Definitions, there are some definitions of AI in statutes or guidelines.

Although the Cabinet Office has formulated a national strategy for AI, there are no cross-sectional and binding laws and regulations for AI in Japan (see 1.1 General Legal Background Framework). Therefore, there is no regulatory authority that plays a leading role in regulating AI. Instead, the following ministries and agencies are primarily responsible for the enforcement of AI-related laws by sector and application within the scope of the laws and regulations under their jurisdiction.

In relation to AI, the Ministry of Health, Labour and Welfare (MHLW) has jurisdiction over labour laws (ie, the Labour Standards Act, Labour Contract Act, Employment Security Act, among others) and the Pharmaceutical and Medical Devices Act (PMDA). In connection with labour laws, the MHLW addresses AI-related employment issues, such as recruitment, personnel evaluation and monitoring of employees using AI (see 14 AI in Employment). In connection with the medical devices field, there is a move to accommodate AI-enabled medical devices under the PMDA (see 15.3 Healthcare).

The Ministry of Land, Infrastructure, Transport and Tourism (MLIT) has jurisdiction over the Road Traffic Act, which establishes rules for automated driving.

The Ministry of Economy, Trade and Industry (METI) has jurisdiction over various AI-related laws and regulations (such as the Unfair Competition Prevention Act, which protects big data as “limited provision data”) and is actively formulating guidelines and other relevant materials for businesses involved in the development and utilisation of AI, such as “Contract Guidelines on Utilisation of AI and Data Version 1.1” and “the Governance Guidelines for Implementation of AI Principles Version 1.1”. In addition, the Japan Patent Office, an external bureau of METI, has jurisdiction over the Patent Act (see 16.1 Applicability of Patent and Copyright Law regarding the protection of AI-enabled technologies and datasets under the Patent Act).

The Personal Information Protection Commission (PPC) has jurisdiction over the Act on the Protection of Personal Information (APPI). The PPC addresses APPI-related issues where personal data is involved in the development and use of AI.

The Japan Fair Trade Commission (JFTC) has jurisdiction over the Act on Prohibition of Private Monopolisation and Maintenance of Fair Trade (the Anti-Monopoly Act) and the Subcontract Act. The JFTC addresses the effects that the use of AI, including AI- and algorithm-based price adjustment and dynamic pricing, may have on a fair competitive environment.

The Financial Services Agency (FSA) has jurisdiction over the Banking Act and the Financial Instruments and Exchange Act, among others. The FSA addresses risks and other issues related to investment decisions by AI for financial instrument business operators (see 15.2 Financial Services).

The Agency for Cultural Affairs has jurisdiction over the Copyright Act (see 16.1 Applicability of Patent and Copyright Law regarding the protection of AI-enabled technologies and datasets under the Copyright Act).

The Ministry of Internal Affairs and Communications (MIC) addresses the policy related to information and communication technologies (including the policy related to advancement of network system with AI as a component).

The definitions of AI used by regulators range from those specific to machine learning to broader formulations, and the Japanese government has not established any single fixed definition. The definition of an AI system in the “Governance Guidelines for Implementation of AI Principles ver. 1.1” is based on the definition in the OECD AI Principles. The main examples are as follows.

Governance Guidelines for Implementation of AI Principles Version 1.1: according to these guidelines, an AI system is a system developed with a machine learning approach (including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning) which can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. The definition covers not only software but also machines that contain software as an element.

The Basic Act on the Advancement of Public and Private Sector Data Utilisation: according to this act, “AI-related technology” means technology related to the realisation of intelligent functions such as learning, reasoning and decision-making by artificial means, and the use of such functions realised by artificial means.

The MHLW, through its enforcement of the Labour Act, addresses issues related to the utilisation of AI in various aspects of employment, including recruitment, personnel evaluation, employee monitoring and AI replacement and termination/reassignment issues (see 14 AI in Employment). Steps are also being taken to address AI-based medical devices under the PMDA, such as providing a framework for determining whether an AI-based medical device program constitutes a “medical device” subject to licensing (see 15.3 Healthcare).

MLIT handles the development of laws on traffic rules for automated driving through the enforcement of the Road Traffic Act.

METI addresses the protection of data and information used in AI development and products created in the process of AI development under the Unfair Competition Prevention Act (see 16.1 Applicability of Patent and Copyright Law).

See 15.2 Financial Services for a discussion on the amended Instalment Sales Act, which came into effect in April 2021, enabling credit card companies to determine credit limits through credit screening using AI and big data analysis.

The PPC, through its enforcement of the APPI, addresses the handling of personal information that may be used in the development and utilisation of AI.

The JFTC addresses issues related to the use of AI in a fair competitive environment through enforcement of the Anti-Monopoly Act (see 12.6 Anti-Competitive Conduct).

Regarding the regulatory objectives stated in 5.3 Regulatory Objectives, the regulatory authorities are currently discussing and announcing potential cases that may pose problems in light of existing laws and regulations. There have been no actual examples of enforcement or other regulatory actions to date.

Although the development and use of AI itself was not a target of enforcement, there was a case where the handling of personal data in a service using AI became an issue. In this case in 2019, a service provider used AI technology to calculate the expected job offer rejection rate for individuals during job hunting and provided it to client companies without the consent of the subject individuals. The PPC issued a warning and guidance to the service provider while the MHLW issued administrative guidance.

There is currently no cross-sectional legislation in the area of AI, including any proposed legislation or regulations. However, in March 2023, the ruling Liberal Democratic Party (LDP) released an AI White Paper recommending that specific laws and regulations be considered for the following risk areas that are under discussion in the US and Europe:

  • serious human rights violations;
  • national security; and
  • improper intervention in democratic processes.

In addition, the Digital Agency is currently working on a comprehensive revision of the so-called “analogue regulations” that require written, visual, resident and on-site participation, which have been factors that have inhibited the use of digital technology.

Current standards for AI quality include the Japanese Industrial Standards (JIS) established by METI, specifically JIS X 0028 and JIS X 0031. These are essentially Japanese translations of the ISO international standards, with no substantial difference in content. The two standards define the basic concepts of AI, expert systems and machine learning; however, they are somewhat out of date, having been established in 1999 without any amendments to date. It is therefore difficult to say that these standards are appropriate for AI today, which has become far more complex and made significant progress since 1999. It has been announced that “Information technology – IT governance – Impact of the utilisation of AI on the governance of organisations” (JIS Q 38507) and “Information technology – AI – Concepts and terminology” (JIS X 22989) are scheduled to be enacted to correspond to ISO/IEC 38507 and ISO/IEC 22989. In addition, JIS X 0028 is scheduled to be abolished upon the enactment of JIS X 22989.

Although not a national standard, the Consortium for AI Product Quality Assurance, consisting of major domestic IT companies, academics and the National Research and Development Agency, has published the AI Product Quality Assurance Guidelines. The guidelines list five quality evaluation areas (data integrity, model robustness, system quality, process agility and customer expectation) as well as specific checklists for each product. It is believed that these can be useful in product development.

In addition, the National Institute of Advanced Industrial Science and Technology (AIST) has published the Machine Learning Quality Manual Management Guidelines. These guidelines classify quality for machine learning systems into three categories:

  • quality at the time of use (quality that should be provided to the final user of the system as a whole);
  • external quality (quality from an objective perspective that is required of components of the system); and
  • internal quality (quality that is measured specifically when creating the components or evaluated through development activities such as design – ie, quality that is a characteristic inherent to the components).

The guidelines then establish anticipated quality levels for external and internal quality according to their characteristics, and propose how to use quality control according to the quality level.

There are currently no critical issues relating to standard-essential patents for AI or their licensing.

As described in 7.1 National Standard-Setting Bodies, in order to strengthen Japan’s industrial competitiveness and improve the social acceptance of AI, it is necessary to establish JIS that are consistent with ISO and other international standards, while taking into account domestic and international trends, so as to implement the AI social principles. Accordingly, it is planned that JIS relating to AI will be established in line with ISO.

Further, Japan is actively involved in the international standardisation of AI, which is currently the subject of active discussion. For example, the Information Processing Society of Japan (IPSJ) has established the SC42 Technical Committee within its Information Standards Committee to gather domestic opinions and respond to international issues. Japan also appears to be deepening its co-operative relationship with CEN/CENELEC, the EU standardisation bodies.

In Japan, as there is no comprehensive regulation on AI, there are plans to establish standards consistent with international trends. At this point, there are no clear conflicts with international standards.

Regarding the introduction of AI technology in government, the “Guidebook for the Use and Introduction of AI in Local Governments” was published by MIC in June 2022. According to this guidebook, the number of local governments introducing AI has increased significantly over the past few years, and currently AI is mainly used for voice recognition, text recognition and chatbot-based responses to inquiries. The “Guidebook for the Use and Introduction of AI in Local Governments (Introduction Steps)”, released by MIC around the same time, provides specific methods and points to note for local governments in introducing AI technology. This guidebook also presents pioneering case studies where AI has been introduced. AI technology adoption is evolving rapidly and growing increasingly diverse.

The use of facial and biometric recognition by the government is subject to the Act on the Protection of Personal Information, because the required data falls under the category of personal information and its use may infringe on the rights to privacy and publicity. In 2021, the Japan Federation of Bar Associations published the “Opinion Concerning Legal Restrictions on Facial Recognition Systems Used in the Government and Private Sector”, in which it expressed strong concerns about the use of facial recognition systems from the perspective of protecting privacy rights, in particular by calling for strict regulations on facial recognition systems used on the general public. On the other hand, in March 2022, METI published the “Camera Image Utilisation Guidebook Version 3.0”, which outlines points to be noted in the collection and handling of information when utilising camera images for facial recognition technology. Further, in March 2023, the Personal Information Protection Commission (PPC) published the “Report of the Study Group of Experts on the Use of Camera Images for Crime Prevention and Security”, which summarises issues and measures under the APPI. The guidebook and the report are in principle intended to regulate private businesses, but can also be referenced by government agencies. Through the establishment of these guidelines, rules are being formulated that take into consideration the rights of each individual who provides personal information.

There are no particular judicial decisions regarding issues related to the use of AI technologies by government agencies in Japan.

In the AI Strategy 2022 formulated by the Cabinet Office in April 2022, it is stated that “In light of the increasing complexity of the international geo-political situation and changes in the socioeconomic structure, various initiatives are being considered for key technologies including AI from the perspective of economic security, and it is necessary to coordinate related measures so that the government as a whole can effectively focus on these issues”. This was the first time an AI-related announcement referred to economic security. In May 2022, the Economic Security Act was enacted, which stipulates the provision of information and financial support for specified critical technologies, including AI-related technologies. Following the enactment of the Act, the Cabinet Office, METI and the Ministry of Education, Culture, Sports, Science and Technology initiated a programme in 2022 to foster key technologies for economic security, which promotes research and development of advanced technologies related to economic security. AI-related technologies are also covered by this programme.

Conversely, a notable instance of the government ceasing to use AI is the discontinued use of LINE, a social networking service that also functioned as an automated chatbot for responding to inquiries. In March 2021, an issue emerged following reports that LINE’s subcontractor in China could access the personal data of LINE users in Japan. Consequently, local governments faced the dilemma of whether to suspend the use of LINE.

In Japan, since the basic idea that it is important to approach AI through a non-regulatory, non-binding framework was presented in 2019, discussions have moved in the direction of the public and private sectors jointly formulating policy tools through soft law, without imposing excessive laws and regulations on the development and use of AI, so as not to inhibit innovation. However, the LDP’s AI White Paper mentioned in 6.1 Proposed Legislation and Regulations points out that the assumptions underlying the policy discussion to date are changing drastically now that the development of foundation models such as GPT is progressing and the social implementation of AI is advancing at an unexpected speed.

Copyright is one of the main legal issues related to generative AI. Under Japan’s current copyright law, AI-generated works are not generally considered to be subject to copyright protection; however, they are considered to be copyrightable as human creations when AI is used as a tool to express human ideas and emotions. It used to be understood that computers did not completely replace human creative acts in most cases, and that they were usually accompanied by some kind of human creative contribution. However, the legal issues related to such copyrights have become more complicated due to the quantum leap in AI’s progress.

In Japan, legal regulation of deep fakes has not been discussed to the extent it has in Europe and the US. However, the LDP’s AI White Paper has identified national security and intervention in the democratic process as major areas of risk in the future, so further discussion is expected.

In Japan, legal tech is being used for digital transformation and increased efficiency in the legal world in areas such as contract management, AI-based contract support services, electronic contracts, legal research and digital forensics. The early adopters of legal tech are legal research tools, where companies are devising more advanced search technologies such as natural language search and AI-based keyword suggestions. The use of AI technology to review contracts is also expanding, with several thousand companies currently employing that technology. An increasing number of companies have introduced contract management systems that create databases of internal contracts and enable knowledge management through search, management and sharing of information related to the contracts. In 2023, some legal tech companies are planning to introduce legal consultation services using ChatGPT and services that suggest revisions to contracts using ChatGPT.

Article 72 of the Attorneys Act prohibits non-lawyers from providing legal services such as advice and representation with respect to legal matters for the purpose of earning compensation. Whether AI chatbot legal advice and AI automated drafting services violate the Attorneys Act is a major issue.

In June and October 2022, two companies planning AI contract review services separately made inquiries to the Ministry of Justice (MOJ) about the legality of their legal-tech services. In response, the MOJ expressed the view for each of the above inquiries that the services planned to be provided by the companies may be illegal. Specifically, the MOJ stated that the services may violate Article 72 of the Attorneys Act, which prohibits persons other than attorneys from engaging in legal services for the purpose of earning compensation. The responses from the MOJ only cover the specific inquiries that were made and do not imply that AI contract review services in general or existing services are illegal. However, there were growing concerns that companies may hesitate to introduce such services, which may impede the growth of the market for AI contract review services.

To address these concerns, in December 2022, the Council for Promotion of Regulatory Reform in the Cabinet Office held a meeting to discuss the relationship between AI contract review services and Article 72 of the Attorneys Act. At the meeting, it was decided that the MOJ would consider formulating and publishing guidelines that introduce specific cases in which AI contract review services are considered legal.

In Japan, AI is not recognised as a legal entity, and there is no specific legislation regarding liability arising from the acts or use of AI. Therefore, general civil and criminal liability applies. Civil liability is as described in 1.1 General Legal Background Framework, but in some cases, depending on the relationship between the injured party and the manufacturer, the manufacturer’s liability may be based on a contract. In addition, regarding automated driving, the “operator” (the owner of the vehicle) may be liable for damages; specifically, the operator is liable unless it can prove that it was not negligent. In terms of criminal liability, professional or ordinary negligence resulting in injury or death (Articles 211 and 210 of the Criminal Code) is typically considered applicable to the developers and users of AI, but other crimes may also apply depending on the circumstances. Further, in cases where the actions of a third party intervene and the use of AI causes damage to others, issues of joint tort liability (in civil law) and conspiracy (in criminal law) may arise.

In relation to the civil liability mentioned above, if a product has a defect, product liability will be imposed regardless of whether the manufacturer was negligent; this may have a chilling effect on AI developers. In this regard, this risk can be hedged by insurance, which can encourage development.

Regarding the sharing of responsibility in the supply chain, the Contract Guidelines for the Use of AI and Data, Version 1.1 (see 5.1 Key Regulatory Agencies) note that it is difficult to determine the attribution of liability (percentage of negligence) in tort, because causal relationships are hard to verify after an accident and because the results of AI use depend both on the learning datasets, the content of which is difficult to identify, and on the input data at the time of use, which is unspecified. In addition, claims for damages may be made based on contractual liability between the user and the AI developer, and between the AI developer and the data provider, in relation to the generation of trained models. It is therefore desirable to clearly specify the division of responsibility in the contract according to the circumstances.

In addition, the model version described in Version 1.1 of the Contract Guidelines for the Use of AI and Data is a good reference for common industry practice. Regarding the allocation of responsibility, in the contract for data provision, there is a representation and warranty clause regarding the quality of data to be provided, and in the contract for the development of trained models, there is a provision to the effect that the vendor is, in principle, exempted from liability for the use of products generated from using trained models. In addition, new efforts are being made with regard to insurance in relation to automated driving, and an SAE Level 4 insurance policy for operators of automated driving services has been developed in line with the revision of the law on automated driving.

In Japan, there is no cross-sectional legislation or guidelines regarding criminal and civil legal liability with respect to AI.

Algorithmic bias refers to situations in which bias occurs in the output of an algorithm, resulting in unfair or discriminatory decisions. In Japan, there has been no case in which a company has been found legally liable for harm arising from algorithmic bias. However, if a company were to make a biased decision based on the use of AI, it could be held liable for damages based on tort or other grounds. In addition, companies may face reputational risk if unfair or discriminatory decisions are made in relation to gender or other matters that significantly affect a person’s life, such as in the hiring process.

There are no laws or regulations that directly address algorithmic bias. Companies are expected to take initiatives to prevent the occurrence of algorithmic bias. For example, the AI Utilisation Guidelines (August 2019) issued by the Conference toward AI Network Society established by MIC provides, as one of the ten principles of AI utilisation, the principle of fairness (principle 8), which states that “AI service providers, business users, and data providers should be aware of the possibility of bias in the decision-making process of AI systems or AI services, and should be mindful that individuals and groups are not unfairly discriminated against based on the decisions of AI systems or AI services”. In addition, the Guidelines for the Quality Assurance of AI Systems (September 2021) and the Machine Learning Quality Management Guideline, Second Edition (July 2021) provide tips for avoiding or mitigating algorithmic bias, which may be useful in practice.

Given that all processes involved in data generation and selection, annotation, pre-processing, and model/algorithm generation are subject to potential bias, documentation regarding the specifics of these processes should be obtained and maintained. However, when using complex algorithms such as deep learning, it may not be possible for humans to understand the above-mentioned processes in the first place, even if the relevant materials are collected. Therefore, it is advisable to select algorithms by taking into account aspects of “explainable AI” (XAI).

The “AI Utilisation Guidelines” stipulate the Principle of Privacy, namely that users of AI systems and persons who provide data to AI systems must exercise care so that their own privacy or the privacy of others is not infringed when utilising AI systems or AI services. The guidelines thus require respect for privacy throughout the entire life cycle of AI, from the collection of data for learning to the AI output. Under Japanese law, the right to privacy is considered to be “the right to control one’s own information”, which is not necessarily the same as the protection of personal information under the Personal Information Protection Act and requires separate consideration.

Profiling by AI to infer a person’s behaviour and characteristics from their browsing history may raise privacy concerns. A well-known Japanese recruiting company that operates a job search website for university students provided, to companies considering hiring new graduates, a service that indicated the likelihood of students leaving the hiring process or declining job offers. The service used an algorithm that calculated the likelihood of a student declining a job offer based on the student’s browsing history by industry on job search websites, and provided the hiring company with a score indicating that likelihood. The service raised issues such as the fact that some students had not agreed to the privacy policy and that the privacy policy was not sufficiently specific, making it difficult for students to foresee that their information would be provided to companies in the form of the likelihood that they would decline an offer. The Personal Information Protection Commission issued a recommendation and guidance on the basis that the service violated the Personal Information Protection Act, and the service was strongly criticised by Japanese society.

Under Japanese law, in relation to privacy and personal information, the obligations or responsibility related to the processing of personal data by AI, such as in profiling, do not change based on the existence of direct human supervision. For example, the secrecy of communications is protected as a type of the right to privacy. However, even if the contents of communications are obtained and analysed solely by a machine without any human involvement, in principle this would constitute an infringement of the right to secrecy of communications if the consent of the individual concerned was not obtained.

Personal Data

Facial or biometric authentication requires the capture of biometric data such as facial images and fingerprint data. Such data is considered personal information under Japan’s Act on the Protection of Personal Information (APPI), but is not regarded as personal information requiring special care (Article 2, paragraph 3 of the Act). Therefore, when acquiring such information, as long as its purpose of use is notified or disclosed, the individual’s consent is not required. However, depending on how the data is acquired and used, it may constitute an improper acquisition (Article 20, paragraph 1 of the Act) or improper use (Article 19 of the Act). It is therefore advisable to consider this issue carefully.

Privacy and Portrait Rights

In addition, depending on how facial images and biometric information are obtained and used, there may also be infringement of privacy rights and portrait rights (ie, infringement of personality rights). Although a growing number of court precedents have intensified the debate over the circumstances in which an infringement of privacy and portrait rights occurs, the debate surrounding facial and biometric authentication has not yet crystallised, and it is therefore difficult to definitively specify what type of acquisition and use would be permissible. With respect to the use of video images, in practice, it is advisable to refer to the Guidebook for Utilisation of Camera Images Version 3.0 (March 2022).

Corporate Risk

If the personal identification function makes an incorrect decision during facial or biometric authentication, it is likely that the user cannot use the device (ie, a false negative), or someone who is not the user can use the device (ie, a false positive), among other issues. In all such cases, the service provider’s liability for damages may become an issue, but, generally, the terms of use or other policies and guidelines provide that the service provider is exempt from liability. Whether or not such disclaimer is valid is determined in light of the Consumer Contract Act in cases of B2C transactions.

In July 2021, JR East, Japan’s largest rail operator, introduced a security system featuring facial recognition to detect “those who have committed serious offences and served prison sentences in the past in JR East facilities”, “wanted suspects” and “loiterers or other suspicious persons”. However, following severe public criticism in relation to detecting those released from prison and parolees, it was decided not to include them within the scope of detection. Therefore, social acceptance is also an important factor in the use of facial and biometric recognition, and there is a risk of reputation damage if an incorrect decision is made.

Profiling will be used as an example of automated decision-making. While some foreign countries have introduced regulations on profiling using AI, such as Article 22 of the EU’s GDPR, there are no laws or regulations that directly regulate profiling in Japan. Notwithstanding this, the provisions of the APPI must be complied with. For example, when personal data is acquired for profiling purposes to analyse behaviour, interests and other information from data obtained from individuals, the purpose of the use of such data must be explicitly notified or disclosed to the public in accordance with the APPI. However, it should be noted that the individual’s consent is not required under the APPI unless personal information requiring special care is acquired. In addition, precautions should be taken to avoid inappropriate use (Article 19 of the APPI).

Further, if automated decision-making leads to unfair or discriminatory decisions, liability for damages and reputational risk could be an issue, similar to the issues discussed in 12.1 Algorithmic Bias.

In Japan, there are no laws or regulations that provide specific rules for AI transparency and accountability. However, the AI Utilisation Guidelines (August 2019) issued by the Conference toward AI Network Society established by MIC list “the principle of transparency” and “the principle of accountability” as two of the ten principles of AI utilisation. In the interests of the former, it would be advisable to record and keep AI input and output logs, among other measures; in the interests of the latter, it would be advisable to provide information on AI and to notify or disclose to the public the company’s utilisation policies. However, there is no clear guidance on when and what information should be disclosed when AI, such as chatbots, replaces services typically provided by people.

The above can also be problematic from the standpoint of the APPI. For example, if AI is actually being used, but the company does not disclose this, leading the user to mistakenly believe that a human is making decisions and providing personal data, there may be a breach of the duty to properly acquire the data or the duty to notify the purpose of its utilisation.

In March 2021, the Japan Fair Trade Commission (JFTC) published the “Report of the Study Group on Competition Policy in Digital Markets – Algorithms/AI and Competition Policy”, with the aim of ensuring that competition risks associated with algorithms/AI are properly addressed. The report discusses three types of algorithms/AI that may have a significant impact on competition at this time: price research and pricing algorithms, ranking, and personalisation (especially personalised pricing). The JFTC is examining potential competition policy issues in these areas.

It is generally believed that it is not easy to make a case for concerted conduct that uses algorithms, because there is little contact between competing businesses and it is difficult to actually identify the communication of intent. The above report points to the following cases where, even if there is no direct or indirect exchange of information between businesses using algorithms, it is considered that there is a common recognition that prices are to be synchronised and thus that a cartel exists:

  • multiple competing businesses use a pricing algorithm provided by the same vendor, etc, and by using that algorithm, the businesses are aware that the price will be mutually synchronised; and
  • a platform provider of a pricing algorithm informs its users that it will impose the same upper limit of discount rates on the sale prices of all users, and the users use the algorithm while being aware of this.

In addition, with regard to rankings, if a leading ranking operator arbitrarily manipulates the rankings and obstructs transactions between competing business operators and consumers by displaying its own products at a higher ranking and treating them more favourably, it is considered to be in violation of the Anti-monopoly Act. In a related matter, in June 2022 the Tokyo District Court ordered the payment of damages in a case in which a restaurant claimed that a restaurant rating platform in a dominant position unfairly lowered its rating due to an algorithm change, in violation of the Anti-monopoly Act.

In the “Social Principles of Human-Centric AI” released by the Cabinet Office in 2019, one of the basic principles of AI use is a “sustainable society”, namely that “Through the use of AI (...) we need to develop in the direction of building a sustainable society that can respond to global environmental issues and climate change.” This means that AI is expected to be utilised in addressing climate change issues as well.

One of Japan’s key initiatives in climate change assessment is the measurement of the distribution of greenhouse gas concentrations by the Greenhouse Gases Observing Satellite (GOSAT), and the promotion of analysis and utilisation of the measured data obtained by GOSAT, jointly conducted by the Ministry of the Environment, the Japan Aerospace Exploration Agency (JAXA), and the National Institute for Environmental Studies (NIES). This initiative will ensure transparency and objectivity in the reporting of greenhouse gas emissions and reductions and will facilitate the assessment of climate change (especially global warming). The launch of GOSAT-GW, which has improved technology compared with the original GOSAT, is scheduled for 2024.

AI technologies are also being used in a wide range of settings by both the public and private sectors to address climate change; the following are examples of the use of AI:

  • the Ministry of the Environment’s promotion of Bi-tech, which collects big data on energy (eg, electricity, gas, automobile fuel) usage and characteristics of individuals and households by using IoT technology, analyses it by using AI technology, and provides feedback in the form of personalised messages;
  • CO2 reduction through optimisation of shipping;
  • digitalisation of electricity supply and demand coordination;
  • the use of AI to reduce traffic congestion and reduce CO2 emissions; and
  • disaster forecasting and prevention using AI.

Advantages for employers using AI in hiring and termination include the fact that, unlike the subjective evaluations conducted by recruiters in the past, AI-based evaluations can be conducted fairly and objectively by setting certain standards, and that the use of AI can make the recruitment process more efficient. On the other hand, the following points are relevant with respect to the information that may be obtained through the hiring process and the exercise of the right to termination.

Hiring

Under Japanese law and judicial precedent, since companies have the freedom to hire, even if an AI analysis is incorrect and the employer does not fully verify this analysis, this would not necessarily constitute a violation of applicable laws. However, it can be said that AI-based recruitment limits a company’s freedom to hire to a certain extent.

Specifically, even in cases where AI is utilised in recruitment activities and information on jobseekers is automatically obtained, in accordance with Article 5-4 of the Employment Security Act and Article 4-1 (2) of the Employment Security Act Guidelines, the information must be collected in a lawful and fair manner such as directly from the jobseeker or from a person other than the jobseeker with the consent of the jobseeker. In addition, when using AI to obtain information on jobseekers, companies must be careful not to obtain certain prohibited information.

In particular, under Article 20 of the Personal Information Protection Act, the company is typically prohibited from obtaining information requiring special care (race, creed, social status, medical history, criminal record and any facts related to the jobseeker being a victim of a crime), and, under Article 5-4 of the Employment Security Act and Article 4-1(1) of the Employment Security Act Guidelines, the company may not obtain certain information (eg, membership in a labour union, place of birth) even with the consent of the jobseeker.

In addition, there is a risk that, as a result of an erroneously high AI evaluation of a jobseeker, an offer may be made to a jobseeker or the jobseeker may be hired even though the jobseeker would not have been given an offer or hired if the company’s original criteria were followed. In such a case, under Japanese law, the legality and validity of a decision to reject or dismiss the jobseeker will be determined based on how the recruitment process was conducted.

Having said that, it is likely to be difficult to dismiss an employee for the sole reason that the AI-based evaluation was incorrect. On the other hand, if a jobseeker is mistakenly given a low AI evaluation and is not hired, it is unlikely that this would constitute a violation of applicable law, even though the jobseeker is subject to de facto disadvantageous treatment.

Termination

Situations in which the selection of the persons to be terminated may be problematic include termination as part of employment redundancy or voluntary resignations.

Under Japanese law, unilateral termination of employees by employers is restricted, and termination that constitutes an abuse of the right to terminate is considered invalid. In particular, in the case of termination as part of employment redundancy, the validity of termination is examined from the viewpoints of (i) the necessity of reducing the workforce; (ii) the necessity of terminating employees through employment redundancy; (iii) the validity of the selection of employees to be terminated; and (iv) the validity of the procedures for termination. The use of AI is mainly anticipated in the selection of employees to be terminated under (iii) above. It should be noted that these four perspectives are considered as factors rather than requirements, and even if AI is utilised to select an employee for termination in a reasonable and fair manner that eliminates subjectivity in the selection of the employee to be terminated, this does not necessarily mean that the termination is valid. Naturally, if the data on which the AI bases its judgement is erroneous or if the AI is unreasonably biased, there is a high possibility that the selection of the terminated employee will not be recognised as valid.

On the other hand, there is no law that specifically regulates voluntary resignations, since the resignation is made voluntarily by the employee. However, it is necessary for voluntary resignations to take place in a manner that respects the voluntary decision of the employee; there are court cases that have held that a voluntary resignation resulting from an unreasonable act or conduct that may have impeded the employee’s voluntary decision to resign constitutes a tort under Article 709 of the Civil Code. Therefore, even if the selection of employees subject to voluntary resignation is based on an objective and impartial evaluation by AI, the company should not approach the voluntary resignation with the attitude that the decision is based on the AI’s judgment and that there is no room for negotiation. Instead, the company should provide a thorough explanation to the employee so that the employee understands the pros and cons of resigning and is able to make a voluntary decision. This recommendation applied to companies even before the introduction of AI into the termination process.

Personnel Evaluation

Generally, the items and standards of assessment in Japanese personnel evaluations are abstract, and supervisors have broad discretion in the assessments. AI-based personnel evaluations are expected to reduce the unfairness and uncertainty stemming from the discretion given to supervisors.

Legally, the following provisions regulate personnel evaluations:

  • equal treatment (Article 3 of the Labour Standards Act);
  • equal pay for men and women (Article 4, ibid);
  • equal treatment of men and women in promotions, etc. (Article 6, Paragraph 1 of the Equal Employment Opportunity Act); and
  • unfair labour practices (Article 7 of the Labour Union Act).

In the case of a company that has the authority to evaluate an employee, courts have held that a tort is not established unless the employer violated the above-mentioned provisions or abused its discretionary power in violation of the purpose of the personnel evaluation system. Cases that would fall under abuse of discretion include factual errors, misapplication of evaluation criteria, arbitrary evaluation and discriminatory evaluation.

Therefore, even in the case of personnel evaluation using AI, if there is an error in the data on which the AI bases its judgement, or if there is an error in the algorithm or learning method by which the AI evaluates such data, personnel evaluation based on such AI’s judgement may constitute a tort.

Monitoring

One possible method of monitoring workers using AI would be, for example, for AI to check e-mails and automatically notify managers if there are suspicious e-mails.

The question is whether this would infringe on the privacy rights of the workers to be monitored, but monitoring is considered permissible as long as the company’s authority to monitor is clearly defined in the internal rules. Courts have also held that, even if the authority is not clearly stated, monitoring is permissible as long as there is a reasonable business management need, such as when it is necessary to investigate whether or not there has been a violation of corporate order, and the means and methods used are reasonable.

Therefore, when conducting monitoring using AI, it would be advisable to specify in the internal rules that managers ultimately have the authority to check the contents of employees’ email exchanges.

While services such as Uber are not widespread in Japan due to strict regulations regarding ridesharing, food delivery platforms such as Uber Eats, which uses an algorithm to guide delivery staff to deliver orders quickly and efficiently, are widely used. Many food delivery platforms do not have an employment relationship with the delivery staff who work on a freelance basis. The MHLW guidelines for freelance workers state the following.

  • The Anti-monopoly Act and the Subcontract Act may apply to transactions between freelance workers as sole proprietors and transaction partners (eg, non-delivery of contracts, unilateral changes in transaction terms, and delay or reduction of remuneration payments are prohibited as an abuse of superior bargaining position); and
  • Regardless of the contract form, if the relevant person is in fact an employee or worker, labour-related laws and regulations will apply in addition to the Anti-monopoly Act.

The Uber Eats Union, a labour union of Uber Eats delivery staff, demanded collective bargaining with the Japanese entity that operates the Uber Eats business in Japan (Uber Eats Japan), specifically regarding compensation in the event of an accident during delivery. Uber Eats Japan rejected the union’s demands on the ground that the delivery staff do not constitute employees under the Labour Union Act. The union then sought the intervention of the Tokyo Labour Relations Commission, which, in November 2022, ruled that the delivery staff were employees under the Labour Union Act.

In the financial sector, AI is used by banks and lenders for credit decisions and by investment firms for investment decisions. In addition, the amended Instalment Sales Act, which came into effect in April 2021, enables credit card companies to determine credit limits through credit screening using AI and big data analysis.

The FSA’s supervisory guidelines require banks, etc, when concluding a loan contract, to be prepared to explain the objective rationale for the contract’s provisions in light of the customer’s financial situation. This is true even if AI is used for credit operations. Therefore, it is necessary to be able to explain the rationale of credit decisions made by AI.

In addition, when credit scoring is used by AI to determine the loan amount available for personal loans, care should be taken to avoid discriminatory judgements, such as different judgements of available loan amounts based on gender or other factors. The Social Principles of Human-Centric AI also state: “Under the AI design philosophy, all people must be treated fairly, without undue discrimination on the basis of their race, gender, nationality, age, political beliefs, religion, or other factors related to diversity of backgrounds”.

Financial instrument firms must not fail to protect investors by conducting inappropriate solicitation in light of the customer’s knowledge, experience, financial situation and the purpose of concluding the contract (the suitability principle). In addition, these firms are obligated to explain to customers the outline of the contract and the risks of investment in accordance with this principle. Therefore, if the criteria for investment decisions made by AI cannot be reasonably explained, problems may arise in relation to the suitability principle and the duty to explain.

If AI-based programs, such as diagnostic imaging software or health management wearable terminals, or devices equipped with such programs fall under the category of “medical devices” under the Pharmaceuticals and Medical Devices Act, approval is required for their manufacture and sale, and approval or certification is also required for individual medical device products. Whether AI-based diagnostic support software and other medical programs constitute “medical devices” must be determined on a case-by-case basis, but the MHLW has provided a basic framework for making such determinations.

According to this framework, the following two points should be considered.

  • How much does the programmed medical device contribute to the treatment, diagnosis, etc, of diseases in view of the importance of the results obtained from the programmed medical device?
  • What is the overall risk, including the risk of affecting human life and health in the event of impairment, etc, of the functions of the programmed medical device?

In addition, while a change procedure is ordinarily required to change part of the approved or certified content of a medical device, an AI-based medical device may be designed on the assumption that its performance will constantly change as new data is obtained after the product is marketed. Given these characteristics of AI-based programs, which are subject to constant changes in performance and other aspects after their initial approval, the amended Pharmaceuticals and Medical Devices Act, which came into effect in September 2020, introduced a medical device approval review system that allows for continuous improvement.

Since medical services such as diagnosis and treatment may only be performed by physicians, programs that provide AI-based diagnostic and treatment support may only serve as a tool to assist physicians in diagnosis and treatment, and physicians will be responsible for making the final decision.

Medical history, physical and mental ailments, and the results of medical examinations conducted by physicians are considered “personal information requiring special care” under the APPI and, in principle, the consent of the patient must be obtained when acquiring such information. In many cases, medical institutions are required to provide personal data to medical device manufacturers for the development and verification of AI medical devices. In principle, the provision of personal data to a third party requires the consent of the individual, but it may be difficult to obtain prior consent from the patient. Although an opt-out system is also in place, it cannot be used for personal information requiring special care.

Anonymised information, which is irreversibly processed so that a specific individual cannot be identified from the personal information, can be freely provided to a third party. However, it has been noted that it is practically difficult for medical institutions to create anonymised information. In addition, the Next Generation Medical Infrastructure Act allows authorised business operators to receive medical information from medical information handlers (hospitals, etc) and anonymise it through an opt-out method. However, it is not widely used.

The revised Next Generation Medical Infrastructure Act passed by the Diet in April 2023 established a new system for the creation and use of “pseudonymised medical information”. Unlike anonymised medical information, pseudonymised medical information does not require the deletion of specific values or rare disease names, and can provide highly useful data that better meets the needs of medical research.

Discussions regarding whether AI technology can be recognised as an inventor or co-inventor for patent purposes, an author or co-author for copyright purposes, or a moral right holder are also taking place in Japan. Currently, there have been no judicial or agency decisions on this matter.

Under current Japanese law, AI is not considered a natural person, and therefore cannot be recognised as the inventor for patent purposes, the author for copyright purposes, or the holder of moral rights. However, if a person who used AI to create a work had the intention to create a work and made a creative contribution, then the resulting work may be recognised as having been created by the person who used the AI as a tool, rather than by the AI itself. In such a case, the natural person who had the creative intention and made the creative contribution is considered to be the author. While it is controversial whether AI should be given legal personality, such a legal system is not being considered at this point.

Protection under the Unfair Competition Prevention Act

AI technology and (big) data utilised in the development and use of AI are protected as trade secrets just like other informational assets (Article 2 (6) of the Unfair Competition Prevention Act (the UCPA)) as long as they are (i) kept secret; (ii) not publicly known; and (iii) useful for business activities. The trade secret holder can seek an injunction against unauthorised use by a third party and can also claim damages for unauthorised use. In addition, criminal penalties may apply for acts of unfair competition, etc, committed for the purpose of wrongful gain or causing damage (Article 21 of the UCPA).

Moreover, even if the data does not qualify as a trade secret because it is not kept secret as it is intended to be provided to a third party in the course of the development or use of AI, if the data constitutes technical or business information that is accumulated to a significant extent and is managed by electromagnetic means as information to be provided to a specific party on a regular basis, it is protected as “shared data with limited access” (Article 2 (7) of the UCPA). The holder of the rights to shared data with limited access can seek an injunction against unauthorised use by a third party and can also claim damages for unauthorised use. However, unlike trade secrets, there are currently no criminal penalties with respect to shared data with limited access.

Other

Protection based on judicial precedents

Even if not protected by the UCPA, unauthorised use of data may constitute a tort under Article 709 of the Civil Code if there are special circumstances, such as infringing on legally protected interests (Supreme Court, Judgment, December 8, 2011, Minshu 65(9)3275 [2012]). Legally protected interests include, for example, business interests in business activities (a case in which incorporating another company’s database into one’s own database for sale was considered to constitute a tort; Tokyo District Court, Judgment, May 25, 2001, Hanta 1081, 267 [2002]).

Protection through contracts

Even if not protected by the UCPA, it is possible to establish rights and obligations related to data between the parties in data transaction contracts and thereby protect valuable data. However, under current Japanese law, data, being an intangible asset, is not recognised as an object of ownership and remains subject only to contractual rights of use. In particular, since programs or models and their source code can reasonably be expected to be treated separately, it is desirable to explicitly agree on the handling of the source code in cases where its transfer is at issue.

Copyright Law

  • Works created autonomously by AI are not protected by copyright, since AI lacks thoughts or sentiments. However, if the user of AI (a human being) has creative intent in the process of generating the work and contributes creatively to obtaining the AI-generated work through instructions or other means, it can be considered that the user has creatively expressed their thoughts or sentiments using AI as a tool, and the work is protected as a copyrighted work.
  • Using third-party copyrighted works for the purpose of “AI learning” before generating AI-created work does not constitute copyright infringement. This is because copyright protection does not apply in certain cases where the use is not intended for the enjoyment of the thoughts or sentiments expressed in the copyrighted work (Article 30-4(ii) of the Copyright Act), and such use is therefore not considered copyright infringement. However, if one tries to use the copyrighted works as they are for a database rather than as data for AI-learning purposes, such use may constitute copyright infringement, even under the above conditions.
  • Copyright infringement is established when someone relies on and uses another’s copyrighted work (in other words, someone’s work is derived from the copyrighted work). However, it is controversial whether the reliance requirement is satisfied in the case where AI that is developed using another’s copyrighted work as AI-learning data produces its own work that resembles another’s copyrighted work that was used as AI-learning data, and there is no established view on this matter.

Patent Law

AI-related technologies, including inventions of methods for AI to produce works and works produced by AI, are eligible to receive patents as long as they meet the general patent requirements. Under Japanese law, it is considered that data and pre-trained models are not excluded from eligibility for patent protection as long as they are considered programs or program-equivalents (ie, data with structure and data structure). On the other hand, data or datasets that are merely presented as information are not eligible for patent protection.

As mentioned in 16.3 Applicability of Trade Secret and Similar Protection, if the user of AI has creative intent in the process of generating the work and contributes creatively to obtaining the AI-generated work through instructions or other means, the user can be considered to have creatively expressed their ideas or emotions using AI as a tool. In such cases, the AI-generated work is protected as a copyrighted work. This also applies to works and products created using OpenAI's services, and the protection does not differ depending on whether the product is an image or text.

However, the extent to which creative contribution must be made to qualify for copyright protection is determined on a case-by-case basis and is still controversial.

Under the Copyright Act, prompts used to generate high-quality output can likely be protected as copyrighted works, unless they are mere ideas, since copyright protects expressions, not ideas. On the other hand, even if a prompt is protected by copyright, the work generated by or with OpenAI's services is likely not a derivative work of the prompt if the creativity in the prompt is difficult to find in the generated work.

Although there are no cross-sectional laws and regulations regarding AI, in-house attorneys should be aware of the legal issues raised in Sections 12 and 16 when utilising AI. There is a specific need to confirm whether there are any problems in the utilisation of AI in terms of laws related to intellectual property, such as the Patent Act, Copyright Act, and the Unfair Competition Prevention Act, as well as the Act on the Protection of Personal Information.

In addition, the AI Utilisation Guidelines formulated by MIC in 2019 outline the following ten principles to note when utilising AI in light of its special characteristics, which should be taken into account by in-house attorneys of companies actively utilising AI:

  • proper utilisation;
  • data quality;
  • collaboration;
  • safety;
  • security;
  • privacy;
  • human dignity and individual autonomy;
  • fairness;
  • transparency; and
  • accountability.

In Japan, there are no cross-sectoral laws and regulations applicable to AI, only regulations in individual areas of law. However, given that the use of AI often involves the use of personal information, compliance with the APPI is essential. Moreover, the APPI sets out only a minimum set of required rules. A more cautious approach is therefore needed for the use of advanced technologies such as AI, depending on the purpose of the use and the type of personal information involved.

In addition to legal liability, there is also reputational risk if the use of AI results in discriminatory or unfair treatment.

Ultimately, it is for businesses to decide how to use AI in light of these considerations, which falls within the remit of the directors. However, since these decisions involve expert judgement, an increasing number of companies are turning to external expert panels or advisory boards on AI.

One AI governance guideline that is expected to serve as a reference for such business judgement is the Governance Guidelines for Implementation of AI Principles Version 1.1. Although the guidelines are not legally binding, in order to implement the Social Principles of Human-centric AI, they set forth, along with practical examples, six action goals for AI providers:

  • conditions and risks analysis;
  • goal-setting;
  • system design (building an AI management system);
  • implementation;
  • evaluation; and
  • re-analysis of conditions and risks.

The Guidelines also emphasise transparency and accountability. It is advisable to treat the information mentioned above as non-financial information under corporate governance codes and to consider actively disclosing it. However, not many companies are actively disclosing such information at this time.

Nagashima Ohno & Tsunematsu

JP Tower
2-7-2 Marunouchi
Chiyoda-ku
Tokyo 100-7036
Japan

+81 3 6889 7000

+81 3 6889 8000

www.noandt.com/en/

Trends and Developments


Authors



Nagashima Ohno & Tsunematsu is the first integrated full-service law firm in Japan and one of the foremost providers of international and commercial legal services based in Tokyo. The firm’s overseas network includes offices in New York, Singapore, Bangkok, Ho Chi Minh City, Hanoi and Shanghai, and collaborative relationships with prominent local law firms throughout Asia and other regions. The TMT Practice Group is comprised of about 50 lawyers and legal professionals and has been representing Japanese major telecom carriers, key TV networks, and many domestic and international internet, social media and gaming companies, not only in transactions but also in disputes, regulatory matters and general corporate matters. Also, a strength of the TMT Practice Group is that, in view of the firm’s robust client base, it is well positioned to consistently meet requests from clients to provide advice on many different matters, from business strategies to daily compliance and corporate matters.

Introduction: Impact of Generative AI Using Foundation Models

ChatGPT, which OpenAI began offering in November 2022, has significantly impacted Japanese society, just as in other countries. In Japan, in addition to ChatGPT, many generative AI services using foundation models, including large language models (LLMs) (“Generative AI”), such as Google's Bard and Adobe's Firefly, are available in Japanese or English. On 16 March 2023, ABEJA, a leading Japanese AI company, officially began offering an LLM trained based on GPT-3. Early adopters and innovators who have already begun using ChatGPT are disclosing and sharing various use cases and techniques for creating effective prompts, so information regarding ChatGPT is updated daily. In addition, several companies have begun providing services using ChatGPT through application programming interfaces (APIs). In this environment, an era has suddenly arrived in which even ordinary people with no experience with programming languages can instantly receive the benefits of advanced AI simply by interacting with it.

With the advent of generative AI, the barriers to using AI have decreased dramatically. Before the availability of ChatGPT in Japan, AI had been used primarily in industries such as finance, manufacturing, infrastructure, healthcare and nursing care, as well as in services such as inspection, maintenance and call centre operations. However, its use was limited to companies retaining AI engineers and companies that could outsource the development of AI. With the emergence of generative AI, companies and individuals without such technical resources are now able to use AI. For example, before the arrival of generative AI, a major internet advertising company created elaborate 3D models by taking high-resolution whole-body scans of celebrities, and then developed and utilised AI to automatically generate advertising content based on those 3D models in line with customers' preferences. In contrast, although not yet as sophisticated as the above-mentioned 3D models, some companies and individuals are using generative AI, such as Stable Diffusion, to create fictitious 3D models of persons and have launched modelling agencies that provide those models to other companies for use in advertising or other content.

In addition to this expansion of the user base for AI, an increasing number of companies that have used AI in the past have now begun to utilise generative AI not only for relatively routine work previously handled by AI, such as analysing information, but also in business areas where AI has not been used before, such as creating images of completed buildings to be constructed, coding and debugging.

Generative AI is being rapidly implemented in Japanese society.

Risks Stemming From Generative AI

Not all enterprises and individuals are proactively using generative AI. Considering the various risks associated with using generative AI, some companies have prohibited employees from using it internally or imposed certain limitations on its use. The following are some of the risks identified with using generative AI.

Third-party copyright infringement

Third-party copyright infringement issues may arise where a user uses the copyrighted work of a third party in data or prompts entered into AI and where the work generated by the AI is similar to the third party’s copyrighted work. In addition, since the sources used to train the AI for generating the relevant text are not clearly disclosed, there is also a risk that users may use the text generated by the AI for goods or services without knowing that the text contains a third-party’s copyrighted work.

Use of incorrect information

There is a risk that AI will generate “plausible lies” or false information that is difficult for people to identify as such, and users may make mistakes in judgement or behaviour by unquestioningly relying on that information. The risk includes mistakes arising from the data and issues related to bias.

Leaks or improper use of confidential and personal information

When confidential information or personal information is included in training data or prompts, there may be issues related to leaks of confidential information or inappropriate use of personal information.

Misuse

Potential risks include the generation and distribution of sophisticated fake news, which is difficult for users to identify as fake, and the risk of training the AI with the photographs of certain celebrities and influencers and generating images of similar people. A more severe case has been reported where generative AI explained how one could create a harmful computer virus.

One example of these risks materialising is Mimic, a Japanese AI illustration generator that learns the characteristics of 15 to 100 illustrations uploaded by a user and then generates illustrations reflecting those characteristics. Beta-version 1.0 of Mimic was released on 29 August 2022. Mimic's user guidelines prohibited users from uploading illustrations created by others. However, immediately after the release, there were several cases in which users uploaded image data from certain manga artists and illustrators without permission. Several people pointed out online that the developer of Mimic had not taken adequate measures to address the risk of user copyright infringement, and many illustrators and manga artists called for a prohibition on the unauthorised use of images in image-generation AI services. As a result of these comments, the developer temporarily suspended all functions of Mimic on 30 August, the day after its release, implicitly acknowledging that its mechanisms to prevent improper use were inadequate.

The Framework of Rules for the Development and Use of AI in Japan

In Japan, a soft law approach has been adopted for the development and use of AI, rather than a hard law approach such as the one contemplated by the EU. To ensure that innovation through AI is not impeded, the Japanese government has been building the framework for governance of the development and use of AI by publishing a variety of guidelines instead of imposing obligations through laws and regulations as far as possible, leaving the private sector to conduct governance voluntarily. The “Governance Guidelines for Implementation of AI Principles” published by the Ministry of Economy, Trade and Industry (METI) on 9 July 2021 (as amended in Version 1.1 of 28 January 2022) identify the key considerations for upholding the “Social Principles of Human-Centric AI” published by the Cabinet Office's Integrated Innovation Promotion Council on 29 March 2019. The Guidelines set forth the fundamental notion that applying a non-regulatory and non-binding framework to the development and use of AI is vital. The Guidelines also state that businesses that develop and operate AI should establish and adhere to principles for the development and use of AI, to be implemented according to the purposes and methods of the development and operation of their AI.

In addition to METI’s Governance Guidelines, the Conference Towards AI Network Society of the Ministry of Internal Affairs and Communications (MIC) (the “AI Network Conference”) has formulated the AI Development Guidelines (July 2017) and AI Utilisation Guidelines (August 2019) to facilitate international discussions regarding ethics and governance related to AI. These Guidelines set forth the values that need to be respected in the development and use of AI, such as fairness, transparency, and accountability; many companies likely refer to these Guidelines when establishing their principles for developing and operating AI. In its Report 2022, published on 25 July 2022, the AI Network Conference notes that it is reviewing both Guidelines to see whether they need to be amended; however, the report also confirms that both Guidelines generally cover the values described in the principles and guidelines established by the major countries involved in AI.

In addition to the above, government agencies and non-government organisations have established more specific guidance to help AI developers and users develop their own guidelines, such as the Guidelines on Assessment of AI Reliability in the Field of Plant Safety, which aims to contribute to resolving issues that may occur when introducing AI in the field of plant safety, and the Handbook for Utilising AI, which aims to improve consumers' basic literacy in the use of AI. As for recent developments, on 15 February 2022, MIC published the “Guidebook on AI-based Cloud Services”, which identifies matters to be considered when developing and providing AI-based cloud services.

With the arrival of generative AI such as ChatGPT, attention is focused on whether the Japanese government will continue its soft law approach centred on formulating and publishing guidelines such as those mentioned above, or will instead formulate regulations.

The Japanese Government’s National Strategy on AI and Policy Recommendations by the Liberal Democratic Party

Against this backdrop, the “Project Team on the Evolution and Implementation of AIs” (PT) of the Liberal Democratic Party, Japan’s ruling party, released “The AI White Paper – Japan’s National Strategy in the New Era of AI” in April 2023. The PT advocates developing a new national strategy and reviewing previous measures as soon as possible in response to the Japanese government’s existing AI strategy.

The Japanese government's national strategy on AI is the “AI Strategy” formulated by the Integrated Innovation Strategy Promotion Council (the “Council”), established within the Cabinet Office. Since formulating AI Strategy 2019, the Council has annually updated its overall AI-related policy initiatives and process management to drive policy promotion and monitoring. Guided by the three principles of respect for human dignity, diversity and sustainability, Japan's AI Strategy 2022 aims to resolve global issues and advance Society 5.0. By implementing this strategy, Japan intends to overcome its social challenges, improve its industrial competitiveness and address imminent crises such as large-scale disasters. AI Strategy 2022 includes objectives for promoting the social implementation of AI, such as improving the reliability of AI and promoting the utilisation of AI by the government. In 2023, however, the Council does not intend to prepare a separate AI Strategy document. Instead, it will present the government's AI policy efforts in a single chapter of the Integrated Innovation Strategy, providing a complete overview of the government's science and technology innovation strategy.

In response to the above, the PT insists that there is an urgent need to formulate a new national strategy, develop new policies and review past initiatives, given the rapid progress in the evolution and social implementation of generative AI. The PT recommends that Japan’s new national strategy aim to achieve international competitiveness with other countries in terms of the scale and scope of initiatives, and proposes a comprehensive review of policy measures from various perspectives, including research and development, economic structure, social infrastructure, human resource development and security. The PT also stresses the importance of conducting this review in a timely manner.

In particular, the PT proposes discussing specific regulations for serious risk areas, as the EU and the US have been considering, given that the risk of social harm through misuse will increase further as generative AI evolves and its societal impact grows. Through a detailed analysis of AI regulation in other countries such as the EU, the US and China, the PT identifies specific areas of significant risk that may require regulatory measures: (i) serious human rights violations; (ii) national security; and (iii) undue interference in the democratic process. In addition, the PT urges further discussion on the interpretation of intellectual property laws related to generative AI, with a view to considering and proposing guidelines and other measures to promote the advancement of AI technology, prevent its misuse and further develop the content industry.

Establishment of a Strategy Council

In connection with these developments, the Japanese government established a new Strategy Council in April 2023. The Strategy Council serves as a command centre responsible for considering national strategies for AI and providing primary policy direction. According to press reports, the Strategy Council is expected to discuss the requisite regulations concerning interactive AI, including personal data protection and copyright issues. Discussions are also expected to cover promoting research and development of domestically produced AI and human resource development, in line with the direction outlined in the AI White Paper.

Corporate AI Ethics and Governance Initiatives

Major companies that have developed and utilised AI have taken initiatives to ensure ethics and governance in such development and utilisation of AI, referring to the guidelines mentioned above issued by government and non-government organisations. The Report 2022 gives the following examples of such initiatives.

Guidelines and principles

A telecommunications carrier has formulated and released nine principles that it considers essential for appropriate AI development and utilisation, taking guidance from the AI Development Guidelines and AI Utilisation Guidelines published by the Ministry of Internal Affairs and Communications.

Organisation and structure

An electronics company has established an ethics committee to implement and oversee AI ethics and governance in AI developers and service providers. A system integration company (SIer) has also established an AI ethics governance office that reports directly to its president.

Security assurance and privacy protection

Another SIer assesses the security of externally procured AI models and programs by evaluating the provider's reliability and conducting source code inspections. In another case, an electronics company has developed an image recognition system that identifies humans while avoiding the retention of personal data in the AI: images are processed at the edge, and only the recognition results or estimated skeletal structure are uploaded to the cloud.

Ensuring fairness and eliminating bias

A telecommunications company collects large amounts of data from various sources to improve the accuracy of its AI systems, acknowledging the difficulty of eliminating all potential sources of bias. Furthermore, in cases where AI-induced bias is a concern, humans are included in verifying and addressing potential biases.

Transparency and accountability

In developing credit screening models, a marketplace operator carefully analyses the results of AI models and the effectiveness of their features to ensure transparency and accountability in explaining the outcomes of the credit screening.

In addition to the efforts mentioned above regarding security assurance, privacy protection, fairness and the elimination of bias, companies are exploring third-party frameworks or services to evaluate and assess their AI models and monitor their performance. Auditing firms and other independent organisations have recently begun to offer expert advice and monitoring evaluations on the governance and ethics of AI, and the use of such organisations is anticipated to rise in the future.

Case Law Related to AI Technology: Abuse of Algorithm Modification

There has yet to be a case in Japan that directly provides a legal framework for developing and utilising AI. However, on 16 June 2022, the Tokyo District Court issued its first decision relating to the abuse of algorithm modification. The decision addresses issues that should be considered in the development and utilisation of AI. The court found that the defendant, which operates the gourmet food website Tabelog, had engaged in an abuse of a dominant bargaining position, which the Anti-monopoly Act prohibits. Specifically, the defendant unilaterally modified its algorithm for determining the ratings of restaurants posted on Tabelog, resulting in the unreasonable lowering of the rating of the restaurant chain operated by the plaintiff. The court allowed the plaintiff’s claim for damages.

In this case, the defendant unilaterally modified the algorithm to calculate restaurant ratings, resulting in lower ratings for chain restaurants than for non-chain restaurants. The plaintiff argued that modifications to the evaluation criteria on Tabelog led to a decrease in its restaurant rating and sales and that the ongoing use of the modified algorithm violated the Anti-monopoly Act. The defendant had previously announced that it periodically reviews its algorithm but did not give advance notice of the specific modifications made in this case.

The Court found abuse of a dominant bargaining position based on the following factors:

  • The posting of ratings for fee-paying restaurant members of Tabelog by the defendant constitutes conducting a “transaction” (see Article 2.9(5)(c) of the Act) in relation to such members.
  • The act of making algorithm modifications and changing settings constitutes “conducting a transaction.” Conducting a transaction in a manner disadvantageous to the counterparty of a transaction includes any factual acts related to the transaction that are disadvantageous to such counterparty, aside from establishing or changing the terms of the transaction.
  • The Tabelog rating is a numerical value calculated based on subjective ratings and word-of-mouth reviews. It is valuable information for consumers to select restaurants and is used to determine the order of display in search ranking results. It can therefore influence a restaurant’s decision on whether to become a fee-paying member of Tabelog.
  • The algorithm modification is unreasonable as it does not achieve the intended purpose and causes unforeseeable harm to the plaintiff, contrary to normal business practices.

The above decision indicates that fairness and transparency are required in implementing and utilising algorithms. The decision also highlights the importance of an appropriate process, including advance notification, when making algorithm modifications. Although this decision is not yet final, it cites an opinion submitted by the Fair Trade Commission (FTC). The FTC opinion states that in determining whether the implementation and utilisation of an algorithm impede fair competition, the timing and scope of the algorithm's implementation and use (including whether prior consultation with restaurants was conducted), its potential to suppress the independence of restaurants and the extent of the disadvantage to the restaurants should all be considered. Therefore, fairness, transparency and appropriate procedures remain essential, particularly in cases of adverse action.

Conclusion

As previously mentioned, the Japanese government has thus far adopted a soft law approach rather than a hard law approach to avoid hindering innovation through AI. Companies with human and technical resources that have actively developed and utilised AI in the past have voluntarily established their governance for the development and utilisation of AI, referring to guidelines published by the government and non-government organisations.

On the other hand, the landscape of AI utilisation is expected to change significantly with the emergence of generative AI. For example, the increased accessibility of AI may limit the viability of the approach of requiring developers and users to voluntarily establish AI governance, as the number of developers without AI expertise and of consumer-like users who are unaware they are using AI increases. Additionally, ensuring transparency in the use of ChatGPT, for example, often depends on the response of OpenAI.

Furthermore, the Japanese government is exploring the possibility of aligning with international frameworks for AI governance. Therefore, regulations, particularly those related to generative AI, may be established in the future depending on the progress of international discussions. In addition, guidelines or other policies are highly likely to be published in the future to provide further clarification on legal provisions closely related to AI development, such as Article 30-4 of the Copyright Act of Japan, which establishes that the use of copyrighted materials as training data for AI does not generally constitute copyright infringement. Therefore, legal counsel should prepare governance policies regarding AI development and utilisation with reference to published guidelines and industry best practices. Future governmental developments should also be closely monitored.

