Artificial Intelligence 2024

Last Updated May 28, 2024

Japan

Law and Practice

Authors

Nagashima Ohno & Tsunematsu is the first integrated full-service law firm in Japan and one of the foremost providers of international and commercial legal services based in Tokyo. The firm’s overseas network includes offices in New York, Singapore, Bangkok, Ho Chi Minh City, Hanoi and Shanghai, and collaborative relationships with prominent local law firms throughout Asia and other regions. The TMT practice group comprises about 50 lawyers and legal professionals and has been representing major Japanese telecom carriers, key TV networks, and many domestic and international internet, social media and gaming companies, not only in transactions but also in disputes, regulatory matters and general corporate matters. A further strength of the TMT practice group is that, given the firm’s robust client base, it is well positioned to consistently meet client requests for advice across many different areas, from business strategy to day-to-day compliance and corporate matters.

In Japan, general legal frameworks such as tort law, data protection, intellectual property rights, criminal law, antitrust law, labour law, product liability law, and consumer protection law may also apply to artificial intelligence (AI).

Tort Law (Civil Code)

Under Article 709 of Japan’s Civil Code, liability may arise from intentional or negligent actions that infringe on rights or legally protected interests, including harm caused by AI. Tort law provisions encompass potential liabilities for AI users, developers, or providers based on their foresight and preventive measures.

Privacy and Data Protection Law

The Act on the Protection of Personal Information (APPI) regulates the processing of personal data in developing, training, and utilising AI (for details, see 8.3 Data Protection and Generative AI and 11.2 Data Protection and Privacy).

Intellectual Property Law

The application of copyright and patent laws to AI is widely debated in Japan. These issues are addressed in 8.2 IP and Generative AI and 15.1 Applicability of Patent and Copyright Law.

Criminal Law

The Japanese Penal Code encompasses various crimes that may involve AI, including fraud (Article 246), defamation (Article 230), and obstruction of business (Article 233). Abuse of AI technologies, such as deepfakes, may also fall under these provisions. Additionally, the Unauthorised Computer Access Law addresses AI-related misconduct, including unauthorised computer access (Article 11) and the unlawful acquisition of identifiers such as passwords (Article 12).

Antitrust Law

The Act on Prohibition of Private Monopolisation and Maintenance of Fair Trade addresses the potential risks of monopolistic practices or anti-competitive behaviours involving AI and algorithms, as detailed in 12.6 Anti-Competitive Conduct. Issues such as synchronised pricing through shared algorithms highlight these concerns.

Labour Law

The Employment Security Act stipulates the legal and fair collection of applicant information, which is also applicable when collecting such information using AI in hiring processes (see 13 AI in Employment). Meanwhile, Japan’s labour laws currently lack specific provisions regarding the use of autonomous decision-making systems.

Product Liability Law

Under Japan’s Product Liability Act, manufacturers are liable for damages caused by defective products that harm life, body, or property, irrespective of the manufacturer’s negligence. While AI software itself may not be considered a “product”, if integrated into a device, the entire assembly is deemed to be a product. However, determining what constitutes adequate safety for AI and proving a defect in safety can be challenging.

Consumer Protection Law

In Japan, laws such as the Act against Unjustifiable Premiums and Misleading Representations and the Consumer Contract Act apply to AI in consumer contexts. Generative AI used in advertising that creates misleading or deceptive impressions could be regulated under the Act against Unjustifiable Premiums and Misleading Representations. Additionally, unfair solicitation practices by AI-driven systems such as robo-advisers could violate the Consumer Contract Act.

AI and machine learning are transforming industries in Japan, with predictive and generative AI technologies driving innovation and efficiency. Predictive AI is being used across various sectors, enhancing processes through data analysis. In finance, it detects fraud and forecasts stock market trends, aiding in risk management and investment strategies. In healthcare, predictive AI assists in diagnosing diseases and planning treatments, improving patient care. In infrastructure and agriculture, predictive AI streamlines equipment maintenance and optimises harvest planning, thus enhancing operational efficiency and productivity.

Generative AI is introducing novel approaches in traditional and emerging fields. In advertising, for example, one beverage company employs generative AI for package design and virtual personalities in TV commercials. Architects leverage generative AI for visual presentations to clients, accelerating the design process. In IT, generative AI supports software development by automating code generation, reducing errors, and speeding up project timelines.

Moreover, certain industries traditionally reliant on predictive AI are now embracing generative AI to forge new paths in innovation. For example, in manufacturing, alongside predictive tasks, generative AI is used to develop robots that operate based on natural language instructions. In retail, in addition to enhancing customer service through AI-powered chat systems, generative AI is being utilised to create a new kind of dynamic and personalised shopping experience for consumers.

The Japanese government actively supports the development of AI through comprehensive investments and policy initiatives, effectively integrating national efforts with global collaboration. The Ministry of Economy, Trade and Industry (METI) exemplifies this support with initiatives like the GENIAC project, launched on 2 February 2024, which provides subsidies for essential computational resources for AI foundational models. Additionally, on 19 April 2024, METI committed JPY72.5 billion to enhance domestic supercomputing facilities to support AI development, emphasising its importance to Japan’s economic security.

For fiscal year 2024, Japan allocated approximately JPY164.1 billion for AI-related activities, reflecting a solid commitment to the sector. Of this, JPY72.8 billion is earmarked explicitly for generative AI technologies. This funding is aimed at supporting various initiatives, including the advancement of AI in sectors such as healthcare, education, and infrastructure. It also covers research and development in foundational AI models, computing resources, and technologies designed to mitigate AI-associated risks, such as misinformation.

Currently, there is no comprehensive cross-sectoral legislation regarding AI. As stated in “AI Governance in Japan Ver. 1.1”, the reason for this lies not only in the belief that comprehensive regulations are currently unnecessary from the perspective of fostering innovation, but also in the view that it may be preferable to respect rule-making at the individual sector level in certain specific fields, such as the automotive and medical sectors.

In individual legal domains, such as the APPI and the Copyright Act, rules and amendments to existing laws are being made to promote the utilisation of AI.

One such amendment is the introduction of pseudonymised medical data under the Next-Generation Medical Infrastructure Act, a special law under the APPI. To facilitate the use of AI in research and development in the medical field, the Act introduced the concept of pseudonymised medical data through an amendment in May 2023. This is expected to promote the research and development of AI diagnostic tools utilising big data in the medical field.

Furthermore, the government has provided guidance on the interpretation of existing laws and regulations in relation to the use of AI (see 3.3 Jurisdictional Directives). Although these interpretations are not binding, they serve as useful references for businesses.

Copyright Law

The following topics were included in the “Report on AI and Copyright” (15 March 2024) prepared by the Legal Systems Subcommittee of the Copyright Subdivision of the Cultural Affairs Council:

  • basic principles regarding copyright infringement when AI-generated works that are similar to existing copyrighted works are used;
  • fundamental considerations when utilising copyrighted works to develop AI (trained models); and
  • basic principles for recognising AI-generated works as copyrighted works.

Unfair Competition Prevention Law

In February 2024, revised versions of the “Handbook for the Protection of Confidential Information” and the “Guidelines for Limited Provision Data” were published. These publications address the concern that information protected under the Unfair Competition Prevention Law as “trade secrets” or “limited provision data” may leak through generative AI. The revised documents provide alerts and suggested responses regarding this risk.

Act on the Protection of Personal Information (APPI)

In June 2023, the Personal Information Protection Commission (PPC) published its stance on the handling of personal data in the use of generative AI.

Ministry of Education, Culture, Sports, Science, and Technology

The Ministry published its “Interim Guidelines on the Use of Generative AI at the Elementary and Secondary Education Levels” on 4 July 2023.

Non-binding Guidelines

In addition to existing interpretations of laws, several non-binding guidelines tailored specifically for businesses operating in the AI sector have been published. Among these, the “AI Guidelines for Business”, released by METI and the Ministry of Internal Affairs and Communications (MIC) on 19 April 2024, are the latest guidelines outlining the aspects that AI developers, providers, and users should take into consideration while doing business. It is anticipated that, until binding regulations on AI are introduced, these guidelines will serve as the primary reference point for Japanese companies regarding AI regulation.

In Japan, there are currently no specific laws or regulations that apply exclusively to AI; instead, there are only regulations within individual areas of law. For details on the proposed AI-specific legislation currently under consideration, please refer to 3.7 Proposed AI-Specific Legislation and Regulations.

On 19 April 2024, METI and MIC released the “AI Guidelines for Business”, which propose a framework aiming to balance the promotion of innovation and the mitigation of risks by providing unified guidelines for AI governance in Japan.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

Below is a discussion on how data protection laws and information and content laws in Japan have evolved or have been introduced to foster AI technology, as well as the role of public body recommendations or directives in this context.

Data Protection Laws

In Japan, the APPI covers data protection. Below are rules and guidance recently introduced concerning AI.

AI development and use of personal information

According to the default rules of the APPI, when collecting and using personal information, such information can only be used for the purposes specified at the time of collection, and changing those purposes requires the consent of the individual. However, with the introduction of “pseudonymised personal information” (ie, information processed in a way that renders it impossible to identify a specific individual unless collated with other information) in the amended APPI, which came into effect in 2022, it is now permitted to change the purposes of use of collected personal information without the consent of the individual, making it easier to use collected personal data in AI machine learning.

In March 2023, the PPC announced “The Use of Camera Systems with Facial Recognition Function for Crime Prevention and Safety Assurance.” While not introducing new rules or interpretations under the APPI, this serves as a reference guide for private businesses utilising facial recognition technology for purposes such as crime prevention.

Handling of generative AI and personal information

The PPC’s “Cautionary Notes on the Use of Generative AI Services” (June 2023) outline the following points of caution for businesses:

  • When businesses input prompts containing personal information into generative AI services, it is crucial to ensure that the scope of the data used is strictly necessary to achieve the specified purposes.
  • If businesses input prompts containing personal information into generative AI services without obtaining prior consent from the individuals, and the personal information is used for purposes other than responding to the prompt, such businesses may violate the provisions of the APPI. Therefore, when inputting such prompts, it is essential to confirm that the service provider does not use the personal information for machine learning or similar purposes.

Copyright Laws

AI development and the use of existing works

Under the Copyright Act, using works without the consent of the copyright owner can constitute copyright infringement. However, Japan has a specific provision under which the use of works for information analysis purposes is not considered an infringement (Article 30-4 of the Copyright Act). This makes it relatively easy to use third-party works for AI machine learning in Japan. That said, restrictions apply when the purpose of such use includes enjoying the thoughts or sentiments expressed in a work, or when the use unfairly harms the interests of the copyright owner.

Generative AI and copyright infringement

On 29 February 2024, the Agency for Cultural Affairs released a report detailing its interpretation of copyright laws concerning AI and copyright. This report outlines the criteria for recognising AI-generated works as copyrighted works and the basic principles regarding copyright infringement when AI-generated works that are similar to the original works are used.

Against the backdrop of the rapid proliferation of generative AI and regulatory trends in various countries, in March 2023, Japan’s ruling party, the Liberal Democratic Party (LDP), released an AI White Paper recommending that the introduction of specific laws and regulations be considered for certain risk areas. Thereafter, the LDP published the outline of the “Basic Law for the Promotion of Responsible AI” (tentative) on 16 February 2024.

If this proposed act is realised, it would signify a noteworthy shift in AI governance in Japan from being primarily focused on soft law regulations to regulations enforced by hard law with penalties. On the other hand, unlike the EU’s AI Act, there is no provision in the proposed law for immediate prohibition or regulation of specific AI models or services based solely on their content.

First, in June 2022, the Tokyo District Court found the operator of Tabelog, a well-known Japanese restaurant ratings site, liable for damages under the Anti-Monopoly Act for “abuse of a superior bargaining position”, as it had changed its algorithm to the disadvantage of some users and continued to use the changed algorithm. Thus far, the Japan Fair Trade Commission has indicated that a restaurant ratings site may have a superior position, and that acts such as unilaterally changing the algorithm and forcing restaurants to conclude contracts favourable to the site may constitute an abuse of a superior position.

On the other hand, in January 2024, the Tokyo High Court (court of appeal) ruled that, although the ratings site operator may have a superior bargaining position, it was not liable for “an abuse of a superior bargaining position”, since the purpose of the change and the manner in which the algorithm was changed in this case were reasonable. The case is currently on final appeal.

The above judgments are still considered to be highly influential decisions since (i) an abuse of a superior bargaining position was found based solely on the fact that the algorithm was changed to the disadvantage of the affected parties, and (ii) the reason for changing the algorithm largely determines whether the act was carried out unjustly in light of normal business practices, which is one of the requirements for “an abuse of a superior bargaining position”. Regarding point (ii), this lawsuit is notable from the perspective of information asymmetry, which is an aspect of AI services.

In addition, it is noteworthy that, in the first instance, the ratings site operator initially refused to disclose the algorithm itself as highly confidential information, which became an issue in the course of the proceedings, but eventually agreed to disclose it. In this regard, this lawsuit is also notable from the perspective of the principle of transparency, which is an aspect of AI governance.

Second, on 16 May 2024, the Tokyo District Court ruled that an “inventor” as defined in the Patent Act is limited to natural persons and does not include AI (see 15.1 Applicability of Patent and Copyright Law).

There are no precedents in Japan where the definition of AI was particularly at issue and a specific ruling was made. As stated in 5.2 Technology Definitions, there are some definitions of AI in statutes or guidelines.

Although the Cabinet Office has formulated a national strategy for AI, there are no cross-sectional and binding laws and regulations for AI in Japan (see 1.1 General Legal Background Framework). Therefore, there is no regulatory authority that plays a leading role in regulating AI. Instead, the following ministries and agencies are primarily responsible for the enforcement of AI-related laws by sector and application within the scope of the laws and regulations under their jurisdiction.

In relation to AI, the Ministry of Health, Labour and Welfare (MHLW) has jurisdiction over labour laws (eg, the Labour Standards Act, the Labour Contract Act and the Employment Security Act) and the Pharmaceutical and Medical Devices Act (PMDA). In connection with labour laws, the MHLW addresses AI-related employment issues, such as recruitment, personnel evaluation and monitoring of employees using AI (see 13 AI in Employment). In the medical devices field, there is a move to accommodate AI-enabled medical devices under the PMDA (see 14.3 Healthcare).

The Ministry of Land, Infrastructure, Transport and Tourism (MLIT) has jurisdiction over the Road Traffic Act, which establishes rules for automated driving.

The Ministry of Economy, Trade and Industry (METI) has jurisdiction over various AI-related laws and regulations (such as the Unfair Competition Prevention Act, which protects big data as “limited provision data”) and is actively formulating guidelines and other relevant materials for businesses involved in the development and utilisation of AI, such as the “Contract Guidelines on Utilisation of AI and Data Version 1.1” and the “AI Guidelines for Business”. In addition, the Japan Patent Office, an external bureau of METI, has jurisdiction over the Patent Act (see 15.1 Applicability of Patent and Copyright Law regarding the protection of AI-enabled technologies and datasets under the Patent Act).

The PPC has jurisdiction over the APPI. The PPC addresses APPI-related issues where personal data is involved in the development and use of AI.

The Japan Fair Trade Commission (JFTC) has jurisdiction over the Act on Prohibition of Private Monopolisation and Maintenance of Fair Trade (the Anti-Monopoly Act) and the Subcontract Act. The JFTC addresses the effects that the use of AI, including algorithmic price adjustment behaviour and dynamic pricing, may have on a fair competitive environment.

The Financial Services Agency (FSA) has jurisdiction over the Banking Act and the Financial Instruments and Exchange Act, among others. The FSA addresses risks and other issues related to investment decisions by AI for financial instrument business operators (see 14.2 Financial Services).

The Agency for Cultural Affairs has jurisdiction over the Copyright Act (see 15.1 Applicability of Patent and Copyright Law regarding the protection of AI-enabled technologies and datasets under the Copyright Act).

MIC addresses policy related to information and communication technologies (including policy related to the advancement of network systems with AI as a component). In April 2024, MIC also issued the “AI Guidelines for Business” jointly with METI.

The definitions of AI used by regulators include some that are specific to machine learning as well as other broader definitions that could include generative AI. However, the Japanese government has not yet established any fixed definition that applies in every context. The main examples are as follows.

  • The AI Guidelines for Business: According to these guidelines, an AI system is abstractly defined as a system that includes software elements capable of operating and learning with various levels of autonomy through the process of utilisation.
  • The Basic Act on the Advancement of Public and Private Sector Data Utilisation: According to this act, “AI-related technology” means technology related to the realisation of intelligent functions such as learning, reasoning and decision-making by artificial means, and the use of such functions realised by artificial means.

The MHLW, through its enforcement of labour laws, addresses issues related to the utilisation of AI in various aspects of employment, including recruitment, personnel evaluation, employee monitoring, and replacement by AI and the resulting termination/reassignment issues (see 13 AI in Employment). Steps are also being taken to address AI-based medical devices under the PMDA, such as providing a framework for determining whether an AI-based medical device program constitutes a “medical device” subject to licensing (see 14.3 Healthcare).

MLIT handles the development of laws on traffic rules for automated driving through the enforcement of the Road Traffic Act.

METI addresses the protection of data and information used in AI development and products created in the process of AI development under the Unfair Competition Prevention Act (see 15.1 Applicability of Patent and Copyright Law).

See 14.2 Financial Services for a discussion on the amended Instalment Sales Act, which came into effect in April 2021, enabling credit card companies to determine credit limits through credit screening using AI and big data analysis.

The PPC, through its enforcement of the APPI, addresses the handling of personal information that may be used in the development and utilisation of AI.

The JFTC addresses issues related to the use of AI in a fair competitive environment through enforcement of the Anti-Monopoly Act (see 12.6 Anti-Competitive Conduct).

Although the development and use of AI itself has not been a target of enforcement, there has been a case in which the handling of personal data in a service using AI became an issue. In this case, in 2019, a service provider used AI technology to calculate the expected job offer decline rate for individuals during job hunting and provided it to client companies without the consent of the individuals concerned. The PPC issued a warning and guidance to the service provider, while the MHLW issued administrative guidance.

Government agencies, national research institutions, and industry groups each contribute significantly to developing and establishing AI-related standards and guidelines.

Japanese Industrial Standards (JIS)

On 21 August 2023, METI established JIS X 22989, “Information technology – Artificial intelligence – Artificial intelligence concepts and terminology”, under the Japanese Industrial Standards. This standard, identical to ISO/IEC 22989, defines the concepts and terminology related to AI. Additionally, JIS Q 38507, “Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organisations”, is being developed to align with ISO/IEC 38507:2022 and is intended to provide practical governance guidance for the use of AI in organisations.

AI Safety Institute

The AI Safety Institute, established on 14 February 2024 by the Cabinet Office and the Information-technology Promotion Agency (IPA), focuses on enhancing AI safety standards domestically and internationally. The institute collaborates with ISO/IEC SC42 to standardise safety measures and is also developing frameworks for reliable safety evaluation methods and testing procedures for AI systems. It is poised to play a pivotal role in establishing these safety standards and providing guidance for the secure deployment of AI technologies across various sectors.

The Consortium of Quality Assurance for Artificial-Intelligence-Based Products and Services (QA4AI Consortium)

The QA4AI Consortium, a collaborative effort of leading IT companies, academic institutions, and national research and development agencies, has published the “Guidelines for Quality Assurance of AI-Based Products and Services”. These guidelines address key areas such as data integrity, model robustness, system quality, process agility, and customer expectations, providing detailed checklists that aid in developing reliable AI products.

Research and Guidance by AIST

The National Institute of Advanced Industrial Science and Technology (AIST) continues to lead in AI research and standards development. The “Machine Learning Quality Management Guideline (Revision 3.2.1)” published by AIST classifies the quality of machine learning systems into three categories: quality at the time of use, external quality, and internal quality. It further details methods for applying quality control tailored to these quality categories, which are essential for ensuring the effectiveness and reliability of AI systems in various applications.

In Japan, aligning business practices with international AI standards is becoming increasingly important for companies involved in AI development and deployment.

The AI Guidelines for Business, issued on 19 April 2024 by MIC and METI, emphasise the importance of adhering to international standards that ensure the responsible development, deployment, and management of AI systems. The guidelines advocate a proactive approach to integrating international standards into Japanese business practices. They include direct references to comprehensive standards such as ISO/IEC 23894:2023, which provides guidance on risk management for AI systems. Moreover, the guidelines cover standards relevant to various aspects of AI implementation, from information security (ISO/IEC 27001) and data quality (ISO/IEC 25012) to privacy protection (ISO/IEC 27701, ISO/IEC 29100, and ISO/IEC 27018).

Although current Japanese regulations do not mandate compliance with these international standards, the proactive involvement of Japanese experts in their development illustrates Japan’s commitment to aligning domestic practices with global benchmarks. This participation bolsters Japan’s position on the international stage and helps ensure that local practices are in sync with international standards, reducing potential discrepancies and conflicts.

Regarding the introduction of AI technology in government, the “Guidebook for the Use and Introduction of AI in Local Governments” was published by MIC in June 2022. The “Guidebook for the Use and Introduction of AI in Local Governments (Introduction Steps)”, released by MIC around the same time, provides specific methods and points to note for local governments in introducing AI technology.

The use of facial and biometric recognition by the government is subject to the APPI, because the required data falls under the category of personal information and its use may infringe on the rights to privacy and publicity.

There are no particular judicial decisions regarding issues related to the use of AI technologies by government agencies in Japan.

In the AI Strategy 2022 formulated by the Cabinet Office in April 2022, it is stated that “[i]n light of the increasing complexity of the international geo-political situation and changes in the socioeconomic structure, various initiatives are being considered for key technologies including AI from the perspective of economic security, and it is necessary to coordinate related measures so that the government as a whole can effectively focus on these issues”. This was the first time AI-related announcements referred to economic security. In May 2022, the Economic Security Act was enacted, which stipulates the provision of information and financial support for specified critical technologies, including AI-related technologies. Following the enactment of the Act, in April 2024, METI designated the “Cloud Program” (including generative AI) as critical material under the Economic Security Act and announced its plan to establish relevant computing resources domestically. This plan aims to make resources for the Cloud Program, with a particular focus on generative AI, accessible to a broad range of developers in order to secure a stable supply of such services.

Conversely, a notable instance of the government ceasing to use AI is the discontinued use of LINE, a social networking service that also functioned as an automated chatbot for responding to inquiries. In March 2021, an issue emerged following reports that LINE’s subcontractor in China could access the personal data of LINE users in Japan. Consequently, local governments faced the dilemma of whether to suspend their use of LINE.

Discussions around generative AI technologies, such as GPT, and their ethical, legal, and social implications in Japan continue to grow more prevalent and increase in intensity. These issues can be categorised into several critical areas, as outlined below.

Intellectual Property Violations

Generative AI poses new challenges in intellectual property law, especially potential copyright issues. In Japan, a significant point of contention is the application of the copyright provision that limits copyright protection in relation to information analysis, including machine learning (Article 30-4 of the Copyright Act). This provision, particularly in the context of using copyrighted works during the AI training phase, can lead to significant conflicts among stakeholders.

Invasion of Publicity Rights

The unauthorised use of celebrity images in AI-generated content raises concerns about publicity rights violations. These concerns include the creation of highly realistic deepfakes and the blending of features from multiple celebrities to form new virtual characters for both commercial and non-commercial uses, leading to new legal and ethical challenges.

Misuse of Personal Data and Invasion of Privacy

The use of personal data by generative AI without prior consent can lead to inappropriate handling or use for unintended purposes. This includes the risk of AI learning from this data and incorporating it into its output, sometimes inaccurately, which can lead to privacy violations.

Leakage of Confidential Information

Generative AI may inadvertently disclose sensitive or proprietary information. If AI systems are trained on confidential data, there is a risk that this information could be exposed to other users or misused by entities for competitive advantages, breaching confidentiality obligations.

Misinformation

Generative AI can produce inaccurate or entirely fabricated information, spreading misinformation and impacting decision-making processes.

Bias and Discrimination

AI systems that are improperly designed or inadequately monitored can perpetuate or amplify existing biases, resulting in unfair or discriminatory treatment.

Illegal and Unethical Use

While political impersonation has not been a prominent issue in Japan, generative AI has been implicated in other criminal activities, such as fraud and hacking. Issues like using AI for phishing scams or to facilitate hacking are increasingly significant concerns.

IP Protection of the AI Process

Generative AI processes involve (i) training the AI model using a training dataset and (ii) generating outputs by providing prompts to the trained model. These processes may yield valuable assets such as the AI model, training datasets, input prompts, and output. These assets may be protected under intellectual property law, as outlined below.

AI model

Mathematical or theoretical AI models are generally not eligible for patent protection, as they are often viewed as mere mathematical methods that do not utilise a law of nature. However, if the learning methods of an AI model provide innovative solutions to existing problems, they can be patented. If not patented, these innovations can be treated as trade secrets, provided they meet the applicable requirements. It is unclear whether AI models can be recognised as “database works” or “program works” under copyright law.

Training dataset

Training datasets typically do not qualify for patent protection; however, the methods used to generate them, including unique selections and combinations of data items and preprocessing techniques that effectively train specific AI models, can be the subject of patent protection. If the components of the datasets, such as images, videos, and music, qualify as works of authorship, they are individually protected by copyright. Additionally, if these datasets meet the criteria for trade secrets or limited provision data, they can be protected under the Unfair Competition Prevention Act.

Input (prompts)

Innovations in prompt generation methods can be patented if they enhance AI system inputs or are designed to elicit specific responses. Additionally, prompts that include copyrighted elements like images, videos, and music are protected under copyright law.

Output

AI-generated outputs themselves typically do not qualify for patent protection; however, the processes or systems that produce these outputs can be patented. Additionally, outputs generated by AI may contain creative expressions eligible for copyright protection, contingent upon the content and nature of the inputs to the AI and how the AI is utilised.

AI Terms for Input and Output Rights

Generative AI providers typically offer users the option to opt out of using their input data for model training, reflecting industry standards and user concerns about data use and protection. Users usually retain ownership of outputs generated by these AI tools, per the terms of service. However, these terms do not guarantee the legal protectability of these outputs.

Please refer to 15 Intellectual Property for IP infringements related to the AI process.

Under Articles 17 and 18 of the APPI, which provide for purpose limitation and data minimisation, personal information handling operators, acting as controllers, must ensure that the usage of personal information in generative AI services aligns with the purposes for which the data was collected. As mentioned in 3.6 Data, Information or Content Laws, the recent advisory issued by the PPC emphasises the critical importance of the appropriate handling of personal data within AI applications. The PPC cautions that using personal data in generative AI without prior consent and for purposes other than those disclosed could violate the APPI. It has also highlighted the need for data subjects’ explicit consent before their sensitive personal information is used in AI models, in line with the APPI’s consent requirements under Article 20.

Additionally, individuals have specific rights under the APPI, such as the right to rectify or delete incorrect personal data under Article 34 and the right to request suspension of use or deletion of unlawfully processed data under Article 35. However, it is important to note that personal information used in generative AI may not always fall under the definition of “retained personal data”, which refers to data systematically organised for retrieval. Consequently, the rights to request disclosure, correction, or cessation of use may not be applicable in all scenarios where AI generates output.

Whether AI chatbot legal advice and AI automated drafting services violate the Attorneys Act, which prohibits non-lawyers from providing legal services, is a major issue. This was highlighted in 2022 when the Ministry of Justice responded to inquiries from legal tech service providers about the legality of such services, suggesting that their contemplated services might constitute the unauthorised practice of law. However, in August 2023, the Ministry of Justice issued guidelines clarifying that the following types of contract drafting, review and management services do not constitute the unauthorised practice of law:

  • services that assist in the drafting of contracts and review of legal issues in the ordinary course of business regarding corporate legal matters that do not involve litigation or disputes;
  • services where the language or clauses of the contracts being reviewed are the same as or similar to those pre-registered in the system, such as contract templates or checklists, and are presented without individual modification (as opposed to services that involve the legal analysis of the content of the contract based on specific factual background or instructions regarding the content and the preparation of detailed, case-specific drafting or modification of the contract); and
  • services used by lawyers who individually review the AI-generated material and make necessary changes themselves.

The guidelines have made it clear that the scope of legality for AI contract review services is quite broad.

In Japan, AI is not recognised as a legal entity, and there is no specific legislation regarding liability arising from the acts or use of AI. Therefore, general civil and criminal liability rules apply. Civil liability is as described in 1.1 General Legal Background, but in some cases, depending on the relationship between the injured party and the manufacturer, the manufacturer’s liability may be based on a contract. In addition, regarding automated driving, the “operator” (the owner of the vehicle) may be liable for damages; specifically, the operator is liable unless it can prove that it was not negligent. In terms of criminal liability, professional or ordinary negligence resulting in injury or death (Articles 211 and 210 of the Penal Code, respectively) is typically considered applicable to the developers and users of AI, but other crimes may also apply depending on the circumstances. In addition, in cases where the actions of a third party intervene and the use of AI causes damage to others, issues of joint tort liability with respect to civil liability, and of conspiracy with respect to criminal liability, may arise.

In relation to the civil liability mentioned above, if a product has a defect, product liability will be imposed regardless of whether the manufacturer was negligent; this may have a chilling effect on AI developers. However, this risk can be hedged by insurance, which can encourage development.

Regarding the sharing of responsibility in the supply chain, the Contract Guidelines on Utilisation of AI and Data Version 1.1 (see 5.1 Regulatory Agencies) note that it is difficult to determine the attribution of liability (percentage of negligence) based on tortious acts, because causal relationships are hard to verify after an accident and because the results of AI use depend on the learning datasets, the content of which is difficult to identify, and on the input data at the time of use, which is unspecified. In addition, claims for damages may be made based on contractual liability between the user and the AI developer, and between the AI developer and the data provider, for the generation of trained models. It is desirable to clearly specify the division of responsibility in the contract according to the circumstances.

In addition, the model contracts included in Version 1.1 of the Contract Guidelines on Utilisation of AI and Data are a good reference for common industry practice.

In Japan, there is no cross-sectional legislation or guidelines regarding criminal and civil legal liability with respect to AI.

Algorithmic bias refers to situations in which bias occurs in the output of an algorithm, resulting in unfair or discriminatory decisions. In Japan, there has been no case in which a company has been found legally liable for harm arising from algorithmic bias. However, if a company were to make a biased decision based on the use of AI, it could be found liable for damages based on tort or other grounds. In addition, companies may face reputational risk if unfair or discriminatory decisions are made in relation to gender or other matters that significantly affect a person’s life, such as in the hiring process.

There are no laws or regulations that directly address algorithmic bias; companies are expected to take the initiative themselves to prevent its occurrence. For example, the AI Guidelines for Business by METI and MIC recommend the following: “AI developers must ensure that AI models are trained on representative datasets and are inspected for any unfair biases in the AI system. AI providers are to regularly assess the inputs and outputs of the AI models and their decision-making bases, and monitor for the occurrence of any bias. AI business users must ensure fairness in the data inputs and responsibly make business decisions based on the AI’s outputs, being mindful of any bias included in the prompts”.

Given that all processes involved in data generation and selection, annotation, pre-processing, and model/algorithm generation are subject to potential bias, documentation regarding the specifics of these processes should be obtained and maintained. However, when complex algorithms such as deep learning are used, humans may not be able to understand these processes in the first place, even if the relevant materials are collected. It is therefore advisable to select algorithms with the aspects of “explainable AI” (XAI) in mind.

The AI Guidelines for Business call for the protection of privacy across all AI systems and services. They require AI developers to ensure appropriate data training through privacy by design and other means. AI providers are tasked with implementing mechanisms and measures for privacy protection. AI users are expected to prevent the improper input of personal information and to take adequate measures to prevent privacy violations. Under Japanese law, the right to privacy is considered to be “the right to control one’s own information”, which is not necessarily the same as the protection of personal information under the APPI and requires separate consideration.

Profiling by AI to infer a person’s behaviour and characteristics from their browsing history may raise privacy concerns. A well-known Japanese recruiting company that operates a job search website for university students provided client companies considering hiring new graduates with a service that indicated the likelihood of students leaving the hiring process or declining job offers. This service used an algorithm that calculated the likelihood of a student declining a job offer based on the student’s browsing history by industry on job search websites and provided the company with a score indicating that likelihood. The service was problematic in that some students had not agreed to the privacy policy, and the privacy policy was not sufficiently specific for students to foresee that their information would be provided to companies in the form of a likelihood of declining an offer. The PPC issued a recommendation and guidance, as this service violated the APPI, and the service was strongly criticised by Japanese society.

Under Japanese law, in relation to privacy and personal information, the obligations and responsibilities related to the processing of personal data by AI, such as in profiling, do not change depending on whether there is direct human supervision. For example, the secrecy of communications is protected as a type of privacy right. Even if the contents of communications are obtained and analysed solely by a machine without any human involvement, in principle this constitutes an infringement of the right to secrecy of communications if the consent of the individual concerned has not been obtained.

Personal Data

Facial or biometric authentication requires the capture of biometric data such as facial images and fingerprint data. Such data is considered personal information under the APPI, but is not regarded as personal information requiring special care (Article 2, paragraph 3 of the Act). Therefore, when acquiring such information, as long as its purpose of use is notified or disclosed, the individual’s consent is not required. However, depending on how the data is acquired and used, it may constitute an improper acquisition (Article 20, paragraph 1 of the Act) or improper use (Article 19 of the Act). It is therefore advisable to consider this issue carefully.

Privacy and Portrait Rights

In addition, depending on how facial images and biometric information are obtained and used, there may also be infringement of privacy rights and portrait rights (ie, infringement of personality rights). Although the circumstances in which privacy and portrait rights are infringed have been debated in a growing number of court precedents, the debate surrounding facial and biometric authentication has not yet crystallised, and it is therefore difficult to definitively specify what types of acquisition and use are permissible. With respect to the use of video images, in practice, it is advisable to refer to the Guidebook for Utilisation of Camera Images Version 3.0 (March 2022).

Profiling will be used here as an example of automated decision-making. While some foreign jurisdictions have introduced regulations on profiling using AI, such as Article 22 of the EU’s GDPR, there are no laws or regulations that directly regulate profiling in Japan. Nevertheless, the provisions of the APPI must be complied with. For example, when personal data is acquired for profiling purposes, to analyse behaviour, interests and other information from data obtained from individuals, the purpose of use of such data must be explicitly notified or disclosed to the public in accordance with the APPI. However, it should be noted that the individual’s consent is not required under the APPI unless the data acquired constitutes personal information requiring special care. In addition, precautions should be taken to avoid inappropriate use (Article 19 of the APPI).

Further, if automated decision-making leads to unfair or discriminatory decisions, liability for damages and reputational risk could be an issue, similar to the issues discussed in 11.1 Algorithmic Bias.

In Japan, there are no laws or regulations that provide specific rules for AI transparency and accountability. However, in the AI Guidelines for Business published by METI and MIC in April 2024, transparency and accountability are established as common principles for businesses involved in the AI field. This means that businesses utilising AI should, within technically feasible limits, ensure that AI systems and services are verifiable and provide appropriate information on the AI systems to stakeholders. This includes information about the use of AI, its application scope, methods of data collection, the capabilities and limitations of the system, and the methods of its use.

However, there is no clear guidance on when and what information should be disclosed when AI, such as chatbots, replaces services typically provided by people.

The above can also be problematic from the standpoint of the APPI. For example, if AI is actually being used, but the company does not disclose this, leading the user to mistakenly believe that a human is making decisions and providing personal data, there may be a breach of the duty to properly acquire the data or the duty to notify the purpose of its utilisation.

In March 2021, the Japan Fair Trade Commission published the “Report of the Study Group on Competition Policy in Digital Markets – Algorithms/AI and Competition Policy”, with the aim of ensuring that competition risks associated with algorithms/AI are properly addressed. The report discusses three types of algorithms/AI that may have a significant impact on competition at this time: price research and pricing algorithms, ranking, and personalisation (especially personalised pricing). The JFTC is examining potential competition policy issues in these areas.

It is generally believed that it is not easy to establish a case of concerted conduct involving algorithms, because there is little contact between competing businesses and it is difficult to identify any actual communication of intent. The above report points to the following cases where, even if there is no direct or indirect exchange of information between businesses using algorithms, it is considered that there is a common recognition that prices are synchronised and thus a cartel exists:

  • multiple competing businesses use a pricing algorithm provided by the same vendor, etc, and, by using that algorithm, the businesses are aware that prices will be mutually synchronised; and
  • a platform provider of a pricing algorithm informs its users that it will impose the same upper limit of discount rates on the sale prices of all users, and the users use the algorithm while being aware of this.

In addition, with regard to rankings, if a leading ranking operator arbitrarily manipulates its rankings and obstructs transactions between competing business operators and consumers by displaying its own products at a higher ranking and treating them more favourably, this is considered to be in violation of the Anti-Monopoly Act. In a related matter, in June 2022 the Tokyo District Court ordered the payment of damages in a case in which a restaurant claimed that a restaurant rating platform in a dominant position had unfairly lowered its rating through an algorithm change, in violation of the Anti-Monopoly Act. However, in January 2024, the Tokyo High Court overturned the District Court’s decision and ruled in favour of the platform. The case has been further appealed to the Supreme Court.

AI as a Service (AIaaS) models utilise provider-sourced data to train algorithms that interact with user inputs during application stages. The inherent multi-tenancy of these services means that interactions with AI by one user can potentially affect others. This characteristic raises specific concerns about the management of user data and related output.

Data Interaction, User Input, and Output Management

Significant privacy and confidentiality risks arise in AIaaS models when user inputs or prompts – and the output derived from these – are used for further AI learning. Contracts should specify how user inputs are managed, ensuring that they are not stored or used beyond immediate operational requirements without explicit user consent. Additionally, contracts should safeguard users’ rights over their inputs and clarify whether the AI is authorised to reproduce similar output for other users or use cases, thus preventing unauthorised use or replication of proprietary information. It is also crucial to ensure that the AI employs technical measures to prevent the generation of output that could infringe on any third-party copyright.

Explainability

Explainability in decision-making is critical in sectors such as finance, healthcare, and legal services, as well as in operations where AI-driven decisions significantly impact individuals. AIaaS contracts should emphasise transparent decision-making processes across all applications, enhancing trustworthiness and ethical integrity.

Ethical Considerations

Ethical practices are essential in the deployment and operation of AI systems within AIaaS. Contracts should include mechanisms for users to inquire and report any concerns regarding biases or ethical shortcomings in the AI system.

For employers, the advantages of using AI in hiring and termination include the fact that, unlike the subjective evaluations conducted by recruiters in the past, AI-based evaluations can be conducted fairly and objectively by setting certain standards, and that the use of AI can make the recruitment process more efficient. On the other hand, the following points are relevant with respect to the information that may be obtained through the hiring process and the exercise of the right to terminate.

Hiring

In Japan, there are no laws that specifically restrict the use of AI in hiring or recruitment activities. Additionally, under Japanese law and judicial precedent, since companies have the freedom to hire, even if an AI analysis is incorrect and the employer does not fully verify this analysis, this would not necessarily constitute a violation of applicable laws. However, it can be said that AI-based recruitment limits a company’s freedom to hire to a certain extent.

Specifically, even in cases where AI is utilised in recruitment activities and information on jobseekers is automatically obtained, in accordance with Article 5-4 of the Employment Security Act and Article 4-1(2) of the Employment Security Act Guidelines, the information must be collected in a lawful and fair manner, such as directly from the jobseeker or from a person other than the jobseeker with the jobseeker’s consent.

In addition, when using AI to obtain information on jobseekers, companies must be careful not to obtain certain prohibited information.

Specifically, under Article 20 of the APPI, a company is in principle prohibited from obtaining information requiring special care (race, creed, social status, medical history, criminal record and any facts related to the jobseeker being a victim of a crime) without the consent of the jobseeker, and, under Article 5-4 of the Employment Security Act and Article 5-1(2) of the Employment Security Act Guidelines, the company may not obtain certain information (eg, membership of a labour union, place of birth) even with the consent of the jobseeker.

In addition, there is a risk that, as a result of an erroneously high AI evaluation, an offer may be made to a jobseeker, or the jobseeker may be hired, even though the jobseeker would not have received an offer or been hired under the company’s original criteria. In such cases, under Japanese law, the legality and validity of a decision to reject or dismiss the jobseeker will be determined based on how the recruitment process was conducted.

Termination

Situations in which the selection of the persons to be terminated may be problematic include termination as part of employment redundancy or voluntary resignations.

Under Japanese law, unilateral termination of employees by employers is restricted, and termination that constitutes an abuse of the right to terminate is considered invalid. In particular, in the case of termination as part of employment redundancy, the validity of termination is examined from the viewpoints of (i) the necessity of reducing the workforce; (ii) the necessity of terminating employees through employment redundancy; (iii) the validity of the selection of employees to be terminated; and (iv) the validity of the procedures for termination. The use of AI is mainly anticipated in the selection of employees to be terminated in (iii) above. It should be noted that these four perspectives are considered factors rather than requirements, and even if AI is utilised to select an employee for termination in a reasonable and fair manner that eliminates subjectivity in the selection, this does not necessarily mean that the termination is valid. Naturally, if the data on which the AI bases its judgement is erroneous or if the AI is unreasonably biased, there is a high possibility that the selection of the terminated employee will not be recognised as valid.

On the other hand, there is no law that specifically regulates voluntary resignations, since a resignation is made voluntarily by the employee. However, voluntary resignations must take place in a manner that respects the employee’s voluntary decision; courts have held that a resignation resulting from unreasonable conduct that may have impaired the employee’s voluntary decision to resign constitutes a tort under Article 709 of the Civil Code. Therefore, even if the selection of employees to be invited to resign is based on an objective and impartial evaluation by AI, the company should not approach the matter on the footing that the decision rests on the AI’s judgement and leaves no room for negotiation. Instead, the company should provide a thorough explanation so that the employee understands the pros and cons of resigning and can make a voluntary decision. This recommendation predates, and applies irrespective of, the introduction of AI into the termination process.

Personnel Evaluation

Generally, the items and standards of assessment in Japanese personnel evaluations are abstract, and supervisors have broad discretion in the assessments. AI-based personnel evaluations are expected to reduce the unfairness and uncertainty stemming from the discretion given to supervisors.

Legally, the following provisions regulate personnel evaluations:

  • equal treatment (Article 3 of the Labour Standards Act);
  • equal pay for men and women (Article 4, ibid);
  • equal treatment of men and women in promotions, etc (Article 6, Paragraph 1 of the Equal Employment Opportunity Act); and
  • unfair labour practices (Article 7 of the Labour Union Act).

Where a company has the authority to evaluate its employees, courts have held that a tort is not established unless the employer violated the above-mentioned provisions or abused its discretionary power contrary to the purpose of the personnel evaluation system. Cases that would fall under an abuse of discretion include factual errors, misapplication of evaluation criteria, arbitrary evaluation and discriminatory evaluation.

Therefore, even where AI is used for personnel evaluations, if there is an error in the data on which the AI bases its judgement, or in the algorithm or learning method by which the AI evaluates that data, an evaluation based on the AI’s judgement may constitute a tort.

Monitoring

One possible method of monitoring workers using AI is to have the AI screen emails and automatically notify managers of any suspicious messages.

The question is whether this infringes the privacy rights of the monitored workers; however, monitoring is considered permissible as long as the company’s authority to monitor is clearly set out in its internal rules. Courts have also held that, even where that authority is not clearly stated, monitoring is permissible if there is a reasonable business management need, such as the need to investigate a possible violation of corporate order, and the means and methods used are reasonable.

Therefore, when conducting monitoring using AI, it would be advisable to (i) specify in the internal rules that managers ultimately have the authority to check the contents of employees’ email exchanges, and (ii) communicate such rules to the employees.
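
By way of illustration only, the workflow described above can be sketched in a few lines of code. The following is a minimal hypothetical sketch (all names and keywords are invented, and a simple keyword check stands in for an AI classifier); the key point, consistent with the guidance above, is that the system only flags messages for a manager’s review and takes no automatic action against the employee.

```python
# Minimal illustrative sketch (hypothetical names): AI-style email screening
# that routes suspicious messages to a manager for human review, consistent
# with internal rules giving managers the final authority to inspect emails.
from dataclasses import dataclass

SUSPICIOUS_KEYWORDS = {"confidential dump", "wire transfer", "password list"}

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def is_suspicious(email: Email) -> bool:
    """Crude stand-in for an AI classifier: simple keyword screening."""
    text = f"{email.subject} {email.body}".lower()
    return any(keyword in text for keyword in SUSPICIOUS_KEYWORDS)

def screen_inbox(emails: list[Email]) -> list[Email]:
    # The system only flags messages; no automatic sanction is applied.
    return [e for e in emails if is_suspicious(e)]

flagged = screen_inbox([Email("a@example.com", "Re: wire transfer", "see attached")])
for e in flagged:
    print(f"Notify manager for review: {e.sender} / {e.subject}")
```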

Ridesharing services were partially liberalised in Japan in 2024, but strict legal regulations still apply, and ridesharing services such as Uber are not yet widespread in Japan. However, food delivery platforms such as Uber Eats, which use an algorithm to route delivery staff so that orders are delivered quickly and efficiently, are widely used. Many food delivery platforms have no employment relationship with their delivery staff, who work on a freelance basis. The MHLW guidelines for freelance workers state the following.

  • The Anti-monopoly Act and the Subcontract Act may apply to transactions between freelance workers as sole proprietors and their counterparties (eg, failure to provide written contract terms, unilateral changes to transaction terms, and delays in or reductions of remuneration payments are prohibited as abuses of a superior bargaining position).
  • Regardless of the contract form, if the relevant person is in fact an employee or worker, labour-related laws and regulations will apply in addition to the Anti-monopoly Act.

The Uber Eats Union, a labour union of Uber Eats delivery staff, demanded collective bargaining with the Japanese entity that operates the Uber Eats business in Japan (Uber Eats Japan). Specifically, the union demanded collective bargaining regarding compensation in the event of an accident during delivery. Uber Eats Japan rejected the demands on the ground that the delivery staff are not employees under the Labour Union Act. The union then sought the intervention of the Tokyo Labour Relations Commission, which, in November 2022, ruled that the delivery staff were employees under the Labour Union Act.

In the financial sector, AI is used by banks and lenders for credit decisions and by investment firms for investment decisions. In addition, the amended Instalment Sales Act, which came into effect in April 2021, enables credit card companies to determine credit limits through credit screening using AI and big data analysis.

The FSA’s supervisory guidelines require banks, etc, when concluding a loan contract, to be prepared to explain the objective rationale for the contract’s terms in light of the customer’s financial situation. This applies even if AI is used in credit operations; it is therefore necessary to be able to explain the rationale for credit decisions made by AI.

In addition, when credit scoring is used by AI to determine the loan amount available for personal loans, care should be taken to avoid discriminatory judgements, such as different judgements of loan amounts available based on gender or other factors. The Principles for a Human-Centred AI Society also state: “Under the AI design philosophy, all people must be treated fairly, without undue discrimination on the basis of their race, gender, nationality, age, political beliefs, religion, or other factors related to diversity of backgrounds”.
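
As a purely illustrative sketch (all data and names below are hypothetical), one simple precaution is to exclude protected attributes such as gender from the model’s inputs and to monitor approval rates across groups after the fact; this is only one elementary fairness check among many, not a statement of any FSA requirement.

```python
# Illustrative sketch only: exclude a protected attribute from scoring inputs
# and compare approval rates across groups (a simple demographic-parity check).
applicants = [
    {"income": 5.0, "debt": 1.0, "gender": "F", "approved": True},
    {"income": 4.0, "debt": 2.5, "gender": "M", "approved": False},
    {"income": 6.5, "debt": 0.5, "gender": "F", "approved": True},
    {"income": 3.0, "debt": 2.0, "gender": "M", "approved": True},
]

# The features fed to the scoring model deliberately omit "gender".
features = [{k: a[k] for k in ("income", "debt")} for a in applicants]

def approval_rate_by_group(records, group_key="gender"):
    totals = {}
    for r in records:
        approved, count = totals.get(r[group_key], (0, 0))
        totals[r[group_key]] = (approved + int(r["approved"]), count + 1)
    return {g: approved / count for g, (approved, count) in totals.items()}

# Large gaps between groups would warrant investigation of the model.
print(approval_rate_by_group(applicants))  # eg, {'F': 1.0, 'M': 0.5}
```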

Financial instrument firms must not solicit customers inappropriately in light of the customer’s knowledge, experience, financial situation and the purpose of concluding the contract, so as not to fail to protect investors (the suitability principle). These firms are also obliged to explain to customers the outline of the contract and the risks of the investment in accordance with this principle. Therefore, if the criteria for investment decisions made by AI cannot be reasonably explained, problems may arise in relation to the suitability principle and the duty to explain.

If AI-based programs, such as diagnostic imaging software or health management wearable terminals, or devices equipped with such programs fall under the category of “medical devices” under the Pharmaceuticals and Medical Devices Act, a licence is required for their manufacture and marketing, and approval or certification is also required for individual medical device products. Whether AI-based diagnostic support software and other medical programs constitute “medical devices” must be determined on a case-by-case basis, but the MHLW has provided a basic framework for making such determinations.

According to this framework, the following two points should be considered.

  • How much does the programmed medical device contribute to the treatment, diagnosis, etc, of diseases in view of the importance of the results obtained from the programmed medical device?
  • What is the overall risk, including the risk of affecting human life and health in the event of impairment, etc, of the functions of the programmed medical device?

In addition, while a change procedure is ordinarily required to alter part of the approved or certified particulars of a medical device, the product design of an AI-based medical device may be premised on its performance constantly changing as new data is obtained after the product is marketed. Reflecting this characteristic of AI-based programs, the amended Pharmaceuticals and Medical Devices Act, which came into effect in September 2020, introduced a medical device approval review system that allows for continuous improvement.

Since medical services such as diagnosis and treatment may only be performed by physicians, programs that provide AI-based diagnostic and treatment support may only serve as a tool to assist physicians in diagnosis and treatment, and physicians will be responsible for making the final decision.

Medical history, physical and mental ailments, and the results of medical examinations conducted by physicians are considered “personal information requiring special care” under the APPI, and, in principle, the patient’s consent must be obtained when acquiring such information. In many cases, medical institutions need to provide personal data to medical device manufacturers for the development and validation of AI medical devices. In principle, providing personal information to a third party requires the individual’s consent, which may be difficult to obtain from patients in advance. An opt-out system exists, but it cannot be used for personal information requiring special care.

Anonymised information, which is processed irreversibly so that a specific individual can no longer be identified, can be provided freely to third parties. However, it has been noted that creating anonymised information is practically difficult for medical institutions. In addition, the Next Generation Medical Infrastructure Act allows authorised business operators to receive medical information from medical information handlers (hospitals, etc) and anonymise it under an opt-out method, although this system is not widely used.

The revised Next Generation Medical Infrastructure Act passed by the Diet in April 2023 established a new system for the creation and use of “pseudonymised medical information”.
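
To illustrate the practical difference in data-handling terms (a minimal hypothetical sketch, not a statement of what the Act requires): pseudonymisation replaces direct identifiers with a keyed token, so re-identification remains possible for whoever holds the key, whereas anonymisation must be irreversible, typically also requiring the generalisation or deletion of quasi-identifiers.

```python
# Illustrative sketch only: keyed pseudonymisation of a patient record.
# Anonymisation, by contrast, must make re-identification impossible even
# for the data holder (eg, by also generalising or deleting quasi-identifiers).
import hashlib
import hmac

SECRET_KEY = b"hold-separately-under-access-controls"  # hypothetical key

def pseudonymise_id(patient_id: str) -> str:
    # HMAC keeps the mapping reproducible only for holders of the key.
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "P-1234", "diagnosis": "hypertension", "age": 47}
pseudonymised = {**record, "patient_id": pseudonymise_id(record["patient_id"])}
print(pseudonymised)  # diagnosis retained; direct identifier replaced by a token
```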

Regarding traffic rules, amendments to the Road Traffic Act have already been enacted to permit Level 3 (conditional automated driving) and Level 4 (unmanned automated driving).

Regarding liability in the event of an accident, there are no specific regulations that determine liability when an autonomous vehicle causes an accident, and currently, the existing legal framework applies. Under the current law, the entities liable in the event of an accident involving an autonomous vehicle include the driver, the operator (a concept that includes the owner of the vehicle and the transport business operator, in addition to the driver), and the manufacturer of the vehicle.

As for the driver’s liability, under the amended Road Traffic Act, at Level 3 the driver is not required to remain vigilant unless requested to override and take over from the autonomous driving system; liability for accidents occurring in the absence of an override request is therefore limited to exceptional circumstances. At Level 4, since no intervention by a person riding in the car is requested at all, that person bears no responsibility if an accident occurs.

Regarding the manufacturer’s liability under the Product Liability Act, there is currently an active discussion on how to define the “defect” in an autonomous vehicle that must be proven by the victim. In general, however, it is considered very challenging to hold manufacturers liable under the Product Liability Act when an autonomous vehicle causes an accident.

In light of this, the government’s policy is to ensure the protection of traffic accident victims by clarifying that the operator’s liability applies to autonomous driving for the time being. In Japan, when a personal injury accident occurs, the operator is subject to what is effectively strict liability, and when the operator is held liable, victims are compensated through the compulsory automobile liability insurance attached to the vehicle.

There are currently no specific regulations or government guidelines for the use of AI in manufacturing. Nevertheless, the AI Guidelines for Businesses are broadly applicable to the use of AI in the manufacturing sector. Interestingly, a document released in June 2020 by the Regulatory Reform Promotion Council, an advisory body to the Cabinet Office, suggests that existing regulations regarding the inspection of products at manufacturing facilities could be relaxed if AI is used to assist in the inspection. It states that “if precise risk management is carried out using digital technologies during the manufacturing process, inspections themselves should be considered unnecessary”.

In addition to legal services (see 9 Legal Tech), when AI assists with professional services such as tax and accounting work, the applicable professional regulations must be observed. For example, under Article 72 of the Attorneys Act, non-lawyers and entities other than law firms are not permitted to engage in the practice of law as a business. A violation will not occur, however, if the relevant AI services are intended to assist lawyers and are designed so that the AI’s output must be reviewed by lawyers and then provided to clients as the lawyers’ own work product. By contrast, if the output of the AI services is provided directly to clients, a problem may arise under the Attorneys Act. Since many such restrictions exist under current laws applicable to professional services, it is necessary to ensure that AI performing professional tasks does not violate these regulations.

Discussions regarding whether AI technology can be recognised as an inventor or co-inventor for patent purposes, an author or co-author for copyright purposes, or a holder of moral rights are also taking place in Japan. Under current Japanese law, AI is not a natural person and therefore cannot be recognised as an inventor for patent purposes, an author for copyright purposes, or a holder of moral rights. In this regard, on 16 May 2024, the Tokyo District Court ruled that an “inventor” as defined in the Patent Act is limited to natural persons and does not include AI. The case arose after the Japan Patent Office (JPO) dismissed a patent application relating to an AI-generated invention because only “DABUS, an artificial intelligence which invented the invention autonomously” was listed as the inventor’s name in the national-phase documents of the PCT application; the plaintiff filed a lawsuit seeking revocation of the JPO’s decision.

However, if a person who used AI to create a work had creative intent and made a creative contribution, the resulting work may be recognised as having been created by that person using the AI as a tool, rather than by the AI itself. In such a case, the natural person who had the creative intention and made the creative contribution is considered the author. While it is debated whether AI should be granted legal personality, no such legal system is under consideration at this point.

AI technology and the (big) data utilised in the development and use of AI are protected as trade secrets, just like other informational assets (Article 2(6) of the Unfair Competition Prevention Act (the UCPA)), as long as they are (i) kept secret; (ii) not publicly known; and (iii) useful for business activities. The trade secret holder can seek an injunction against unauthorised use by a third party and can also claim damages. In addition, criminal penalties may apply to acts of unfair competition, etc, committed for the purpose of wrongful gain or causing damage (Article 21 of the UCPA).

Moreover, even if data does not qualify as a trade secret because it is not kept secret (being intended for provision to third parties in the course of the development or use of AI), it is protected as “shared data with limited access” (Article 2(7) of the UCPA) if it constitutes technical or business information that is accumulated to a significant extent and is managed by electromagnetic means as information to be provided to specific parties on a regular basis. The holder of rights in shared data with limited access can seek an injunction against unauthorised use by a third party and can also claim damages. However, unlike trade secrets, there are currently no criminal penalties in respect of shared data with limited access.

Protection Based on Judicial Precedents

Even if not protected by the UCPA, unauthorised use of data may constitute a tort under Article 709 of the Civil Code if there are special circumstances, such as infringing on legally protected interests (Supreme Court, Judgment, 8 December 2011, Minshu 65(9)3275 [2012]). Legally protected interests include, for example, business interests in business activities (a case in which incorporating another company’s database into one’s own database for sale was considered to constitute a tort; Tokyo District Court, Judgment, 25 May 2001, Hanta 1081, 267 [2002]).

Protection Through Contracts

Even if not protected by the UCPA, parties can set out rights and obligations relating to data in data transaction contracts and thereby protect valuable data. Under current Japanese law, however, data, being an intangible asset, is not recognised as an object of ownership; it remains the subject of contractual rights of use. Programs or models and their source code, in particular, can reasonably be expected to be treated separately, so where the transfer of source code is at issue, it is desirable to agree explicitly on its handling.

Copyright Law

Works created autonomously by AI are not protected by copyright, since AI lacks thoughts or sentiments. However, if the user of the AI (a human being) has creative intent in the process of generating the work and contributes creatively to obtaining the AI-generated work through instructions or other means, the user can be considered to have creatively expressed their thoughts or sentiments using the AI as a tool, and the work is protected as a copyrighted work.

Using third-party copyrighted works for “AI learning” prior to generating AI-created works generally does not constitute copyright infringement. This is because copyright protection does not extend to certain uses that are not intended for the enjoyment of the thoughts or sentiments expressed in the work (Article 30-4(ii) of the Copyright Act), and AI-learning uses typically fall within this exception. However, if the copyrighted works are used as they are for a database, rather than as data for AI-learning purposes, such use may constitute copyright infringement notwithstanding the above.

Copyright infringement is established when someone relies on and uses another’s copyrighted work (in other words, the allegedly infringing work is derived from the copyrighted work). However, it is controversial whether this reliance requirement is satisfied where an AI that was developed using another’s copyrighted work as learning data produces a work resembling that copyrighted work, and there is no established view on the matter.

Patent Law

AI-related technologies, including inventions of methods by which AI produces works as well as the works produced by AI, are eligible for patents as long as they meet the general patent requirements. Under Japanese law, data and pre-trained models are not excluded from patent protection as long as they qualify as programs or program equivalents (ie, structured data and data structures). On the other hand, data or datasets that are merely presentations of information are not eligible for patent protection.

As mentioned in 15.1 Applicability of Patent and Copyright Law, if the user of AI has creative intent in the process of generating the work and contributes creatively to obtaining the AI-generated work through instructions or other means, the user can be considered to have creatively expressed their thoughts or sentiments using the AI as a tool. In such cases, the AI-generated work is protected as a copyrighted work. This also applies to works and products created using generative AI services such as those offered by OpenAI, and the protection does not differ depending on whether the product is an image or text.

However, the extent to which creative contribution must be made to qualify for copyright protection is determined on a case-by-case basis and is still controversial.

Under the Copyright Act, prompts used to generate high-quality output can likely be protected as copyrighted works unless they are mere ideas, since copyright protects expressions, not ideas. On the other hand, even where a prompt is protected by copyright, the generated work is likely not a derivative work of the prompt if the creativity in the prompt cannot be found in the generated work.

In Japan, there are no cross-sectoral laws and regulations applicable to AI, only regulations in individual areas of law.

However, given that the use of AI often involves the use of personal information, compliance with the APPI is essential. The APPI, moreover, sets out only a minimum set of required rules; a more cautious approach is therefore needed when using advanced technologies such as AI, depending on the purpose of the use and the type of personal information involved.

In addition to legal liability, there is also reputational risk if the use of AI results in discriminatory or unfair treatment.

Ultimately, it is for businesses to decide how to use AI in light of these considerations, which falls within the remit of the directors. However, since these decisions involve expert judgement, an increasing number of companies are turning to external expert panels or advisory boards on AI.

One AI governance guideline that is expected to serve as a reference for such business judgement is the “AI Guidelines for Businesses 1.0” established by METI and MIC. Although the guidelines are not legally binding, it is anticipated that, until binding AI regulations are introduced, they will serve as a primary reference point for Japanese companies.

Since there is no comprehensive AI regulation in Japan, best practice includes: (i) compliance with existing laws in specific areas; (ii) building a robust AI governance framework; (iii) contractual measures; and (iv) technical measures. The following discussion focuses on points (i) through (iii).

Legal Compliance

When developing, providing, or using AI, it is necessary to comply with existing laws, especially the Copyright Act and the APPI. These issues are discussed in more detail in other sections of this chapter.

Risk Management and Governance Framework (Building an AI Governance System)

Since there is no comprehensive AI regulation in Japan, companies need to address risks not necessarily covered by law, such as bias and fairness issues; mere compliance with existing regulations is not sufficient. Companies developing high-risk AI systems in particular are therefore increasingly considering establishing a comprehensive AI governance framework across their organisations. Such frameworks mainly consist of internal processes for identifying and addressing AI risks, together with the organisations and personnel that develop and operate those processes.

Guidance that can be useful in this context includes the “AI Guidelines for Businesses 1.0” published by METI and MIC in April 2024. While these guidelines are not legally binding and non-compliance does not incur penalties, Japanese case law suggests that widely adopted guidelines may be taken into account when determining important issues such as breaches of directors’ duties. Industry participants are therefore recommended to review the guidelines to ensure that their systems do not fall significantly below industry standards.

Contractual Measures

Given that multiple parties are involved in developing, providing, or using AI, it is worth allocating risks and responsibilities appropriately by contract. In this context, the “Contract Guidelines on the Utilization of AI and Data” published by METI in June 2018 can serve as a useful reference. However, care is needed regarding other applicable laws, such as the Subcontract Act, the Consumer Contract Act, and the standard terms of contract provisions under the Civil Code, which invalidate certain contract clauses that unilaterally disadvantage a counterparty.

Nagashima Ohno & Tsunematsu

JP Tower
2-7-2 Marunouchi
Chiyoda-ku
Tokyo 100-7036
Japan

+81 3 6889 7000

+81 3 6889 8000

www.noandt.com/en/