In Japan, general legal frameworks such as tort law, data protection, intellectual property rights, criminal law, antitrust law, labour law, product liability law, and consumer protection law may also apply to artificial intelligence (AI).
Tort Law (Civil Code)
Under Article 709 of Japan’s Civil Code, liability may arise from intentional or negligent actions that infringe rights or legally protected interests, including harm caused by AI. These tort provisions can give rise to liability for AI users, developers, or providers, depending on whether the harm was foreseeable and whether preventive measures were taken.
Privacy and Data Protection Law
The Act on the Protection of Personal Information (APPI) regulates the processing of personal data in developing, training, and utilising AI (for details, see 8.3 Data Protection and Generative AI and 11.2 Data Protection and Privacy).
Intellectual Property Law
The application of copyright and patent laws to AI is widely debated in Japan. 8.2 IP and Generative AI and 15.1 Applicability of Patent and Copyright Law address these issues.
Criminal Law
The Japanese Penal Code encompasses various crimes that may involve AI, including fraud (Article 246), defamation (Article 230), and obstruction of business (Article 233). Abuse of AI technologies, such as deepfakes, may also fall under these provisions. Additionally, the Unauthorised Computer Access Law addresses AI-related misconduct, including unauthorised computer access (Article 11) and the unlawful acquisition of identifiers such as passwords (Article 12).
Antitrust Law
The Act on Prohibition of Private Monopolisation and Maintenance of Fair Trade addresses the potential risks of monopolistic practices or anti-competitive behaviours involving AI and algorithms, as detailed in 12.6 Anti-Competitive Conduct. Issues such as synchronised pricing through shared algorithms highlight these concerns.
Labour Law
The Employment Security Act requires that applicant information be collected by lawful and fair means, a requirement that also applies when AI is used to collect such information in hiring processes (see 13 AI in Employment). Meanwhile, Japan’s labour laws currently lack specific provisions regarding the use of autonomous decision-making systems.
Product Liability Law
Under Japan’s Product Liability Act, manufacturers are liable for damages caused by defective products that harm life, body, or property, irrespective of the manufacturer’s negligence. While AI software itself may not be considered a “product”, if integrated into a device, the entire assembly is deemed to be a product. However, determining what constitutes adequate safety for AI and proving a defect in safety can be challenging.
Consumer Protection Law
In Japan, laws such as the Act Against Unjustifiable Premiums and Misleading Representations and the Consumer Contract Act apply to AI in consumer contexts. Generative AI used in advertising that creates misleading or deceptive impressions could be regulated under the Act Against Unjustifiable Premiums and Misleading Representations. Additionally, unfair solicitation practices by AI-driven systems such as robo-advisers could violate the Consumer Contract Act.
AI and machine learning are transforming industries in Japan with predictive and generative AI technologies driving innovation and efficiency. Predictive AI, for instance, is being used across various sectors and enhancing processes through data analysis. In finance, it detects fraud and forecasts stock market trends, aiding in risk management and investment strategies. In healthcare, predictive AI assists in diagnosing diseases and planning treatments, improving patient care. In infrastructure and agriculture, predictive AI streamlines equipment maintenance and optimises harvest planning, thus enhancing operational efficiency and productivity.
Generative AI is introducing novel approaches in traditional and emerging fields. In advertising, for example, one beverage company employs generative AI for package design and virtual personalities in TV commercials. Architects leverage generative AI for visual presentations to clients, accelerating the design process. In IT, generative AI supports software development by automating code generation, reducing errors, and speeding up project timelines.
Moreover, certain industries traditionally reliant on predictive AI are now embracing generative AI to forge new paths in innovation. For example, in manufacturing, alongside predictive tasks, generative AI is used to develop robots that operate based on natural language instructions. In retail, in addition to enhancing customer service through AI-powered chat systems, generative AI is being utilised to create a new kind of dynamic and personalised shopping experience for consumers.
The Japanese government actively supports the development of AI through comprehensive investments and policy initiatives, effectively integrating national efforts with global collaboration. The Ministry of Economy, Trade and Industry (METI) exemplifies this support with initiatives like the GENIAC project, launched on 2 February 2024, which provides subsidies for essential computational resources for AI foundational models. Additionally, on 19 April 2024, METI committed JPY72.5 billion to enhance domestic supercomputing facilities to support AI development, emphasising its importance to Japan’s economic security.
For fiscal year 2024, Japan allocated approximately JPY164.1 billion for AI-related activities, reflecting a solid commitment to the sector. Of this, JPY72.8 billion is earmarked explicitly for generative AI technologies. This funding is aimed at supporting various initiatives, including the advancement of AI in sectors such as healthcare, education, and infrastructure. It also covers research and development in foundational AI models, computing resources, and technologies designed to mitigate AI-associated risks, such as misinformation.
Currently, there is no comprehensive cross-sectoral legislation regarding AI. As stated in “AI Governance in Japan Ver. 1.1”, this reflects not only the belief that comprehensive regulation is currently unnecessary from the perspective of fostering innovation, but also the view that rule-making is better left to individual sectors in certain specific fields, such as the automotive and medical sectors.
In individual legal domains, such as the Act on the Protection of Personal Information (APPI) and the Copyright Law, rules and amendments to existing laws are being made to promote the utilisation of AI.
One such amendment occurred in May 2023, when the Next-Generation Medical Infrastructure Act, a special law under the APPI, introduced the concept of pseudonymised medical data in order to facilitate the use of AI in research and development in the medical field. This is expected to promote the research and development of AI diagnostic tools utilising big data in the medical field.
Furthermore, the government has provided guidance on the interpretation of existing laws and regulations in relation to the use of AI (see 3.3 Jurisdictional Directives). Although these interpretations are not legally binding, they serve as useful references for businesses.
Copyright Law
The “Report on AI and Copyright” (15 March 2024), prepared by the Legal Systems Subcommittee of the Cultural Affairs Council’s Copyright Subcommittee, addresses topics including the use of copyrighted works at the AI training stage, copyright infringement by AI-generated output, and the copyrightability of AI-generated works.
Unfair Competition Prevention Law
In February 2024, revised versions of the “Handbook for the Protection of Confidential Information” and the “Guidelines for Limited Provision Data” were published. These publications address the concern that information protected under the Unfair Competition Prevention Law as “trade secrets” or “limited provision data” may leak through generative AI, and provide alerts and suggested responses regarding that risk.
Act on the Protection of Personal Information (APPI)
In June 2023, the Personal Information Protection Commission (PPC) published its stance on the handling of personal data in the use of generative AI.
Ministry of Education, Culture, Sports, Science, and Technology
The Ministry published its “Interim Guidelines on the Use of Generative AI at the Elementary and Secondary Education Levels” on 4 July 2023.
Non-binding Guidelines
In addition to existing interpretations of laws, several non-binding guidelines tailored specifically for businesses operating in the AI sector have been published. Among these, the “AI Guidelines for Business”, released by METI and the Ministry of Internal Affairs and Communications (MIC) on 19 April 2024, provide the latest guidance on the matters that AI developers, providers, and users should take into consideration in their business activities. It is anticipated that, until binding regulations on AI are introduced, these guidelines will serve as the primary reference point for Japanese companies regarding AI regulation.
In Japan, there are currently no specific laws or regulations that apply exclusively to AI; instead, there are only regulations within individual areas of law. For details on the proposed AI-specific legislation currently under consideration, please refer to 3.7 Proposed AI-Specific Legislation and Regulations.
On 19 April 2024, METI and MIC released the “AI Guidelines for Business”, which propose a framework aiming to balance the promotion of innovation with the mitigation of risks by providing unified guidelines for AI governance in Japan.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
Below is a discussion on how data protection laws and information and content laws in Japan have evolved or have been introduced to foster AI technology, as well as the role of public body recommendations or directives in this context.
Data Protection Laws
In Japan, the APPI covers data protection. Below are rules and guidance recently introduced concerning AI.
AI development and use of personal information
According to the default rules of the APPI, when collecting and using personal information, such information can only be used for the purposes specified at the time of collection, and changing those purposes requires the individual’s consent. However, with the introduction of “pseudonymised personal information” (ie, information processed in such a way that a specific individual cannot be identified unless it is collated with other information) under the amended APPI, which came into effect in 2022, it is now permitted to change the purposes of use of collected personal information without the individual’s consent, making it easier to use collected personal data for AI machine learning.
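To illustrate the concept in engineering terms, the sketch below shows one way a record might be pseudonymised: the direct identifier is replaced with a salted one-way hash and standalone contact fields are deleted, so the record can no longer identify a specific individual unless collated with separately held information. The field names, salt handling and code are purely hypothetical illustrations, not a statement of what the APPI requires.

```python
# Illustrative sketch only: pseudonymising a record by replacing the
# direct identifier with a salted hash and deleting contact fields.
# Field names and salt handling are hypothetical.
import hashlib

SALT = "replace-with-a-secret-salt-kept-separately"  # hypothetical key management

def pseudonymise(record: dict) -> dict:
    out = dict(record)
    # Replace the direct identifier with a one-way token.
    out["name"] = hashlib.sha256((SALT + record["name"]).encode()).hexdigest()
    # Delete fields that could identify the individual on their own.
    for field in ("email", "phone"):
        out.pop(field, None)
    return out

record = {"name": "Taro Yamada", "email": "taro@example.com",
          "phone": "090-0000-0000", "age": 42, "purchases": 13}
print(pseudonymise(record))  # attributes useful for ML training remain
```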
In March 2023, the PPC announced “The Use of Camera Systems with Facial Recognition Function for Crime Prevention and Safety Assurance.” While not introducing new rules or interpretations under the APPI, this serves as a reference guide for private businesses utilising facial recognition technology for purposes such as crime prevention.
Handling of generative AI and personal information
The PPC’s “Cautionary Notes on the Use of Generative AI Services” (June 2023) outline the following points of caution for businesses:
When businesses input prompts containing personal information into generative AI services, it is crucial to ensure that the data used is limited to what is strictly necessary to achieve the specified purposes.
If businesses input prompts containing personal information into generative AI services without the individuals’ prior consent, and the personal information is used for purposes other than responding to the prompt, the businesses may violate the provisions of the APPI. When inputting such prompts, it is therefore essential to confirm that the service provider does not use the personal information for machine learning or similar purposes (an illustrative pre-submission check is sketched below).
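The sketch below illustrates the kind of pre-submission check a business might run so that prompts containing obvious personal identifiers are caught before being sent to a generative AI service. The regex patterns and the send_prompt() call are hypothetical placeholders; passing such a check does not by itself ensure APPI compliance.

```python
# Illustrative sketch only: block prompts containing obvious personal
# identifiers before sending them to a generative AI service.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "jp_phone": re.compile(r"0\d{1,4}-\d{1,4}-\d{3,4}"),
    "my_number": re.compile(r"\b\d{4} ?\d{4} ?\d{4}\b"),  # 12-digit Individual Number
}

def check_prompt(prompt: str) -> list[str]:
    """Return the kinds of personal identifiers detected in the prompt."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarise this complaint from taro@example.com (tel 03-1234-5678)."
hits = check_prompt(prompt)
if hits:
    raise ValueError(f"Prompt contains possible personal data: {hits}")
# else: send_prompt(prompt)  # hypothetical call to the AI service
```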
Copyright Laws
AI development and the use of existing works
Under the Copyright Act, using works without the consent of the copyright owner can constitute copyright infringement. However, Japan has a specific provision under which the use of works for information analysis purposes is not considered an infringement (Article 30-4 of the Copyright Act). This makes it relatively easy to use third-party works for AI machine learning in Japan. That said, there are restrictions where the purpose of such use includes enjoying the thoughts or sentiments expressed in a work, or where the use unfairly harms the interests of the copyright owner.
Generative AI and copyright infringement
On 29 February 2024, the Agency for Cultural Affairs released a report detailing its interpretation of copyright laws concerning AI and copyright. This report outlines the criteria for recognising AI-generated works as copyrighted works and the basic principles regarding copyright infringement when AI-generated works that are similar to the original works are used.
Against the backdrop of the rapid proliferation of generative AI and regulatory trends in various countries, in March 2023, Japan’s ruling party, the Liberal Democratic Party (LDP), released an AI White Paper recommending that the introduction of specific laws and regulations be considered for certain risk areas. Thereafter, the LDP published the outline of the “Basic Law for the Promotion of Responsible AI” (tentative) on 16 February 2024.
If this proposed act is realised, it would signify a noteworthy shift in AI governance in Japan from being primarily focused on soft law regulations to regulations enforced by hard law with penalties. On the other hand, unlike the EU’s AI Act, there is no provision in the proposed law for immediate prohibition or regulation of specific AI models or services based solely on their content.
First, in June 2022, the Tokyo District Court found the operator of Tabelog, a well-known Japanese restaurant ratings site, liable for damages under the Anti-Monopoly Act for “abuse of a superior bargaining position”, because it changed its algorithm to the disadvantage of some users and continued to use the changed algorithm. The Japan Fair Trade Commission has thus far indicated that a restaurant ratings site may hold a superior position, and that acts such as unilaterally changing the algorithm and forcing restaurants to conclude contracts favourable to the site may constitute an abuse of a superior position.
On the other hand, in January 2024, the Tokyo High Court (court of appeal) ruled that, although the ratings site operator may have had a superior bargaining position, it was not liable for an abuse of that position, since the purpose of the change and the manner in which the algorithm was changed were reasonable in this case. The case is currently on final appeal.
The above judgments are considered highly influential because (i) an abuse of a superior bargaining position was found based solely on the fact that the algorithm was changed to the disadvantage of the affected parties, and (ii) the reason for changing the algorithm largely determines whether the act was carried out unjustly in light of normal business practices, which is one of the requirements for an abuse of a superior bargaining position. Regarding point (ii), the lawsuit is notable from the perspective of information asymmetry, which is a characteristic of AI services.
It is also noteworthy that, at first instance, the ratings site operator initially refused to disclose the algorithm itself as highly confidential information, which became an issue in the course of the lawsuit, but eventually agreed to disclose it. In this respect, the lawsuit is also notable from the perspective of the principle of transparency, which is an aspect of AI governance.
Second, on 16 May 2024, the Tokyo District Court ruled that an “inventor” as defined in the Patent Act is limited to natural persons and does not include AI (see 15.1 Applicability of Patent and Copyright Law).
There are no precedents in Japan where the definition of AI was particularly at issue and a specific ruling was made. As stated in 5.2 Technology Definitions, there are some definitions of AI in statutes or guidelines.
Although the Cabinet Office has formulated a national strategy for AI, there are no cross-sectional and binding laws and regulations for AI in Japan (see 1.1 General Legal Background Framework). Therefore, there is no regulatory authority that plays a leading role in regulating AI. Instead, the following ministries and agencies are primarily responsible for the enforcement of AI-related laws by sector and application within the scope of the laws and regulations under their jurisdiction.
In relation to AI, the Ministry of Health, Labour and Welfare (MHLW) has jurisdiction over labour laws (ie, the Labour Standards Act, Labour Contract Act and Employment Security Act, among others) and the Pharmaceuticals and Medical Devices Act (PMD Act). In connection with labour laws, the MHLW addresses AI-related employment issues, such as recruitment, personnel evaluation and the monitoring of employees using AI (see 13 AI in Employment). In the medical devices field, there is a move to accommodate AI-enabled medical devices under the PMD Act (see 14.3 Healthcare).
The Ministry of Land, Infrastructure, Transport and Tourism (MLIT) has jurisdiction over the Road Traffic Act, which establishes rules for automated driving.
METI has jurisdiction over various AI-related laws and regulations (such as the Unfair Competition Prevention Act, which protects big data as “limited provision data”) and is actively formulating guidelines and other materials for businesses involved in the development and utilisation of AI, such as the “Contract Guidelines on Utilisation of AI and Data Version 1.1” and the “AI Guidelines for Business”. In addition, the Japan Patent Office, an external bureau of METI, has jurisdiction over the Patent Act (see 15.1 Applicability of Patent and Copyright Law regarding the protection of AI-enabled technologies and datasets under the Patent Act).
The PPC has jurisdiction over the APPI. The PPC addresses APPI-related issues where personal data is involved in the development and use of AI.
The Japan Fair Trade Commission (JFTC) has jurisdiction over the Act on Prohibition of Private Monopolisation and Maintenance of Fair Trade (the Anti-Monopoly Act) and the Subcontract Act. The JFTC addresses the effects that the use of AI, including algorithmic price adjustment behaviour and dynamic pricing, may have on a fair competitive environment.
The Financial Services Agency (FSA) has jurisdiction over the Banking Act and the Financial Instruments and Exchange Act, among others. The FSA addresses risks and other issues related to investment decisions by AI for financial instrument business operators (see 14.2 Financial Services).
The Agency for Cultural Affairs has jurisdiction over the Copyright Act (see 15.1 Applicability of Patent and Copyright Law regarding the protection of AI-enabled technologies and datasets under the Copyright Act).
MIC is responsible for policy related to information and communication technologies (including policy on the advancement of network systems incorporating AI). In April 2024, MIC also issued the “AI Guidelines for Business” jointly with METI.
The definitions of AI used by regulators include some that are specific to machine learning as well as broader definitions that could encompass generative AI. However, the Japanese government has not yet established a fixed definition that applies in every context.
The MHLW, through its enforcement of labour laws, addresses issues related to the utilisation of AI in various aspects of employment, including recruitment, personnel evaluation, employee monitoring, and issues of AI replacement and termination/reassignment (see 13 AI in Employment). Steps are also being taken to address AI-based medical devices under the PMD Act, such as providing a framework for determining whether an AI-based medical device program constitutes a “medical device” subject to licensing (see 14.3 Healthcare).
MLIT handles the development of laws on traffic rules for automated driving through the enforcement of the Road Traffic Act.
METI addresses the protection of data and information used in AI development and products created in the process of AI development under the Unfair Competition Prevention Act (see 15.1 Applicability of Patent and Copyright Law).
See 14.2 Financial Services for a discussion on the amended Instalment Sales Act, which came into effect in April 2021, enabling credit card companies to determine credit limits through credit screening using AI and big data analysis.
The PPC, through its enforcement of the APPI, addresses the handling of personal information that may be used in the development and utilisation of AI.
The JFTC addresses issues related to the use of AI in a fair competitive environment through enforcement of the Anti-Monopoly Act (see 12.6 Anti-Competitive Conduct).
Although the development and use of AI itself has not been a target of enforcement, there was a case in which the handling of personal data in a service using AI became an issue. In this 2019 case, a service provider used AI technology to calculate the expected job offer rejection rates of individual jobseekers and provided them to client companies without the individuals’ consent. The PPC issued a warning and guidance to the service provider, while the MHLW issued administrative guidance.
Government agencies, national research institutions, and industry groups each contribute significantly to developing and establishing AI-related standards and guidelines.
Japanese Industrial Standards (JIS)
On 21 August 2023, JIS X 22989, “Information technology -- Artificial intelligence -- Artificial intelligence concepts and terminology”, was established under the Japanese Industrial Standards framework administered by METI. This standard, identical to ISO/IEC 22989, defines the concepts and terminology related to AI. Additionally, JIS Q 38507, “Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organisations”, is being developed to align with ISO/IEC 38507:2022 and is intended to provide practical governance guidance for AI use in organisations.
AI Safety Institute
The AI Safety Institute, established on 14 February 2024 by the Cabinet Office and the Information-technology Promotion Agency (IPA), focuses on enhancing AI safety standards domestically and internationally. The institute collaborates with ISO/IEC SC42 to standardise safety measures and is also developing frameworks for reliable safety evaluation methods and testing procedures for AI systems. It is poised to play a pivotal role in establishing these safety standards and providing guidance for the secure deployment of AI technologies across various sectors.
The Consortium of Quality Assurance for Artificial-Intelligence-Based Products and Services (QA4AI Consortium)
The QA4AI Consortium, a collaborative effort of leading IT companies, academic institutions, and the National Research and Development Agency, has published the “Guidelines for Quality Assurance of AI-Based Products and Services”. These guidelines address key areas such as data integrity, model robustness, system quality, process agility, and customer expectations, providing detailed checklists that aid in developing reliable AI products.
Research and Guidance by AIST
The National Institute of Advanced Industrial Science and Technology (AIST) continues to lead in AI research and standards development. The “Machine Learning Quality Management Guideline (Revision 3.2.1)” published by AIST classifies the quality of machine learning systems into three categories: quality at the time of use, external quality, and internal quality. It further details methods for applying quality control tailored to these quality categories, which are essential for ensuring the effectiveness and reliability of AI systems in various applications.
In Japan, aligning business practices with international AI standards is becoming increasingly important for companies involved in AI development and deployment.
The AI Guidelines for Business, issued on 19 April 2024 by MIC and METI, emphasise the importance of adhering to international standards that ensure the responsible development, deployment, and management of AI systems. The guidelines advocate a proactive approach to integrating international standards into Japanese business practices. They include direct references to comprehensive standards such as ISO/IEC 23894:2023, which provides guidance on risk management for AI systems. Moreover, the guidelines cover standards relevant to various aspects of AI implementation, from information security (ISO/IEC 27001) and data quality (ISO/IEC 25012) to privacy protection (ISO/IEC 27701, ISO/IEC 29100, and ISO/IEC 27018).
Although current Japanese regulations do not mandate compliance with these international standards, the proactive involvement of Japanese experts in their development illustrates Japan’s commitment to aligning domestic practices with global benchmarks. This participation bolsters Japan’s position on the international stage and helps ensure that local practices are in sync with international standards, reducing potential discrepancies and conflicts.
Regarding the introduction of AI technology in government, the “Guidebook for the Use and Introduction of AI in Local Governments” was published by MIC in June 2022. The “Guidebook for the Use and Introduction of AI in Local Governments (Introduction Steps)”, released by MIC around the same time, provides specific methods and points to note for local governments in introducing AI technology.
The use of facial and biometric recognition by the government is subject to the Act on the Protection of Personal Information because the required data falls under the category of personal information and may infringe on the right to privacy and publicity.
There are no particular judicial decisions regarding issues related to the use of AI technologies by government agencies in Japan.
In the AI Strategy 2022 formulated by the Cabinet Office in April 2022, it is stated that “[i]n light of the increasing complexity of the international geo-political situation and changes in the socioeconomic structure, various initiatives are being considered for key technologies including AI from the perspective of economic security, and it is necessary to coordinate related measures so that the government as a whole can effectively focus on these issues”. This was the first time an AI-related announcement referred to economic security. In May 2022, the Economic Security Act was enacted, which also stipulates the provision of information and financial support for specified critical technologies, including AI-related technologies. Following the enactment of the Act, in April 2024, METI designated the “Cloud Program” (including generative AI) as critical material under the Economic Security Act and announced its plan to establish relevant computing resources domestically. This plan aims to make resources for the Cloud Program, with a particular focus on generative AI, accessible to a broad range of developers in order to secure a stable supply of such services.
Conversely, a notable instance of the government ceasing to use AI is the discontinued use of LINE, a messaging service that also functioned as an automated chatbot for responding to inquiries. In March 2021, an issue emerged following reports that LINE’s subcontractor in China could access the personal data of LINE users in Japan. Consequently, local governments faced the dilemma of whether to suspend their use of LINE.
Discussions around generative AI technologies, such as GPT, and their ethical, legal, and social implications continue to grow in prevalence and intensity in Japan. These issues can be categorised into several critical areas, as outlined below.
Intellectual Property Violations
Generative AI poses new challenges in intellectual property law, especially potential copyright issues. In Japan, a significant point of contention is the application of the provision that limits copyright protection in relation to information analysis, including machine learning (Article 30-4 of the Copyright Act). The application of this provision, particularly to the use of copyrighted works during the AI training phase, can lead to significant conflicts among stakeholders.
Invasion of Publicity Rights
The unauthorised use of celebrity images in AI-generated content raises concerns about publicity rights violations. These concerns include the creation of accurate deepfakes and blending features from multiple celebrities to form new virtual characters for both commercial and non-commercial uses, leading to new legal and ethical challenges.
Misuse of Personal Data and Invasion of Privacy
The use of personal data by generative AI without prior consent can lead to inappropriate handling or use for unintended purposes. This includes the risk of AI learning from this data and incorporating it into its output, sometimes inaccurately, which can lead to privacy violations.
Leakage of Confidential Information
Generative AI may inadvertently disclose sensitive or proprietary information. If AI systems are trained on confidential data, there is a risk that this information could be exposed to other users or misused by entities for competitive advantages, breaching confidentiality obligations.
Misinformation
Generative AI can produce inaccurate or entirely fabricated information, spreading misinformation and impacting decision-making processes.
Bias and Discrimination
Improperly designed and monitored AI systems can perpetuate or amplify existing biases, resulting in unfair or discriminatory treatment.
Illegal and Unethical Use
While political impersonation has not been a prominent issue in Japan, generative AI has been implicated in other criminal activities, such as fraud and hacking. Issues like using AI for phishing scams or to facilitate hacking are increasingly significant concerns.
IP Protection of the AI Process
Generative AI processes involve (i) training the AI model using a training dataset and (ii) generating outputs by providing prompts to the trained model. These processes may yield valuable assets such as the AI model, training datasets, input prompts, and output. These assets may be protected under intellectual property law, as outlined below.
AI model
Mathematical or theoretical AI models are generally not eligible for patent protection as they are often viewed as discoveries of natural laws. However, if the learning methods of an AI model provide innovative solutions to existing problems, they can be patented. If not patented, these innovations can be treated as trade secrets, provided they meet the requirements for trade secrets. It is unclear whether AI models can be recognised as “database works” or “program works” under copyright law.
Training dataset
Training datasets typically do not qualify for patent protection; however, the methods used to generate them, such as unique selections and combinations of data items and preprocessing techniques that effectively train specific AI models, can be the subject of patent protection. If the components of the datasets, such as images, videos, and music, qualify as works of authorship, they are individually protected by copyright. Additionally, if these datasets meet the criteria for trade secrets or limited provision data, they can be protected under the Unfair Competition Prevention Act.
Input (prompts)
Innovations in prompt generation methods can be patented if they enhance AI system inputs or are designed to elicit specific responses. Additionally, prompts that include copyrighted elements like images, videos, and music are protected under copyright law.
Output
AI-generated outputs themselves typically do not qualify for patent protection; however, the processes or systems that produce them can be patented. Additionally, outputs generated by AI may contain creative expressions eligible for copyright protection, contingent upon the content and nature of the inputs to the AI and how the AI is utilised.
AI Terms for Input and Output Rights
Generative AI providers typically offer users the option to opt out of using their input data for model training, reflecting industry standards and user concerns about data use and protection. Users usually retain ownership of outputs generated by these AI tools, per the terms of service. However, these terms do not guarantee the legal protectability of these outputs.
Please refer to 15 Intellectual Property for IP infringements related to the AI process.
Under Articles 17 and 18 of the APPI, which provide for purpose limitation and data minimisation, personal information handling operators, acting as controllers, must ensure that the use of personal information in generative AI services aligns with the purposes for which the data was collected. As mentioned in 3.6 Data, Information or Content Laws, the recent advisory issued by the PPC emphasises the critical importance of the appropriate handling of personal data within AI applications. The PPC cautions that using personal data in generative AI without prior consent and for purposes other than those disclosed could violate the APPI. It has also highlighted the need for data subjects’ explicit consent before their sensitive personal information is used in AI models, in line with the APPI’s consent requirements under Article 20.
Additionally, individuals have specific rights under the APPI, such as the right to rectify or delete incorrect personal data under Article 34 and the right to request the suspension of use or deletion of unlawfully processed data under Article 35. However, it is important to note that personal information used in generative AI may not always fall under the definition of “retained personal data”, which refers to data systematically organised for retrieval. Consequently, the rights to request disclosure, correction, or cessation of use may not be applicable in all scenarios where AI generates output.
A major issue is whether AI chatbot legal advice and AI automated drafting services violate the Attorneys Act, which prohibits non-lawyers from providing legal services. This was highlighted in 2022, when the Ministry of Justice, responding to inquiries from legal tech service providers about the legality of such services, suggested that the contemplated services might constitute the unauthorised practice of law. However, in August 2023, the Ministry of Justice issued guidelines clarifying that certain types of contract drafting, review and management services do not constitute the unauthorised practice of law.
The guidelines have made it clear that the scope of legality for AI contract review services is quite broad.
In Japan, AI is not recognised as a legal entity, and there is no specific legislation regarding liability arising from the acts or use of AI. Therefore, general civil and criminal liability rules apply. Civil liability is as described in 1.1 General Legal Background, although in some cases, depending on the relationship between the injured party and the manufacturer, the manufacturer’s liability may be based on a contract. In addition, regarding automated driving, the “operator” (the owner of the vehicle) may be liable for damages; specifically, the operator is liable unless it can prove that it was not negligent. In terms of criminal liability, professional or ordinary negligence resulting in injury or death (Articles 211 and 210 of the Penal Code) is typically considered applicable to the developers and users of AI, but other crimes may also apply depending on the circumstances. Moreover, where the actions of a third party intervene and the use of AI causes damage to others, issues of joint tort liability (civil) and conspiracy (criminal) may arise.
In relation to the civil liability mentioned above, if a product has a defect, product liability will be imposed regardless of whether the manufacturer was negligent; this may have a chilling effect on AI developers. In this regard, this risk can be hedged by insurance, which can encourage development.
Regarding the sharing of responsibility in the supply chain, the Contract Guidelines on Utilisation of AI and Data Version 1.1 (see 5.1 Regulatory Agencies) note that it is difficult to determine the attribution of liability (percentage of negligence) in tort, because causal relationships are hard to verify after an accident and because the results of AI use depend on the training datasets, whose content is difficult to identify, and on the input data at the time of use, which is unspecified. In addition, claims for damages may be made based on contractual liability between the user and the AI developer, and between the AI developer and the data provider, in relation to the generation of trained models. It is desirable to clearly specify the division of responsibility in the contract according to the circumstances.
In addition, the model contract provisions set out in the Contract Guidelines on Utilisation of AI and Data Version 1.1 are a good reference for common industry practice.
In Japan, there is no cross-sectional legislation or guidelines regarding criminal and civil legal liability with respect to AI.
Algorithmic bias refers to situations in which bias occurs in the output of an algorithm, resulting in unfair or discriminatory decisions. In Japan, there has not yet been a case in which a company was held legally liable for harm arising from algorithmic bias. However, if a company were to make a biased decision based on the use of AI, it could be found liable for damages in tort or on other grounds. In addition, companies may face reputational risk if unfair or discriminatory decisions are made in relation to gender or other matters that significantly affect a person’s life, such as in the hiring process.
There are no laws or regulations that directly address algorithmic bias. Companies are expected to take the initiative themselves to prevent the occurrence of algorithmic bias. For example, the AI Guidelines for Business by METI and MIC recommend the following: “AI developers must ensure that AI models are trained on representative datasets and are inspected for any unfair biases in the AI system. AI providers are to regularly assess the inputs and outputs of the AI models and their decision-making bases, and monitor for the occurrence of any bias. AI business users must ensure fairness in the data inputs and responsibly make business decisions based on the AI’s outputs, being mindful of any bias included in the prompts”.
Given that all processes involved in data generation and selection, annotation, pre-processing, and model/algorithm generation are subject to potential bias, documentation on the specifics of these processes should be obtained and maintained. However, when complex algorithms such as deep learning are used, it may not be possible for humans to understand these processes even if such documentation is collected. It is therefore advisable to select algorithms with the principles of “explainable AI” (XAI) in mind.
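As a purely illustrative example of the kind of regular output monitoring the guidelines contemplate, the sketch below compares positive-outcome rates across groups in a model’s decisions and warns when the gap exceeds a tolerance. The 0.1 threshold and the sample data are hypothetical; real bias audits require metrics chosen for the specific domain.

```python
# Illustrative sketch only: compare positive-outcome rates across groups
# in a model's outputs (a simple demographic parity check).
from collections import defaultdict

def selection_rates(outputs):
    """outputs: iterable of (group, decision) pairs with decision in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outputs:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
if gap > 0.1:  # hypothetical tolerance
    print("Warning: outputs may be biased; review data and model.")
```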
The AI Guidelines for Business call for the protection of privacy across all AI systems and services. They require AI developers to ensure appropriate data training through privacy by design and other means. AI providers are tasked with implementing mechanisms and measures for privacy protection. AI users are expected to prevent the improper input of personal information and to take adequate measures to prevent privacy violations. Under Japanese law, the right to privacy is considered to be “the right to control one’s own information”, which is not necessarily coextensive with the protection of personal information under the APPI and requires separate consideration.
Profiling by AI to infer a person’s behaviour and characteristics from their browsing history may raise privacy concerns. A well-known Japanese recruiting company that operates a job search website for university students provided client companies considering hiring new graduates with scores indicating the likelihood that individual students would leave the hiring process or decline job offers. The underlying algorithm calculated this likelihood from each student’s browsing history by industry on the job search website. The service was problematic because some students had not agreed to the privacy policy, and because the privacy policy was not sufficiently specific for students to foresee that their information would be provided to companies in the form of a likelihood of declining an offer. The PPC issued a recommendation and guidance on the basis that the service violated the APPI, and the service was strongly criticised by Japanese society.
Under Japanese law, in relation to privacy and personal information, the obligations and responsibilities related to the processing of personal data by AI, such as profiling, do not change based on the existence of direct human supervision. For example, the secrecy of communications is protected as a type of privacy right. Even if the contents of communications are obtained and analysed solely by a machine without any human involvement, this would in principle constitute an infringement of the right to secrecy of communications if the consent of the individual concerned was not obtained.
Personal Data
Facial or biometric authentication requires the capture of biometric data such as facial images and fingerprint data. Such data is considered personal information under the APPI, but is not regarded as personal information requiring special care (Article 2, paragraph 3 of the APPI). Therefore, when acquiring such information, the individual’s consent is not required as long as the purpose of use is notified or disclosed. However, depending on how the data is acquired and used, the acquisition may be improper (Article 20, paragraph 1 of the APPI) or the use may be improper (Article 19 of the APPI). It is therefore advisable to consider this issue carefully.
Privacy and Portrait Rights
In addition, depending on how facial images and biometric information are obtained and used, there may also be infringement of privacy rights and portrait rights (ie, infringement of personality rights). Although the debate over the circumstances in which such infringement occurs has intensified with a growing number of court precedents, the debate surrounding facial and biometric authentication has not yet crystallised, and it is difficult to state definitively what types of acquisition and use are permissible. With respect to the use of video images, it is advisable in practice to refer to the Guidebook for Utilisation of Camera Images Version 3.0 (March 2022).
Profiling will be used here as an example of automated decision-making. While some foreign jurisdictions have introduced regulations on profiling using AI, such as Article 22 of the EU’s GDPR, there are no laws or regulations that directly regulate profiling in Japan. Nevertheless, the provisions of the APPI must be complied with. For example, when personal data is acquired for profiling purposes, in order to analyse behaviour, interests and other information from data obtained from individuals, the purpose of use must be explicitly notified to the individual or disclosed to the public in accordance with the APPI. Note, however, that individuals’ consent is not required under the APPI unless personal information requiring special care is acquired. In addition, precautions should be taken to avoid improper use (Article 19 of the APPI).
Further, if automated decision-making leads to unfair or discriminatory decisions, liability for damages and reputational risk could be an issue, similar to the issues discussed in 11.1 Algorithmic Bias.
In Japan, there are no laws or regulations that provide specific rules on AI transparency and accountability. However, in the AI Guidelines for Business published by METI and MIC in April 2024, transparency and accountability are established as common principles for businesses involved in the AI field. This means that, when utilising AI, it is necessary to ensure, within technically feasible limits, that AI systems and services are verifiable and that appropriate information is provided to stakeholders. Such information includes the fact that AI is being used, its application scope, the methods of data collection, the capabilities and limitations of the system, and the methods of its use.
However, there is no clear guidance on when and what information should be disclosed when AI, such as chatbots, replaces services typically provided by people.
The above can also be problematic from the standpoint of the APPI. For example, if AI is actually being used but the company does not disclose this, and a user provides personal data in the mistaken belief that a human is making the decisions, there may be a breach of the duty of proper acquisition or the duty to notify the purpose of utilisation.
In March 2021, the Japan Fair Trade Commission published the “Report of the Study Group on Competition Policy in Digital Markets – Algorithms/AI and Competition Policy”, with the aim of ensuring that competition risks associated with algorithms/AI are properly addressed. The report discusses three types of algorithms/AI that may have a significant impact on competition at this time: price research and pricing algorithms, ranking, and personalisation (especially personalised pricing). The JFTC is examining potential competition policy issues in these areas.
It is generally believed that establishing a case of concerted conduct involving algorithms is not easy, because there is little contact between competing businesses and it is difficult to identify an actual communication of intent. The report nevertheless points to cases in which, even without any direct or indirect exchange of information between the businesses using the algorithms, a cartel may be found to exist on the basis of a common recognition that prices are synchronised.
In addition, with regard to rankings, if a leading ranking operator arbitrarily manipulates its rankings and obstructs transactions between competing business operators and consumers by displaying its own products at a higher rank and treating them more favourably, this is considered a violation of the Anti-Monopoly Act. In a related matter, in June 2022 the Tokyo District Court ordered the payment of damages in a case in which a restaurant claimed that a restaurant rating platform in a dominant position had unfairly lowered its rating through an algorithm change, in violation of the Anti-Monopoly Act. However, in January 2024, the Tokyo High Court overturned the District Court’s decision and ruled in favour of the platform. The case has been further appealed to the Supreme Court.
AI as a Service (AIaaS) models utilise provider-sourced data to train algorithms that interact with user inputs during application stages. The inherent multi-tenancy of these services means that interactions with AI by one user can potentially affect others. This characteristic raises specific concerns about the management of user data and related output.
Data Interaction, User Input, and Output Management
Significant privacy and confidentiality risks arise in AIaaS models when user inputs or prompts – and the output derived from these – are used for further AI learning. Contracts should specify how user inputs are managed, ensuring that they are not stored or used beyond immediate operational requirements without explicit user consent. Additionally, contracts should safeguard users’ rights over their inputs and clarify whether the AI is authorised to reproduce similar output for other users or use cases, thus preventing unauthorised use or replication of proprietary information. It is also crucial to ensure that the AI employs technical measures to prevent the generation of output that could infringe on any third-party copyright.
Explainability
Explainability in decision-making is critical in sectors such as finance, healthcare, and legal services, as well as in operations where AI-driven decisions significantly affect individuals. AIaaS contracts should emphasise transparent decision-making processes across all applications, enhancing trustworthiness and ethical integrity.
Ethical Considerations
Ethical practices are essential in the deployment and operation of AI systems within AIaaS. Contracts should include mechanisms for users to inquire and report any concerns regarding biases or ethical shortcomings in the AI system.
For employers, the advantages of using AI in hiring and termination include that, unlike the subjective evaluations conducted by recruiters in the past, AI-based evaluations can be conducted fairly and objectively against set standards, and that AI can make the recruitment process more efficient. On the other hand, the following points are relevant with respect to the information that may be obtained through the hiring process and the exercise of the right of termination.
Hiring
In Japan, there are no laws that specifically restrict the use of AI in hiring or recruitment activities. Additionally, under Japanese law and judicial precedent, since companies have the freedom to hire, even if an AI analysis is incorrect and the employer does not fully verify this analysis, this would not necessarily constitute a violation of applicable laws. However, it can be said that AI-based recruitment limits a company’s freedom to hire to a certain extent.
Specifically, even where AI is utilised in recruitment activities and information on jobseekers is obtained automatically, in accordance with Article 5-4 of the Employment Security Act and Article 4-1(2) of the Employment Security Act Guidelines, the information must be collected by lawful and fair means, such as directly from the jobseeker or, with the jobseeker’s consent, from a third party.
In addition, when using AI to obtain information on jobseekers, companies must be careful not to obtain certain prohibited information.
Specifically, under Article 20 of the APPI, a company is in principle prohibited from obtaining information requiring special care (race, creed, social status, medical history, criminal record and any facts related to the jobseeker being a victim of a crime) without the jobseeker’s consent, and, under Article 5-4 of the Employment Security Act and Article 5-1(2) of the Employment Security Act Guidelines, a company may not obtain certain information (eg, membership of a labour union, place of birth) even with the jobseeker’s consent.
In addition, there is a risk that, as a result of an erroneously high AI evaluation, an offer may be made to, or employment extended to, a jobseeker who would not have received an offer or been hired under the company’s original criteria. In such a case, under Japanese law, the legality and validity of a decision to reject or dismiss the jobseeker will be determined based on how the recruitment process was conducted.
Termination
Situations in which the selection of the persons to be terminated may be problematic include termination as part of employment redundancy or voluntary resignations.
Under Japanese law, unilateral termination of employees by employers is restricted, and termination that constitutes an abuse of the right to terminate is considered invalid. In particular, in the case of termination as part of employment redundancy, the validity of termination is examined from the viewpoints of (i) the necessity of reducing the workforce; (ii) the necessity of terminating employees through employment redundancy; (iii) the validity of the selection of employees to be terminated; and (iv) the validity of the procedures for termination. AI’s use is mainly anticipated in the selection of employees to be terminated in (iii) above. It should be noted that these four perspectives are considered as factors rather than requirements, and even if AI is utilised to select an employee for termination in a reasonable and fair manner that eliminates subjectivity in the selection of the employee to be terminated, this does not necessarily mean that the termination is valid. Naturally, if the data on which the AI bases its judgement is erroneous or if the AI is unreasonably biased, there is a high possibility that the selection of the terminated employee will not be recognised as valid.
On the other hand, there is no law that specifically regulates voluntary resignations, since a resignation is made voluntarily by the employee. However, voluntary resignations must take place in a manner that respects the employee’s voluntary decision; there are court cases holding that a resignation resulting from unreasonable acts or conduct that impeded the employee’s voluntary decision constitutes a tort under Article 709 of the Civil Code. Therefore, even if the selection of employees to be encouraged to resign is based on an objective and impartial evaluation by AI, the company should not approach the process with the attitude that the decision rests on the AI’s judgment and that there is no room for negotiation. Instead, the company should provide a thorough explanation so that the employee understands the pros and cons of resigning and can make a voluntary decision. This recommendation to companies predates the introduction of AI into the termination process.
Personnel Evaluation
Generally, the items and standards of assessment in Japanese personnel evaluations are abstract, and supervisors have broad discretion in the assessments. AI-based personnel evaluations are expected to reduce the unfairness and uncertainty stemming from the discretion given to supervisors.
Legally, personnel evaluations are regulated by certain statutory provisions.
Where a company has the authority to evaluate an employee, courts have held that a tort is not established unless the employer violated those provisions or abused its discretionary power in violation of the purpose of the personnel evaluation system. Cases that would constitute an abuse of discretion include factual errors, misapplication of evaluation criteria, arbitrary evaluation and discriminatory evaluation.
Therefore, even in the case of personnel evaluation using AI, if there is an error in the data on which the AI bases its judgement, or if there is an error in the algorithm or learning method by which the AI evaluates such data, personnel evaluation based on such AI’s judgement may constitute a tort.
Monitoring
One possible method of monitoring workers using AI would be, for example, for AI to check emails and automatically notify managers if there are suspicious emails.
The question is whether this would infringe the privacy rights of the monitored workers; however, monitoring is considered permissible as long as the company’s authority to monitor is clearly set out in its internal rules. Courts have also held that, even where that authority is not clearly stated, monitoring is permissible if there is a reasonable business management need, such as the need to investigate a suspected violation of corporate order, and the means and methods used are reasonable.
Therefore, when conducting monitoring using AI, it would be advisable to (i) specify in the internal rules that managers ultimately have the authority to check the contents of employees’ email exchanges, and (ii) communicate such rules to the employees.
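As a purely illustrative sketch of the email-checking method described above, the code below flags outbound emails that match simple keyword rules and surfaces only the flagged items to a manager. The keywords, the example domain and the notify_manager() helper are hypothetical; any real deployment should comply with the internal rules and communication requirements discussed above.

```python
# Illustrative sketch only: a rule-based filter that flags suspicious
# outbound emails for a manager's review. Keywords, domain and the
# notify_manager() helper are hypothetical.
SUSPICIOUS_KEYWORDS = ("confidential", "do not forward", "personal account")

def is_suspicious(email: dict) -> bool:
    text = (email["subject"] + " " + email["body"]).lower()
    external = not email["to"].endswith("@example.co.jp")  # hypothetical domain
    return external and any(kw in text for kw in SUSPICIOUS_KEYWORDS)

def notify_manager(email: dict) -> None:
    print(f"Flagged for review: {email['subject']!r} -> {email['to']}")

mail = {"to": "someone@gmail.com", "subject": "Confidential price list",
        "body": "Please do not forward this file."}
if is_suspicious(mail):
    notify_manager(mail)  # only flagged mail is surfaced to a human
```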
Ridesharing was partially liberalised in Japan in 2024, but strict legal regulations still apply and ridesharing services such as Uber are not yet widespread. By contrast, food delivery platforms such as Uber Eats, which use algorithms to guide delivery staff so that orders are delivered quickly and efficiently, are widely used. Many food delivery platforms have no employment relationship with their delivery staff, who work on a freelance basis. The MHLW guidelines for freelance workers state the following.
The Uber Eats Union, a labour union of Uber Eats delivery staff, demanded collective bargaining with the Japanese entity that operates the Uber Eats business in Japan (Uber Eats Japan). Specifically, the Uber Eats Union demanded collective bargaining regarding compensation in the event of an accident during delivery. Uber Eats Japan rejected the union’s demands for the reason that the delivery staff do not constitute employees under the Labour Union Act. The union then sought the intervention of the Tokyo Labour Relations Commission, which, in November 2022, ruled that the delivery staff were employees under the Labour Union Act.
In the financial sector, AI is used by banks and lenders for credit decisions and by investment firms for investment decisions. In addition, the amended Instalment Sales Act, which came into effect in April 2021, enables credit card companies to determine credit limits through credit screening using AI and big data analysis.
The FSA’s supervisory guidelines require banks, etc, when concluding a loan contract, to be prepared to explain the objective rationale for the contract, based on the customer’s financial situation, in relation to its provisions. This is true even if AI is used in credit operations. It is therefore necessary to be able to explain the rationale of credit decisions made by AI.
In addition, when AI-based credit scoring is used to determine the loan amount available for personal loans, care should be taken to avoid discriminatory judgements, such as determining different available loan amounts based on gender or other factors. The Principles for a Human-Centred AI Society also state: “Under the AI design philosophy, all people must be treated fairly, without undue discrimination on the basis of their race, gender, nationality, age, political beliefs, religion, or other factors related to diversity of backgrounds”.
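For illustration only, the following sketch shows one simple way such a check might be performed: computing approval rates by gender on hypothetical AI screening outcomes and flagging a large disparity. The records and the 0.8 rule-of-thumb threshold are assumptions made for the example, not a standard prescribed by Japanese law or by the Principles.

```python
# Illustrative sketch: checking AI credit decisions for disparity across a
# protected attribute. The records and the 0.8 rule-of-thumb threshold are
# assumptions for illustration, not a legal standard under Japanese law.
from collections import defaultdict

decisions = [  # hypothetical (gender, approved) outcomes from an AI credit screen
    ("female", True), ("female", False), ("female", False), ("female", True),
    ("male", True), ("male", True), ("male", True), ("male", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for gender, ok in decisions:
    total[gender] += 1
    approved[gender] += ok  # True counts as 1

rates = {g: approved[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio

print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    # Flag for human review of the model, its features and its training data.
    print("Warning: possible discriminatory pattern; review model and data.")
```

With the hypothetical data above, the female approval rate (0.50) falls well below the male rate (0.75), so the check prints a warning for human review.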
Financial instrument firms must not undermine investor protection by engaging in solicitation that is inappropriate in light of the customer’s knowledge, experience, financial situation and the purpose of concluding the contract (the compliance principle). In addition, these firms are obligated to explain to customers the outline of the contract and the risks of the investment in accordance with the compliance principle. Therefore, if the criteria for investment decisions made by AI cannot be reasonably explained, problems may arise in relation to the compliance principle and the duty to explain.
If AI-based programs, such as diagnostic imaging software or health management wearable terminals, or devices equipped with such programs, fall under the category of “medical devices” under the Pharmaceuticals and Medical Devices Act, approval is required for their manufacture and sale, and approval or certification is also required for individual medical device products. Whether AI-based diagnostic support software and other medical programs constitute “medical devices” must be determined on a case-by-case basis, but the MHLW has provided a basic framework for making such determinations.
According to this framework, the following two points should be considered.
In addition, a change procedure is ordinarily required to alter part of the approved or certified content of a medical device, whereas an AI-based medical device may be designed on the assumption that its performance will change constantly as new data is obtained after the product is marketed. Given this characteristic of AI-based programs, whose performance and other aspects are subject to constant change after initial approval, the amended Pharmaceuticals and Medical Devices Act, which came into effect in September 2020, introduced a medical device approval review system that allows for continuous improvement.
Since medical services such as diagnosis and treatment may only be performed by physicians, programs that provide AI-based diagnostic and treatment support may only serve as a tool to assist physicians in diagnosis and treatment, and physicians will be responsible for making the final decision.
Medical history, physical and mental ailments, and the results of medical examinations conducted by physicians are considered “personal information requiring special care” under the APPI, and, in principle, the patient’s consent must be obtained when acquiring such information. Medical institutions often need to provide personal data to medical device manufacturers for the development and verification of AI medical devices. In principle, providing personal information to a third party requires the individual’s consent, but it may be difficult to obtain prior consent from patients. An opt-out system is also in place; however, it cannot be used for personal information requiring special care.
Anonymised information, which is irreversibly processed so that no specific individual can be identified from it, can be provided freely to third parties. However, it has been noted that it is practically difficult for medical institutions to create anonymised information. In addition, the Next Generation Medical Infrastructure Act allows authorised business operators to receive medical information from medical information handlers (hospitals, etc) and anonymise it under an opt-out method, but this scheme is not widely used.
The revised Next Generation Medical Infrastructure Act passed by the Diet in April 2023 established a new system for the creation and use of “pseudonymised medical information”.
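By way of illustration, the sketch below shows one generic pseudonymisation technique (replacing direct identifiers with keyed hashes so that records remain linkable without revealing the patient's identity). It is a minimal, assumption-laden example; it is not the processing standard actually prescribed by the APPI or the Next Generation Medical Infrastructure Act.

```python
# Illustrative sketch of pseudonymisation by keyed hashing: direct identifiers
# are replaced with stable pseudonyms so that records can still be linked
# across datasets without revealing who the patient is. This is a generic
# technique, not the specific processing required by Japanese law.
import hashlib
import hmac

# Assumption: the key is managed securely and kept separate from the data.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymise(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "P-0001", "diagnosis": "hypertension", "age_band": "60-69"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(safe_record)
```

Because the hashing is deterministic under a given key, the same patient receives the same pseudonym in every dataset, which is what makes pseudonymised medical information useful for research while the key is withheld.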
Regarding traffic rules, amendments to the Road Traffic Act have already been enacted to permit Level 3 (conditional automated driving) and Level 4 (unmanned automated driving).
Regarding liability in the event of an accident, there are no specific regulations determining who is liable when an autonomous vehicle causes an accident; currently, the existing legal framework applies. Under current law, the entities potentially liable include the driver, the operator (a concept that covers the owner of the vehicle and the transport business operator, in addition to the driver), and the manufacturer of the vehicle.
As for the driver’s liability, under the amended Road Traffic Act, at Level 3 the driver is not required to remain vigilant unless requested to take over from the autonomous driving system, so liability for accidents occurring without a takeover request is limited to exceptional circumstances. At Level 4, since no intervention by a person riding in the car is requested at all, that person bears no responsibility if an accident occurs.
Regarding the manufacturer’s liability, there is currently an active discussion under the Product Liability Act on how to define the “defect” in an autonomous vehicle that must be proven by the victim. Generally, however, it is considered very challenging to hold manufacturers liable under the Product Liability Act when an autonomous vehicle causes an accident.
In light of this, the government’s policy is to ensure the protection of traffic accident victims by clarifying that the operator’s liability applies to autonomous driving for the time being. In Japan, when a personal injury accident occurs, the operator is subject to virtually strict liability; when the operator is held liable, victims are compensated through the compulsory automobile liability insurance attached to the vehicle.
There are currently no specific regulations or government guidelines for the use of AI in manufacturing. Nevertheless, the AI Guidelines for Businesses are broadly applicable to the use of AI in the manufacturing sector. Interestingly, a document released in June 2020 by the Regulatory Reform Promotion Council, an advisory body to the Cabinet Office, suggests that existing regulations regarding the inspection of products at manufacturing facilities could be relaxed if AI is used to assist in the inspection. It states that “if precise risk management is carried out using digital technologies during the manufacturing process, inspections themselves should be considered unnecessary”.
In addition to legal services (see 9 Legal Tech), when AI assists with professional services such as tax and accounting work, the regulations applicable to each profession must be observed. For example, under Article 72 of the Attorneys Act, non-lawyers and entities other than law firms are not permitted to engage in the practice of law as a business. Nevertheless, no violation occurs if the relevant AI services are intended to assist lawyers and are designed so that their output must be reviewed by lawyers and then provided to clients as the lawyers’ own work product. However, if the output of the AI services is provided directly to clients, a problem may arise under the Attorneys Act. Since many such restrictions exist under the current laws applicable to professional services, it is necessary to ensure that AI performing professional tasks does not violate these professional regulations.
Discussions are also taking place in Japan on whether AI can be recognised as an inventor or co-inventor for patent purposes, an author or co-author for copyright purposes, or a holder of moral rights. Under current Japanese law, AI is not a natural person and therefore cannot be recognised as an inventor for patent purposes, an author for copyright purposes, or a holder of moral rights. In this regard, on 16 May 2024, the Tokyo District Court ruled that an “inventor” as defined in the Patent Act is limited to natural persons and does not include AI. The case arose after the Japan Patent Office (JPO) dismissed a patent application for an AI-generated invention because only “DABUS, an artificial intelligence which invented the invention autonomously” was listed as the inventor in the national phase documents of the PCT application; the plaintiff then filed a lawsuit seeking revocation of the JPO’s decision.
However, if a person who used AI to create a work had the intention to create a work and made a creative contribution, the resulting work may be recognised as having been created by that person, using the AI as a tool, rather than by the AI itself. In such a case, the natural person who had the creative intention and made the creative contribution is considered to be the author. While it is debated whether AI should be given legal personality, no such legal system is being considered at this point.
AI technology and the (big) data utilised in the development and use of AI are protected as trade secrets, just like other informational assets (Article 2(6) of the Unfair Competition Prevention Act (the UCPA)), as long as they are (i) kept secret; (ii) not publicly known; and (iii) useful for business activities. The trade secret holder can seek an injunction against unauthorised use by a third party and can also claim damages for unauthorised use. In addition, criminal penalties may apply to acts of unfair competition committed for the purpose of wrongful gain or causing damage (Article 21 of the UCPA).
Moreover, data may fail to qualify as a trade secret because it is not kept secret, for instance where it is intended to be provided to third parties in the course of the development or use of AI. Even then, if the data constitutes technical or business information that is accumulated to a significant extent and is managed by electromagnetic means as information to be provided to specific parties on a regular basis, it is protected as “shared data with limited access” (Article 2(7) of the UCPA). The holder of rights to shared data with limited access can seek an injunction against unauthorised use by a third party and can also claim damages for unauthorised use. However, unlike for trade secrets, there are currently no criminal penalties in respect of shared data with limited access.
Protection Based on Judicial Precedents
Even if not protected by the UCPA, unauthorised use of data may constitute a tort under Article 709 of the Civil Code if there are special circumstances, such as infringing on legally protected interests (Supreme Court, Judgment, 8 December 2011, Minshu 65(9)3275 [2012]). Legally protected interests include, for example, business interests in business activities (a case in which incorporating another company’s database into one’s own database for sale was considered to constitute a tort; Tokyo District Court, Judgment, 25 May 2001, Hanta 1081, 267 [2002]).
Protection Through Contracts
Even if data is not protected by the UCPA, parties can establish rights and obligations relating to the data in data transaction contracts and thereby protect valuable data. However, under current Japanese law, data, being an intangible asset, is not recognised as an object of ownership and remains the subject of contractual rights of use. Programs or models and their source code, in particular, are often expected to be treated separately, so it is desirable to agree explicitly on the handling of the source code where its transfer may be at issue.
Copyright Law
Works created autonomously by AI are not protected by copyright since AI lacks ideas or emotions. However, if the user of AI (a human being) has creative intent in the process of generating the work and contributes creatively to obtaining the AI-generated work through instructions or other means, it can be considered that the user has creatively expressed their thoughts or sentiments using AI as a tool, and the work is protected as a copyrighted work.
Using third-party copyrighted works for “AI learning” prior to generating AI-created works generally does not constitute copyright infringement. This is because copyright protection does not extend to uses that are not intended for the enjoyment of the thoughts or sentiments expressed in the work (Article 30-4(ii) of the Copyright Act). However, if the copyrighted works are used as they are for a database, rather than as data for AI-learning purposes, such use may constitute copyright infringement even under the above conditions.
Copyright infringement is established when someone relies on and uses another’s copyrighted work (in other words, the allegedly infringing work is derived from the copyrighted work). However, it is controversial whether this reliance requirement is satisfied where AI that was developed using another’s copyrighted work as training data produces a work resembling that copyrighted work, and there is no established view on the matter.
Patent Law
AI-related technologies, including inventions of methods for AI to produce works and works produced by AI, are eligible to receive patents as long as they meet the general patent requirements. Under Japanese law, it is considered that data and pre-trained models are not excluded from eligibility for patent protection as long as they are considered programs or program equivalents (ie, data with structure and data structure). On the other hand, data or datasets that are merely presented as information are not eligible for patent protection.
As mentioned in 15.1 Applicability of Patent and Copyright Law, if the user of AI has creative intent in the process of generating the work and contributes creatively to obtaining the AI-generated work through instructions or other means, the user can be considered to have creatively expressed their ideas or emotions using AI as a tool. In such cases, the AI-generated work is protected as a copyrighted work. This also applies to works and products created using OpenAI’s services, and there is no difference in protection whether the output is an image or text.
However, the extent to which creative contribution must be made to qualify for copyright protection is determined on a case-by-case basis and is still controversial.
Under the Copyright Act, prompts used to generate high-quality output are likely to be protectable as copyrighted works unless they are mere ideas, since copyright protects expressions, not ideas. On the other hand, even if a prompt is protected by copyright, the work generated by or with OpenAI’s services is likely not a derivative work of the prompt where the prompt’s creativity is difficult to find in the generated work.
In Japan, there are no cross-sectoral laws and regulations applicable to AI, only regulations in individual areas of law.
However, given that the use of AI often involves the use of personal information, compliance with the APPI is essential. In particular, the APPI sets out only a minimum set of required rules. A more cautious approach is therefore needed when using advanced technologies such as AI, depending on the purpose of the use and the type of personal information involved.
In addition to legal liability, there is also reputational risk if the use of AI results in discriminatory or unfair treatment.
Ultimately, it is for businesses to decide how to use AI in light of these considerations, which falls within the remit of the directors. However, since these decisions involve expert judgement, an increasing number of companies are turning to external expert panels or advisory boards on AI.
One AI governance guideline expected to be used as a reference for such business judgement is the “AI Guidelines for Businesses 1.0” established by METI and MIC. Although the guidelines are not legally binding, they are anticipated to serve as the primary reference point on AI regulation for Japanese companies until binding regulations are introduced.
Since there is no comprehensive AI regulation in Japan, best practice includes: (i) compliance with existing laws in specific areas; (ii) building a robust AI governance framework; (iii) contractual measures; and (iv) technical measures. The following discussion focuses on points (i) through (iii).
Legal Compliance
When developing, providing, or using AI, it is necessary to comply with existing laws, in particular the Copyright Act and the APPI. These issues are discussed in more detail in other sections of this chapter.
Risk Management and Governance Framework (Building an AI Governance System)
Since there is no comprehensive AI regulation in Japan, there is a need to address risks not necessarily covered by law, such as bias and fairness issues. In this regard, mere compliance with existing regulations is not sufficient. Therefore, companies developing high-risk AI systems in particular are increasingly considering establishing a comprehensive AI governance framework across their organisations. Such AI governance frameworks mainly consist of an internal process to identify and address AI risks, as well as the organisations and personnel that develop and operate these processes.
Guidance that can be useful in this context includes the “AI Guidelines for Businesses 1.0” published by METI and MIC in April 2024. While these guidelines are not legally binding and non-compliance does not incur penalties, Japanese case law suggests that widely adopted guidelines may be taken into account when determining important issues such as breaches of directors’ duties. Consequently, industry participants are advised to review these guidelines to ensure that their systems do not fall significantly below industry standards.
Contractual Measures
Given that multiple parties are involved in the process of developing, providing, or using AI, it is worth considering allocating risks and responsibilities appropriately by contract. In this context, the “Contract Guidelines on the Utilization of AI and Data” published by METI in June 2018 can serve as a useful reference. However, attention must be paid to other applicable laws, such as the Subcontract Act, the Consumer Contract Act, and the standard terms of contract provisions under the Civil Code, which invalidate certain contract clauses that unilaterally impose a disadvantage on a counterparty.
JP Tower
2-7-2 Marunouchi
Chiyoda-ku
Tokyo 100-7036
Japan
+81 3 6889 7000
+81 3 6889 8000
www.noandt.com/en/
AI Guidelines for Business
Introduction
In recent years, the pervasive growth and integration of artificial intelligence (AI) technologies have prompted significant attention from regulatory bodies worldwide, with Japan being a front-runner in establishing comprehensive governance frameworks. The “AI Guidelines for Business Ver 1.0” is a critical document issued by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry. These guidelines underscore Japan’s proactive approach in shaping the ethical deployment of AI technologies across various business sectors, aiming to foster innovation while ensuring security, privacy and ethical compliance.
Background and purpose
Japan’s commitment to integrating AI aligns with its broader vision of “Society 5.0”, a concept that envisions a human-centred society enhanced by digital technologies. The formulation of the AI Guidelines for Business reflects a concerted effort to harness AI’s potential while addressing the ethical, legal and societal challenges that accompany its deployment. This initiative not only supports domestic policy frameworks but also aligns with international standards, contributing to global discussions on AI governance at forums such as the G7, G20 and OECD.
Policy framework and development process
The AI Guidelines for Business combine the “AI Research & Development Guidelines”, the “AI Utilisation Guidelines” and the “Governance Guidelines for Implementation of AI Principles”, and are grounded in the “Social Principles of Human-Centric AI”, which emphasise dignity, inclusion and sustainability. These principles guide the development, deployment and management of AI systems, ensuring that technological advancements contribute positively to society.
The guidelines have been developed through a collaborative approach involving multiple stakeholders, including academia, industry and civil society. This inclusive process ensures that the guidelines are comprehensive, reflecting a broad range of perspectives and expertise. The development process also incorporates continuous feedback, adapting to new challenges and technologies through a “Living Document” approach.
Key components of the guidelines
Basic philosophies
The guidelines articulate three fundamental philosophies, drawn from the Social Principles of Human-Centric AI: dignity, diversity and inclusion, and sustainability.
These philosophies underpin the detailed principles and practices that guide AI development, deployment and utilisation across business sectors.
AI business actors and their responsibilities
The guidelines define roles and responsibilities for three main categories of AI business actors: AI developers, AI providers and AI users.
Governance and compliance
Effective governance is crucial for the safe and ethical use of AI. The guidelines provide a framework for:
Ten guiding principles
1 Human-centric
When developing, providing or using an AI system or service, each AI business actor should act in a way that does not violate the human rights guaranteed by the Constitution of Japan or granted internationally, as the foundation for accomplishing all matters to be conducted, including the matters described later. In addition, it is important that each AI business actor acts so that the AI expands human abilities and enables diverse people to seek diverse well-being.
2 Safety
Each AI business actor should avoid damage to the lives, bodies, minds and properties of stakeholders during the development, provision and use of AI systems and services. In addition, it is important that the environment is not damaged.
3 Fairness
During the development, provision or use of an AI system or service, it is important that each AI business actor makes efforts to eliminate unfair and harmful bias and discrimination against any specific individuals or groups based on race, gender, national origin, age, political opinion, religion and so forth. It is also important that before developing, providing or using an AI system or service, each AI business actor recognises that there are some unavoidable biases even if such attention is paid, and determines whether the unavoidable biases are allowable from the viewpoints of respect for human rights and diverse cultures.
4 Privacy protection
It is important that during the development, provision or use of an AI system or service, each AI business actor respects and protects privacy in accordance with its importance. At this time, relevant laws should be obeyed.
5 Ensuring security
During the development, provision or use of an AI system or service, it is important that each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.
6 Transparency
When developing, providing or using an AI system or service, based on the social context in which the AI system or service is used, it is important that each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.
7 Accountability
When developing, providing or using an AI system or service, it is important that each AI business actor fulfils its accountability to stakeholders within a reasonable extent for ensuring traceability, conforming to common guiding principles and the like, based on that AI business actor’s roles and the degree of risks posed by the AI system or service.
8 Education/literacy
Each AI business actor is expected to provide the persons engaged in AI within its organisation with the education necessary to gain the knowledge, literacy and ethical views needed to understand and use AI in a socially appropriate manner. Each AI business actor is also expected to educate stakeholders, in consideration of the characteristics of AI, including its complexity, the misinformation it may provide, and the possibility of its intentional misuse.
9 Ensuring fair competition
Each AI business actor is expected to maintain a fair competitive environment surrounding AI so that new businesses and services using AI are created, sustainable economic growth is maintained, and solutions for social challenges are provided.
10 Innovation
Each AI business actor is expected to make efforts to actively contribute to the promotion of innovation for the whole society.
Implementation strategies and international alignment
The guidelines emphasise the importance of aligning with international norms and standards to ensure that Japanese AI technologies are globally competitive and compliant. This alignment involves continuous updates to the guidelines based on international developments and technological advancements.
Challenges and future directions
While the guidelines set a robust framework for AI governance, ongoing challenges such as data privacy, algorithmic bias and cross-border data flows require continuous attention. Future revisions of the guidelines will need to address these evolving challenges and ensure that AI governance remains dynamic and responsive to new risks and opportunities.
Conclusion
Japan’s AI Guidelines for Business represent a forward-thinking approach to AI governance that balances the need for innovation with the imperatives of security, privacy and ethical integrity. As AI continues to transform industries, these guidelines will play a crucial role in guiding businesses towards responsible and sustainable AI practices, setting a benchmark for global AI governance frameworks.
General Understanding on AI and Copyright
In May 2024, the Japan Copyright Office published guidance entitled “General Understanding on AI and Copyright in Japan” (the “General Understanding”), which describes the discussions of a dedicated legal subcommittee within the Copyright Office. While not legally binding, the General Understanding represents the subcommittee’s views on the interpretation of legal issues involving AI under the Japanese Copyright Act as at the time of publication. The General Understanding addresses two main situations: (i) where copyrighted works are utilised at the “AI development / training stage”; and (ii) where the use of an AI product or service (such as generating artistic works with AI) might infringe someone’s copyright. It also raises a third question: (iii) whether AI-generated materials are susceptible to copyright protection and can become copyrighted works.
Copyright issues involving the “AI development / training stage”
Under Article 30-4 of the Japanese Copyright Act, exploitation of a copyrighted work that is not for the enjoyment of the thoughts or sentiments expressed in it (exploitation for non-enjoyment purposes), such as AI development or other forms of data analysis, may in principle be allowed without the permission of the copyright holder. Within this basic framework, the key question is the standard for determining whether a given use of copyrighted works for AI development/training falls under “enjoyment of the thoughts or sentiments expressed in the copyrighted work”. On this point, the General Understanding suggests that, in the following cases, the reproduction of copyrighted works for AI training does not satisfy the “non-enjoyment purpose” requirement, so Article 30-4 of the Copyright Act would not be applicable:
The General Understanding also points out the necessity to assess the applicability of the Article 30-4 proviso by considering “whether it will compete in the market with the copyrighted work” and “whether it will impede the potential sales channels of the copyrighted work in the future”. This assessment should be made by taking various factors into account, such as “technological advancements” and “changes in the way the copyrighted work is used”.
Possible copyright infringement when utilising an AI product/service
First, the General Understanding notes that, when AI-generated images or copies thereof are uploaded to social media or sold, copyright infringement is determined under the same criteria as ordinary infringement. In other words, if an AI-generated image or other creation is found to have “similarity” with and “dependence” on an existing image or other copyrighted work, and no copyright exception applies, it constitutes copyright infringement. One key question is how to determine “dependence” in the case of AI-generated content. On this point, the General Understanding suggests the following approach:
Possible copyright protection of AI-generated materials
Under the Japanese Copyright Act, a copyrighted work is defined as a “creatively produced expression of thoughts or sentiments that falls within the literary, academic, artistic or musical domain”. Besides this, the General Understanding notes that only a person (ie, a natural or legal person) can be an “author” under the Copyright Act, meaning that AI itself, which does not have a legal personality, cannot be an author.
In light of this principle, the General Understanding points out that materials autonomously generated by AI are not “creatively produced expressions of thoughts or sentiments” and are therefore not considered copyrighted works. On the other hand, the General Understanding explains that, if AI is used as a “tool” by a person to creatively express thoughts or sentiments, the resulting material is considered a work, and the user of the AI is its “author”.
Also, the General Understanding suggests that determining whether a person has used AI as a “tool” depends on two factors: (i) whether the person had a “creative intention” and (ii) whether the person has made a “creative contribution”. As regards factor (ii), the General Understanding outlines circumstances under which AI products are recognised as containing the AI user’s “creative contributions”, and provides examples of how this factor may determine the copyrightability of AI-generated material.
AI-based Software as a Medical Device (SaMD)
In August 2023, the Subcommittee on Software as a Medical Device Utilising AI and Machine Learning of the Science Board of the Pharmaceuticals and Medical Devices Agency (PMDA) compiled and published a report summarising discussions from a scientific standpoint regarding AI-based Software as a Medical Device (SaMD). Key points from the report are introduced below.
Activities to establish medical device regulations and safety standards in Japan
Activities contributing to medical device regulation include the establishment of a review working group to prepare draft evaluation indices for AI-based diagnostic imaging support systems, under the Ministry of Health, Labour and Welfare (MHLW) project for preparing evaluation indices for next-generation medical devices and regenerative medicine products. The working group operated from FY2017 to FY2018; after review of the public comments, its deliverables were issued in May 2019 as PSEHB/MDED Notification No.1219-4 (Director, Medical Device Evaluation Division, Pharmaceutical Safety and Environmental Health Bureau, MHLW) and adopted as evaluation indices.
Furthermore, the formulation and revision of certification standards are progressing so that SaMD with a track record of approval can be transferred to the certification system according to type and target disease, in accordance with the regulatory reform plan approved by the Cabinet on 7 June 2022. In parallel, the PMDA has started organising information on review points, eg, studying the conditions and evaluation points necessary for efficacy/safety evaluations, and publishing this information on its website to enhance predictability for developers.
Additionally, activities to provide scientific support include the research project for pharmaceutical regulatory harmonisation and evaluation of the Japan Agency for Medical Research and Development (AMED), “Study of pharmaceutical regulations on SaMD using advanced technology such as artificial intelligence”, which started in 2019. In this study, the feasibility of AI-based SaMD capable of post-market learning was evaluated under industry-academia-government collaboration. As a result, a proposal for manufacturers to implement continuous learning and performance change within the existing regulatory framework, particularly the “Improvement Design within Approval for Timely Evaluation Notice (IDATEN)”, was compiled and submitted to the MHLW. An experimental study to identify training data factors that affect the performance of SaMD through post-market learning was also conducted, and its results were incorporated into the proposal. As a successor project, the AMED pharmaceutical regulatory harmonisation and evaluation research project “Study to contribute to the performance evaluation during the post-market learning of AI-based SaMD” started in 2022, and an experimental study to identify the points to consider when determining the validity of the performance evaluation process is under way. In the future, an industry-academia-government collaboration system will be established, and draft performance evaluation guidance will be prepared based on the results of the experimental study.
Meanwhile, the results of the Health and Labour Sciences Research Grant project were compiled and issued by the MHLW in May 2023 as guidance for approval and development based on the characteristics of SaMD.
Current status and challenges of SaMD in Japan
To make the most of the features of SaMD, the IDATEN system was developed in Japan as an approval scheme applicable to change plans. However, the system has yet to be fully used; possible reasons include the fact that post-market learning can degrade as well as improve performance, and concerns about risks such as catastrophic forgetting. Attention should also be paid to the risk posed by repeated use of the same test data when evaluating performance after repeated retraining. Evaluation using the pre-market test data available at the time of approval is important to confirm that the performance achieved at approval is maintained without problems such as catastrophic forgetting, while evaluation using post-market test data is necessary to check the performance of the system in operation. When overfitting occurs, the cause should be identified and appropriate action taken, eg, continuing development of the SaMD using a low-risk development method, or preventing the problem from spreading through strict measures including suspension of use of the SaMD. It is important to further deepen the discussions on this issue.
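As a purely illustrative sketch of the evaluation discipline described above, the snippet below re-evaluates a retrained model against the fixed pre-market test set and flags regressions such as catastrophic forgetting. The accuracy metric, the names and the regression tolerance are assumptions made for the example; this is not the IDATEN procedure itself.

```python
# Illustrative sketch: after each post-market retraining cycle, re-evaluate the
# model on the fixed pre-market test set to detect regressions such as
# catastrophic forgetting. Names, metric and tolerance are assumptions.
# Note: the report above also calls for evaluation on post-market test data and
# cautions against over-reusing the same test set across many retraining cycles.
from typing import Callable, Sequence, Tuple

Sample = Tuple[list, int]  # placeholder (features, label) pair

def accuracy(model: Callable[[list], int], data: Sequence[Sample]) -> float:
    return sum(model(x) == y for x, y in data) / len(data)

def check_after_retraining(model: Callable[[list], int],
                           premarket_test: Sequence[Sample],
                           baseline: float,
                           tolerance: float = 0.02) -> bool:
    """Return True if the retrained model still meets approval-time performance."""
    score = accuracy(model, premarket_test)
    if score < baseline - tolerance:
        # In practice: halt deployment and investigate (cf. the report's call for
        # strict measures, up to suspension of use of the SaMD).
        print(f"Regression detected: {score:.3f} < baseline {baseline:.3f}")
        return False
    print(f"Performance maintained: {score:.3f} (baseline {baseline:.3f})")
    return True

# Usage with a trivial stand-in model and a toy pre-market test set:
test_set = [([0], 0), ([1], 1), ([2], 0), ([3], 1)]
retrained_model = lambda x: x[0] % 2
check_after_retraining(retrained_model, test_set, baseline=1.0)
```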
Furthermore, in the development of deep learning (DL)/AI systems using medical images (radiological and ultrasound images), there are few prospective or randomised clinical trials of DL-based diagnostic imaging. Most non-randomised clinical trials lack a prospective design, carry a high risk of bias and deviate from the existing standards for reporting clinical trial outcomes. To avoid obstacles to the approval of AI-based medical devices, appropriate clinical evaluation methods should be discussed while taking into consideration future trends in the research and development of related technologies; nevertheless, efforts to reduce bias risks scientifically should eventually be made. It will also be necessary to continue discussing whether post-market prospective or randomised clinical trials are needed to evaluate the performance improvements achieved through post-market learning.
Additionally, the combined use of numerical simulation and machine learning (ML) has been studied. ML-based medical device programs that use numerical simulation in the development process, or that are developed based on a combination of numerical simulation and ML, may become available in the near future. The issues involved in real measurement, including biases, can be controlled through careful use of numerical simulation, but numerical simulation has its own specific limitations. The Science Board report will be a useful reference on this point.
Databases developed to date in Japan
To date, AMED has supported the development of four representative medical image databases (surgery videos, digital pathological images, ECG and gastrointestinal endoscopy) in Japan.
The surgery videos database, primarily established at the National Cancer Center Hospital East, has collected approximately 4,000 cases of 13 different procedures as of 23 August 2023. Each surgical video dataset is linked to patient information, surgeon details and device data. Challenges include the complexity of standardising different video formats and privacy considerations concerning the personal information visible in the videos. Other databases, such as those for digital pathology images, electrocardiograms and gastrointestinal endoscopies, share common challenges related to large data volumes, the complexity of creating annotated datasets, and the need for funding.
Recent Developments in Japan Concerning AI
Recently in Japan, there has been a rapid emergence of businesses utilising generative AI to produce anime and manga, which are among the country’s principal export goods. Traditionally, the creation of anime and manga has required substantial time and costs. However, it is anticipated that generative AI will enable the production of high-quality content both quickly and economically. Nevertheless, there are concerns that using generative AI for creating anime and manga might pose legal challenges under copyright law. In response, the Agency for Cultural Affairs has provided guidance as stakeholders explore lawful ways to conduct business.
Additionally, in Japan, there is a growing trend of developing generative AIs that specialise in Japanese, based on data that mitigates the risk of copyright infringement. Although these initiatives may not be as large-scale as those involving platforms like ChatGPT, they are welcomed for their emphasis on eliminating copyright infringement risks. There is keen interest in the future results of these developments.
Otemachi Park Building
1-1-1 Otemachi
Chiyoda-ku
Tokyo
100-8136
Japan
+81 3 6775 1000
+81 3 6775 2088
tomoko.adachi@amt-law.com
www.amt-law.com/en/