Artificial Intelligence 2025

Last Updated May 22, 2025

South Korea

Law and Practice

Authors



Bae, Kim & Lee LLC was founded in 1980 and is a full-service law firm covering all major practice areas, including corporate law; mergers and acquisitions; dispute resolution (arbitration and litigation); white-collar criminal defence; competition law; tax law; capital markets law; finance; intellectual property; employment law; real estate; technology, media and telecoms (TMT); maritime; and insurance matters. With more than 650 professionals located across its offices in Seoul, Beijing, Hong Kong, Shanghai, Hanoi, Ho Chi Minh City, Yangon and Dubai, it offers its clients a wide range of expertise throughout Asia. The firm is composed of a diverse mix of Korean and foreign attorneys, tax advisers, industry analysts, former government officials, and other specialists. A number of its professionals are multilingual and have worked at well-known law firms in other countries, enabling them to assist international clients as well as Korean clients abroad with cross-border transactions.

Before the promulgation of the Framework Act on the Development of AI and the Establishment of a Foundation for Trust, etc (the “AI Framework Act”), the discourse around AI in South Korea (“Korea”) mainly focused on two areas:

  • the legality of processing publicly available personal information for artificial intelligence learning in light of data privacy concerns; and
  • the potential for copyright infringement when using such data.

However, in January 2025, the AI Framework Act was enacted (to be enforced starting 22 January 2026), and current debate now focuses on the scope of the AI Framework Act’s application.

AI and machine learning are leading innovation in various industries including the medical, financial and manufacturing sectors, and their influence continues to expand. For example, financial institutions are applying AI in the areas of customer service, asset management and investment advice.

AI technology has demonstrated its prowess across various everyday situations, including providing personalised services by analysing consumer data through machine learning, improving business response times through business process automation, and operating chatbots based on generative AI. In addition, financial institutions are using AI to improve the efficiency of their employees’ work, such as evaluating customer credit. Platform companies are also providing optimised user interfaces, delivering customised advertising based on users’ search records and improving internet search engine accuracy through AI.

The Presidential Committee on AI announced the following policy directives on 26 September 2024, for the development and promotion of AI innovation:

  • establishment of a National AI Computing Centre worth up to KRW2 trillion based on public-private joint ventures;
  • support for revitalising private investment in AI;
  • promotion of the nationwide transformation of AI across industry, the public sector, society, the regions, and national defence;
  • establishing the AI Safety Research Institute as a national organisation dedicated to systematically responding to advanced AI risks;
  • fostering AI start-ups and talents to expand the capacity to support national AI innovation;
  • expanding AI core and source technologies and promoting AI infrastructure innovation;
  • creating a foundation for sustainable AI development and diffusion; and
  • establishing a new order of the AI era and leading global AI norms and governance.

Also, the Presidential Committee on AI published detailed policies for the directives above on 20 February 2025.

Furthermore, regarding investment in AI, AI-related technologies are currently designated as new growth and source technologies under the Act on Restriction on Special Cases Concerning Taxation, and tax credits have been granted for investment in research and development activities concentrating on AI. Moving forward, however, an amendment is being discussed to designate AI technology as a national strategic technology and increase the tax benefits for the technology.

Korea has recently taken significant steps toward establishing a comprehensive regulatory framework. In particular, the National Assembly has enacted the AI Framework Act, which serves as the country’s foundational legislation for AI governance. Further details of the Act are provided in 3.2 Jurisdictional Law.

On 26 December 2024, the AI Framework Act was passed into law by the National Assembly, and on 22 January 2025, the AI Framework Act was promulgated. The new Act will take effect on 22 January 2026. The AI Framework Act is composed of three main sections:

  • a section on AI policy promotion systems (ie, the establishment of a basic plan and the operation of the Presidential Committee on AI, the AI Policy Centre, and the AI Safety Institute);
  • a section on policy support for the development of AI technology and industry (ie, establishment of an industrial foundation, support for the development of AI technology and industry revitalisation); and
  • a section that covers the obligations of the government and AI businesses to ensure trust in AI (ie, establishment of AI ethical principles and self-regulation, obligation to ensure safety, responsibilities for high-impact AI, etc).

While the AI Framework Act has been enacted as described in 3.2 Jurisdictional Law, its subordinate statutes and relevant guidelines and notifications have not yet been published.

In the meantime, government departments have issued the following guidelines:

  • The Korea Communications Commission (KCC), a regulatory body in the broadcasting and telecommunications fields focusing on user protection, published guidelines in 2019 to encourage AI development that ensures the protection of users based on the principles of accountability, non-discrimination and transparency. It also published a guideline on user protection involving generative AI on 28 February 2025.
  • The Ministry of Science and ICT (MSIT), which is in charge of encouraging policies for the development of new technology in the ICT industry, announced a set of ethical standards for AI in 2021, whereby it identified ten principles for managing AI, such as protection of human rights, protection of privacy and respect for diversity.
  • The Financial Services Commission (FSC) issued, in 2022, its detailed policy guidelines on development and utilisation of AI in the financial sector, setting forth various factors to be considered when developing and using AI.
  • The Personal Information Protection Commission (PIPC) announced, in 2023, its policy direction for the safe use of personal information in the era of AI, which includes principles for the processing of personal information at each stage of AI development and service. The PIPC later published a guideline on the processing of personal information for the development of AI services on 17 July 2024, and a risk management model for AI privacy risks in data utilisation on 19 December 2024. In addition, the PIPC operates a preliminary adequacy review system, which allows companies to inquire in advance about the appropriateness of using personal information for AI development.

This is not applicable in Korea.

This is not applicable in Korea.

This is not applicable in Korea.

The Personal Information Protection Act has been amended to introduce the data subject’s right not to be subject to an entirely automated decision, similar to the automated decision-making right in the EU’s GDPR; the amendment became effective on 15 March 2024.

In addition, the amended Personal Information Protection Act includes provisions for individuals to request explanations or human review of automated decisions, as well as the ability to reject such decisions if they materially affect their rights and obligations as data subjects.

Furthermore, the amended Personal Information Protection Act has been designed to secure transparency and enhance credibility in the processing of personal information by mandating disclosure of the criteria and procedures for automated decisions in advance.

Representative Min Byung-deok has proposed a bill that would allow the use of personal information for AI learning in cases where no existing legal basis applies, provided that risk factors are assessed and appropriate safety measures are taken, subject to the approval of the Personal Information Protection Commission.

No significant precedents in this area have been found.

The main regulator for the AI Framework Act is the MSIT, and the Presidential Committee on AI is also relevant. The Presidential Committee on AI reviews and takes decisions regarding the government plan, strategic investment, and other government actions on AI.

Setting aside the government agencies that are responsible for drafting AI regulatory policies, the PIPC is the most active government agency in regulating AI-related issues.

Please refer to 3.3 Jurisdictional Directives.

In 2024, the PIPC listed precautions to be taken when using publicly available personal information. This was released after the Commission conducted an inspection on AI service providers in 2023.

The Korea Fair Trade Commission has investigated the business practices of mobility and advertising business operators from the perspective of the fairness of algorithms.

The AI Framework Act provides for the establishment of the AI Safety Institute. The AI Safety Institute is responsible for defining and analysing AI-related risks, providing criteria for evaluating them, and researching technologies and standardisation for AI safety.

Additionally, the Telecommunications Technology Association (TTA), an affiliated agency of the Korea Communications Commission, issued an artificial intelligence development guide in 2023.

The Financial Security Institute, an affiliated agency of the Financial Services Commission, has published AI security guidelines.

Generally, the Korean Standards Association plays an important role in adopting international standards.

The Presidential Committee on Digital Platform Government has presented a draft policy that suggests the use of AI in various sectors of society. This draft policy is based on the government’s plan for realising digital platform government, which is one of the government’s policy objectives. This includes:

  • digitalisation of records in AI-readable form;
  • establishment of a mega AI infrastructure as the top-level integration platform for the digital platform government;
  • establishment of an AI and big data-based forecast model for emergencies (fires, explosions, etc);
  • use of public government documents for AI learning and support for the preparation of documents using AI; and
  • introduction of AI-based digital textbooks for primary and secondary schoolers.

Under the above plan, AI services were introduced in 2024 and are being used to predict wildfires and floods.

Facial recognition technology has been in use in the immigration process and has simplified the process significantly.

Since June 2023, the Gyeonggi municipal government has been implementing an “AI chat service”, an active welfare service in which an AI counsellor makes a weekly phone call to elderly persons (65 and older) in the area who are in need of care. The purpose of the call is to check in on the elderly by engaging in conversation and monitoring their health and daily life. If the phone call goes unanswered three or more times, the person in charge calls directly or visits the person’s residence. Increasing numbers of local governments are adopting these AI chat services.

There have been no judicial decisions related to government use of AI.

The Republic of Korea Armed Forces have been developing an AI model to predict the demand for repair parts for each of the approximately 30,000 types of equipment in operation in the military, having established a team within the Korea Institute for Defense Analyses in January 2012 to analyse repair part demand.

In addition, the Republic of Korea Armed Forces plan to introduce AI technology prioritising defence logistics, such as AI-based smart maintenance, smart factories, and smart warehouses.

In addition, on 1 April 2024, the Ministry of National Defence established an AI centre for national defence to carry out President Yoon’s government project of “cultivating AI science and technology forces”.

Since the emergence of generative AI, there has been controversy over the protection of intellectual property and personal information. In particular, lawsuits have been filed over the use of news articles as training data for generative AIs, as discussed in 15.1 IP and Generative AI.

The AI Framework Act, on the other hand, imposes the following obligations regarding generative AI:

  • to provide prior notice to users that the service is based on generative AI, in the terms and conditions, service contracts, user manual, or the user interface; and
  • to indicate that the outputs of generative AI or the products or services using it have been generated by generative AI – this disclosure labelling must be human recognisable or machine readable, and additional notifications are required of businesses that produce deepfakes as output.

The PIPC issued guidelines on the use of publicly available personal information for the development of AI. The Commission believes that the data subject’s right to deletion and to rectification must be protected with regard to AI. Moreover, the Personal Information Protection Act has been amended to introduce the data subject’s right to object to an automated decision.

The use of AI in private enterprises has been quite limited so far. Only a few companies or law firms have commenced, or plan to commence, using AI to provide legal services (other than translation).

However, a medium-sized law firm in Korea introduced, on 20 March 2024, an AI-based legal counselling chatbot service in collaboration with Naver Cloud and Nexus AI (a legal tech venture company). In the wider legal tech space, Allibee by BHSN, SuperLawyer by Law and Co., LBOX AI and others are providing legal solutions in software-as-a-service form.

The Ministry of Justice, Prosecutors’ Office, and National Police Agency opened the Next Generation Criminal Justice Information System (KICS) on 19 September 2024. KICS aims to fully digitalise the criminal justice process, expand online and non-face-to-face services through technological innovation, and completely reorganise the ageing existing system. Using this system, the Prosecutors’ Office anticipates reducing its workload by using AI to search for similar cases, summarise investigation information, receive sentencing recommendations during document drafting, extract key information from evidence, generate relevant questions for investigations, identify missing information, and transcribe conversations.

The court officially opened the next-generation e-litigation system on 31 January 2025, to revolutionise judicial affairs and judicial information disclosure by completely overhauling the existing e-litigation system. It introduced the Litigation Procedure Guidance Chatbot, which uses AI to guide litigants through the litigation process 24 hours a day, as well as a service that allows users to submit their resident registration certificates and corporate registration certificates (which previously had to be issued separately for submission in e-litigation) through electronic linkage methods such as mobile phones. An e-litigation portal and e-depository service are also available.

There is no debate yet on liability for damages resulting from AI-enabled technologies. If an AI service provider stipulates in the terms and conditions of the service that the consumer is ultimately responsible for the decision to use the AI and its consequences, the fairness of such terms and conditions may be assessed by the relevant government authorities, such as the Korea Fair Trade Commission.

The AI Framework Act was passed on 26 December 2024, and it imposes obligations of transparency (Article 31) for high-impact and generative AI, as well as other obligations specifically for high-impact AI (Article 34). Violations of administrative orders to rectify breaches of the above transparency or high-impact AI requirements will result in administrative fines of up to KRW30 million (Article 43).

However, the current AI Framework Act does not include specific provisions on liability and compensation for damage caused by AI. It is therefore expected that this issue will need to be resolved through related laws such as the existing Product Liability Act.

The various guidelines outlined in 3.3 Jurisdictional Directives provide that fairness should be maintained in the development and use of AI. In addition, the Financial Services Commission’s guide stipulates that fairness indicators must be established and applied in the process of developing and using AI technology to assess and maintain fairness.

The guidelines for AI use in the financial sector are being amended to ensure consistency with other overlapping guidelines and to address the characteristics of generative AI, such as bias and hallucination. In addition, the Guidelines for the Development of Reliable AI, published by the MSIT and the TTA in March 2024, suggest that measures be taken to eliminate bias both in collected and processed data and in AI models.

The Ministry of Justice has streamlined the immigration process using facial recognition technology, which automates the identification and tracking of domestic and foreign citizens during immigration screening, and the Ministry of the Interior and Safety has introduced a facial recognition-based access system for government buildings.

Meanwhile, the National Human Rights Commission of Korea has issued recommendations concerning the use of biometric information for employee attendance management, a practice increasingly adopted by private enterprises.

The AI Framework Act classifies AI systems used to analyse and use biometric information for criminal investigation and arrest as high-impact AI and imposes certain obligations on businesses that provide such products and services.

The amended Personal Information Protection Act ensures the data subject’s right to object to an automated decision.

The guidelines of each agency highlight transparency as a principle in developing and using AI. The most detailed content on this subject can be found in the PIPC’s guideline on automated decision-making under the amended Personal Information Protection Act.

Although the Commission finds it unnecessary to explain the specific operation method of the algorithm to the data subject, it requires disclosure of the individual variables considered by the AI.

The AI Framework Act imposes transparency obligations by stipulating prior notification obligations for high-impact or generative AI, labelling obligations for generative AI outputs, and notification and labelling obligations for deepfake outputs.

As explained in 7.3 National Security, AI systems for demand forecasting are being introduced in the procurement sector (specifically, military procurement).

Article 35 of the AI Framework Act does not impose an obligation on businesses to conduct impact assessments, but it provides incentives for operators to conduct impact assessments by stipulating that “when a national organisation or other entity intends to use a product or service using high-impact AI, it shall give priority to the product or service that has undergone impact assessment”. 

The AI Framework Act, to be enforced starting 22 January 2026, categorises AI used in hiring as high-impact (Article 2(4)). High-impact AI deployers are required to give prior notice to users (Article 31(1)). In addition, to ensure the safety and reliability of high-impact AI, per Article 34(1), a business that uses AI for recruitment must:

  • establish a risk management plan;
  • establish and implement a plan to explain the final results produced by the AI (to the extent technically feasible) and the main criteria used to derive the final results produced by the AI;
  • provide an overview of the training data used to develop and use the AI;
  • establish and operate a user protection plan;
  • ensure the creation and storage of documents that confirm the management and supervision of human beings over high-impact AI;
  • provide the contents of measures to ensure safety and reliability; and
  • undertake any other measures for matters deliberated and resolved by the Presidential Committee on AI.

Furthermore, any use of AI in hiring must be preceded by efforts to make a prior impact assessment on the basic rights of people (Article 35(1)). The specifics of these requirements will be provided in the upcoming Enforcement Decree.

Many companies have adopted attendance management systems using biometric authentication information, but there has been no discussion on the introduction of AI in employee performance evaluation. The AI Framework Act defines high-impact AI to include AI used for judgments or evaluations that have a significant impact on the rights and obligations of individuals, such as recruitment and loan screening, and leaves it to the Enforcement Decree to determine which other AI systems fall under high-impact AI. It is therefore not yet clear whether employee evaluation and monitoring will fall under high-impact AI.

Platform companies are making significant use of recommendation algorithms that use AI. They are most commonly used to provide personalised services based on the user’s behavioural information.

Financial companies are actively utilising or trying to utilise AI in providing customer consulting and support services, calculating credit ratings, designing insurance products, managing assets and risks, and detecting abnormal transactions and money laundering.

In particular, as chatbot services become more sophisticated with the advances made by generative AI, many financial companies are providing customer consulting and support services using chatbots, and AI is increasingly being used for asset management and personalised marketing purposes.

As the use of AI increases, the risks for financial institutions are also increasing. For instance, as the number of investment product transactions using AI increases, there is a possibility that a large number of unintended orders are placed all at once due to algorithm errors, which will increase market volatility. In addition, there is a possibility that financial companies may sell products that are not suitable for customers or fail to properly perform their obligations to explain while utilising AI for product recommendation.

The AI Framework Act defines AI used for loan screening as high-impact AI (Article 2(4)). Therefore, the use of AI for loan screening is subject to the regulations applicable to high-impact AI (see Chapter 13).

Also, the Korean financial supervisory authorities have announced AI guidelines (and AI security guidelines) in the financial sector to ensure that financial companies using AI technology protect financial consumers’ rights and take responsibility for their services.

In particular, the AI guidelines in the financial sector require financial companies to prevent unreasonable discrimination against consumers. Accordingly, financial companies should establish fairness standards based on the characteristics of services and conduct evaluations based on certain standards to prevent the possibility of unexpected discrimination that may occur due to AI-enabled services.

Big data analytics platforms based on video information are gaining traction as an important trend in healthcare. Non-medical institutions are required to receive data from medical institutions, but there are many challenges in obtaining such data. For example, medical institutions tend to be cautious about providing medical data and there are many legal regulations in this area. To resolve this issue, the government is proceeding with special legislation for healthcare data.

On the other hand, the AI Framework Act applies to AI used in:

  • medical devices defined in the Medical Device Act:
    1. products used for the purpose of diagnosing, treating, reducing, curing, or preventing diseases;
    2. products used for the purpose of diagnosing, treating, reducing, or correcting injuries or disabilities;
    3. products used for the purpose of inspecting, replacing, or modifying structures or functions;
    4. products used for the purpose of controlling pregnancy; and
  • digital medical devices defined in the Digital Medical Products Act:
    1. products used for the purpose of diagnosing, treating, or observing the prognosis of diseases to which intelligent information technology, robotics, information and communication technology, etc, are applied;
    2. products used for the purpose of predicting treatment response and treatment outcomes of diseases; and
    3. products used for the purpose of monitoring treatment effects or side effects of diseases, etc.

The AI Framework Act imposes the obligations applicable to high-impact AI systems (Article 2(4)) on such devices. Please refer to 13. AI in Employment for further detail.

There is no general regulation governing the use of AI in autonomous vehicles; however, certain laws prescribe matters relating to autonomous vehicles. First, with respect to liability in the event of an accident, the Compulsory Motor Vehicle Liability Security Act maintains the existing driver’s liability while providing measures to seek reimbursement from the manufacturer in the case of any defect in the vehicle, and establishes an accident investigation committee to investigate the autonomous driving data recording devices affixed to autonomous vehicles. Meanwhile, the Rules on Safety and Performance of Motor Vehicles and Motor Vehicle Components (Motor Vehicle Safety Standards), sub-regulations of the Motor Vehicle Management Act, set out safety standards for Level 3 autonomous driving systems.

Autonomous vehicles present a number of data privacy issues, including the use of video information captured by autonomous AI driving devices while driving. Although it has been necessary to use mosaiced (pseudonymised) video data to ensure that no individual can be identified, even when developing autonomous driving technology, the PIPC has prepared a measure to permit the use of non-mosaiced original video through a regulatory sandbox; accordingly, several companies have applied for the sandbox for the development of autonomous driving AI.

To date, there is no law regulating the use of AI in the manufacturing sector. However, the Ministry of Trade, Industry and Energy has commenced the establishment of a master plan for AI autonomous manufacturing to drive innovation in the manufacturing process and enhance productivity. The Ministry has also stated that it will:

  • conduct a manufacturing process analysis and promote pilot projects for AI autonomous manufacturing;
  • develop core technology for AI autonomous manufacturing; and
  • proceed with the renovation of systems and infrastructure to expand the introduction of AI autonomous manufacturing.

Meanwhile, the MSIT announced in 2024 a plan to prepare a basic law on artificial intelligence, and a bill on fostering the artificial intelligence industry and creating a foundation of trust was proposed to the National Assembly; this has since been enacted as the AI Framework Act.

It is also worth noting that the above bill permits the launch of artificial intelligence technology, artificial intelligence products (products using artificial intelligence technology), or artificial intelligence services, in principle, but it also prescribes the principles of priority permission and ex-post regulation that can limit them if they cause any harm to the lives, safety, rights and interests of the citizens, or significantly disrupt public safety, the maintenance of general order, and welfare promotion.

In the accounting, tax, and legal markets, individual companies are conducting a review on the use of AI for the analysis of contracts or financial statements.

However, in such professional services, companies cannot provide relevant data to large-scale large language models due to client confidentiality issues; because they must instead base their work on small language models, progress has been sluggish.

If the training data used for AI learning consists of works produced by others, such works are subject to copyright protection. Unless the copyright holder approves the use of the work in the model learning process, there is a risk of copyright infringement. In a related case, South Korea’s three over-the-air broadcasting companies filed a lawsuit against NAVER on 13 January 2025, alleging copyright infringement, claiming that NAVER used news articles for learning without permission when developing HyperClova and HyperClovaX, its generative AI services (Seoul Central District Court 2025 Gahap 5105).

If a prompt is entered for the creation of an AI product, that prompt may be recognised as a creative expression per se. Accordingly, it may be subject to copyright protection as a type of literary work. On this point, the courts will look into the specific prompts of each case.

Under the current legal framework, the end result of an AI’s work is unlikely to be recognised as a work of authorship, which makes it difficult to protect under copyright law. However, if an AI-created product is substantially similar to the work of another person, copyright infringement may be recognised.

An AI model can be protected through patents based on its novelty and inventive step. The source code implementing the AI model can also be protected as a computer program work. If an AI tool provider restricts the input methods for using the generative AI tools in question, and also restricts the method of using the output through the service terms and conditions, to prevent infringement of intellectual property rights in the course of using the AI services, any user who fails to comply with such restrictions may be held liable for breach of the terms and conditions.

On 30 June 2023, in a lawsuit filed by Stephen Thaler, an AI developer in the United States, as part of the so-called DABUS project to seek recognition of AI as an inventor, the Seoul Administrative Court ruled that “invention” under Article 2(1) of the Patent Act refers to the highly advanced creation of a technical idea using the laws of nature and that such a technical idea presupposes human reasoning, and therefore, under the current laws, AI cannot be recognised to have the legal capacity to “invent”. The appellate court also ruled that the inclusion of AI as an inventor under the current provisions of the Patent Act is beyond the limits of legitimate legal interpretation, and that if there are objects that should be protected as AI inventions in the future, this should be achieved through legislation that takes public discourse into account (Seoul High Court 2023Nu52088). The case is currently under appeal (Supreme Court 2024Du45177).

In addition, the Copyright Act defines “work” as a creative production that expresses human thoughts and emotions (Article 2(1) of the Copyright Act) and “author” as “a person who creates a work” (Article 2(1) of the Copyright Act). The Ministry of Culture, Sports and Tourism stated in the Generative AI Copyright Guide issued on 27 December 2023 that, under the current laws, an AI cannot be recognised as an author.

Under Korean laws, a trade secret refers to information, including a production method, sales method, or useful technical or business information for business activities, which is not known publicly, is managed as a secret, and has independent economic value (Article 2(2) of the Unfair Competition Prevention and Trade Secret Protection Act). An act of acquiring trade secrets or using or disclosing trade secrets improperly acquired, with knowledge of the fact that an act of improper acquisition of the trade secrets has occurred, or without such knowledge due to gross negligence, constitutes infringement of trade secrets (Article 2(3)(b) of the above Act). Therefore, if any data considered as trade secrets of another person is collected without permission and used for AI learning, trade secret infringement issues may arise.

Meanwhile, if any technical data, such as the source code of any AI model created for AI services, is kept confidential and not disclosed to others, such data can be protected as trade secrets.

Under the current law, any product created by generative AI itself is not recognised as a work of authorship.

Any product created by OpenAI’s generative AI is not protected by copyright or patent. However, if such a product is substantially similar to an existing work, it may infringe the copyright of others.

On 17 December 2024, the Korea Fair Trade Commission (KFTC) published its report, Generative AI and Competition. The purpose of the report is to systematically analyse the structure and competitive landscape of the generative AI market, examine competition and consumer issues that may arise in the process, and propose future initiatives.

In this report, the KFTC categorised the generative AI market into three stages: AI infrastructure, AI development and AI implementation. The report identified structural factors affecting market competition in light of the characteristics of the AI market: (i) a capital- and technology-intensive industry, (ii) economies of scale and scope, and (iii) market pre-emption effects. In addition, the KFTC analysed potential competition concerns in the generative AI market and potential harm to consumer interests from the perspectives of unilateral conduct, business combinations and consumer interests. The KFTC also noted its plans to consider improving the system for regulating infringements of consumer interests related to data collection and use.

Following the publication of the report, the KFTC announced that it would conduct a survey of major domestic and foreign companies in March 2025 to examine competition restrictions in the AI data market, and that it plans to examine whether domestic and foreign big tech companies monopolise data or control the market by blocking competitors’ access to data. Based on the survey, the KFTC will publish a policy report on the AI data market around October 2025, focusing on competition restrictions that may arise in the process of collecting and utilising data essential for AI model training.

There is no separate cybersecurity legislation for AI yet. Therefore, cybersecurity measures should be taken in accordance with the Act on Promotion of Information and Communications Network Utilisation and Information Protection, which is generally applicable to online businesses. In addition, it is necessary to refer to the guidelines published by the AI Safety Institute.

The use of AI in corporate ESG assessments, carbon reduction, and social contribution activities is on the rise. Companies such as Sustinvest, SK Hynix, Samsung SDS, SK Telecom, and KT are using AI technology to identify and analyse corporate ESG status, improve electricity use efficiency, track logistics carbon emissions, care for the elderly and dementia patients, and train medical technology in developing countries.

In addition, the Ministry of Environment has completed the 2024 Smart Ecological Factory Construction Project, a government-led eco-friendly factory transformation project that helps small and medium-sized enterprises reduce greenhouse gases and pollutants and improve energy and resource efficiency. Through the project, energy-saving equipment, factory energy management systems (FEMS) and ICT-based monitoring systems were introduced, dramatically reducing energy usage and lowering both greenhouse gas emissions and factory operating costs.

The AI Framework Act categorises AI used in the energy, healthcare and nuclear sectors as high-impact AI and imposes certain obligations on its operators.

AI Best Practice Compliance Strategies

Article 34 of the AI Framework Act requires business operators related to high-impact AI to establish and operate a risk management plan. In this regard, measures such as setting up an AI risk management organisation and a private, autonomous AI ethics committee can be considered. Furthermore, as the Financial Services Commission provides the most specific guidelines, their content may also serve as a model for other government agencies. The same article also requires

  • the establishment and implementation of a plan to explain the final results derived by AI and the main criteria used for derivation (as well as an overview of training data);
  • the establishment and operation of user protection measures, and the human management and supervision of high-impact AI; and
  • the creation and storage of documentation of measures to ensure safety and reliability.

Guidelines including specific standards and examples will be established in this regard. Companies may also consider measures in the style of the EU’s AI legislation, such as documentation obligations to secure accountability as proposed by the European Commission, an AI governance system and ex-post verification procedures for AI services.

In addition, since ISO/IEC 42001 can serve as an important standard, it is recommended to consider obtaining ISO/IEC 42001 certification or building an internal system at a comparable level.

Bae, Kim & Lee LLC

Centropolis B
26 Ujeongguk-ro
Jongno-gu
Seoul 03161
South Korea

+82 2 3404 0000

+82 2 3404 0001

bkl@bkl.co.kr www.bkl.co.kr/law?lang=en

Trends and Developments


Authors



Lee & Ko was established in 1977 and has since evolved into a leading full-service law firm in South Korea, recognised for its excellence in various legal domains. The firm boasts a strong reputation for client satisfaction and quality services. Lee & Ko is known for delivering timely, practical solutions in complex legal matters. The firm’s in-house resources include attorneys, accountants, patent agents, former government officials, and other specialists, ensuring clients can access a wide range of services cost-effectively. Further, Lee & Ko maintains an extensive global network and collaborates with international law firms allowing it to assist clients not only in Korean legal matters but also in cross-border transactions. This makes the firm a sought-after choice for clients requiring comprehensive and efficient legal assistance. Lee & Ko has recently launched a Tech & AI Team to cater to its clients’ growing legal needs in the AI space, equipped with lawyers who are deeply knowledgeable about both the law and the technology.

Introduction

Artificial intelligence (AI) is on the verge of becoming a truly ubiquitous fixture in our daily lives. A significant portion of the population interacts with AI on a daily basis to manage various tasks. With substantial investments pouring into AI-related industries and governments competing to secure a leading position in this global race, the world is beginning to explore ways to foster innovation while mitigating its potential drawbacks. As with any emerging technology, the rise of AI inevitably invites legal challenges. This article examines the latest developments and trends in the legislative and regulatory frameworks governing AI in South Korea, alongside market insights into current and future applications of AI across both private and public sectors, with a focus on key legal considerations.

AI Litigation

The global trend of escalating lawsuits concerning AI and intellectual property rights has now extended to South Korea. In early 2025, major terrestrial broadcasters filed lawsuits against Naver, a domestic IT giant and the country’s largest internet portal service, over its AI service, thereby intensifying disputes regarding the legality of AI training practices. Concurrently, debates surrounding the legal status of AI as a creator or an inventor are also approaching a decisive court ruling with the Supreme Court expected to render its decision on the question soon.

Dispute over use of news content in AI training

Domestic media outlets have persistently raised concerns regarding the extensive use of their news content by both domestic and international big tech companies for AI training without obtaining proper authorisation or offering adequate compensation. Against this backdrop, South Korea’s three leading terrestrial broadcasters filed a lawsuit against Naver on 13 January 2025, seeking an injunction and damages for copyright infringement under the Copyright Act and unfair competition under the Unfair Competition Prevention Act (UCPA). The complaint alleges that Naver utilised their news content to train its generative AI systems without prior consent.

This lawsuit represents the first legal action in South Korea addressing copyright infringement in the context of AI training and marks a precedent-setting case concerning the use of news content by AI companies. The court’s ruling is anticipated to significantly influence future legislative frameworks and the development and operation of domestic AI services.

The Korean lawsuit comes on the heels of a recent US district court decision in Thomson Reuters Enterprise Centre GmbH v Ross Intelligence Inc. in February 2025. In that case, Ross Intelligence, an AI-based legal search engine, was found to have violated copyright law by using Thomson Reuters’s Westlaw headnotes for AI training without authorisation. The court rejected Ross’s fair use defence. This decision, as the first major ruling on the interplay between data training for AI and copyright law’s fair use doctrine, is expected to influence the Korean lawsuit. Major industry players and stakeholders are closely monitoring the outcome of the Korean lawsuit, as the ruling is likely to shape not only domestic copyright jurisprudence but also broader discussions on the two competing interests – that is, balancing innovation in AI with intellectual property protections for authors and inventors.

In addition, the Korean Association of Newspapers (KAN) has accused Naver of utilising its members’ news content for AI training without permission, asserting that such unauthorised use constitutes both copyright infringement and an abuse of dominant market position in violation of the Monopoly Regulation and Fair Trade Act. In response, KAN intends to file a petition with the Korea Fair Trade Commission (KFTC). Furthermore, KAN claims that foreign AI companies, including OpenAI and Google, are similarly using news content without permission and has announced plans to gradually file additional petitions against these companies.

Whether AI may qualify as inventor or author

Like other jurisdictions, Korea maintains the position that an AI cannot be recognised as an inventor under the Patent Act or as an author within the meaning of the Copyright Act.

In 2020, Dr Stephen Thaler filed two patent applications naming DABUS, an AI system he developed, as the inventor. However, the Korean Intellectual Property Office (KIPO), the national patent authority, rejected the applications. The KIPO decision was upheld by the court of first instance, prompting Dr Thaler to appeal. In May 2024, the Seoul High Court, sitting as the appellate court, dismissed Dr Thaler’s appeal, holding that an AI, as opposed to a human, cannot be designated as an inventor in patent applications.

The court emphasised that recognising AI as an inventor lies outside the permissible boundaries of legal interpretation under the current text of the Patent Act. Nevertheless, it acknowledged that if AI-generated outputs warrant protection as inventions in the future, such recognition would require legislative amendments guided by adequate societal discussion and deliberation. This ruling is important in that it reflects, to a certain extent, judicial awareness of the necessity for a revised legislative framework capable of addressing the challenges posed by rapidly evolving technological advances.

Legislation and Regulation

Following a series of legislative discussions that had remained within the confines of individual laws, in 2024, the National Assembly of Korea passed the Basic Act on Development of Artificial Intelligence and Establishment of Foundation for Trustworthiness (the “AI Basic Act”) during the plenary session on 26 December 2024. Promulgated on 21 January 2025, the AI Basic Act positions South Korea as the second jurisdiction globally – after the European Union – to enact comprehensive AI legislation. The AI Basic Act will enter into force on 22 January 2026, one year after its promulgation.

The AI Basic Act aims to achieve various policy goals through establishing a systematic governance structure that can manage and supervise AI as well as devising support measures to systematically foster the development of the AI industry. More specifically, the government-led AI plan will be rolled out, and the related committees will be set up so that AI technologies and related industries can be properly supported.

Furthermore, the AI Basic Act sets forth various obligations imposed on AI business operators, primarily in relation to what it defines as “high-impact” AI and generative AI. The major obligations include, among others, the following.

  • High-impact AI and generative AI operators must ensure transparency by notifying users in advance that their products or services are operated based on AI. Additionally, overseas AI operators that meet certain thresholds are required to designate a domestic agent.
  • High-impact AI operators are obligated to establish and operate risk management plans, AI explanation mechanisms and user protection measures to ensure the safety and reliability of high-impact AI systems.
  • Generative AI operators must clearly indicate that the outcomes have been generated by generative AI.

The AI Basic Act introduces several regulations centred on high-impact AI and generative AI while adopting a flexible approach to minimise regulatory burdens. This flexibility is reflected in provisions allowing companies to manage their own risks, requiring government reporting only after the fact, and delegating specific details to the Enforcement Decrees and subsequent guidelines.

There is also scepticism over the AI Basic Act. The scope of high-impact AI, the standards and procedures for fulfilling legally prescribed obligations and other critical aspects have been delegated to the Enforcement Decrees, raising concerns about the vagueness of the AI Basic Act and the uncertainty it creates in the business environment. Additionally, the requirements for exercising the fact-finding authority of the Ministry of Science and ICT (MSIT) are not clearly defined, giving rise to concerns that investigations could be triggered on insufficient grounds, such as anonymous complaints or reports. In response, the government plans to clarify these conditions in subordinate legislation to ensure that fact-finding is conducted objectively and is not influenced by private interests.

Ultimately, more specific regulatory details under the AI Basic Act will be finalised through enforcement regulations and guidelines. Therefore, it is crucial to closely monitor related policy developments and legislative trends to fully understand the implications for businesses operating in South Korea’s AI ecosystem.

Meanwhile, under Article 5 of the AI Basic Act, AI will be governed and regulated under the framework of the AI Basic Act unless other laws contain special, overriding provisions. However, concerns have been raised that the AI Basic Act did not fully consider possible conflicts with related laws, unlike the European Union, which undertook a legal harmonisation process that adjusted conflicting laws after investigating and reviewing 19 existing laws potentially in conflict with its AI legislation (in fields such as machinery, civil aviation security and medical devices). The Ministry of Government Legislation is conducting a full investigation into laws governing AI to identify overlapping or conflicting provisions, and the review is expected to resolve the ambiguities of individual regulations.

Prior to the enforcement of the AI Basic Act, individual laws and guidelines will continue to be in full effect and force. The following new updates are noteworthy from a privacy perspective.

  • On 9 September 2024, the Personal Information Protection Commission (PIPC) issued a guideline on the rights of information subjects regarding decisions made through processing personal information with a “fully automated system”, including systems that apply AI technology under the Personal Information Protection Act (PIPA). The guideline details what personal information processors must do in relation to the exercise of their rights by information subjects, the scope of automated decisions and the disclosure of the criteria and procedures for such decisions and real-life cases. It is expected to be a major guideline for companies operating automated decision systems, as it clearly presents practical standards and examples of the scope of automated decisions, the rights of data subjects and the obligations of personal information processors.
  • In July 2024, the PIPC published a guideline on handling disclosed personal information for AI development and services, explaining the legal basis for lawfully utilising disclosed personal information for AI development and services and various safety measures that can be implemented in the process of AI development. The guideline provides guidance to AI operators on how to comply with PIPA during the training and servicing of AI by informing them that disclosed personal information can be utilised for AI training and service development based on the so-called “legitimate interest” provision under PIPA, and providing specific standards and real-life examples of each requirement.

AI Use in Private and Public Sectors

The emergence of generative AI technologies, such as ChatGPT, has catalysed rapid growth in the AI market, with global investments and adoption accelerating across industries. Recently, the unveiling of DeepSeek has intensified international competition, as countries strive to establish leadership in the AI sector. Amid this competitive landscape, domestic companies are dedicating significant resources to developing AI products and services capable of competing in the global market. Reflecting this surge in innovation, AI is increasingly being implemented across diverse industries and sectors, driving transformative changes in workflows and business operations. Some of the notable use cases are outlined below.

  • Finance: AI is being actively utilised in the domestic finance industry to provide customer services, such as chatbots and robo-advisers. In terms of business operations, AI is being put to use in areas of risk management, regulatory compliance (eg, fraud and anti-money laundering detection systems, credit scoring). AI is being introduced from the front office to the back office to improve operational efficiency and reduce costs.
  • Manufacturing: In the manufacturing industry, AI was previously applied to specific, distinct tasks, such as software development, issue reporting and visual inspection, but the use has been expanded to other areas, including supporting strategic decision-making in the overall production process and providing real-time production planning monitoring, error identification, process optimisation and quality control. Samsung Electronics is considering applying AI to all processes in the semiconductor industry, and is utilising its own solution, Gauss (a text, code, and image generation tool) for daily tasks, such as email drafting, document summary, and translation, while Hyundai Motor Company is also integrating advanced AI technologies into its manufacturing system.
  • Tourism: In the tourism industry, AI is being used for booking destinations, flights and hotels based on big data analysis, suggesting customised travel itineraries through real-time information exploration. Chatbot functions that support travel experiences in real time (eg, information updates, translations) are growing in popularity. In the future, it is expected that designing personalised travel services by collecting customer data and analysing customer travel patterns and preferences will be possible.
  • Content industry: In the content industry, AI is revolutionising the entire production environment, including automatic editing, colour correction, sound processing and special effects generation. In the webtoon (web cartoon) field, AI is being actively applied to assist webtoon creators with creation and production, provide optimised curation, and to monitor illegal distribution and piracy, while in the gaming field, AI is being used for interactive user-content interactions.
  • Legal services: Many law firms are investing in building specialised legal AI systems, and many lawyers subscribe to AI legal service products on the market. AI that finds, translates, summarises and even drafts legal documents based on identified precedents and literature is already a reality. However, AI is also becoming a source of conflict between lawyers and legal tech companies, as the Korean Bar Association recently disciplined a law firm for violating the Attorneys-at-Law Act by introducing a free AI chatbot that renders legal advice.

Recently, the adoption of AI in the public sector has been growing in South Korea, as well. The government aims to actively foster the development of AI technologies and related industries. Under such policy direction, the diffusion of AI technologies across public institutions is being spurred through initiatives, including the establishment of the Digital Platform Government and the National Artificial Intelligence Committee. Various ministries are actively formulating AI utilisation plans to achieve these objectives.

Specifically, the Ministry of Justice is advancing several AI-related projects, including the development of a next-generation immigration control system utilising AI, the introduction of intelligent criminal case processing systems and the creation of AI-based electronic supervisory technologies and systems. The Ministry of Trade, Industry, and Energy is focusing on transforming manufacturing into AI-driven autonomous production systems while encouraging businesses to adopt AI. Meanwhile, the Ministry of Health and Welfare has developed a Medical Artificial Intelligence Research and Development Roadmap, aiming to enhance healthcare service quality and operational efficiency through AI technologies and support advanced medical technology and drug development.

The judiciary has also been proactive in integrating AI into its operational structure. It recently published guidelines on AI Utilisation in Judicial Affairs, which aim to streamline judicial administrative tasks, reduce repetitive processes and optimise resource allocation. These efforts include developing systems that improve access to justice for disabled individuals and implementing chatbot-style systems to guide citizens through legal procedures. Such initiatives are expected to significantly enhance judicial accessibility for the public.

In conclusion, South Korea’s public sector is actively embracing AI technologies across various domains to improve efficiency, foster innovation and address societal challenges.

Outlook

South Korea continues to demonstrate remarkable dynamism in AI development and commercialisation, building a large-scale AI ecosystem that rivals those of the United States, China and the United Kingdom. The government is actively bolstering this growth by providing substantial support to the private sector, aiming to enhance industrial competitiveness and solidify the country’s position as a global AI leader. Under the framework of the AI Basic Act, government ministries and regulators are devising and implementing various strategies to sustain and accelerate this upward trajectory.

On the legal front, South Korea is witnessing its first test case concerning the boundaries of AI training, reflecting global trends in litigation over AI-related issues. This marks a critical juncture for clarifying legal uncertainties surrounding AI technologies. Meanwhile, private-sector companies must remain vigilant about ethical considerations and legal compliance, ensuring that the pursuit of innovation does not come at the expense of societal trust or accountability.

Lee & Ko

Hanjin Building, 63 Namdaemun-ro
Jung-gu
Seoul 04532
South Korea

+82 2 772 4000

+82 2 772 4001 2

mail@leeko.com www.leeko.com

