Contributed By Bae, Kim & Lee LLC
Before the promulgation of the Framework Act on the Development of AI and the Establishment of a Foundation for Trust, etc. (the “AI Framework Act”), the discourse around AI in South Korea (“Korea”) mainly focused on two areas:
However, in January 2025, the AI Framework Act was enacted (to be enforced starting 22 January 2026), and debate now focuses on the scope of the Act’s application.
AI and machine learning are leading innovation in various industries including the medical, financial and manufacturing sectors, and their influence continues to expand. For example, financial institutions are applying AI in the areas of customer service, asset management and investment advice.
AI technology has demonstrated its prowess across various everyday situations, including providing personalised services by analysing consumer data through machine learning, improving business response times through business process automation, and operating chatbots based on generative AI. In addition, financial institutions are using AI to improve the efficiency of their employees’ work, such as evaluating customer credit. Platform companies are also providing optimised user experiences by delivering customised advertising based on users’ search records and improving internet search engine accuracy through AI.
The Presidential Committee on AI announced the following policy directives on 26 September 2024, for the development and promotion of AI innovation:
Also, the Presidential Committee on AI published detailed policies for the directives above on 20 February 2025.
Furthermore, regarding investment in AI, currently under the Act on Restriction on Special Cases Concerning Taxation, AI-related technologies have been designated as new growth source technologies, and tax credits have been granted for investment in research and development activities concentrating on AI. However, moving forward, an amendment is being discussed to designate AI technology as national strategic technology and increase tax benefits for the technology.
Korea has recently taken significant steps toward establishing a comprehensive regulatory framework. In particular, the National Assembly has enacted the AI Framework Act, which serves as the country’s foundational legislation for AI governance. Further details of the Act are provided in 3.2 Jurisdictional Law.
On 26 December 2024, the AI Framework Act was passed into law by the National Assembly, and on 22 January 2025, the AI Framework Act was promulgated. The new Act will take effect on 22 January 2026. The AI Framework Act is composed of three main sections:
While the AI Framework Act has been enacted as described in 3.2 Jurisdictional Law, subordinate statutes and relevant guidelines and notifications have not been published.
In the meantime, government departments have issued the following guidelines:
This is not applicable in Korea.
This is not applicable in Korea.
This is not applicable in Korea.
The Personal Information Protection Act has been amended to introduce the data subject’s right not to be subject to an entirely automated decision, similar to the automated decision-making right under the EU’s GDPR; the amendment became effective on 15 March 2024.
In addition, the amended Personal Information Protection Act includes provisions for individuals to request explanations or human review of automated decisions, as well as the ability to reject such decisions if they materially affect their rights and obligations as data subjects.
Furthermore, the amended Personal Information Protection Act has been designed to secure transparency and enhance credibility in the processing of personal information by mandating disclosure of the criteria and procedures for automated decisions in advance.
Representative Min Byung-deok has proposed a bill that would allow the use of personal information for AI learning in cases where no existing legal basis applies, provided that risk factors are assessed and appropriate safety measures are taken, subject to the approval of the Personal Information Protection Commission.
No significant precedents in this area have been found.
The main regulator for the AI Framework Act is the MSIT, and the Presidential Committee on AI is also relevant. The Presidential Committee on AI reviews and takes decisions regarding the government plan, strategic investment, and other government actions on AI.
Setting aside the government agencies that are responsible for drafting AI regulatory policies, the PIPC is the most active government agency in regulating AI-related issues.
Please refer to 3.3 Jurisdictional Directives.
In 2024, the PIPC listed precautions to be taken when using publicly available personal information. This was released after the Commission conducted an inspection on AI service providers in 2023.
The Korea Fair Trade Commission has investigated the business practices of mobility and advertising business operators from the perspective of the fairness of algorithms.
The AI Framework Act provides for the establishment of the AI Safety Institute. The AI Safety Institute is responsible for defining and analysing AI-related risks, providing criteria for evaluating them, and researching technologies and standardisation for AI safety.
Additionally, the Telecommunications Technology Association (TTA), an affiliated agency of the Korea Communications Commission, issued an artificial intelligence development guide in 2023.
The Financial Security Institute, an affiliated agency of the Financial Services Commission, has published AI security guidelines.
Generally, the Korean Standards Association plays an important role in adopting international standards.
The Presidential Committee on Digital Platform Government has presented a draft policy that proposes the use of AI in various sectors of society. The draft policy is based on the government’s plan for realising digital platform government, which is one of the government’s policy objectives. This includes:
Under the above plan, AI services have been introduced and used to predict wildfires and floods in 2024.
Facial recognition technology has been in use in the immigration process and has simplified the process significantly.
Since June 2023, the Gyeonggi municipal government has been implementing an “AI chat service”, an active welfare service in which an AI counsellor makes a phone call once a week to elderly persons (65 and older) who are in need of care in the area. The purpose of the call is to check in on the elderly by engaging in conversation and monitoring their health and daily life. The person in charge makes a phone call directly or visits the residence if the call goes unanswered three or more times. Increasing numbers of local governments are adopting these AI chat services.
There have been no judicial decisions related to government use of AI.
Since January 2012, the Republic of Korea Armed Forces have been developing an AI model to predict the demand for repair parts for each of the approximately 30,000 types of equipment in operation in the military, having established a team to analyse the demand for repair parts within the Korea Institute for Defense Analyses.
In addition, the Republic of Korea Armed Forces plan to introduce AI technology prioritising defence logistics, such as AI-based smart maintenance, smart factories, and smart warehouses.
In addition, on 1 April 2024, the Ministry of National Defence established an AI centre for national defence to carry out President Yoon’s government project of “cultivating AI science and technology forces”.
Since the emergence of generative AI, there has been controversy over the protection of intellectual property and personal information. In particular, lawsuits have been filed over the use of news articles as training data for generative AI, as discussed in 15.1 IP and Generative AI.
On the other hand, the AI Framework Act imposes the following obligations regarding generative AI:
The PIPC issued guidelines on the use of publicly available personal information for the development of AI. The Commission believes that the data subject’s right to deletion and to rectification must be protected with regard to AI. Moreover, the Personal Information Protection Act has been amended to introduce the data subject’s right to object to an automated decision.
The use of AI in private enterprises has been quite limited so far. Only a few companies or law firms have commenced, or plan to commence, using AI to provide legal services (other than translation).
However, a medium-sized law firm in Korea introduced, on 20 March 2024, an AI-based legal counselling chatbot service in collaboration with Naver Cloud and Nexus AI (a legal tech venture company). In the wider legal tech space, Allibee by BHSN, SuperLawyer by Law and Co., LBOX AI, etc, are providing legal solutions in software-as-a-service form.
The Ministry of Justice, Prosecutors’ Office, and National Police Agency opened the Next-Generation Criminal Justice Information System (KICS) on 19 September 2024. KICS aims to fully digitalise the criminal justice process, expand online and non-face-to-face services through technological innovation, and completely reorganise the ageing existing system. Using this service, the Prosecutors’ Office anticipates reducing its workload by using AI to search for similar cases, summarise investigation information, obtain sentencing recommendations during document drafting, extract key information from evidence, generate relevant questions for the investigation, identify missing information, and transcribe conversations.
The court officially opened the next-generation e-litigation system on 31 January 2025, to revolutionise judicial affairs and judicial information disclosure by completely overhauling the existing e-litigation system. A Litigation Procedure Guidance Chatbot was introduced, which uses AI to guide litigants through the litigation process 24 hours a day, along with a service that allows users to submit resident registration certificates and corporate registration certificates (which previously had to be issued separately for submission in e-litigation) through electronic linkage methods such as mobile phones. An e-litigation portal and e-depository service are also available.
There is no settled debate yet on liability for damage resulting from AI-enabled technologies. If an AI service provider stipulates in its terms and conditions that the consumer is ultimately responsible for the decision to use the AI and the resulting consequences, the fairness of such terms and conditions may be assessed by the relevant government authorities, such as the Korea Fair Trade Commission.
The AI Framework Act was passed on 26 December 2024, and it imposes transparency obligations (Article 31) for high-impact and generative AI, as well as additional obligations specifically for high-impact AI (Article 34). Failure to comply with administrative orders to rectify violations of these transparency or high-impact AI requirements may result in administrative fines of up to KRW30 million (Article 43).
However, the current AI Framework Act does not include specific regulations on liability and compensation in the event of damage caused by AI. Therefore, this issue is expected to be resolved through related laws such as the existing Product Liability Act.
The various guidelines outlined in 3.3 Jurisdictional Directives provide guidance that fairness should be maintained in the development and use of AI. In addition, the Financial Services Commission’s guide stipulates that a fairness metric must be established in the process of developing and using an AI technology to assess and maintain fairness.
The guidelines for AI use in the financial sector are being amended to ensure systematic consistency with other overlapping guidelines and to address characteristics of generative AI such as bias and hallucination. In addition, the Guidelines for the Development of Reliable AI, published by the MSIT and the Telecommunications Technology Association (TTA) in March 2024, suggest that measures be taken to eliminate bias in collected and processed data and to eliminate bias in AI models.
The Ministry of Justice has streamlined the immigration process using facial recognition technology and is using it to automate the identification and tracking of domestic and foreign citizens during immigration screening, while the Ministry of the Interior and Safety has introduced a facial recognition-based access system for government buildings.
Meanwhile, the National Human Rights Commission of Korea has issued a recommendation on the use of biometric information for employees’ attendance management, which is increasingly being adopted by private enterprises.
The AI Framework Act classifies AI systems used to analyse and use biometric information for criminal investigation and arrest as high-impact AI and imposes certain obligations on businesses that provide such products and services.
The amended Personal Information Protection Act ensures the data subject’s right to object to an automated decision.
The guidelines of each agency highlight transparency as a principle in developing and using AI. The most detailed content on this subject can be found in the PIPC’s guideline on automated decision-making under the amended Personal Information Protection Act.
Although the Commission finds it unnecessary to explain the specific operation of the algorithm to the data subject, it requires disclosure of the individual variables used by the AI.
The AI Framework Act imposes transparency obligations by stipulating prior notification obligations for high-impact or generative AI, display obligations for generative AI output, and notification and display obligations for deepfake content.
As explained in 7.3 National Security, AI systems for demand forecasting are being introduced in the procurement sector (specifically, military procurement).
Article 35 of the AI Framework Act does not impose an obligation on businesses to conduct impact assessments, but it provides incentives for operators to conduct impact assessments by stipulating that “when a national organisation or other entity intends to use a product or service using high-impact AI, it shall give priority to the product or service that has undergone impact assessment”.
The AI Framework Act, to be enforced starting 22 January 2026, categorises AI used in hiring as high-impact (Article 2(4)). High-impact AI deployers are required to give prior notice to users (Article 31(1)). In addition, to ensure the safety and reliability of high-impact AI, per Article 34(1), a business that uses AI for recruitment must:
Furthermore, any use of AI in hiring must be preceded by efforts to make a prior impact assessment on the basic rights of people (Article 35(1)). The specifics of these requirements will be provided in the upcoming Enforcement Decree.
Many companies have adopted attendance management systems using biometric authentication, but there has been little discussion on the introduction of AI in employee performance evaluation. The AI Framework Act defines high-impact AI to include AI used for judgments or evaluations that significantly affect individuals’ rights and obligations, such as recruitment and loan screening, and leaves it to the Enforcement Decree to determine what else falls under high-impact AI. It is therefore not yet clear whether employee evaluation and monitoring will qualify as high-impact AI.
Platform companies are making significant use of recommendation algorithms that use AI. They are most commonly used to provide personalised services based on the user’s behavioural information.
Financial companies are actively utilising or trying to utilise AI in providing customer consulting and support services, calculating credit ratings, designing insurance products, managing assets and risks, and detecting abnormal transactions and money laundering.
In particular, as chatbot services become more sophisticated with the advances made by generative AI, many financial companies are providing customer consulting and support services using chatbots, and AI is increasingly being used for asset management and personalised marketing purposes.
As the use of AI increases, the risks for financial institutions are also increasing. For instance, as the number of investment product transactions using AI increases, there is a possibility that a large number of unintended orders are placed all at once due to algorithm errors, which will increase market volatility. In addition, there is a possibility that financial companies may sell products that are not suitable for customers or fail to properly perform their obligations to explain while utilising AI for product recommendation.
The AI Framework Act defines AI used for loan screening as high-impact AI (Article 2(4)). Therefore, the use of AI for loan screening is subject to regulations related to high-impact AI (see Chapter 13).
Also, the Korean financial supervisory authorities have announced AI guidelines (and AI security guidelines) in the financial sector to ensure that financial companies using AI technology protect financial consumers’ rights and take responsibility for their services.
In particular, the AI guidelines in the financial sector require financial companies to prevent unreasonable discrimination against consumers. Accordingly, financial companies should establish fairness standards based on the characteristics of services and conduct evaluations based on certain standards to prevent the possibility of unexpected discrimination that may occur due to AI-enabled services.
Big data analytics platforms based on medical imaging data are gaining traction as an important trend in healthcare. Non-medical institutions must obtain data from medical institutions, but doing so presents many challenges: medical institutions tend to be cautious about providing medical data, and the area is heavily regulated. To resolve this issue, the government is proceeding with special legislation for healthcare data.
On the other hand, the AI Framework Act applies to AI used in:
The AI Framework Act imposes the obligations applicable to high-impact AI systems (Article 2(4)) on such devices. Please refer to 13. AI in Employment for further detail.
There is no general regulation governing the use of AI in autonomous vehicles; however, certain laws prescribe matters relating to them. First, with respect to liability in the event of an accident, the Compulsory Motor Vehicle Liability Security Act maintains the existing driver’s liability while providing for reimbursement claims against the manufacturer in the case of a vehicle defect, and establishes an accident investigation committee to examine the autonomous driving data recording device affixed to the autonomous vehicle. Meanwhile, the Rules on Safety and Performance of Motor Vehicles and Motor Vehicle Components (Motor Vehicle Safety Standards), sub-regulations of the Motor Vehicle Management Act, set safety standards for Level 3 autonomous driving systems.
Autonomous vehicles present a number of data privacy issues, including the use of video footage captured by autonomous AI driving devices while driving. Although it has been necessary to use mosaic-processed (pseudonymised) video data so that no individual can be identified, even when developing autonomous driving technology, the PIPC has prepared a measure permitting the use of non-mosaicked original video through a regulatory sandbox, and several companies have accordingly applied to the sandbox for the development of autonomous driving AI.
To date, there is no law regulating the use of AI in the manufacturing sector. Provided, however, that the Ministry of Trade, Industry and Energy has commenced the establishment of a master plan for AI autonomous manufacturing to drive innovation in the manufacturing process and enhance productivity. The Ministry has also stated that it will
Meanwhile, the MSIT announced a plan in 2024 to prepare a basic law on artificial intelligence, and a bill on fostering the artificial intelligence industry and creating a foundation of trust was proposed in the National Assembly (subsequently enacted as the AI Framework Act).
It is also worth noting that the above bill permits the launch of artificial intelligence technology, artificial intelligence products (products using artificial intelligence technology), or artificial intelligence services in principle, while prescribing the principles of priority permission and ex-post regulation, under which such launches may be limited if they harm the lives, safety, rights, or interests of citizens, or significantly disrupt public safety, the maintenance of public order, or the promotion of welfare.
In the accounting, tax, and legal markets, individual companies are conducting a review on the use of AI for the analysis of contracts or financial statements.
However, in such professional services, firms cannot provide relevant data to large-scale large language models due to client confidentiality concerns; because they must instead rely on small language models, progress has been sluggish.
If the training data used for AI learning consists of works produced by others, such works are subject to copyright protection. Unless the copyright holder approves the use of the work in the model learning process, there is a risk of copyright infringement. In a related case, South Korea’s three over-the-air broadcasting companies filed a lawsuit against NAVER on 13 January 2025, alleging copyright infringement, claiming that NAVER used news articles for learning without permission when developing HyperClova and HyperClovaX, its generative AI services (Seoul Central District Court 2025 Gahap 5105).
If a prompt is entered for the creation of an AI product, that prompt may be recognised as a creative expression per se and may accordingly be subject to copyright protection as a type of literary work. On this point, the courts will examine the specific prompts of each case.
Under the current legal framework, the end result of an AI’s work is unlikely to be recognised as a work of authorship, which makes it difficult to protect under copyright law. However, if an AI-created product is substantially similar to another person’s work, copyright infringement may be recognised.
An AI model can be protected through patents based on its novelty and inventive step. The source code realising the AI model can be protected as a computer program work. If an AI tool provider restricts the input methods for using the generative AI tools in question, and also restricts the method of using the output through its service terms and conditions, in order to prevent infringement of intellectual property rights in the course of using the AI services, a user who fails to comply with such restrictions may be held liable for breach of the terms and conditions.
On 30 June 2023, in a lawsuit filed by Stephen Thaler, an AI developer in the United States, as part of the so-called DABUS project to seek recognition of AI as an inventor, the Seoul Administrative Court ruled that “invention” under Article 2(1) of the Patent Act refers to the highly advanced creation of a technical idea using the laws of nature and that such a technical idea presupposes human reasoning, and therefore, under the current laws, AI cannot be recognised to have the legal capacity to “invent”. The appellate court also ruled that the inclusion of AI as an inventor under the current provisions of the Patent Act is beyond the limits of legitimate legal interpretation, and that if there are objects that should be protected as AI inventions in the future, this should be achieved through legislation that takes public discourse into account (Seoul High Court 2023Nu52088). The case is currently under appeal (Supreme Court 2024Du45177).
In addition, the Copyright Act defines “work” as a creative production that expresses human thoughts and emotions (Article 2(1) of the Copyright Act) and “author” as “a person who creates a work” (Article 2(2) of the Copyright Act). The Ministry of Culture, Sports and Tourism stated in the Generative AI Copyright Guide issued on 27 December 2023 that, under the current laws, an AI cannot be recognised as an author.
Under Korean laws, a trade secret refers to information, including a production method, sales method, or useful technical or business information for business activities, which is not known publicly, is managed as a secret, and has independent economic value (Article 2(2) of the Unfair Competition Prevention and Trade Secret Protection Act). An act of acquiring trade secrets or using or disclosing trade secrets improperly acquired, with knowledge of the fact that an act of improper acquisition of the trade secrets has occurred, or without such knowledge due to gross negligence, constitutes infringement of trade secrets (Article 2(3)(b) of the above Act). Therefore, if any data considered as trade secrets of another person is collected without permission and used for AI learning, trade secret infringement issues may arise.
Meanwhile, if any technical data, such as the source code of any AI model created for AI services, is kept confidential and not disclosed to others, such data can be protected as trade secrets.
Under the current law, any product created by generative AI itself is not recognised as a work of authorship.
Any product created by OpenAI’s generative AI is not protected by copyrights or patents. However, if any product created by OpenAI’s generative AI is substantially similar to any existing work, such product may infringe on the copyrights of others.
On 17 December 2024, the Korea Fair Trade Commission (KFTC) published its report, Generative AI and Competition. The purpose of the report is to systematically analyse the structure and competition status of the generative AI market, examine competition and consumer issues that may arise in the process, and propose future initiatives.
In this report, the KFTC categorised the generative AI market into three stages: AI infrastructure, AI development and AI implementation. The report presented structural factors affecting market competition in light of the characteristics of the AI market: (i) capital- and technology-intensive industry, (ii) economies of scale and scope, and (iii) market pre-emption effects. In addition, the KFTC analysed the potential for competition in the generative AI market and concerns about harm to consumer interests from the perspectives of unilateral conduct, business combinations, and consumer interests. The KFTC also mentioned its future plans to consider improving the system to regulate consumer interest infringement related to data collection and use.
Following the publication of the above report, the KFTC announced that it will conduct a survey of major domestic and foreign companies in March 2025 to examine competition restrictions in the AI data market, and it plans to examine whether domestic and foreign big tech companies monopolise data or control the market by blocking competitors’ access to data. Based on the above survey, the KFTC will publish a policy report on the AI data market around October 2025, focusing on analysing competition restrictions that may occur in the process of collecting and utilising data essential for AI model training.
There is no separate cybersecurity legislation for AI yet. Therefore, cybersecurity measures should be taken in accordance with the Act on Promotion of Information and Communications Network Utilisation and Information Protection, which is generally applicable to online businesses. In addition, it is necessary to refer to the guidelines published by the AI Safety Institute.
The use of AI in corporate ESG assessments, carbon reduction, and social contribution activities is on the rise. Companies such as Sustinvest, SK Hynix, Samsung SDS, SK Telecom, and KT are using AI technology to identify and analyse corporate ESG status, improve electricity use efficiency, track logistics carbon emissions, care for the elderly and dementia patients, and train medical technology in developing countries.
In addition, the Ministry of Environment has completed the 2024 Smart Ecological Factory Construction Project, a government-led eco-friendly factory transformation project to help small and medium-sized enterprises reduce greenhouse gases and pollutants and improve energy and resource efficiency. Through the project, energy-saving equipment, factory energy management systems (FEMS), and monitoring systems (ICT) were introduced to dramatically reduce energy usage, resulting in reductions in both greenhouse gas emissions and factory operating costs.
The AI Framework Act categorises AI used in the energy, healthcare, and nuclear sectors as high-impact and imposes certain obligations.
AI Best Practice Compliance Strategies
Article 34 of the AI Framework Act requires the establishment and operation of a risk management plan as a responsibility of business operators related to high-impact AI. In this regard, measures such as establishing an AI risk management organisation and a private autonomous AI ethics committee can be considered. Furthermore, as the Financial Services Commission provides the most specific guidelines, their content may also serve as a model for other government agencies. The same article requires that guidelines including specific standards and examples be established in this regard. It is also possible to consider the introduction of an EU-style AI law, documentation obligations to secure accountability as proposed by the Commission, an artificial intelligence governance system, and ex-post verification procedures for AI services.
In addition, since ISO 42001 can be an important standard, it is recommended to consider obtaining ISO 42001 certification or building an internal system at a level similar thereto.
Centropolis B
26 Ujeongguk-ro
Jongno-gu
Seoul 03161
South Korea
+82 2 3404 0000
+82 2 3404 0001
bkl@bkl.co.kr
www.bkl.co.kr/law?lang=en