The discourse over AI in South Korea (“Korea”) mainly revolves around two areas:
Discussions on AI in other areas remain relatively undeveloped.
AI and machine learning are leading innovation in various industries including the medical, financial and manufacturing sectors, and their influence continues to expand. For example, financial institutions are applying AI in the areas of customer service, asset management and investment advice.
AI technology has demonstrated its prowess across various everyday situations, including providing personalised services by analysing consumer data through machine learning, improving business response times through business process automation, and operating chatbots based on generative AI. In addition, financial institutions are using AI to improve the efficiency of their employees’ work, such as evaluating customer credit. Platform companies are also optimising user interfaces by delivering customised advertising based on users’ search records and improving internet search engine accuracy through AI.
Under the Act on Restriction on Special Cases Concerning Taxation, AI-related technologies have been designated as new growth source technologies, and tax credits have been granted for investment in research and development activities concentrating on AI.
Recently, discussions have begun to expand tax exceptions for AI services, including granting tax benefits to investment in facilities for the provision of AI services.
In Korea, there is ongoing discussion over whether it is necessary to enact any general regulatory legislation on AI, similar to the EU’s AI Act for example. There are proponents calling for legislation to address AI risks, while others argue that AI is still in the early stages of its development and that it is therefore too early for effective regulation to be passed.
No regulatory legislation specific to AI has been enacted.
In the absence of an AI-specific piece of legislation in Korea, the following non-binding guidelines have been promoted by regulators/government bodies:
This is not applicable in Korea.
This is not applicable in Korea.
This is not applicable in Korea.
The Personal Information Protection Act has been amended to introduce the data subject’s right not to be subject to an entirely automated decision, similar to the automated decision-making right under the EU’s GDPR; the amendment will become effective from 15 March 2024.
Furthermore, the amended Personal Information Protection Act includes provisions for individuals to request explanations or human review of automated decisions, as well as the ability to reject such decisions if they materially affect their rights and obligations as data subjects.
In addition, the amended Personal Information Protection Act is designed to secure transparency and enhance credibility in the processing of personal information by mandating advance disclosure of the criteria and procedures for automated decisions.
The MSIT and the KCC are currently preparing a basic law on AI and a user protection law. In parallel, the 22nd National Assembly, which will commence its session in June 2024, is expected to hold a comprehensive discussion on the regulatory issues of AI.
No significant precedents in this area have been found.
The Seoul Administrative Court’s 2022GuHap89524 judgment, dated 20 June 2023, states that “artificial intelligence (AI) is defined as a technology that realises the human brain functions, such as human perception, judgment, inference, problem-solving, and the language, behavioural instruction, and learning functions resulting therefrom, through a computer”.
The AI definition in the above judgment is broad and abstract, and both generative AI and predictive AI are within the scope of “realising human brain activities through a computer”. It is unclear at this stage whether the definition will separately affect or restrict generative AI and/or predictive AI.
The MSIT is trying to lead the way in regulating AI technologies. However, some argue that the Ministry is not well suited to supervising business enterprises through AI regulation, because the MSIT’s general remit tends to be friendly and supportive towards new businesses. Despite the controversy, as other agencies are not actively pursuing the enactment of general AI legislation, the MSIT is likely to continue to take the initiative for the time being.
Setting aside the government agencies that are responsible for drafting AI regulatory policies, the PIPC is the most active government agency in regulating AI-related issues.
There are no widely referenced definitions of AI used by regulators, as Korean regulators are still in the early stages of discussing AI regulation. The AI discussion in the 22nd National Assembly (mentioned in 3.7 Proposed AI-Specific Legislation and Regulations) is likely to include the definition of AI. A previous bill by lawmaker Cheol-Soo Ahn, which lapsed at the end of the 21st National Assembly session, describes AI as “software for the electronic realisation of human intellectual abilities such as learning, perception, judgment, and understanding of natural language” and defines the scope of AI broadly to ensure that generative AI is included.
All government agencies prioritise the protection of human dignity, consumer rights and privacy as their main regulatory objectives. However, each regulatory agency’s priorities may be different depending on its objectives. The Korea Communications Commission, the Financial Services Commission, and the PIPC focus on protecting the privacy of telecommunications users, financial consumers, and the general public, respectively.
In 2024, the PIPC listed precautions to be taken when using publicly available personal information, following its 2023 inspection of AI service providers. The results will be included in guidelines on the protection of personal information in the use of AI, scheduled to be published in 2024.
The Korea Fair Trade Commission has investigated the business practices of mobility and advertising business operators from the perspective of the fairness of algorithms.
The Telecommunication Technologies Association (TTA), an affiliated agency of the Korea Communications Commission, issued an artificial intelligence development guide in 2023.
The Financial Security Institute, an affiliated agency of the Financial Services Commission, has published AI security guidelines.
Generally, the Korean Standards Association plays an important role in adopting international standards.
The Presidential Committee of Digital Platform Government has presented a draft policy that suggests the use of AI in various sectors of society. This draft policy is based on the government’s plan for realising digital platform government, which is one of the government’s policy objectives. This includes:
Facial recognition technology has been in use in the immigration process and has simplified the process significantly.
Since June 2023, the Gyeonggi municipal government has been operating an “AI chat service”, an active welfare service in which an AI counsellor telephones elderly persons (65 and older) in need of care in the area once a week. The purpose of the call is to check in on the elderly by engaging in conversation and monitoring their health and daily life. If the call goes unanswered three or more times, the person in charge telephones directly or visits the person’s residence.
There have been no judicial decisions related to government use of AI.
Since January 2012, the Republic of Korea Armed Forces have been developing an AI model to predict the demand for repair parts for each of the approximately 30,000 types of equipment in operation in the military, having established a dedicated demand analysis team within the Korean Institute for Defence Analysis.
In addition, the Republic of Korea Armed Forces plan to introduce AI technology prioritising defence logistics, such as AI-based smart maintenance, smart factories, and smart warehouses.
In addition, on 1 April 2024, the Ministry of National Defence established an AI centre for national defence to carry out President Yoon’s government project of “cultivating AI science and technology forces”.
Since the emergence of generative AI, there has been controversy over the protection of intellectual property and personal information.
If the training data used for AI learning consists of works produced by others, such works are subject to copyright protection. Unless the copyright holder approves the use of the work in the model-training process, there is a risk of copyright infringement. If a prompt is entered for the creation of an AI product, that prompt may be recognised as a creative expression per se and may accordingly be subject to copyright protection as a type of literary work. On this point, the courts will look into the specific prompts of each case.
Under the current legal framework, the end result of an AI’s work is unlikely to be recognised as a work of authorship, making it difficult to protect under copyright law. However, if an AI-created product is substantially similar to the work of another person, copyright infringement may be recognised.
An AI model can be protected through patents on the basis of its novelty and inventive step, and the source code realising the AI model can be protected as a computer program work. If an AI tool provider restricts, through its service terms and conditions, the input methods for using the generative AI tools in question and the methods of using their output, in order to prevent infringement of intellectual property rights in the course of using the AI services, any user who fails to comply with such restrictions may be held liable for breach of the terms and conditions.
The PIPC is expected to issue guidelines on the use of publicly available personal information for the development of AI. The Commission believes that the data subject’s right to deletion and to rectification must be protected with regard to AI. Moreover, the Personal Information Protection Act has been amended to introduce the data subject’s right to object to an automated decision.
The use of AI in private enterprises has been quite limited so far. Only a few companies or law firms have commenced, or plan to commence, using AI to provide legal services (other than translation).
However, a medium-sized law firm in Korea introduced, on 20 March 2024, an AI-based legal counselling chatbot service in collaboration with Naver Cloud and Nexus AI (a legal tech venture company). Also, a legal tech company is preparing to launch an AI-based software-as-a-service (SaaS) called “SuperLawyer”, and LBOX is also developing a legal AI service, “LBOX AI”.
The Prosecutor’s Office and the Supreme Prosecutor’s Office are planning to develop a service that recommends court records of similar cases. As of the second half of 2024, it is being developed as part of the next-generation Korea Information System of Criminal Justice Services (KICS) project.
The Prosecutor’s Office anticipates that the service will reduce its workload by using AI to search for similar cases, summarise investigation information, recommend sentencing in document drafting, extract key information from evidence, generate relevant questions for investigations, identify missing information, and transcribe conversations.
The courts plan to apply an automatic judgment recommendation AI model within the next-generation electronic litigation system, which is scheduled to be launched in September 2024. The AI model has the function of recommending the ten most similar cases by analysing the complaints, briefs and memoranda filed in the cases assigned to the court.
There is no debate yet on liability for damages resulting from AI-enabled technologies. If an AI service provider stipulates in its terms and conditions that the consumer is ultimately responsible for the decision to use the AI and its consequences, the fairness of such terms and conditions may be assessed by the relevant government authorities, such as the Korea Fair Trade Commission.
The pending bill proposed by the lawmaker Cheol-Soo Ahn (see 3.7 Proposed AI-Specific Legislation and Regulations) specifies that if damages occur due to AI-enabled technologies that are categorised as high risk, and the company fails to perform its obligations in its use of the technology, in principle, the liability for damages shall be imposed on the company. However, the company may not be liable if such damage would have occurred even if the company had performed all its obligations.
The various guidelines outlined in 3.3 Jurisdictional Directives provide guidance that fairness should be maintained in the development and use of AI. In addition, the Financial Services Commission’s guide stipulates that a fairness indicator must be applied in the process of developing and using AI technology to assess and maintain fairness.
Financial institutions are preparing fairness indicators to assess the fairness of their AI-enabled services in accordance with the guidelines, and audits of those services will accordingly include a fairness test. Institutions in other sectors are not known to be taking any particular actions to ensure the fairness of their AI-enabled services.
The amended Personal Information Protection Act ensures the data subject’s right to object to an automated decision. In addition, the PIPC will publish guidelines on the elements that should be considered for the protection of personal information in the use of AI.
The Ministry of Justice has streamlined the immigration process using facial recognition technology. However, no other department has yet announced any plan to adopt facial recognition or biometric information for its services.
Meanwhile, the National Human Rights Commission of Korea has issued recommendations on the use of biometric information for employee attendance management, which is increasingly being adopted by private enterprises.
The amended Personal Information Protection Act ensures the data subject’s right to object to an automated decision.
The guidelines of each agency highlight transparency as a principle in developing and using AI. The most detailed content on this subject can be found in the PIPC’s guideline on automated decision-making under the amended Personal Information Protection Act.
Although the Commission finds it unnecessary to explain the specific operation of the algorithm to the data subject, it requires disclosure of the individual variables used in the AI system.
Although there have been theoretical discussions about the possibility of price discrimination or price fixing using AI technology, there has not been any specific investigation or regulation by the relevant authority, the Korea Fair Trade Commission. The Commission has initiated the work to publish a report on the AI market by the end of 2024, through which the Commission will examine whether there is any issue, such as algorithmic collusion or disadvantage for content providers.
As explained in 7.3 National Security, AI systems for demand forecasting are being introduced in the procurement sector (specifically, military procurement).
The introduction of AI is not being discussed in earnest with respect to the management of labour relations. Some early discussions are in progress around using AI for job interviews, but such interviews will be used only as an additional tool for hiring, in conjunction with more traditional methods.
Many companies have adopted an attendance management system using biometric authentication information, but there has been no discussion on the introduction of AI in employee performance evaluation.
Platform companies are making significant use of recommendation algorithms that use AI. They are most commonly used to provide personalised services based on the user’s behavioural information.
Financial companies are actively utilising or trying to utilise AI in providing customer consulting and support services, calculating credit ratings, designing insurance products, managing assets and risks, and detecting abnormal transactions and money laundering.
In particular, as chatbot services become more sophisticated with the advances made by generative AI, many financial companies are providing customer consulting and support services using chatbots, and AI is increasingly being used for asset management and personalised marketing purposes.
As the use of AI increases, the risks for financial institutions are also increasing. For instance, as the number of investment product transactions using AI increases, there is a possibility that a large number of unintended orders will be placed all at once due to algorithm errors, increasing market volatility. In addition, financial companies may sell products that are not suitable for customers or fail to properly perform their duty to explain when utilising AI for product recommendations.
Currently, there is no separate regulation relating to the use of AI by financial companies. However, the Korean financial supervisory authorities have announced AI guidelines (and AI security guidelines) in the financial sector to ensure that financial companies using AI technology protect financial consumers’ rights and take responsibility for their services.
In particular, the AI guidelines in the financial sector require financial companies to prevent unreasonable discrimination against consumers. Accordingly, financial companies should establish fairness standards based on the characteristics of services and conduct evaluations based on certain standards to prevent the possibility of unexpected discrimination that may occur due to AI-enabled services.
Big data analytics platforms based on video information are gaining traction as an important trend in healthcare. Non-medical institutions must obtain data from medical institutions, but there are many challenges in doing so; for example, medical institutions tend to be cautious about providing medical data, and the area is subject to many legal regulations. To resolve this issue, the government is proceeding with special legislation for healthcare data.
There is no general regulation governing the use of AI in autonomous vehicles, although certain laws prescribe matters relating to them. First, with respect to liability in the event of an accident, the Compulsory Motor Vehicle Liability Security Act maintains the existing driver’s liability while providing measures to seek reimbursement from the manufacturer in the case of a vehicle defect, and establishes an accident investigation committee to examine the autonomous driving data recording device affixed to the autonomous vehicle. Meanwhile, the Rules on Safety and Performance of Motor Vehicles and Motor Vehicle Components (Motor Vehicle Safety Standards), sub-regulations of the Motor Vehicle Management Act, set out safety standards for Level 3 autonomous driving systems.
Autonomous vehicles present a number of data privacy issues, including the use of video information captured by autonomous AI driving devices while driving. Although it has been necessary to use mosaicked (pseudonymised) video data to ensure that no individual can be identified, even when developing autonomous driving technology, the PIPC has prepared a measure to permit the use of non-mosaicked original video through a regulatory sandbox, and several companies have accordingly applied to the sandbox for the development of autonomous driving AI.
To date, there is no law regulating the use of AI in the manufacturing sector. However, the Ministry of Trade, Industry and Energy has commenced the establishment of a master plan for AI autonomous manufacturing to drive innovation in the manufacturing process and enhance productivity. The Ministry has also stated that it will
Meanwhile, the MSIT has announced a plan to prepare a basic law on artificial intelligence in 2024, and a bill on fostering the artificial intelligence industry and creating a foundation of trust has been proposed and is currently pending in the National Assembly.
It is also worth noting that the above bill permits, in principle, the launch of artificial intelligence technology, artificial intelligence products (products using artificial intelligence technology) and artificial intelligence services, while prescribing principles of priority permission and ex-post regulation under which they may be limited if they harm the lives, safety, rights or interests of citizens, or significantly disrupt public safety, the maintenance of public order, or the promotion of welfare.
In the accounting, tax, and legal markets, individual companies are conducting a review on the use of AI for the analysis of contracts or financial statements.
However, in such professional services, companies cannot provide the relevant data to train large-scale language models due to client confidentiality issues; as they must instead base the work on small language models, progress has been sluggish.
On 30 June 2023, in a lawsuit filed by Stephen Thaler, an AI developer in the United States, as part of the so-called DABUS project to seek recognition of AI as an inventor, the Seoul Administrative Court ruled that “invention” under Article 2(1) of the Patent Act refers to the highly advanced creation of a technical idea using the laws of nature and that such a technical idea presupposes human reasoning, and therefore, under the current laws, AI cannot be recognised to have the legal capacity to “invent”. The appeal process in this case is currently ongoing (Seoul High Court 2023Nu52088).
In addition, the Copyright Act defines “work” as a creative production that expresses human thoughts and emotions (Article 2(1) of the Copyright Act) and “author” as “a person who creates a work” (Article 2(1) of the Copyright Act). The Ministry of Culture, Sports and Tourism stated in the Generative AI Copyright Guide issued on 27 December 2023 that, under the current laws, an AI cannot be recognised as an author.
Under Korean laws, a trade secret refers to information, including a production method, sales method, or useful technical or business information for business activities, which is not known publicly, is managed as a secret, and has independent economic value (Article 2(2) of the Unfair Competition Prevention and Trade Secret Protection Act). An act of acquiring trade secrets or using or disclosing trade secrets improperly acquired, with knowledge of the fact that an act of improper acquisition of the trade secrets has occurred, or without such knowledge due to gross negligence, constitutes infringement of trade secrets (Article 2(3)(b) of the above Act). Therefore, if any data considered as trade secrets of another person is collected without permission and used for AI learning, trade secret infringement issues may arise.
Meanwhile, if any technical data, such as the source code of any AI model created for AI services, is kept confidential and not disclosed to others, such data can be protected as trade secrets.
Under the current law, any product created by generative AI itself is not recognised as a work of authorship.
Any product created by OpenAI’s generative AI is not protected by copyright or patent. However, if such a product is substantially similar to an existing work, it may infringe the copyrights of others.
As no artificial intelligence regulation has yet been enacted in Korea, the direction that future legislation is likely to take is very important. It is therefore vital to monitor legislative trends closely and respond accordingly.
There is much interest in the EU’s AI Act in Korea, which means it could be the most important model. Furthermore, as the Financial Services Commission provides the most specific guidelines, their content may also serve as a model for other government agencies.
It is necessary to consider the requirements of the EU AI Act, the documentation work to secure accountability as proposed by the Commission, an artificial intelligence governance system, and ex-post verification procedures for AI services.
In addition, since ISO 42001 can be an important standard, it is recommended to consider obtaining ISO 42001 certification or building an internal system at a level similar thereto.
Centropolis B
26 Ujeongguk-ro
Jongno-gu
Seoul 03161
South Korea
+82 2 3404 0000
+82 2 3404 0001
bkl@bkl.co.kr
www.bkl.co.kr/law?lang=en

Introduction
Artificial intelligence (AI) has now entered the public lexicon, permeating every corner of our society to a level where it is a conversation starter at the family dinner table. With heavy investment flowing into AI-related industries, the world is starting to grapple with ethical dilemmas occasioned by the rapid emergence and growth of the technology. As always, the rise of a new technology invites legal troubles, and AI is, of course, no exception. This article explores the latest developments and trends in the legislative and regulatory landscapes governing AI in South Korea, together with market insights into the current and future applications of AI in both the private and public sectors in view of key legal considerations.
Legislation
In Korea, legislative discussions on AI touch on various topics from intellectual property rights to data privacy, covering a broad spectrum of industry sectors (eg, finance, healthcare and content industries).
Key legislation
Noteworthy laws currently in force are set out below.
The Framework Act on Promoting Data Industry and Data Utilisation (the “Data Industry Act”) and the Amendment to Unfair Competition Prevention Act (UCPA)
The Data Industry Act, which came into effect on 20 April 2022, introduces the notion of data assets, which is data that has economic value and which is created by a data producer through considerable investment involving human and material resources. In turn, Article 2(1)(k) of the amended UCPA prohibits unfair use of data assets. In the context of AI, legal issues may arise during the course of obtaining data sets or using them for training AI; remedies include not only compensation for damages but also criminal punishment. The Data Industry Act and the amended UCPA brought further clarity to legal uncertainties surrounding data collection and use.
The Amendment to the Personal Information Protection Act (PIPA)
In effect since March 2024, the amended PIPA regulates “automated decisions” – that is, decisions involving processing of personal information that are made entirely by automated systems (eg, AI-powered systems). Data subjects may demand an explanation of an automated decision from data handlers and object to such automated decisions if they materially impact their rights and obligations. Data handlers may reject the demand or objection if there are justifiable reasons, such as unfair infringement on another person’s life, body and property, but should notify such rejection without delay. The amended PIPA calibrates competing interests by providing the rights of data subjects while ensuring autonomous use of AI, all under the principle of transparency.
The Amendment to the Public Official Election Act (POEA)
The amended POEA took effect in January 2024 and aims to prevent deepfake videos, voices or images that are nearly indistinguishable from real ones from being illicitly used in elections. When such content is produced, distributed or displayed, the use of AI technology in its creation must be disclosed.
Proposed legislation
In addition to these laws, a score of bills have been proposed to the legislature. For example, the Framework Act on Fostering AI Industry and Establishing Trustworthy AI, the so-called “AI Framework Bill”, is pending before the National Assembly. Informed by the principle of “permit-first-regulate-later”, the AI Framework Bill seeks to lay down ethical standards governing AI and foster AI-related industries. Meanwhile, to enhance transparency and fairness in AI-assisted hiring, amendments to the Fair Hiring Procedure Act were proposed in March 2023, requiring companies that use AI technology in the hiring process to undergo verification that the technology is not biased and to notify job applicants of its use in advance.
Proposed amendments to the Copyright Act and the Content Industry Promotion Act (CIPA) are another interesting legislative development. The amendment to the Copyright Act, proposed in June 2023, stipulates clear standards for using copyrighted works for automated, computerised data analysis, also known as “data mining”. The amendment to the CIPA, proposed in May 2023, imposes an obligation on content creators who use AI technology to unequivocally disclose the involvement of AI in the creation of their content.
Litigation
With the increasing frequency with which AI is being employed around the globe, lawsuits have inevitably followed in its wake and Korea has seen its fair share of these. Of particular relevance to this trend are intellectual property and data privacy issues. Government ministries and regulators are issuing guidelines, which offer insights on what could be expected in the near future.
Whether AI may qualify as an inventor or author
Similar to other jurisdictions, Korea does not recognise AIs as inventors under the Patent Act nor as authors within the meaning of the Copyright Act. Therefore, AI-created works are not eligible for patent or copyright registration. With scholars weighing in on these issues, the government takes the position, at least for now, that a person may be punishable for false registration if such person files an application to obtain copyright registration for AI-created works by indicating that the works were their own.
An illustrative case is the DABUS patent case that is concurrently taking place in multiple jurisdictions. Dr Stephen Thaler filed patent applications with patent offices while stating DABUS, the AI he himself developed, as the inventor. In Korea, the Korean Intellectual Property Office (KIPO) rejected the application on the ground that, in accordance with Article 33(1) of the Patent Act, only humans can be inventors and thus his AI does not fall within the meaning of an “inventor” under the Patent Act. The administrative court subsequently affirmed KIPO’s decision.
Although the final appeal is underway, court observers are of the view that overturning the decision will not be easy, particularly given the level of deference assigned to the text of the law in favour of legal stability. Interestingly, the administrative court decision mentioned that it is difficult to conclusively find that recognising AI as an inventor would necessarily contribute to technological and industrial development within our society, which may be read as signalling judicial reluctance to grant inventor status to an AI, even when the spirit of the Patent Act is broadly taken into account.
To date, no similar lawsuit has been filed in Korea to rule on the issue of authorship for AI. However, on 27 December 2023, the Ministry of Culture, Sports and Tourism (MCST) issued its guidelines on generative AI copyright.
The MCST’s guidelines take the principled approach that it is not possible to register a human-created secondary work that used AI in its creation. Instead, it may be registered as a compilation work to the extent that it exhibits human creativity in the selection and arrangement of the AI-generated material. For example, in the United States, the US Copyright Office registered a web cartoon that was created using AI as a compilation work as to the selection and arrangement of texts and images. However, the MCST announced that it may be premature to codify AI copyright provisions within the current copyright law regime.
Overall, although not legally binding, the MCST’s guidelines may serve as a meaningful reference to companies conducting AI-related businesses and users of AI solutions in that they offer a general roadmap in the context of AI copyright.
Whether fair use can be successfully invoked to defend claims of copyright infringement in the context of AI training
Similar to the OpenAI litigation initiated by the New York Times over OpenAI and Microsoft’s unauthorised use of its published works, Korean AI companies are also experiencing friction with the media over fair use in relying on newspaper articles as training data sets for AI. There has been no Korean court ruling on the applicability of fair use in this context. Fair use under Article 35-5 of the Copyright Act permits the use of copyrighted works to the extent that such use does not conflict with the customary exploitation of the works and does not unreasonably prejudice the legitimate interests of copyright holders. In determining fair use, the following factors are comprehensively considered:
There is potential room for actual litigation over legal issues arising from AI training. The Korea Association of Newspapers (KAN) filed a petition with the Korea Fair Trade Commission (KFTC) claiming that its members’ news content is being used to train AI without permission, and demanding fair compensation. The KFTC has yet to announce its decision. In a similar vein, a South Korean portal generating huge user traffic attempted to amend its terms and conditions to enable its affiliates to use news content for AI development. This sparked a controversy, as no prior consent was obtained from the journalists who produced that news.
Whether publicly available personal information may involve data privacy violations in the contexts of AI training or AI research
The Personal Information Protection Act (PIPA), as currently in force, does not contain separate provisions on the collection and use of publicly available personal information for AI training. Courts have not ruled on this issue, either. However, in August 2023, the Personal Information Protection Commission (PIPC) announced its “Policy Directions for Safe Use of Personal Information in the Age of AI” (the “Policy Directions”), thereby setting expectations and offering certain standards.
The PIPC noted in its Policy Directions that:
The PIPC further explained that the collection and use of publicly available personal information is possible if the legitimate interests of a personal information handler providing an AI service clearly outweigh the rights of data subjects. It noted that more detailed guidelines on how to evaluate such “legitimate interests” will be announced in the middle of this year. Although the PIPC’s Policy Directions have certain limitations in that they provide only high-level criteria and lack concrete details, they set meaningful legal boundaries as to when and how publicly available personal information may be lawfully collected and used.
Meanwhile, with AI research gaining traction, legal issues arise in connection with research projects involving publicly available personal information and pseudonymised data. Personal information handlers may process publicly available personal information and pseudonymised data without obtaining the consent of the relevant data subjects for the purposes of compiling statistics, conducting scientific research and maintaining archives serving the public interest. The scope of scientific research is interpreted to include not only academic research but also industrial research (eg, new product and service development) and empirical research. Therefore, it is possible to engage in AI research and development using publicly available personal information that has been pseudonymised, as long as the raw data is lawfully collected.
In response to the controversy over the appropriate level and method of pseudonymisation for unstructured data, such as images, videos and texts, in February 2024, the PIPC released its guidelines for pseudonymisation of unstructured data. The PIPC’s guidelines provide methods and specific examples of pseudonymisation of unstructured data. Points of consideration at each phase of pseudonymisation (pre-preparation, risk review, pseudonymisation, appropriateness review and safe management) are explained in particular detail. The PIPC’s guidelines place emphasis on post-management, as it is difficult to completely eliminate in advance various risks that may materialise in the course of AI development and utilisation.
Finally, securing the quality of AI through pseudonymised data may be challenging in certain situations. In this regard, the PIPC has recently allowed original video data to be used for AI research and development, on the condition that enhanced safety measures are applied through the regulatory sandbox system. In addition, the PIPC will develop and support a scheme tentatively dubbed a “Privacy Safety Zone”, which offers a safe, secure and controlled environment for testing new technologies and conducting data privacy-related experiments.
The Status of AI Use and Initiatives in the Private and Public Sectors
Private sector
For businesses, AI has become a must, not an option. Companies are moving agilely to integrate AI technologies and systems in every imaginable industry sector. In fact, the use of AI by Korean companies has exploded roughly tenfold, from 2.7% in 2022 to 28% in 2023. To drive the growth of AI and foster AI-related industries, the government is also striving to strengthen the AI competitiveness of Korean companies by investing KRW9.4 trillion over the span of three years and holding meetings with representatives of major AI companies. Below is a high-level summary of noteworthy developments unfolding in the private sector.
Healthcare
Currently, most uses of AI in the healthcare industry have been limited to administrative tasks, as opposed to sophisticated clinical applications such as diagnosis and medical services. However, more advanced applications of AI are on the horizon with promising potential. In particular, the digital medical device industry is increasingly incorporating AI technology. Also, spurred by the demand for non-face-to-face medical treatment and drug prescription, especially real-time patient monitoring, in the wake of the COVID-19 pandemic, digital medical devices for the treatment and monitoring of various diseases, such as stroke, alcoholism and depression, have been approved by the Ministry of Food and Drug Safety and are undergoing clinical trials. In addition, all aspects of patient engagement are being improved, from scheduling appointments and viewing medical records to communicating with medical staff and care co-ordination teams. When it comes to clinical trials, experiments are underway with cognitive automation to integrate clinical trial data from multiple systems and automatically enter the data into standardised digital data elements.
Content/entertainment
The use of AI is notable in the content and entertainment industries, supplementing what were once laborious tasks such as video editing. For example, when a member of the cast of a TV show stirs a social controversy after filming has been completed and that cast member needs to be edited out at the post-production phase before the final airing or screening, AI is being used to automatically filter them out. Another illustrative example is de-aging. When an audiovisual work depicts an actor in flashback, AI is employed to bring to life the actor’s younger appearance. The applicability of AI in these industries is certain to grow further.
Finance
The finance industry was one of the first industries to embrace AI, with the fintech sector showing the highest level of maturity. AI is currently carrying out various tasks, including customer service and support, investment and portfolio management, data analysis for credit rating and loan screening, risk management, internal controls and compliance monitoring support.
ICT
The degree of AI implementation and its level of industrial maturity vary across the ICT sector. Telecommunications companies are vying to introduce AI, as they have long focused on improving operational efficiency and securing and retaining customers. AI is used not only for garden-variety customer services, but also for back-office functions such as manufacturing and logistics.
Manufacturing
In the manufacturing industry, AI is utilised to evaluate production line performance and production lead times. Demand can be predicted, and inventory management is being rendered lean through detailed data sets, such as information on plant operations, production line performance, sales and on-site feedback. A notable use case relates to real-time production line inspections. An AI solution picks up a minuscule movement on the production line that deviates from the normal movements of machinery, based on real-time analysis informed by the videos it has been trained with. Field personnel can then promptly attend to the issue to ensure that a batch coming from a particular production line at a particular time does not end up yielding non-conforming products.
Companies are competing to take proactive measures to adapt to the new era of AI. Microsoft announced the new Copilot Copyright Commitment to extend indemnities on claims of copyright infringement that users may face as a result of using its generative AI, thereby assuaging users’ concerns associated with the use of AI. In Korea, several leading companies have rolled out their own initiatives. For instance, in January 2018, the nation’s largest messenger app service provider became the first company in Korea to establish and publish a Code of Ethics for AI, which specifies what AI developers must comply with when designing algorithms. Under the principle of utilising AI for the benefit and happiness of humanity, the Code covers, among other things, vigilance against discrimination, training data management and algorithm impartiality.
Public sector
The extent of AI use in the public sector likewise varies depending on the type of institution, its unique function and the need to adhere to legacy systems (eg, where a more conservative approach, such as paper archiving, is strictly warranted given the nature of the work the institution handles). The possibility of inaccurate answers or AI hallucinations in the public sector is a worrying prospect and can undermine public trust, which partially explains the strategic prudence behind the relatively slow introduction of AI compared to the private sector. A possible information leak, which could spell disaster, is another concern that guides the government’s prudent and principled approach. Nevertheless, AI is being used in various fields, such as the automation of civil complaints and procurement contracts.
The government is also aware of the importance of AI in the public sector and is making considerable efforts, inviting public comment on how to use AI in the public sector. In particular, Article 20 of the recently enacted Framework Act on Public Administration is expected to provide more momentum for the use of AI in the public sector by allowing administrative agencies to issue administrative dispositions using an automated system.
Outlook
Korea has witnessed exponential growth in AI, resulting in a large-scale AI ecosystem that is on a par with those of the United States, China and the UK. The government is bringing further momentum to this growth by actively supporting private industry, all in an effort to enhance industrial competitiveness. Government ministries and regulators, such as the PIPC, the MCST, the Ministry of Science and ICT (MSIT) and the Ministry of SMEs and Start-Ups, are discussing and devising various plans to further this upward growth trajectory. On the legal front especially, the MSIT has inaugurated a task force to revamp the legal framework on AI, which is expected to lay down a solid and stable framework governing AI. In the private sector, companies should not lose sight of ethics and legal compliance in the fervent pursuit of AI. Most importantly, co-ordination between the public and private sectors is needed more than ever to maintain trust in AI. Without trust, the promise of AI will never be fulfilled.
Hanjin Building
63 Namdaemun-ro
Jung-gu
Seoul 04532
South Korea
+82 2 772 4000
+82 2 772 4001 2
mail@leeko.com
www.leeko.com