Technology & Outsourcing 2025

Last Updated October 28, 2025

Taiwan

Trends and Developments


Authors



Lee and Li Attorneys-at-Law was founded over 50 years ago and is Taiwan’s largest law firm, serving the Greater China region through collaborations with mainland Chinese firms. With offices in Taipei, Hsinchu, Taichung and southern Taiwan, and alliances in Shanghai and Beijing, the firm employs around 870 staff, including 200+ Taiwan-qualified lawyers, 50 foreign lawyers, patent agents, technology experts and accountants. Lee and Li offers comprehensive legal services across 29 practice groups, specialising in intellectual property, banking, capital markets, technology law, public construction, government procurement and M&A. The firm has played a key role in Taiwan’s economic development, supporting foreign investment and legislative initiatives. Serving over 60,000 corporate clients globally, Lee and Li’s success stems from its extensive resources, global network, and active involvement in advising government agencies on public policy, ensuring clients receive expert legal support aligned with evolving industry trends and business needs.

In recent years, the rapid advancement of generative artificial intelligence (AI) and cloud services has been a significant catalyst for the expansion of the technology outsourcing market. Generative AI has provided elevated capabilities in automation and computing, enabling enterprises to develop innovations with greater efficiency. At the same time, cloud services offer increasingly flexible and scalable infrastructure solutions, thereby reducing enterprises’ information technology costs and simplifying management complexities. This article provides a concise overview of key legal issues and emerging trends related to AI and cloud services under Taiwan’s legal framework.

AI and Intellectual Property Rights

AI cannot be recognised as an inventor or patentee

AI, by simulating human cognition and employing learning algorithms and computational processes, can perform specific tasks and generate research and development outcomes. However, recent rulings by Taiwan’s Intellectual Property and Commercial Court and the Supreme Administrative Court have unequivocally held that AI cannot be recognised as an inventor of patents.

The courts have held that under current Taiwan laws and regulations, an inventor must be a natural person who has made a substantive intellectual contribution to the technical features claimed in a patent application. Since AI is legally classified as a thing or object rather than a legal person, under the principle of jus rerum AI lacks the legal personality and capacity to be recognised as a patent holder or designated as an inventor.

Copyright issues concerning content generated by generative AI

Pursuant to the Copyright Act of Taiwan, copyright protection is granted only to works created by natural persons or legal entities capable of holding rights and obligations. Consequently, whether AI-generated content qualifies for copyright protection depends on the presence of meaningful human creative input during the creation process.

In a development process where AI functions solely as an auxiliary tool, such as drafting software, and human creativity is demonstrably involved, the resulting work may be eligible for copyright protection. Except where otherwise provided under Article 11 (regarding works made for hire) and Article 12 (regarding commissioned work) of the Copyright Act, copyright in such works generally vests in the person who contributed the creative input. Conversely, if the creation is entirely the product of AI’s autonomous computational processes absent any human intellectual contribution, the resulting work is not entitled to copyright protection.

It is important to note, however, that if AI-generated content reproduces original works included in the AI’s training data, and the user subsequently exploits such content commercially, such as by printing AI-generated images for sale, such acts may constitute unauthorised reproduction of the underlying copyrighted works. To mitigate the risk of copyright infringement claims, AI users are advised to obtain clear authorisation from the AI model’s developer or manager, confirming that appropriate licences from the holders of economic rights have been secured and that sublicensing for commercial use is permitted.

Personal Data Protection and Information Security

In addition to compliance with the Personal Data Protection Act (PDPA), the Ministry of Digital Affairs (MODA) introduced the Regulations on Security Maintenance and Management of Personal Data Files for Digital-Economy-Related Industries (the “Security Maintenance Regulations”) in 2023 to enhance personal data protection within digital economy sectors such as cloud computing service providers, AI model developers, e-commerce platforms and information service providers. Under such regulations, covered entities are required to establish: (i) a personal data file security maintenance plan; and (ii) appropriate protocols for managing personal data upon cessation of business activities. These measures are designed to prevent unauthorised access, theft, alteration, damage, loss or disclosure of personal data. The principal obligations are summarised as follows.

Personal data protection management policy and security maintenance plan

Entities subject to the Security Maintenance Regulations must formulate and implement a comprehensive Personal Data File Security Maintenance Plan, which includes procedures for handling personal data upon business termination (collectively referred to as the “Security Maintenance Plan”). This plan must incorporate provisions ensuring compliance with the PDPA, such as obligations to provide notice and obtain consent, respond to requests for inquiry, inspection or copies, maintain data accuracy and notify affected individuals in the event of a data breach. Additionally, entities must adopt an internal personal data protection management policy and related procedures, and circulate such policy internally to ensure that personnel are informed of and adhere to the requirements of such policy.

Human resources allocation and personnel management

Covered entities are required to allocate adequate management personnel and resources responsible for developing, revising and enforcing their personal data protection management policies and Security Maintenance Plans. Furthermore, entities shall:

  • impose confidentiality obligations on employees;
  • assign access rights to personal data based on business needs, data sensitivity and operational requirements;
  • periodically review the necessity and appropriateness of such access rights;
  • conduct regular training and awareness programmes; and
  • upon employee termination, ensure the return and deletion of any personal data accessed or retained during employment.

Periodic inspections and risk assessments

Entities must regularly take inventory of and verify the status of personal data collected, processed or utilised, and shall clearly define the scope of data governed by the Security Maintenance Plan. Entities are also required to conduct periodic risk assessments of business processes affecting personal data and implement appropriate security measures to address identified risks.

Information security management measures

When handling personal data, covered entities must apply suitable encryption and other protective measures to stored data, backup copies and data in transit. For personal data processed directly or indirectly through information and communication systems, entities must implement the following security controls:

  • establish and maintain firewalls, email filtering systems, intrusion detection devices and other safeguards against external network threats, with regular updates;
  • monitor for abnormal data access activities on systems storing personal data and conduct periodic incident response drills;
  • regularly assess equipment for security vulnerabilities;
  • continuously update and operate antivirus software and perform routine malware scans;
  • implement authentication mechanisms for systems containing personal data, ensuring that account and password complexity meet established standards;
  • minimise, to the greatest extent practicable, the use of actual personal data in system testing environments;
  • conduct periodic inspections of systems processing personal data; and
  • assess usage scenarios and apply data masking techniques where appropriate.
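By way of illustration, the password-complexity requirement listed above is commonly implemented as a simple validation rule at account creation or password change. The following is a minimal sketch in Python; the specific thresholds (minimum length and required character classes) are illustrative assumptions, not values prescribed by the Security Maintenance Regulations.

```python
import re

def meets_complexity(password: str, min_length: int = 12) -> bool:
    """Check a password against an illustrative complexity policy:
    a minimum length plus upper-case, lower-case, digit and symbol
    character classes. The thresholds here are assumptions, not
    regulatory values."""
    if len(password) < min_length:
        return False
    required_classes = [
        r"[A-Z]",         # at least one upper-case letter
        r"[a-z]",         # at least one lower-case letter
        r"[0-9]",         # at least one digit
        r"[^A-Za-z0-9]",  # at least one symbol
    ]
    return all(re.search(pattern, password) for pattern in required_classes)
```

In practice, an entity would document its chosen thresholds in its Security Maintenance Plan and enforce them uniformly across systems containing personal data.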

Cloud service providers (CSPs) may refer to MODA’s Reference Guidelines for Implementation of Personal Data Protection and Information Security by the Information Service Industry, and may also refer to published standards such as ISO/IEC 27001 (Information Security Management System), ISO/IEC 27701 (Privacy Information Management System) or the Taiwan Personal Information Protection and Administration System (TPIPAS) when establishing their personal information management systems and information security management systems.

Common security measures include:

  • encrypting database-resident data (eg, AES-256);
  • protecting backup data through encrypted storage, automated backups, automated compression and automated key encryption;
  • adopting Secure Sockets Layer/Transport Layer Security (SSL/TLS) transport encryption for application programming interface (API)-based transmissions;
  • applying data masking for display purposes;
  • providing encrypted channels for customers transmitting sensitive data;
  • masking unnecessary elements of sensitive data during transmission; and
  • implementing access-control measures for customer entitlements.

Additionally, entities should deploy and periodically update system servers, office automation network protections and application firewalls.

Management of data storage media

Covered entities must apply appropriate protective measures and technologies tailored to the characteristics and usage of data storage media. Such entities must establish and enforce management protocols for custodianship and control access to environments where storage media are kept.

Information security incident response

Entities must maintain effective mechanisms for responding to, notifying data subjects of, and preventing information security incidents. Such mechanisms should include procedures and communication channels designed to mitigate harm and inform data subjects about the incident and its resolution. Following an incident, entities are required to evaluate and implement corrective and preventive actions. In cases where an incident threatens normal operations or the rights and interests of a substantial number of data subjects, the entity must report the incident to MODA within 72 hours of becoming aware of it, using the prescribed reporting format. If the incident is also reported to municipal, county or city authorities, a copy of such report must be submitted to MODA.

Competition Law Issues in Generative AI

In response to the potential risks of restraint of trade posed by algorithms and generative AI, and to inform future legislation and enforcement efforts, the Taiwan Fair Trade Commission has issued the White Paper on Competition Policy in the Digital Economy and Explanatory Information on Soliciting Public Opinions on Competition Law Issues Related to Generative AI. The competition law concerns arising from algorithms and generative AI can be broadly categorised as follows.

Unilateral abuse of market power

This category primarily encompasses four key areas.

Acquisition of computing resources

Computing resources are essential for generative AI development, particularly given the dominance of a few firms, such as Nvidia, in the graphics processing unit (GPU) market; GPUs matter here because AI training relies heavily on parallel processing technologies originally developed for graphics applications. Similarly, CSPs are concentrated among a handful of major technology companies. Should these dominant operators restrict access to computing resources, whether by tying hardware sales to other products or by prioritising supply to preferred partners, such actions risk posing significant barriers to entry, thereby undermining competitive dynamics.

Restriction of data access

Data constitutes the foundational input for AI model training. Digital platform operators controlling vast troves of user data enjoy a substantial competitive advantage in data analytics and AI development. If new entrants are unable to access data of comparable quality and scale, their ability to compete effectively with the incumbent market players will be severely constrained.

Cloud service platforms

The cloud computing services market is currently concentrated among a few large CSPs, resulting in ecosystem lock-in effects that impede user mobility between different platforms. Moreover, if CSPs favour their own or affiliated generative AI models in platform interfaces, or condition model usage on their cloud services, such practices may distort competition and entrench market power.

Foundation models

Market-dominant operators may impose requirements that terminal devices (eg, smartphones and laptops) exclusively incorporate their AI models or prioritise their model outputs, thereby restricting competitors’ market access. Additionally, by mandating access to proprietary internal data through terms of use agreements, such operators may further consolidate their data advantages, raising the risk of unlawful monopolisation.

Concerted actions

Prior to the advent of generative AI, algorithms already served as instruments to facilitate collusive agreements among competitors, enabling the implementation and ongoing monitoring of co-ordinated conduct. With the advancement of AI models, the incentives and capabilities for co-ordinated pricing and business decisions are likely to increase. If multiple enterprises employ the same generative AI model or derivative products, they may inadvertently or deliberately align their commercial strategies, thereby impairing market competition.

Mergers

Leading AI companies may pursue horizontal, vertical or conglomerate mergers to consolidate resources and strengthen market influence. Given the high capital and technological barriers to entry in generative AI infrastructure and model development, start-ups may be incentivised to collaborate with or be acquired by dominant technology firms. The competitive implications of such consolidation warrant close scrutiny.

False advertising and other unfair competition practices

Content generated by generative AI may be exploited to engage in false or misleading marketing practices, potentially deceiving consumers. Furthermore, AI’s advanced data analytics capabilities could facilitate the dissemination of false, misleading or deceptive information tailored to individual users, raising concerns under unfair competition laws.

Draft Artificial Intelligence Fundamental Act

On 28 August 2025, the Executive Yuan approved the Draft Artificial Intelligence Fundamental Act. The Act will take effect upon review and ratification by the Legislative Yuan. Its primary objective is to create a legal framework that fosters the development of AI while safeguarding human rights and managing associated risks.

Article 3 of the draft enumerates seven fundamental principles that underpin the government’s AI policy:

  • sustainable development and social welfare;
  • protection of human autonomy and fundamental rights;
  • privacy protection and data governance;
  • cybersecurity and safety;
  • transparency and explainability;
  • fairness and non-discrimination; and
  • accountability mechanisms.

These principles serve as the foundational values guiding AI promotion efforts.

Articles 4 through 17 set forth policy directions encompassing resource allocation, subsidies and tax incentives, regulatory adaptation, supervisory sandboxes, public–private collaboration, international co-operation, educational initiatives and the protection of labour rights and interests. Additionally, the draft introduces a risk-based regulatory framework to establish standards, verification procedures and liability attribution. The draft also enhances personal data protection, data sharing protocols and data quality controls to mitigate risks of bias and misuse.

Notably, the draft assigns the government the role of promoting AI development without imposing direct regulatory obligations on related industries. In the future, the manner in which each competent authority implements regulatory adaptations and formulates risk management norms will be a critical area of focus for stakeholders in the industry.

Legal Issues Related to Autonomous Vehicles

Unmanned Vehicles Technology Innovative Experimentation Act

Taiwan enacted the Unmanned Vehicles Technology Innovative Experimentation Act (also known as the “Unmanned Vehicle Sandbox Act”) effective 1 June 2019. This Act establishes an experimental regulatory sandbox framework for unmanned vehicles (including unmanned land vehicles, aircraft, ships and other combined unmanned transportation devices) to promote domestic innovation in unmanned vehicle technologies. Pursuant to Article 22 of the Act, during the period of authorised innovative experimentation, applicants conducting experiments within the scope approved by the competent authority are exempt from certain provisions of the Road Traffic Management and Penalty Act, Highway Act, Civil Aviation Act, Law of Ships, Telecommunications Act and other relevant laws.

Additionally, Article 20 of the Regulations Governing Road Traffic Safety primarily governs vehicle testing activities but does not extend to the general operation of autonomous vehicles on public roads. Currently, testing of Level 3 and Level 4 autonomous vehicles is permitted under the sandbox framework; however, Level 3 autonomous vehicles have not yet received approval for unrestricted public use. Moreover, in accordance with the Society of Automotive Engineers (SAE) standards, fully autonomous vehicles (Level 5), which require no human intervention, are not yet authorised for general operation on public roads. Furthermore, upon completion of the sandbox experimentation phase, autonomous vehicles must comply with existing traffic laws to be legally operated on public roads, at which point regulatory oversight will transfer to the Ministry of Transportation and Communications.

Allocation of liability for accidents involving vehicles equipped with ADAS under current Taiwan law

Regarding vehicles equipped with advanced driver assistance systems (ADAS), current Taiwan laws do not recognise AI as a legal person; therefore, AI cannot bear civil or criminal liability directly. Consequently, liability for accidents involving ADAS-equipped vehicles primarily rests with the human driver and the vehicle manufacturer.

From a civil liability perspective, if a driver negligently operates an ADAS vehicle and causes harm to a third party, the injured party may seek damages under Article 184 of the Civil Code. Additionally, the ADAS vehicle may be classified as a “motor vehicle” under Article 191-2 of the Civil Code, thereby imposing tort liability on the driver. Where an accident results from a product defect, such as failure by the manufacturer or provider to meet reasonable safety standards or to issue adequate warnings, the manufacturer or provider may be held liable pursuant to Article 7 of the Consumer Protection Act.

Regarding criminal liability, AI cannot be held criminally liable due to the absence of mens rea (guilty mind). Accordingly, criminal responsibility may rest with the driver or the provider. For example, if a driver’s negligent operation of an ADAS vehicle causes bodily injury to another, criminal liability requires proof of causation between the driver’s negligence and the harm. Providers may also incur criminal liability if they fail to exercise the requisite duty of care in the design, manufacture or maintenance of the vehicle.

In terms of administrative liability, under the Highway Act and the Road Traffic Management and Penalty Act, the driver remains responsible for vehicle control and is liable for any traffic violations committed during operation.

As autonomous vehicle technology continues to evolve, the allocation of liability is expected to become increasingly complex. It is anticipated that legislators will revisit and amend existing legal frameworks to more clearly delineate the respective responsibilities and risks borne by human operators and AI systems.

Lee and Li Attorneys-at-Law

8F, No 555, Sec 4 Zhongxiao E Rd
Taipei 110055
Taiwan
ROC

5F, Science Park Life Hub No 1
Industry E 2nd Rd
Hsinchu Science Park
Hsinchu 300091
Taiwan
ROC

+886 2 2763 8000

+886 2 2766 5566

attorneys@leeandli.com

www.leeandli.com/EN