Artificial Intelligence 2025 Comparisons

Last Updated May 22, 2025

Contributed By Drew & Napier LLC

Law and Practice

Authors



Drew & Napier LLC is a full-service Singapore law firm, which was founded in 1889 and remains one of the largest law firms in the country. Drew & Napier has a highly regarded TMT practice group, which consistently ranks as the leading TMT practice in Singapore. The firm possesses unparalleled transactional, licensing and regulatory experience in the areas of telecommunications, technology, media, data protection and cybersecurity. The TMT practice is supported by more than ten lawyers and paralegals with extensive experience in infocommunications, data protection, technology, and sector-specific and general competition law. The TMT practice acts for a broad range of clients, spanning multinational corporations and local companies across industries. These clients include global and regional telecommunications service providers, sectoral regulators (both local and foreign), consultants, software houses, hardware manufacturers, and international law firms.

Singapore unveiled its first National AI Strategy in 2019, with the aim of becoming a leader in developing and deploying scalable, impactful AI solutions in key sectors of high value and relevance to citizens and businesses by 2030. After AI went “mainstream” with the release of ChatGPT by OpenAI in November 2022, Singapore’s National AI Strategy was updated in December 2023 (the “NAIS 2.0”), with AI now positioned as a necessity that people “must know”, rather than just “good to have”. Singapore will also take a global approach to AI, co-operating internationally both to innovate and to overcome the challenges brought about by AI (eg, energy, data and ethics).

Singapore has not enacted any laws concerning the use of AI in general. However, the following laws address specific applications of AI, which are detailed further in 3.2 Jurisdictional Law.

  • Singapore’s Road Traffic Act 1961 was amended in 2017 in order to provide a regulatory sandbox for the trial and use of autonomous motor vehicles, which was previously done by way of exemptions.
  • The Health Products Act 2007 (HPA) requires medical devices incorporating AI technology (AI-MD) to be registered before they are used (see 14.3 Healthcare for further details).
  • The Elections (Integrity of Online Advertising) (Amendment) Act 2024 (passed on 15 October 2024) is the first piece of Singapore legislation in which the words “artificial intelligence” appear – the Act bans manipulated online election advertising containing realistic but fake representations of candidates, where generative AI technology is one of the “digital means” by which content could be generated or manipulated.

Organisations must comply with relevant laws when deploying AI technology – for example, laws relating to safety, intellectual property, personal data protection and fair competition. Where the use of AI results in harm, existing legal principles (such as tort liability and contractual liability) will still apply.

Singapore also has a set of voluntary guidelines and testing frameworks in place for both traditional/predictive AI (which makes predictions based on historical data instead of creating new content) and generative AI, as follows.

For traditional/predictive AI:

  • the Model Artificial Intelligence Governance Framework (Second Edition) (the “Model Framework”) issued by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) in 2020, which states that the use of AI should be fair, explainable, transparent and human-centric;
  • the Implementation and Self-Assessment Guide for Organisations (ISAGO) – a companion to the Model Framework issued in 2020 – which sets out questions and examples for organisations to rely on when self-assessing how their AI governance practices align with the Model Framework; and
  • “AI Verify” – an AI governance testing framework and toolkit rolled out in May 2022, comprising both technical tests and process checks for organisations to assess their AI systems against 11 internationally accepted AI ethics principles. Singapore aims to align its testing frameworks with those of the international community: AI Verify has since been mapped to the US National Institute of Standards and Technology’s AI Risk Management Framework in October 2023 (making the two frameworks interoperable and reducing compliance costs for organisations that must meet both) and to ISO/IEC 42001:2023 (the first international standard on the responsible adoption of AI within organisations) in June 2024.

For generative AI:

  • the IMDA first issued a paper – “Generative AI: Implications for Trust and Governance” ‒ in June 2023, outlining six key risks brought about by generative AI and measures to address them;
  • the IMDA then issued (in October 2023) a paper on baseline standards for evaluating large language models (LLMs), titled “Cataloguing LLM Evaluations”;
  • the Model AI Governance Framework for Generative AI (the “Model Gen-AI Framework”) ‒ setting out nine dimensions to build trustworthy generative AI, as well as the actions the industry and policymakers must take to achieve it ‒ was released on 16 January 2024 for public consultation up to 15 March 2024 and was finalised on 20 May 2024; and
  • “Project Moonshot” – one of the world’s first LLM evaluation toolkits to address the security and safety challenges of LLMs – was developed by the AI Verify Foundation, with curated benchmarks (similar to “exam questions”) for organisations to test their model across a variety of competencies (eg, summarisation, language, context), as well as modules for manual and automated red-teaming (adversarial attacks on the model) to flush out vulnerabilities in the LLM.

Regulators also issue guidance notes to organisations, such as the following – the first three of which are of general application, while the final two apply to specific industries.

  • The PDPC released the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems (the “PDPC AI Advisory Guidelines”) in March 2024, following its public consultation in July 2023.
  • The Intellectual Property Office of Singapore (IPOS) issued the IP and Artificial Intelligence Information Note to provide an overview of how AI inventions can receive IP protection.
  • The Cybersecurity Agency of Singapore (CSA) released the Guidelines and Companion Guide on Securing AI Systems on 15 October 2024, which set out best practices for owners of AI systems to adopt in order to secure their AI systems at every stage of the life cycle, from design to deployment to disposal.
  • The Monetary Authority of Singapore (MAS) released the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector for voluntary adoption by firms providing financial products and services.
  • The Ministry of Health (MOH), the Health Sciences Authority (HSA) and the Integrated Health Information Systems co-developed the Artificial Intelligence in Healthcare Guidelines in order to set out good practices for AI developers and complement the HSA’s regulation of medical devices incorporating AI technology (AI-MDs).

AI is deployed widely across industries in Singapore, from finance to healthcare to food service in restaurants. AI-enabled tools are also integrated into educational curriculums. The revised NAIS 2.0 encourages AI innovation and adoption across all sectors, with a focus on manufacturing, financial services, transport and logistics, and biomedical sciences.

The IMDA/PDPC have also published a Compendium of AI Use Cases (in two volumes) to demonstrate how the Model Framework’s AI governance principles have been applied by organisations.

Singapore’s government announced further support for organisations to adopt AI solutions during the Budget speech in February 2025. SGD150 million has been set aside for a new Enterprise Compute Initiative, under which eligible organisations can partner major cloud service providers to access AI tools, computing power and expert consultancy services. This builds on the February 2024 Budget announcement that the government will invest more than SGD1 billion over the next five years to boost its AI computing resources, talent pool and industry development. Initiatives include investing up to SGD500 million to secure the high-performance compute resources required for running AI systems, as well as more than SGD20 million over the next three years to fund additional SG Digital Scholarships for Singaporeans to pursue AI-related courses in universities and overseas internships.

Singapore’s approach to regulating AI is one of “agility”, as set out in its NAIS 2.0. Singapore’s priority is to deepen its understanding of AI and to discover and address its potential risks. There is no need for AI-specific legislation for now, as existing laws can cover its use, and regulators will issue guidelines to organisations so that they have a clearer picture of how to conduct their affairs.

However, the government will enact legislation if it is necessary to do so, and this will be done “thoughtfully and in concert with others, accounting for the global nature of AI” (NAIS 2.0). The approach is dependent on the nature of the risk to and from AI, where some cases are best settled by voluntary guidelines, and others by legislation.

Singapore’s approach to AI so far has been to issue voluntary guidelines and guidance notes to aid industries in navigating this new technology and set out best practices. Guidelines are suitable for an area in which change is rapid, as they can be amended and issued quickly.

As mentioned in 1.1 General Legal Background, Singapore has not yet enacted legislation that regulates the use of AI in general. However, there is legislation that concerns specific applications of AI – namely, digitally manipulated content in elections, autonomous vehicles (AVs) (see 14.4 Autonomous Vehicles) and AI-MDs (see 14.3 Healthcare).

Please refer to 1.1 General Legal Background for the key jurisdictional directives.

This is not applicable in Singapore.

This is not applicable in Singapore.

This is not applicable in Singapore.

In relation to personal data that is used to train AI systems or that is processed by AI systems, Singapore’s Personal Data Protection Act 2012 (for private sector data) will apply, as it is technology-agnostic. The PDPC has also issued its first PDPC AI Advisory Guidelines to set out best practices for organisations developing or deploying AI systems.

In relation to copyright issues arising from the use of data to train AI systems, Singapore has a computational data analysis exception under Section 244 of the Copyright Act 2021, which was introduced after a public consultation in 2019. This is independent of the fair use exception under Section 190 of the Copyright Act 2021. For more details, please refer to 15.1 IP and Generative AI.

There is no legislation targeting specific uses of AI, or omnibus legislation targeting uses of AI across multiple sectors like the EU AI Act, that is pending enactment in Singapore at present ‒ although this is not something that the authorities are closed off to, as set out in the NAIS 2.0. Please refer to 3.1 General Approach to AI-Specific Legislation.

Singapore’s Court of Appeal has issued a key decision on the use of deterministic algorithms in contracting. Singapore does not yet have reported decisions on the use of AI and the surrounding IP rights.

In Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02 (“Quoine”), transactions on Quoine’s cryptocurrency exchange platform were conducted by algorithms for both Quoine and B2C2, with the algorithms giving trading instructions based on observations of market data. Owing to an oversight, Quoine failed to make certain changes to several critical operating systems on its platform, so it could not generate new orders. It is relevant that B2C2 had, when designing its algorithm, set a virtual price of 10 Bitcoin to 1 Ethereum to apply in the event that there was insufficient market data from Quoine to draw upon in order to price its trades.

Quoine’s oversight sparked off a chain of events that triggered buy orders for Ethereum being placed on behalf of some platform users – at 250 times the going market rate for purchasing Ethereum with Bitcoin – in favour of B2C2. This was the virtual price B2C2 had set to sell its Ethereum.

Quoine cancelled the trades when it realised this and B2C2 sued Quoine as a result. Quoine argued that the contracts were void/voidable for unilateral mistake. It is important to note that all the algorithms functioned as they should and that the cause was actually human error.

The court described a deterministic algorithm as one that “will always produce precisely the same output given the same input”, where it “will do just what it was programmed to do and does not have the capacity to develop its own responses to varying conditions” and “hence, faced with any given set of conditions, it will always respond to that in the same way” (at (15)).

The court held that where contracts are made by way of deterministic algorithms, in order to determine knowledge, the court would refer to the state of mind of the algorithm’s programmers from the time of the programming up to the point that the relevant contract is formed (see (97) to (99)). The court upheld the contract, as it found that the programmer did not have actual or constructive knowledge of Quoine’s mistake, and hence did not unconscionably take advantage of it.

It would be interesting to see whether the same principles would apply in the case of a non-deterministic algorithm, as the outcome may not always be known and the computer could be said to “have a mind of its own” (see (185)), or if there are multiple programmers – given that, in Quoine, the software used by B2C2 was devised almost exclusively by one of the founders.
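To illustrate what a deterministic algorithm of this kind looks like in practice, the following is a minimal, hypothetical sketch (in Python) of a pricing rule with a pre-programmed fallback price, loosely mirroring the 10 Bitcoin to 1 Ethereum “virtual price” described above. It is not the actual B2C2 software; the function name and figures are assumptions made purely for illustration.

    FALLBACK_BTC_PER_ETH = 10.0  # pre-set "virtual" price used when no market data is available

    def quote_btc_per_eth(market_quotes: list[float]) -> float:
        # Deterministic rule: the same inputs always produce the same output.
        # Average the available quotes; if there are none, fall back to the pre-set price.
        if not market_quotes:
            return FALLBACK_BTC_PER_ETH
        return sum(market_quotes) / len(market_quotes)

    print(quote_btc_per_eth([0.039, 0.041, 0.040]))  # normal market conditions: about 0.04 BTC per ETH
    print(quote_btc_per_eth([]))                     # no market data: returns the fallback of 10.0

Given identical inputs, such a rule will always return the same quote, which is why the court was prepared to locate the relevant knowledge in the programmer rather than in the software itself.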

All ministries and statutory boards have a part to play in developing Singapore’s use of AI. The following is a non-exhaustive list of key regulatory agencies.

  • The Smart Nation and Digital Government Office (SNDGO), which sits under the Prime Minister’s Office, plans and prioritises key national projects and drives the digital transformation of the government. The SNDGO issued the National AI Strategies.
  • The Government Technology Agency (“GovTech”) is the implementing arm of the SNDGO; it develops products for the public and the government, in addition to managing cybersecurity for the government.
  • The IMDA regulates the infocommunications and media sectors and drives Singapore’s digital transformation. The PDPC is part of the IMDA and implements policies to balance the protection of individuals’ personal data with organisations’ need to use it.
  • The IPOS has initiated fast-track programmes for patent protection and copyright protection to support AI innovation.

Other bodies have also been set up that will complement the work of the regulatory agencies.

  • The Advisory Council on the Ethical Use of AI and Data – chaired by the former Attorney-General V K Rajah SC and comprising 11 members from diverse industry backgrounds (multinational corporations and local companies, as well as advocates of social and consumer interests) – works with the government on responsible development and deployment of AI, advising on ethical, policy and governance issues.
  • AI Singapore, a national programme comprising a partnership between various economic agencies (eg, the IMDA, Enterprise Singapore, and the SNDGO) and academia, was launched in May 2017 to accelerate AI adoption by industry.
  • The AI Verify Foundation, a non-profit that is a wholly owned subsidiary of the IMDA, was launched in June 2023 to create a global open-source community to contribute to the use and development of AI testing frameworks, code base, standards and best practices. It has more than 100 corporate members ranging from multinational technology companies to banks to e-commerce companies.

Generally, Singapore’s regulatory agencies seek to build public trust in the use of AI in Singapore and minimise the risks posed by AI. They do this by ensuring that:

  • the decision-making process is explainable, transparent and fair when AI is used to make decisions;
  • AI solutions are human-centric (ie, promote the well-being and safety of humans); and
  • there is accountability for all players in the AI development chain so that they are responsible towards end users.

Singapore’s regulatory agencies frequently hold public consultations on their draft AI guidelines before releasing the finalised versions incorporating public feedback. For example, the CSA held a public consultation on securing AI systems between July and September 2024 before releasing the finalised guidelines in October 2024. The PDPC also held a public consultation from July to August 2023 before releasing the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems in March 2024.

There is presently no reported enforcement action by regulators concerning the use of AI.

Enterprise Singapore oversees the setting of standards in Singapore through the industry-led Singapore Standards Council (SSC).

On 31 January 2019, Enterprise Singapore published a Technical Reference for Autonomous Vehicles, known as “TR 68”. This was born out of a year-long industry-led effort administered by the SSC’s Manufacturing Standards Committee. The TR 68 was intended to set a provisional national standard to guide the industry in the development of fully autonomous vehicles. In 2021, following a review by the Land Transport Authority and the SSC, TR 68 was updated to include guidelines on the application of machine learning, software updates management, cybersecurity principles and testing framework.

The SSC has also published TR 99:2021, which provides guidance for assessing and defending against AI security threats.

Singapore actively participates in standard-setting and norm-shaping processes with key international organisations and standard-setting organisations such as the World Economic Forum, the OECD, the International Organisation for Standardisation (ISO), and the International Electrotechnical Commission (IEC).

AI Singapore (see 5.1 Regulatory Agencies) also actively participates in international standards bodies. In 2019, the AI Technical Committee (AITC) was formed to recommend the adoption of international AI standards for Singapore and support the development of new AI standards. The AITC represents Singapore as a participating member in ISO/IEC JTC 1/SC 42 – the international standards committee responsible for standardisation in the area of AI. To date, the AITC has contributed to the development and publication of three standards:

  • ISO/IEC TR 24030:2021 Information Technology – Artificial Intelligence (AI) – Use Cases;
  • Singapore Standards TR 99:2021 Artificial Intelligence (AI) security – Guidance for assessing and defending against AI security threats; and
  • Singapore Standard SS ISO/IEC 42001:2024 Information technology – Artificial Intelligence – Management System – an identical adoption of ISO/IEC 42001:2023, with a national Annex describing AI Verify as an example of a voluntary testing tool to align AI systems with the standard.

Across the Singapore government, AI solutions are being adopted, including the following.

  • The MAS is using machine learning models to analyse market trading data in order to identify potential instances of market collusion or manipulation for further investigations.
  • The Singapore Police Force (SPF) is using AI to sieve out material that is likely to be obscene from seized devices to improve the efficiency of investigations.
  • The SPF also partnered with the National Crime Prevention Council and GovTech to combat scams through the development of the “Scamshield” application, which uses AI to filter scam messages through the identification of keywords and blocks calls from blacklisted numbers.
  • The Land Transport Authority (LTA) uses video analytics and AI to maintain more than 9,500 kilometres of roads more efficiently, thereby saving up to 30% of man-hours needed for detecting road defects. The LTA uses high-speed cameras mounted onto a van to automatically detect and report road defects, which enables targeted and predictive maintenance of roads.

In February 2023, the government announced a pilot project known as “Pair” that integrates ChatGPT into Microsoft Word to assist public officers in their writing and research. Agreements were made to ensure that confidential information would not be available to Microsoft and OpenAI. Work that contains highly confidential or sensitive information will also continue to be written exclusively by civil servants as an additional safeguard.

The Singapore courts are unlikely to use AI tools – eg, tools that assess an offender’s risk of recidivism or recommend a sentence – for sentencing within the foreseeable future. The Chief Justice stated at the Sentencing Conference held on 31 October 2022 that the underlying algorithms for such systems were opaque and the data and assumptions that they are built upon could reflect bias. Instead, for greater consistency in sentencing, Singapore will have a Sentencing Advisory Panel that will issue publicly available sentencing guidelines that are persuasive but not binding on the courts.

However, the Singapore courts will use AI to improve access to justice. In September 2023, they signed a memorandum of understanding with Harvey AI, an American start-up, to trial its technology to assist litigants-in-person at the Small Claims Tribunals. The goal is for the AI to give litigants information on their next steps and the material they need to submit to support their claims. In April 2025, Harvey AI and the Small Claims Tribunals launched the first initiative offering AI-powered translation services for court users, under which court documents are translated from English into Chinese, Malay or Tamil.

The Singapore courts have also issued the “Guide on the Use of Generative Artificial Intelligence Tools by Court Users”, effective 1 October 2024, which applies to both lawyers and self-represented persons. The courts do not prohibit the use of generative AI tools to prepare court documents, provided that the Guide is complied with. Users are expected to check and verify AI-generated content, and responsibility for any AI-generated content (including infringements of personal data laws or IP laws) rests with the user. The courts do not require a pre-emptive declaration of the use of generative AI, but court users are expected to answer truthfully if asked by the court.

The Ministry of Defence and the Singapore Armed Forces (SAF) have been exploring the use of AI in military operations to enhance capabilities and stay ahead of potential security threats. One such example is the upgraded command and control information system that helps commanders make faster decisions through displaying a real-time battlefield picture integrated with the best options commanders can take to neutralise the threat.

AI is also being used to enhance the safety of servicemen during training and operations. The SAF Enterprise Safety Information System leverages data science and AI technologies to identify potential risks in operations and recommend pre-emptive action to prevent potential accidents. Additionally, in order to better utilise manpower, the SAF is also conducting trials on the use of AVs in military camps for the unmanned transportation of supplies and personnel.

For the discussion of IP issues, see 15.1 IP and Generative AI, and for data protection issues, see 8.2 Data Protection and Generative AI.

The Personal Data Protection Act 2012 (PDPA) applies to the collection, use and disclosure of personal data by organisations. Organisations may only collect, use and disclose personal data “for purposes that a reasonable person would consider appropriate in the circumstances” (Section 3).

The PDPC encourages organisations to use anonymised data as much as possible when developing, testing and monitoring AI systems. Anonymised data is not considered personal data for the purposes of the PDPA. However, there is always a risk of re-identification when the data is combined with other data about the individual – especially where AI makes connections between different datasets and creates a profile about the person – in which case the previously anonymised data becomes personal data subject to the PDPA.

A data subject has the right to access their personal data held by an organisation (Section 21; subject to certain exceptions), as well as request that an organisation correct an error or omission in the personal data about them that is in the possession or under the control of the organisation (Section 22; subject to certain exceptions). At the same time, an organisation also has a duty to “make a reasonable effort to ensure that personal data collected by or on behalf of the organisation is accurate and complete, if the personal data is likely to be used by the organisation to make a decision that affects the individual to whom the personal data relates” (Section 23).

However, when it comes to the output of generative AI, which may make false factual claims about an individual, there are no reported cases locally yet. Regulators are working to address the risk of “hallucinations” (whereby false information is given by a generative AI system) and the Model Gen-AI Framework recommends techniques such as retrieval-augmented generation and few-shot learning to reduce hallucinations and improve the accuracy of output.
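By way of illustration, the following is a minimal sketch of the retrieval-augmented generation technique mentioned above: relevant source passages are retrieved first and the model is then instructed to answer only from those passages, which narrows the scope for hallucination. The call_llm function, the document store and the keyword-overlap retriever are hypothetical stand-ins; production systems typically use vector search and a real LLM API.

    def call_llm(prompt: str) -> str:
        # Placeholder: a real system would send the prompt to an actual model.
        return f"[model answer grounded in a prompt of {len(prompt)} characters]"

    DOCUMENTS = [
        "The PDPA applies to the collection, use and disclosure of personal data by organisations.",
        "The Model Framework states that the use of AI should be fair, explainable, transparent and human-centric.",
    ]

    def retrieve(query: str, k: int = 1) -> list[str]:
        # Naive keyword-overlap retrieval, used here only to keep the sketch self-contained.
        def overlap(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(DOCUMENTS, key=overlap, reverse=True)[:k]

    def answer(query: str) -> str:
        context = "\n".join(retrieve(query))
        prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
        return call_llm(prompt)

    print(answer("What does the PDPA apply to?"))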

It remains to be seen if courts or regulators in Singapore will order deletion of the entire AI model or cease the use of such AI model if it is trained on illegally obtained personal data. However, given that there have been reported instances of this in other jurisdictions, the authors cannot rule out such a response locally if the situation warrants it. 

Law firms in Singapore are using AI for document review, for due diligence processes in M&A transactions, and to summarise contractual documents – to name just a few uses.

The Singapore Academy of Law has released (in September 2024) a guide on prompt engineering for lawyers, giving lawyers tips and concrete examples on how to write more effective prompts for chat-based generative AI tools. The Singapore courts have also issued guidance on the use of generative AI in preparing court documents (see 7.2 Judicial Decisions for details).

The Ministry of Law is presently (as at March 2025) developing guidelines for lawyers on using generative AI responsibly in their work, given the risks posed by generative AI – eg, inaccurate responses, as well as security and privacy concerns (especially if the model stores prompts and outputs for future training).

Where the use of AI gives rise to personal injury, property damage or financial loss, the claimant can seek a remedy in tort (negligence) or contract. Singapore does not have product liability laws like those in the UK or the EU. Instead, remedies are available under statutes such as the Unfair Contract Terms Act 1977 and the Sale of Goods Act 1979, as well as specific legislation (eg, the HPA) and the common law (contract and tort).

Singapore has not amended its laws to provide for any special rules concerning liability arising from the use of AI. As yet, there have been no cases in court involving damages due to AI not performing as expected.

The authors are of the view that there are three features of AI that may affect the application of conventional principles of liability, as follows.

  • AI is a “black box” – it is not always possible to explain how or why an AI system reached a particular outcome and the type of model chosen affects how easily its workings can be explained.
  • AI is self-learning/autonomous – it has the ability to learn from the data it has been exposed to during its training and improve without being explicitly programmed, meaning the behaviour of the AI system is not always foreseeable.
  • AI has many people involved in its development – from procuring the datasets, to training the algorithm, to selecting the algorithm, to monitoring the performance of the algorithm. So who is to blame when the AI output is not as expected or if it causes harm?

Fault-Based Liability (Negligence)

Negligence requires that someone owes a duty of care, that there is breach of such duty (falling below the standard of care), and that the breach caused the loss. Owing to the nature of AI, where many people are involved in its development, the plaintiff might find it difficult to identify the party at fault and the identified party could try to push the blame to a party upstream or downstream in the AI life cycle. However, the Model Gen-AI Framework suggests that liability could be allocated based on the level of control that each stakeholder has in the AI development chain.

Next comes the requirement to prove breach of the standard of care. However, if the opacity of AI makes it impossible to explain why it reached an outcome, then it may be difficult to prove that the behaviour of the AI was due to a defect in the code (rather than any other reason). As the use of AI is developing, it is not clear what standard of care will apply either. Furthermore, even where there is a human in the loop to review the outcome of the AI system, the human will not be able to determine whether the AI is making an error in time to prevent it if the AI is meant to exceed human capabilities.

Finally, there is a requirement to show that the breach caused the loss. Even though it could be argued that the autonomous nature of AI breaks the chain of causation, such an argument is unlikely to be accepted on public policy grounds. In contrast with the EU’s proposed AI Liability Directive (which was withdrawn in 2025), Singapore has not introduced any laws that create a rebuttable presumption of causality between the defendant’s fault and the damage resulting from the AI system’s output (or failure to produce one).

Contract Liability

With a contract, parties negotiate to pre-allocate the risk, so this may resolve some of the issues faced in tort regarding who is the responsible party. However, establishing whether there is a breach will depend on what the parties have agreed to in the contract – for example, whether there are specific, measurable standards the AI system must meet. The Sale of Goods Act 1979 (which provides for an implied condition that goods supplied under the contract are of satisfactory quality) will only apply to the extent that the AI system qualifies as a “good”, which may not be the case where the software is not embedded in hardware such as a physical disc.

Liability Independent of Fault (Strict Liability/Product Liability)

As mentioned previously, Singapore does not have product liability laws like those in the UK/EU. Nevertheless, the Singapore Academy of Law’s Law Reform Committee considered the application of those laws in its Report on the Attribution of Civil Liability for Accidents Involving Autonomous Cars (published September 2020) and found that product liability presents the same difficulties as negligence because the claimant generally still has to show some fault on the manufacturer’s part (ie, prove there is a “defect” with the software) (see (5.17)–(5.18) of the Report).

Whether strict liability will be imposed for damage arising from the use of AI remains to be seen, as policymakers must strike a balance between not stifling innovation and enabling claimants to obtain a remedy with ease.

At present, there are no proposed regulations regarding the imposition and allocation of liability for the use of AI.

The Singapore Academy of Law’s Law Reform Committee has issued two reports that make recommendations on the application of the law to robotic and AI systems in Singapore, namely:

  • Criminal Liability, Robotics and AI Systems (February 2021); and
  • The Attribution of Civil Liability for Accidents Involving Autonomous Cars (September 2020).

The Model Framework highlights the risk of “bias” in the data used to train the AI model and proposes some solutions to minimise it. The IMDA/PDPC acknowledge the reality that virtually no dataset is completely unbiased; however, where organisations are aware of this possibility, it is more likely that they can take steps to mitigate it. Organisations are encouraged to collect data from a variety of reliable sources and to ensure that the dataset is as complete as possible. It is noted that premature removal of data attributes may make it difficult to identify inherent biases in the data.

In addition, the model should be tested on different demographic groups to see if any groups are being systematically advantaged or disadvantaged. Running through the questions in the ISAGO or AI Verify will also help organisations to reduce bias in the AI development process. In relation to LLMs, the IMDA’s October 2023 paper on “Cataloguing LLM Evaluations” sets out recommended evaluation and testing approaches for bias.
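As a simple illustration of the group-level testing described above, the following sketch compares positive-outcome rates across demographic groups and flags large disparities. The sample data and the 0.8 ratio threshold are illustrative assumptions only and are not prescribed by the Model Framework or the IMDA.

    from collections import defaultdict

    def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
        # records: (demographic group, whether the individual received the positive outcome)
        totals: dict[str, int] = defaultdict(int)
        selected: dict[str, int] = defaultdict(int)
        for group, was_selected in records:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {group: selected[group] / totals[group] for group in totals}

    def flag_disparities(rates: dict[str, float], ratio_threshold: float = 0.8) -> dict[str, bool]:
        # Flag any group whose rate falls below `ratio_threshold` of the best-performing group's rate.
        best = max(rates.values())
        return {group: (rate / best) < ratio_threshold for group, rate in rates.items()}

    sample = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    print(rates)                    # approximately {'A': 0.67, 'B': 0.33}
    print(flag_disparities(rates))  # {'A': False, 'B': True} - group B is flagged for review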

Most recently (in February 2025), the IMDA released the results of the “Singapore AI Safety Red Teaming Challenge”, in which partner institutes across nine countries (in ASEAN, China, India, Japan and South Korea) tested how effective the safety guardrails in commonly used LLMs were at filtering out biased stereotypes in the output when prompts are in regional languages rather than English. The goal was to improve AI safety for the unique cultural and linguistic contexts of the Asia-Pacific region, as much of today’s AI red-teaming and bias research is Western-centric.

There have not been any reported regulatory actions or judicial decisions with regard to algorithmic bias in Singapore.

Generally, biometric data such as fingerprints and likeness – when associated with other information about an individual – will form personal data under the PDPA. As such, any organisation that collects, uses or discloses such data will be subject to the obligations under the PDPA.

The PDPC has released the Guide on Responsible Use of Biometric Data in Security Applications. This guide specifically addresses the use of biometric data in relation to security cameras and CCTVs for security monitoring and facial or fingerprint recognition systems for security purposes to control movement in and out of premises. It highlights certain risks of using such data and measures that organisations may implement to mitigate the risks.

First, there is a risk of identity spoofing, where a synthetic object (such as a 3D mask) is used to fake the physical characteristics of an individual in order to obtain a positive match in the system. Organisations should thus consider implementing anti-spoofing measures such as liveness detection, or installing such biometric systems near a manned security post.

Second, there is a risk of error in identification through false negatives or false positives. This may occur when the threshold for matching is set either too high or too low and the system fails or wrongly identifies a person. Organisations should thus implement a reasonable matching threshold, taking into account industry practice, and/or have additional factors of authentication to complement the existing matching thresholds.

Finally, there are systemic risks to biometric templates: the uniqueness of a biometric template may be diluted (and thus become vulnerable to adversaries) if the algorithm used to create the template is used multiple times by the service provider across different sets of customers. Organisations should consider encrypting the biometric templates in the database or using customised algorithms.
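To illustrate the second risk described above, the following toy sketch shows how the choice of matching threshold trades off false acceptances against false rejections. The similarity scores and thresholds are hypothetical; real systems derive them from the matcher’s actual score distributions and the organisation’s risk appetite.

    def accept(similarity_score: float, threshold: float) -> bool:
        # Accept the claimed identity only if the match score clears the threshold.
        return similarity_score >= threshold

    genuine_scores = [0.92, 0.88, 0.79, 0.95]    # comparisons of a person against their own template
    impostor_scores = [0.41, 0.55, 0.62, 0.83]   # comparisons against other people's templates

    for threshold in (0.60, 0.75, 0.90):
        false_rejections = sum(not accept(s, threshold) for s in genuine_scores)
        false_acceptances = sum(accept(s, threshold) for s in impostor_scores)
        print(f"threshold={threshold}: false rejections={false_rejections}, false acceptances={false_acceptances}")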

The Model Framework encourages organisations to consider the appropriate level of human oversight in AI-augmented decision-making. Broadly speaking, there are three degrees of human oversight:

  • human-in-the-loop – the human is in full control and the AI only provides a recommendation;
  • human-out-of-the-loop – there is no human oversight and the AI is in full control; and
  • human-over-the-loop – the human is monitoring or supervising the output and can take control in the event of unexpected or unusual cases.

In determining the level of human involvement required, the Model Framework sets out the following factors:

  • probability of the harm occurring (high/low);

  • severity of the harm occurring (high/low) – for example, the impact of wrong medical diagnosis compared with the consequences of shopping recommendations;
  • nature of the harm (whether physical or intangible in nature);
  • reversibility of the harm, including the avenues for recourse of the individual; and
  • whether it is feasible or meaningful for a human to be involved at all (human involvement is not feasible in high-speed financial trading as per the case of Quoine).

Lastly, the Model Framework encourages organisations to disclose their use of AI so that persons are aware that they are interacting with it and, in particular, to:

  • explain how AI is used in the decision-making process and what factors are taken into account in making the decision;
  • offer an option to opt out from the use of AI, if it is feasible to do so; and
  • allow affected persons to appeal against an AI decision that materially affects them ‒ the person should be given enough information about the reasons for the previous decision so that the person can effectively craft their appeal.

To build trust in the use of AI, the Model Framework encourages organisations to ensure that consumers are aware that they are interacting with AI (whether in the case of chatbots or other technologies that are a substitute for services rendered by natural persons). For more details on disclosures, see 11.3 Automated Decision-Making.

Singapore has also stated (in the ASEAN Guide on AI Governance and Ethics) that AI systems should not be used to manipulate consumer behaviour ‒ namely, that “AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society”.

The ASEAN Guide on AI Governance and Ethics recommends that deployers who procure AI systems from third-party developers should “appropriately govern their relationships with these developers through contracts that allocate liability in a manner agreed between parties”. The deployer should also require the developer to assist it in meeting its transparency and explainability obligations to both customers and regulators. The ASEAN Guide on AI Governance and Ethics also recommends that deployers and developers collaborate to conduct joint audits and assessments of the AI system, and testing frameworks such as Singapore’s AI Verify may be used for this purpose.

AI can be used to screen CVs and shortlist candidates to move to the next round, thereby making the hiring process more efficient. However, an AI system is only as good as the humans who programmed it, and it is also susceptible to biases in the data it is trained on – for example, the training data may be weighted heavily in favour of one gender for a role.

The Tripartite Guidelines on Fair Employment Practices set out fair employment practices for employers to abide by. Employees must be selected on the basis of merit (ie, skills and experience), regardless of their age, race, gender, religion, marital status and family responsibilities, or disability. Therefore, automated employment screening tools must not take into account such characteristics (with the exception of gender where it is a practical requirement of the job – for example, hiring a female masseuse to do spa treatments for female customers).

The Ministry of Manpower can take action against employers who do not follow the Tripartite Guidelines by curtailing their work pass privileges, such that they may not apply for new work passes or renew the work passes of their existing employees. Singapore also passed the Workplace Fairness Act on 8 January 2025, to complement the existing Tripartite Guidelines. The Act is not yet in force as at the time of writing.

Although organisations generally require consent to collect, use or disclose employees’ personal data, they may rely on two exceptions under the PDPA to do so without obtaining consent from the individual. However, the organisation must still act based on what a reasonable person would consider appropriate in the circumstances – it does not have carte blanche to collect every single piece of personal data about an employee through its employee monitoring software. This is because the employer’s monitoring of the employee’s email account, internet browsing history, etc, can reveal very private information about the employee, including private medical information that may not be relevant to the employee’s workplace performance.

The first exception is where the collection, use or disclosure of personal data is for the purpose of managing or terminating an employment relationship between the organisation and the individual. However, to rely on this exception, the organisation must inform its employees of the purposes of such collection, use or disclosure ‒ for example, through the employment contract or employee handbooks. The second exception is where the collection, use or disclosure of personal data about an individual is necessary for evaluative purposes (ie, for determining the suitability or eligibility of the individual for employment, promotion, or continuance in employment).

Although consent may not be needed to collect such data, organisations should be aware that other obligations under the PDPA – for example, the protection obligation to prevent unauthorised access to the data – continue to apply.

A Parliamentary question of 12 September 2022 concerned whether the government will:

  • consider regulating platform companies to ensure they do not encourage excessive risk-taking (eg, taking on too many jobs in an hour or riding during dangerous weather) by the workers to fulfil orders; and
  • study the AI and algorithms of such companies to ensure this is not the case.

The Ministry of Manpower (MOM) responded that it will be “cautious” about regulating the incentives and algorithms of such companies. The MOM would resolve the issue through discussions with tripartite partners and strengthening protections for workers, “rather than jump to regulation and risk over-regulation”.

The government has since accepted the recommendations of the Advisory Committee on Platform Workers in November 2022, thereby strengthening protections for platform workers in terms of financial protection in case of work injury, improving housing and retirement adequacy, and enhancing representation for such workers. The Platform Workers Act 2024 was subsequently introduced to implement the recommendations of the Advisory Committee.

Firms that use AI and data analytics to offer financial products and services should reference the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector, which was published by the MAS in 2018. The principles align with the Model Framework and are voluntary; financial services companies must continue to comply with all other applicable laws and requirements.

The MAS also leads an industry consortium (“Veritas”) that creates frameworks for financial institutions to assess their use of Artificial Intelligence and Data Analytics (AIDA) solutions against the FEAT principles. The White Papers arising from each phase of Veritas (there have been three phases to date since 2019) are published on the MAS website.

Digital advisers (or robo-advisers) are automated, algorithm-based tools with limited or no human adviser interaction. Where such tools are used to provide advice on investment products, the MAS Guidelines on Provision of Digital Advisory Services state that they should minimally provide the client with the following information:

  • assumptions, limitations and risks of the algorithms;
  • circumstances under which the digital advisers may override the algorithms or temporarily halt the digital advisory service; and
  • any material adjustments to the algorithms.

As mentioned in 1.1 General Legal Background, the HPA requires medical devices to be registered. AI-MDs are a type of “medical device” (as defined in the HPA) – hence they must be registered – and they are subject to further requirements for registration by the HSA’s Regulatory Guidelines for Software Medical Devices. Under those Guidelines, additional information must be submitted when registering the AI-MD – for example, information on the datasets used for training and testing and a description of the machine-learning model that is used in the AI-MD.

However, when it comes to liability for errors made by an AI-MD or by any other AI application, there are no judicial decisions yet as to who is liable (or jointly liable) for the error – ie, whether the hospital, doctor, developer of the AI system, etc, is liable.

Singapore’s Road Traffic Act 1961 provides a regulatory sandbox for the use and testing of AVs – see Sections 2(1), 6C, 6D and 6E, and the Road Traffic (Autonomous Motor Vehicles) Rules 2017 (“the Rules”). The Rules prohibit the trial or use of an AV without authorisation and, among other things, set out:

  • the application process for authorisation;
  • the conditions of authorisation (eg, requiring a qualified safety driver to be seated in the AV to monitor its operation and take over if necessary);
  • that a data recorder must be installed in the AV;
  • that there must be liability insurance or security in lieu of liability insurance; and
  • that any incident or accident involving the AV must be reported to the Land Transport Authority.

When asked about liability for AV accidents in 2017, the then-Second Minister for Transport responded: “The traditional basis of claims for negligence may not work so well where there is no driver in control of a vehicle. When presented with novel technologies, courts often try to draw analogies to legal constructs in other existing technologies. In the case of AVs, the courts have autopilot systems for airplanes, autopilot navigational systems for maritime vessels, and product liability law to draw references from. As with accidents involving human-driven vehicles, it is likely that issues of liability for AVs will be resolved through proof of fault and existing common law.”

Please see 10.1 Theories of Liability, which sets out potential remedies when AI systems do not function as intended or cause harm.

Please see 9.1 AI in the Legal Profession and Ethical Considerations, where the issues across professional services are similar. Professionals must be mindful of the risks and limitations of AI and take steps to verify the accuracy of the output, critically analyse it for bias, and ensure that confidential client data is not input into an AI system where it can be accessed by unauthorised third parties.

In terms of liability, a person cannot delegate their professional responsibility (eg, a duty to provide correct information) to an AI system. Hence, if they were to rely on the output of an AI system, they would ultimately remain responsible for it.

A report (“When Code Creates: A Landscape Report on Issues at the Intersection of Artificial Intelligence and Intellectual Property Law”), published on 28 February 2024 by IPOS and the Singapore Management University (“IPOS-SMU Report”), highlights key IP issues arising from the use of generative AI.

Use of Copyrighted Content to Train a Generative AI System

No cases have yet been brought before the Singapore courts.

In Singapore, under Section 244 of the Copyright Act 2021, making a copy of any copyrighted work is permissible if it is for the purpose of computational data analysis (as defined in Section 243 of the Copyright Act 2021 – eg, using images to train a computer program to recognise images) or preparing the work for computational data analysis, provided that certain conditions are met. Singapore also has the fair use exception under Section 190 of the Copyright Act 2021. Both Sections 190 and 244 of the Copyright Act 2021 have not yet been tested in Singapore courts in the context of training generative AI systems.

Protection of the Output of the Generative AI System Under Copyright and/or Patent Laws

This is a developing area of law both overseas and in Singapore. The IPOS-SMU Report highlights that there is a spectrum of AI involvement and draws a distinction between “AI-generated” inventions and works (ie, no human intervention) and “AI-assisted” inventions and works (ie, where AI is used as a tool, like a paintbrush).

In relation to copyright, the current position under the Copyright Act 2021 is that the author must be a natural person. Hence, whether copyright can subsist in the output of generative AI is likely to depend on two factors:

  • the extent to which the human involved in prompting the generative AI exercised creativity in the prompting process and the subsequent editing of the output; and
  • the nature of the output of the generative AI (as not all works are by their nature protected by copyright).

In relation to patents, the “inventor” must also be a natural person under Singapore law. As with copyright, the output may be protected depending on the level of involvement of the human who prompted the generative AI.

Liability for Copyright Infringement Resulting from the Output of Generative AI

This is also a developing area of law in Singapore and around the world. Whether the use of generative AI’s output can result in a person being liable for copyright infringement if the output is substantially similar to an existing work depends in part on how the generative AI works – ie, how it is trained and how it produces its output. LLMs such as ChatGPT, for example, generate text based on the statistical probability of one word appearing after another, and this may suffice as an explanation for the similarities between the works – although cases running this defence are still making their way through the courts.

In theory, AI image generators also create a new image based on the text prompt received – albeit not by replicating an existing image (or part of it) that they have been trained on. Instead, AI image generators produce their own image based on their own “understanding” of what the “essence” of an object is after being trained on tens of thousands of photographs of the object.
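The “statistical next-word” process described above can be illustrated with a toy bigram model, which draws each word from a probability distribution conditioned on the previous word. The vocabulary and probabilities below are invented for illustration; real LLMs learn distributions over tokens using neural networks, but generation proceeds through the same kind of loop.

    import random

    # Hypothetical bigram probabilities of the next word, given the previous word.
    NEXT_WORD_PROBS = {
        "the":   {"court": 0.5, "model": 0.3, "work": 0.2},
        "court": {"held": 0.7, "found": 0.3},
        "model": {"generates": 0.6, "predicts": 0.4},
    }

    def generate(start: str, max_words: int = 5, seed: int = 0) -> str:
        random.seed(seed)
        words = [start]
        while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
            candidates = NEXT_WORD_PROBS[words[-1]]
            next_word = random.choices(list(candidates), weights=list(candidates.values()))[0]
            words.append(next_word)
        return " ".join(words)

    print(generate("the"))  # eg "the court held" - the output depends on the sampled path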

For more details on IP protection of an AI system itself (and not its output), please see 15.2 Applicability of Patent and Copyright Law and 15.3 Applicability of Trade Secrecy and Similar Protection.

Protecting AI Innovations Through Patents

Under Section 13 of the Patents Act 1994, an invention must fulfil the following three conditions to be patentable:

  • the invention must be new;
  • the invention must involve an inventive step; and
  • the invention must be capable of industrial application.

However, not all inventions are eligible for patent protection (even if they meet the three conditions). The Examination Guidelines for Patent Applications of the IPOS are instructive. Neural networks, support vector machines, discriminant analysis, decision trees, k-means and other such computational models and algorithms applied in machine learning are mathematical methods in themselves and are thus not considered to be inventions by the IPOS.

However, where the claimed subject matter relates to the application of a machine-learning method to solve a specific (as opposed to a generic) problem, this could be regarded as an invention because the actual contribution of the claimed subject matter goes beyond the underlying mathematical method. Solving a generic problem by using the method to control a system, for example, is unlikely to cross the threshold. The application must be a specific one, such as using the method to control the navigation of an AV.

Protecting AI Innovations Through Copyright

Source codes and AI algorithms are protected by copyright.

Protecting Output Generated by AI

The extent to which copyright and/or patent laws protect the output of generative AI systems is discussed in 15.1 IP and Generative AI.

AI innovations may also be protected under the law of confidence, as set out in the IPOS’ IP and Artificial Intelligence Information Note. Generally, confidential information refers to non-trivial, technical, commercial or personal information that is not known to the public, whereas trade secrets usually describe such information with commercial value.

Information will possess the quality of confidence if it remains relatively secret or inaccessible to the public in comparison with information already in the public domain. Therefore, it is important to secure the confidential information by implementing non-disclosure agreements, encrypting materials, and classifying information so as to limit access to only select groups of people.

However, it is not possible to protect an AI innovation under both patent law and the law of confidence, because the former requires public disclosure, which destroys the quality of confidence. Therefore, when deciding which regime to use to protect their work, AI innovators should consider whether the invention constitutes patentable subject matter and whether it is likely to be made public soon or can easily be derived by others through reverse engineering.

See the issues outlined in 15.1 IP and Generative AI.

See the issues outlined in 15.1 IP and Generative AI.

Pricing algorithms range from those that monitor and extrapolate trends in prices in the market to those that can weigh information such as supply and demand, customer profile and competitors’ pricing in order to make real-time adjustments to prices. Such algorithms raise three key issues of concern when it comes to competition law. The Competition and Consumer Commission of Singapore (CCCS) announced in June 2024 that it is working with the IMDA to develop an extension of the “AI Verify” toolkit for organisations to test for potential anticompetitive behaviour in their AI systems, such as recommending prices that may lead to collusive outcomes, or preferring certain products over others.
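As a simple illustration of the kind of algorithm described above, the following hypothetical sketch monitors competitors’ prices and adjusts the seller’s own price in real time. The rule and figures are invented; the point is that even a simple, independently adopted rule of this kind can contribute to an alignment of prices across a market.

    def reprice(own_cost: float, competitor_prices: list[float], min_margin: float = 0.05) -> float:
        # Undercut the cheapest observed competitor by 1%, subject to a cost-plus-margin floor.
        floor = own_cost * (1 + min_margin)
        target = min(competitor_prices) * 0.99
        return round(max(floor, target), 2)

    print(reprice(own_cost=10.00, competitor_prices=[12.50, 11.80, 13.10]))  # 11.68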

Algorithmic Collusion

The individual use of a pricing algorithm does not fall foul of competition law. However, where organisations have an explicit agreement to collude and use pricing software to implement their agreement, the CCCS has unequivocally stated that this will contravene Section 34 of the Competition Act 2004 as an agreement that prevents, restricts or distorts competition.

If organisations each use their own distinct algorithms, with no prior or ongoing communication, but nevertheless achieve an alignment of market behaviour, the CCCS will take a fact-centric approach to determine whether the collusive outcomes can be attributed to the organisations.

Personalised Pricing

Where an organisation with a dominant position in the market utilises AI to implement personalised pricing, it may be deemed an exclusionary abuse of dominance and infringe Section 47 of the Competition Act 2004. Specifically, if personalised pricing is used to set discounts that foreclose all or a substantial part of a market, the CCCS may find that the organisation has abused its dominance in the market.

Liability Where AI Learns Collusive Behaviour

If an AI system autonomously learns and implements collusive behaviour, the CCCS is unlikely to accept that there is no fault on the part of the organisation that deploys the AI system. Although it is non-binding, the Model Framework states that organisations should be able to explain decisions made by AI. Accordingly, organisations are unlikely to be able to disclaim responsibility for the decisions made by the AI they deploy.

The CCCS has also highlighted that the development of AI models requires access to substantial amounts of compute power (including specialised chips), as well as data and technical expertise, and not all companies are able to access these resources equally. The CCCS will continue to review developments in this area.

Singapore’s Cybersecurity Act 2018 (CA) sets out requirements for certain organisations (eg, owners of critical information infrastructure) to take measures to prevent, manage and respond to cybersecurity threats and incidents, and also regulates cybersecurity service providers, among other matters. Amendments to the CA were passed in Parliament in May 2024 (but are not yet in force as at the time of writing), expanding the regulatory ambit of the CA to four new categories of entities, including entities providing cloud computing services or data centre facility services.

The CA is technology-agnostic: so long as an organisation falls within the description of an entity that the CA seeks to regulate (regardless of whether it develops, deploys or uses AI), the obligations under the CA to provide information, report incidents and comply with codes, standards and directions, etc, will apply.

The Computer Misuse Act 1993 (CMA) complements the CA, where it targets cybercrime in Singapore. Unauthorised access (ie, hacking) or modification of computer material is an offence, as is unauthorised interference with or obstruction of the lawful use of a computer (eg, launching cyberattacks).

The CSA released the Guidelines and Companion Guide on Securing AI Systems on 15 October 2024, which set out best practices for owners of AI systems to adopt to secure their AI systems at every stage from designing to deployment to disposal at the end of the life cycle. The CSA emphasises that AI systems should be “secure by design and secure by default”, and that AI systems are not just vulnerable to classic cybersecurity risks, but also to new forms of attacks like data poisoning (injecting corrupted data into training data sets) or extraction attacks (where the model is probed to expose sensitive or restricted data).

Various sectoral regulators also issue sector-specific cybersecurity guidelines (which are general in nature rather than focusing solely on AI), such as the MAS with its Technology Risk Management Guidelines and Cyber Hygiene Guidelines.

With Singapore’s goal of net-zero emissions by 2050, the IMDA has highlighted the need for “Green AI”, where organisations develop energy-efficient AI systems powered by low- or zero-carbon energy sources. Data centres are a priority for the IMDA because, while they are essential to powering AI and digital services, they also consume large amounts of power and water and produce a large carbon footprint. The IMDA has introduced the “Green Data Centre Roadmap” (2024) for data centres to reduce their environmental impact, as well as the tropical data centre standard (SS 697:2023) – the world’s first sustainability standard for data centres in tropical climates (as tropical climates present additional challenges in operating data centre cooling systems).

With the abundance of AI guidelines and frameworks introduced across jurisdictions, it can be difficult for organisations to pick one to start with, especially if they intend to deploy their AI solution across multiple jurisdictions. Nevertheless, it is good to take one framework as a starting point or baseline and make improvements/adjustments from there, incorporating recommended actions from other jurisdictions that may not be found locally. Singapore’s ISAGO is useful for both developers and deployers of AI solutions, and it is broadly aligned to the AI governance frameworks in key AI jurisdictions. Organisations may also assess their systems with AI Verify (although the ISAGO is simpler, as it is a checklist with no technical tests).

Organisations should also create a generative AI use policy to set common expectations across employees on how they may use (or not use) generative AI tools such as ChatGPT, given the prevalent use of such tools. The policy can give examples of acceptable and unacceptable prompts for clarity.

Drew & Napier LLC

10 Collyer Quay
10th Floor Ocean Financial Centre
Singapore 049315

+65 6535 0733

+65 6535 4906

mail@drewnapier.com
www.drewnapier.com