Contributed By Burges Salmon
The legal landscape around AI is broad and complex, spanning a range of legal areas which professionals and organisations should consider. Issues arise throughout the procurement, development, building, licensing and use of an AI system.
Risks to consider include the following.
Various industries are applying AI and machine learning, including:
AI models developed and deployed across industries may be generative, creating new content learned from existing data (such as in drug discovery), or predictive, making predictions based on historical data (such as models forecasting asset prices).
A number of industry innovations are driving the use of AI and machine learning, such as no-code AI, which allows non-technical users to build AI solutions, and the development of foundation models (such as DeepSeek) faster and at lower cost than previously thought possible. These developments have shifted perceptions and attitudes towards the use and deployment of AI systems within businesses and by governments. The potential benefits to consumers and businesses are vast, including improved services and products and advancements in research.
One example of a cross-industry initiative is the AI Opportunities Action Plan, a UK government-led programme to scale AI adoption across sectors, including the public and healthcare sectors, through measures such as building additional data centres, creating national data libraries and increasing access to compute.
The UK government is actively involved in promoting the adoption and advancement of AI for industry use, evidenced by its commitment to a “pro-innovation” approach to the regulation of AI. For example, the UK government has launched the AI Opportunities Action Plan and released the AI Playbook, together with various guidance notes for public sector use of AI.
The UK government has also been looking to integrate AI into public services. Through these initiatives the UK government is aiming to bolster its position as a global leader in AI development. Under the National AI Strategy, the UK government has committed to ensuring long-term support for the AI ecosystem.
The UK government offers certain R&D-related tax credits to incentivise industry innovation in AI, as well as grants and funding opportunities for medium-sized businesses. It also offers the Global Talent Visa, which allows individuals to work in the UK if they are leaders or potential leaders in digital technology, which includes AI.
The UK’s approach to AI regulation is “pro-innovation” and principles-based, focusing on flexibility and adaptability.
The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology. This differs from the EU approach of creating a standalone regulator (the “European AI Board”) and introducing overarching AI-specific regulation (the “EU AI Act”) to sit above existing regulation.
Industry regulators are expected to interpret and apply these principles within their respective domains using existing powers supplemented with additional funding and regulatory guidance from the UK government. In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles:
In the 2024 King’s Speech, the UK government indicated an intention to regulate frontier AI models; however, no proposed regulation or consultation has yet been published.
There is currently no AI-specific legislation governing the development, deployment or use of AI in the UK; see 3.1 General Approach to AI-Specific Legislation. However, various existing laws and regulations do apply to the various stages of the AI life cycle.
Various private members’ bills have been proposed in the House of Lords, such as the Artificial Intelligence (Regulation) Bill. However, private members’ bills often do not become law and are, in any event, subject to change during the legislative process. Nevertheless, they do give an indication of political direction and help shape the political debate.
The UK government has issued non-binding, AI-specific guidance regarding the use of AI in the UK. One example is the AI Playbook, published in early 2025, which offers guidance on using AI safely, effectively and securely for civil servants and people working in government organisations.
In line with the UK government’s “pro-innovation” approach:
The UK government also requires public sector organisations to comply with the Algorithmic Transparency Recording Standard, which makes publicly available (subject to exceptions) specified information about public sector use of algorithms that: (i) have a significant influence on a decision-making process with direct or indirect public effect; or (ii) directly interact with the general public.
See also 3.1 General Approach to AI-Specific Legislation.
This section is not relevant in this jurisdiction.
This section is not relevant in this jurisdiction.
This section is not relevant in this jurisdiction.
At the time of writing, there have not been any AI-specific amendments to UK data protection legislation or to information and content laws. However, the UK regulator for data protection, the ICO, has released specific guidance in respect of the interaction between UK data protection legislation and the use/deployment of AI-based solutions.
In its response to the UK White Paper (regarding the regulation of AI), the UK government confirmed that its working group had failed to develop a voluntary code of practice for copyright and AI, which had been intended to make licences for text and data mining more widely available. In light of this, the UK government launched a consultation on copyright and AI in December 2024 setting out a number of options for reform but strongly advocating an approach similar to that of the EU. This would allow AI developers to train AI models on large volumes of copyright material under a data mining exception, while giving rights holders the opportunity to opt out. This approach would be underpinned by supporting measures on transparency. The consultation closed on 25 February 2025, and updates from the UK government are eagerly awaited.
The UK government is taking a pro-innovation approach to the regulation of AI and has released certain non-binding principles, which industry regulators are expected to interpret and apply in line with existing laws. It has been confirmed that “future binding measures” are likely to be introduced for highly capable general-purpose AI models (the content of such measures is currently unknown).
Under the previous UK government an Artificial Intelligence Bill had been proposed, but, following the general election, the new government has dropped the proposed Bill. The new UK government may release a new AI Bill during 2025, which will likely impact the regulatory landscape for AI in the UK.
The key current trend in the UK in respect of AI regulation is the tension between how far AI should be regulated and the maintenance of a “pro-innovation” approach.
The EU AI Act will also affect UK businesses to the extent that UK-based providers of AI systems place their systems on the market in the EU or otherwise supply into the EU.
Very few UK judicial decisions have dealt with AI directly. The most noteworthy example is the Supreme Court’s decision in Thaler v Comptroller-General of Patents, Designs and Trade Marks, which confirmed that UK patent applications must identify a human “inventor”. However, the case concerned the formalities of patent registration only, and not the patentability of inventions created by or with the aid of AI, where a human is listed as the inventor on the application form; see 15.2 Applicability of Patent and Copyright Law for more information.
Also notable is Getty Images’ ongoing copyright, trade mark and database right infringement claim against Stability AI in respect of its “Stable Diffusion” text-to-image generative AI model. A trial is not expected until 2025 at the earliest.
In the context of privacy and data protection concerning automated systems, in 2020 the Court of Appeal held that South Wales Police’s use of automated facial recognition technology was unlawful under Article 8 of the ECHR (right to privacy) and the UK Data Protection Acts 1998 and 2018.
The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology.
The regulators expected to play leading roles in the UK regulation of AI include the following, which have publicised strategic updates in response to the White Paper and, in some cases, additional guidance:
The above are members of what is known as the Digital Regulation Cooperation Forum (DRCF). Other regulators include:
In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles, as outlined in 3.1 General Approach to AI-Specific Legislation.
Please see 3.1 General Approach to AI-Specific Legislation.
In May 2024, the UK government required regulators to each set out their approach to AI regulation and some regulators have now issued AI-related guidance. Regulators that did produce such updates are listed in 5.1 Regulatory Agencies.
Whilst largely non-binding, the regulators’ strategic approaches to AI regulation and their AI guidance are helpful resources for understanding compliance with existing UK laws in respect of the procurement, development and deployment of an AI solution.
Further, the Department for Science, Innovation and Technology (DSIT), the AI Security Institute and the DRCF have all released other resources to aid the implementation of certain AI-related aspects of the UK government’s industrial strategy. These include the following:
In May 2022, the ICO fined US-based Clearview AI more than GBP7.5 million for misusing UK residents’ publicly available personal data by scraping images from social media without consent to create a database of 20 billion images. Clearview AI used its database to provide facial recognition services to its customers.
Clearview successfully appealed: in October 2023, it was found that the ICO lacked jurisdiction because Clearview only provided its services to law enforcement/national security bodies outside the UK and EU, falling within an exception to the UK GDPR applicable to the acts of foreign governments. As of 31 January 2025, the Upper Tribunal has granted the ICO permission to appeal, and a date for the hearing is yet to be set.
The ICO issued a preliminary enforcement notice to Snap Inc over potential failure to properly assess the privacy risks posed by its generative AI chatbot “My AI”. The investigation provisionally found that Snap failed to adequately identify and assess the risks to several million “My AI” users in the UK, including children aged 13 to 17. The ICO concluded its investigation into Snap in May 2024, satisfied that Snap had undertaken a risk assessment compliant with data protection law, and issued a warning to the industry to engage with data protection risks of generative AI before products are brought to market.
Separately, the CMA has investigated a number of AI partnerships between large technology companies and AI organisations. In 2024, the CMA concluded that the following partnerships did not qualify for further investigation:
The CMA is still considering whether Microsoft’s partnership with OpenAI (the creator of ChatGPT) amounts to a de facto merger and, if so, whether it could impact competition.
Standards development organisations such as the International Organization for Standardization (ISO)/the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE) and the British Standards Institution (BSI) have paved the way for consensus-driven standards through multi-stakeholder discussions to promote global alignment. Standards can be grouped as follows:
Following on from the “introduction to AI assurance” guidance note published in 2024, the DSIT has also announced a new voluntary Code of Practice for the Cyber Security of AI, which it claims will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute.
In December 2023, the ISO and the IEC jointly published a new global management standard for artificial intelligence, known as ISO/IEC 42001, which forms part of a broader series of AI standards including:
While these standards do not carry the same legal weight as legislation, they are expected to have a significant impact on how organisations demonstrate responsible and transparent use of AI. References to global standards are often included where organisations are providing goods and services on both a business-to-consumer and business-to-business basis. In addition, they are often referenced in commercial contracts as a way of establishing expectations of regulatory guidance and best practice.
As these standards evolve over time in line with emerging trends, risks and opportunities in the AI space, they have the potential to act as the default standard that organisations will be expected to adhere to.
Studies in the UK have indicated that AI usage across the UK public sector remains inconsistent, despite government messaging increasingly encouraging the utilisation of AI in the public sector. Within government, a 2024 study by the National Audit Office found that 70% of government bodies surveyed are piloting or planning the use of AI. February 2025 saw the launch of the Artificial Intelligence Playbook for the UK Government, which aims to provide departments and public sector organisations with technical guidance on the safe and effective use of AI.
The Incubator for AI in Government (i.AI) is an agile technical delivery team within the DSIT that builds AI tools for use across the public sector. To date, i.AI has created a suite of bespoke tools for civil servants as well as a range of tools for wider public sector use.
Current uses of AI across UK government include GOV.UK Chat, a pilot tool that uses relevant website content to generate responses to natural language queries by users, aiming to simplify navigation across more than 700,000 pages on GOV.UK. Other uses include the development of a tool which aims to improve access to user research undertaken across the NHS, and a Crown Commercial Service (CCS) tool which generates relevant agreement recommendations for customers based on spend and customer market segmentation data.
More broadly, facial recognition technology is becoming increasingly common in UK law enforcement, including the use of retrospective facial recognition as part of criminal investigations as well as live facial recognition (eg, at large-scale sporting events).
There are risks when using AI in the public sector and more widely, such as the risks of biases and discrimination flowing from foundation model outputs, as well as the risk of nefarious actors using foundation models to intentionally cause harm (with the added legitimacy of public sector usage).
See 4.1 Judicial Decisions for a discussion of the judicial decision relating to South Wales Police’s use of automated facial recognition technology.
There are currently no relevant pending cases that are publicly available. This may reflect the early stage of some challenges or the nature of the court system. However, there are indications that the government has received potential challenges to its use of algorithmic decision-making and AI.
AI plays a significant role in national security. Use cases include AI systems deployed to:
National security considerations will play a pivotal role in shaping the future government legislation, regulations and policies on AI systems, such as:
Also, the UK’s “AI Safety Institute” was rebranded as the “AI Security Institute” (AISI) on 15 February 2025 to reflect AISI’s focus on serious AI risks such as “how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks and enable crimes such as fraud and child sexual abuse”. AISI will focus on advancing its understanding of the most serious risks posed by AI “to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops”.
The key issues and risks posed by generative AI are as follows.
Where AI models are used to process personal data, at a fundamental level those systems need to comply with the principles of data protection law by design and by default.
Under data protection law, individuals have various rights regarding their personal data, and these rights apply wherever that personal data is being processed by an AI system.
Where an individual exercises their right to the “rectification” or “erasure” of their personal data and it is not possible to separate the individual’s data from the AI model in order to comply with those rights, the model may need to be deleted entirely to avoid a regulatory breach and potential enforcement action by the regulator.
The challenge for AI developers is designing AI systems that can automatically comply with the law. Stipulating data protection principles, such as “purpose limitation” and “data minimisation”, as part of the design architecture and engineering process is key to achieving successful compliance outcomes.
AI is anticipated to have a significant impact on the legal profession due to the knowledge-intensive and sometimes highly repetitive nature of the work. Use cases include:
The Law Society has released (and recently updated) guidance on how to manage the risks of adopting AI (which include IP, cybersecurity and data protection issues). The Bar Council has also released guidance on considerations when using ChatGPT and generative AI software based on large language models (LLMs). At the time of writing, the Law Society is expected to publish research on the impacts of AI specific to areas of law.
Of the risks already identified, ethical issues include the risks that those who use AI are not accountable or responsible for their actions, that AI is not used in the best interests of clients, and that appropriate transparency is not provided to stakeholders (such as clients, colleagues and courts) about how and when AI was used.
The UK does not have a specific liability framework applicable generally to harm or loss resulting from the use of AI. Therefore, individuals or businesses who suffer loss or damage caused by AI must generally seek redress under existing laws (eg, contract law, tort law or consumer protection legislation). However, the UK passed the Automated and Electric Vehicles Act 2018 and Automated Vehicles (AV) Act 2024, pursuant to which liability for damages caused by an insured automated vehicle when driving itself lies with the insurer (subject to exceptions).
To claim damages under contract, a claimant needs to prove that there was a valid contract, that the defendant breached the contract, and that the breach caused loss. Whilst this may be straightforward with simple products and services, establishing causation in an AI-based product or service may be more difficult: for example, demonstrating who caused the loss claimed in a complex and multi-stakeholder value chain.
In terms of trends, businesses are assessing whether or not they are sufficiently protected against liability risks arising from such emerging technologies, be it as operators, users or manufacturers. This is typically addressed through technical mitigations (such as system design and verification), ensuring that contractual arrangements with suppliers and/or customers are appropriate, or by obtaining appropriate insurance coverage (albeit AI systems may not neatly align with typical insurance principles).
The UK government’s stated approach continues to be pro-innovation, encouraging regulators to tackle AI opportunities and risks within their remits. As discussed in 10.1 Theories of Liability, with the exception of the Automated and Electric Vehicles Act 2018 and the AV Act 2024, liability must rely upon existing frameworks, and for the most part existing laws will apply to the allocation of liability in respect of AI.
The DSIT’s response to the UK AI White Paper confirms that regulation and binding measures on “highly capable general-purpose AI” are likely to be required in the future. The UK government has confirmed it will not “rush to regulate”, as “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation”. The UK’s prime minister confirmed in March 2025 that the UK government would regulate in a way that is “pro-growth and pro-innovation”.
Bias in predictive and generative AI systems can arise from biased training data, biases in the training and verification processes, and biased model choice and system design. Legally, there are concerns regarding discrimination, privacy and accountability, to name a few. Current legislation such as the Equality Act 2010 and data protection legislation aim to mitigate these risks. Examples of UK regulators’ proposals to mitigate risks include:
Key consumer areas at risk of bias from the use of AI systems include:
Businesses may find themselves subject to liability from individual claims and regulatory fines if found in breach of legislation such as the Equality Act 2010, data protection legislation and the FCA’s Consumer Duty requirements.
Businesses can take certain measures to address bias, such as:
An overarching issue in this area is the lack of a clear and consolidated regulatory response to facial recognition technology (FRT). At present, the UK approach is a combination of human rights law, data protection law, equality law and, in the context of law enforcement, criminal justice legislation. For completeness, it should be noted that the EU AI Act, while not directly effective in the UK, does directly address the processing of biometric data, and the UK will likely be influenced by the approach taken in the EU AI Act. In this regard, it is noted that “untargeted scraping to develop facial recognition databases” is included in the EU Commission’s Guidelines (dated 4 February 2025) on prohibited AI practices in connection with Article 5 of the EU AI Act.
FRT relies on individuals' personal data and biometric data, and its use raises a number of challenges from a data protection perspective. Biometric data is intrinsically sensitive and the use of FRT therefore gives rise to challenges around the necessity and proportionality of processing, as well as the need to identify a lawful basis for processing.
This is particularly true where FRT involves the automatic and indiscriminate collection of biometric data in public places, which is becoming increasingly common, whether for law enforcement purposes or for commercial purposes such as targeted advertising in the retail sector. In this context, issues include:
Companies utilising FRT should be cognisant of the risks associated with its use, particularly in relation to potential violations of data protection legislation and equality and anti-discrimination laws.
Automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on:
Article 22 of the UK GDPR restricts organisations’ ability to make solely automated decisions that result in a legal or similarly significant effect on an individual. In this context, “solely automated” means that the decision is made entirely by automated means, without any human influence on the outcome. It is worth noting that the Public Authority Algorithmic and Automated Decision-Making Systems Bill aims to regulate the use of automated and algorithmic tools in decision-making processes in the public sector by requiring public authorities to conduct impact assessments and adopt transparency standards. However, it remains to be seen whether this Bill will become law in the UK.
Decisions taken in this manner can potentially have a significant adverse effect, which is particularly concerning as there can often be a lack of understanding around how the decision-making process works. There is also a risk that inherent biases in the AI-based decision-making tools may lead to discriminatory outcomes.
Organisations that fail to comply with Article 22 may be subject to significant fines and liability to affected individuals, who may exercise their right to object under Article 21 of the UK GDPR as well as bringing a legal claim against the company.
Currently, there is no UK-wide regulatory scheme specific to AI. As noted in 3.1 General Approach to AI-Specific Legislation, existing laws continue to apply, as well as specific AI-related guidance issued by regulators.
For example, UK data protection legislation will almost always apply to the development, deployment and/or procurement of AI. One of the key principles under UK data protection legislation is transparency. The UK regulator for data protection, the ICO, has released certain AI-related guidance, in which it makes it clear that businesses must be transparent about how they process personal data in an AI system, stressing the importance of “explainability” in AI systems.
In addition, UK data protection legislation includes rules around the profiling of individuals and automated decision-making. If permitted, transparency with individuals is key.
It is also worth noting the Algorithmic Transparency Recording Standard which “helps public sector organisations provide clear information about the algorithmic tools they use, and why they’re using them”.
The UK government has published the Guidelines for Public Procurement of AI to support contracting authorities when engaging with suppliers of AI solutions, and therefore the following steps should also be considered by contracting authorities prior to procuring an AI solution:
Whilst these steps are focused on public sector organisations, they are also helpful for private sector organisations procuring AI solutions.
The Society for Computers and Law (SCL) AI Group has also produced sample clauses for transactions involving AI systems, which serve as a useful checklist of issues to consider when procuring AI solutions both in the private and public sector.
Businesses are increasingly turning to AI to drive efficiencies throughout the recruitment process, including when sourcing, screening and scoring potential candidates. However, there are a number of risks: for example, the AI solution could inaccurately screen candidates and a suitable candidate may be lost in the process.
The ICO released specific guidance in November 2024 on AI tools in recruitment. A number of risks were highlighted by the ICO’s audit (November 2024) of AI recruitment tool providers, which found significant room for improvement. In particular, the audit showed that:
The ICO has prepared a helpful list of six key questions for businesses thinking of adopting AI recruitment technologies, as a starting point.
Used properly, AI technologies can help make recruitment processes more transparent and cost-efficient. However, employers need to carry out due diligence before adopting AI tools in recruitment.
With an increase in homeworking since the COVID-19 pandemic, there has also been an increased use of monitoring by employers, which may include CCTV, attendance logs, email and telephone monitoring, keystroke-logging, browser monitoring, flight risk analysis and CV analysis (to consider potential new skills). Whilst these developments in technology could be considered as a new and untapped way of providing an employer invaluable information in relation to their workforce, some of these methods are highly intrusive.
Monitoring of the workforce has UK data protection legislation implications, and businesses will need to consider, amongst other things, whether such monitoring can be justified and whether it has an appropriate lawful basis.
Separately, information acquired from such technology could create broader employee relations issues, such as the employer gaining information that increases the risk of a potential discrimination claim (eg, where information comes to light about an employee’s health, triggering the employer’s proactive legal duty to make reasonable adjustments).
From an employment law perspective, if a digital platform is used (eg, car services and food delivery), there can be challenges when assessing the service provider’s employment status.
Where the platform is more intuitive and presents as having more control over the service provider, there is a greater risk that the service provider will not be considered genuinely self-employed, which may be contrary to the service provider’s intention. This has broader employment law implications. For example:
This issue was recently tested in the Supreme Court in relation to the employment status of Uber drivers. Uber argued that its drivers were self-employed with flexibility to pick up work at their discretion. However, the Supreme Court found the drivers to be “workers”, one of the reasons being the way Uber’s technology platform controlled its drivers, resulting in them not being considered genuinely self-employed.
AI is increasingly used by financial services firms, particularly in the following areas:
There is a broad range of existing regulations potentially applicable to firms using AI. The FCA takes a technology-agnostic approach, regulating firms’ activities (including those utilising AI) rather than AI technology itself. Therefore, the rules currently applicable to firms generally remain relevant in the context of AI, including the FCA’s Principles for Business, the Handbook and Consumer Duty. Other key areas for firms to consider when integrating AI into their business operations include the Senior Managers’ and Certification Regime and applicable data protection regulations.
However, the benefits of AI use come with associated risks, including risks to customers and to the markets. Risks include those related to:
Broader risks include those related to the following:
Financial services firms must ensure that they adequately eliminate or mitigate these risks and avoid customer detriment.
AI systems continue to be used widely across healthcare in the UK, both in relation to back office administrative functions as well as the delivery of healthcare services to patients. There are a large number of Medical Devices with regulatory approval that use AI systems, principally in radiology but also in other areas. The use of AI systems to structure and review electronic health records (EHR) is a growing area where there is continued debate about the extent to which such systems are Medical Devices or not. This is an area of scrutiny for regulatory authorities in the UK and overseas, with classification as a Medical Device depending on the intended use of the AI system in relation to the EHR.
There is currently no specific legislation in the UK that exclusively governs AI or its use in healthcare. A variety of existing regulations apply, including:
The MHRA is consulting on reforms to the UK Medical Devices regulatory regime, with changes expected in 2025, including in relation to requirements for, and the definitions of, AI as a Medical Device.
Data use and sharing is a key consideration where AI is used in healthcare, so organisations must comply with UK data protection legislation. For example, in order to provide training data for machine learning, operators must:
Following ethical guidelines is essential for the responsible use of data in healthcare AI.
Patient medical data is vulnerable to cyber-attacks and data breaches. As has been seen in the past, healthcare systems are open to attack and are often targeted by hackers due to the sensitive information they store. Ensuring that these areas have bolstered cybersecurity measures is vital for maintaining the security of patient data.
The Sudlow Review, published in November 2024, looked at how the UK can better utilise health data, and recommended a UK-wide system for standards and accreditations for any environment used to store health data.
Given the prevalence of AI systems in healthcare, providers should ensure that environments storing sensitive health data are secure by design and adhere to the highest security standards.
The AV Act 2024 received royal assent on 20 May 2024; however, it will not come into force until the relevant statutory instruments are made by the Secretary of State.
Under the Act, if an automated vehicle is authorised (following a successful self-driving test), an “authorised self-driving entity” will be legally responsible for the automated vehicle. For “user-in-charge” vehicles, the Act provides immunity from liability in certain circumstances and establishes when the “user-in-charge” will be liable (where the user is legally defined as a driver).
It is recognised that automated vehicles are likely to have access to and store personal data. Once the Act is fully implemented, other laws will continue to apply, such as UK data protection legislation.
The UK has participated in the World Forum for Harmonisation of Vehicle Regulations (a working party within the framework of the United Nations). The UK government has also published principles of cybersecurity for connected and automated vehicles, and has launched the AV Act Implementation Programme to secure the safe deployment of automated vehicles on roads in Great Britain.
In the UK, the legal framework for addressing product safety and liability largely remains as retained EU law post-Brexit, which broadly requires products to be safe in their normal or foreseeable usage.
Sector-specific legislation (such as for automated vehicles, electrical and electronic equipment and medical devices) may apply to some products that include integrated AI. However, on the whole, it is widely considered that existing rules do not comprehensively address the new and substantial risks posed by AI at the manufacturing stage.
In response, the UK government introduced the Product Regulation and Metrology Bill in September 2024, which gives the UK government powers to recognise the EU’s new Product Liability Directive (PLD). The PLD came into force on 8 December 2024, and (among other changes) expands the definition of a “product” to include software (encompassing computer programs and AI systems). The purpose of this update was to assist consumers in bringing damages claims against developers of AI systems and their liability insurers when something goes wrong with the operation of an AI system.
Professionals using AI in their services must adhere to existing professional standards and regulations, including adherence to sector-specific regulations and guidelines, such as those issued by the FCA and the Solicitors Regulation Authority.
Professionals must ensure that AI systems are designed and used in a manner that upholds professional integrity, competence and ethical conduct, which involves safeguarding client confidentiality through compliance with data protection laws, respecting intellectual property rights and, where necessary, obtaining client consent when using AI systems. Liability issues may arise if:
The use of generative AI raises issues of both IP protection and IP infringement. In the case of copyright (the primary IP right for the protection of literary, dramatic, musical or artistic works), protection will only arise if a work meets the criteria of originality. Originality implies a degree of human creative input that, in the context of generative AI, may be minimal, absent or difficult to prove.
If copyright does exist in works produced by generative AI, it may not be clear who the “author” is, and therefore who owns the copyright.
Users of AI tools should not assume they will automatically own any copyright, and should check the provider’s terms and conditions, which may assign ownership to the provider or give the provider a licence to use the works and/or materials the user inputs into the AI tool. Users should also be mindful of the potential for generative AI content to infringe third-party IP rights and, again, review the provider’s terms to check for appropriate protections.
In addition to the output of AI systems, organisations will need to be mindful of IP rights in the data used to train such systems. Although organisations may be aware of IP rights such as trade marks, copyright and patents, it is also paramount that they are aware of database rights as well.
In the case of copyright, Section 9(3) of the Copyright, Designs and Patents Act (CDPA) 1988 states that the author of a computer-generated literary, dramatic, musical or artistic work is the person who undertook the arrangements necessary for the creation of that work. Applying this to the example of an image created by a text-to-image generative AI system, and assuming copyright exists in the work (see 15.4 AI-Generated Works of Art and Works of Authorship), it is unclear whether the author would be the user who entered the prompt or the operator of the AI. Although the position on this may vary from one work to another, the author must be a human being.
In the case of patents, the UK Supreme Court confirmed in Thaler v Comptroller-General of Patents, Designs and Trade Marks that only a human can be recorded as the “inventor”, and not an AI machine such as Dr Thaler’s “DABUS”. It is important to note that Dr Thaler’s case concerned the formalities of patent registration, not the wider question of the patentability of inventions created or aided by AI systems more generally. Had Dr Thaler recorded himself as the inventor on the registration form (in his capacity as the creator/owner of DABUS), the application may well have succeeded.
A trade secret is a piece of information that is treated as confidential by its owner and has commercial value because it is secret. UK law protects trade secrets against unjustified use and disclosure, both through the equitable doctrine of confidence and under the Trade Secrets Regulations 2018. Trade secret protections could be used to protect the underlying source code of an AI system as well as training sets, algorithms and data compilations. These elements are essential for AI systems but may not always qualify for patent protection.
The immediacy of trade secret protection and the broad scope of coverage mean that this is an increasingly common method of protection in the UK. The use of trade secret protection in this area, however, must be balanced with the need for transparency and accountability.
AI-generated works pose a new challenge to copyright law. Section 9(3) of the CDPA 1988 provides that the author of copyright in computer-generated works is the person who undertakes the necessary arrangements to create the work. Putting aside the difficulty of determining who this person is (see 15.2 Applicability of Patent and Copyright Law), Section 9(3) is only engaged if copyright subsists in the computer-generated work in the first place, which may not be the case for works created by generative AI.
Works are only protected by copyright – which subsists automatically – if they are “original”, broadly meaning the expression of human creative freedom. Unlike more traditional means of using computers to create art (typically requiring at least some level of human skill and effort), generative AI is capable of creating art from simple – even minimal – user prompts. There is a very significant question mark over whether such works can be “original” and therefore benefit from copyright protection. In the absence of specific legislation on the issue, it is possible that the courts will not come to a blanket conclusion and that the answer will vary from work to work, depending on the extent and manner of human creativity involved in its creation.
In its December 2024 consultation, the UK government recognised the lack of clarity around ownership of AI outputs and has sought the views of stakeholders (see 3.6 Data, Information or Content Laws).
The creation of works through OpenAI tools such as ChatGPT raises a number of IP issues. Such models are trained on vast amounts of data from a number of sources. As a result, determining the ownership of AI-generated works is challenging.
The question to be asked for IP purposes is whether the original creator is the AI model (which currently, in the UK, is not recognised as having a separate legal personality), the developer of the AI model, the owner of the underlying information, or the user providing input. Furthermore, the use of pre-trained models and datasets may infringe upon existing IP rights.
Beyond legal IP issues, there are ethical IP concerns about the use of OpenAI tools (particularly in creative industries), such as the potential for such models to replicate the works of human creators without proper disclosure, credit or compensation.
In addition, the rights and limitations concerning generated content are governed by OpenAI’s licensing agreements and terms of use, which should be reviewed carefully as they may have restrictions on commercial use.
New Powers Under the Digital Markets, Competition and Consumers Act 2024 (DMCCA)
The Competition and Markets Authority (CMA) has confirmed its intent to use new powers under the DMCCA (which came into force on 1 January 2025) to prioritise investigations into digital activities where choice and competition in AI Foundation Model services could be restricted, and to set targeted conduct requirements for firms that have “Strategic Market Status”.
The CMA has confirmed that AI and its deployment by firms will be relevant to its selection of Strategic Market Status (SMS) candidates, particularly where AI is deployed in connection with other more established activities.
New merger control thresholds have also been introduced under the DMCCA, which are designed to target so-called acqui-hires or killer acquisitions, where new, innovative companies are acquired by large well-established entities in the technology sector. The CMA now has jurisdiction to review mergers where just one of the parties has turnover of GBP350 million or more combined with a share of 33% or more in the supply of goods and services.
CMA Approach to AI Outlined in Strategic Update Paper
The CMA’s AI strategic update published in April 2024 highlighted the potential negative effects of AI systems that affect choices offered to customers and how they are presented, in particular where algorithms give undue prominence to a particular supplier or platform, rather than the best option for the customer. The CMA has confirmed that it is continuing to monitor developments in this space and also to invest in its own technological capabilities in order to combat the use of AI to facilitate anti-competitive behaviour, though more specific or targeted action to deal with these issues is yet to be announced.
The key cybersecurity legislation that applies to AI within the UK includes the following:
The UK has aimed to tackle the use of AI systems by cybercriminals by:
There are a number of ESG reporting requirements in the UK which may indirectly require reporting in relation to AI. For example, the UK has adopted frameworks including the Task Force on Climate-Related Financial Disclosures and UK Sustainability Reporting Standards.
Many organisations are using or looking to use AI to streamline their ESG compliance processes. AI systems can automate data collection across multiple, real-time sources and identify compliance gaps. Such systems can also automate ESG monitoring and trend-analysis, providing a more transparent overview of an organisation’s ESG performance.
Although AI can drive sustainability and improve resource efficiency, its energy-intensive nature (particularly in training large models and operating data centres) can contribute to significant carbon emissions and water usage. As noted, the UK government has emphasised a “pro-innovation” approach but has also encouraged responsible AI use to mitigate environmental impacts.
Implementing AI best practices will require consideration of a number of key issues, such as regulatory compliance, ethical considerations, effective risk management and robust data governance.
The key for organisations in the UK is ensuring adherence to the UK government’s five cross-sectoral principles (and related regulator guidance) (see 3.1 General Approach to AI-Specific Legislation).
Ethical considerations are also significant. For example, businesses should consider conducting thorough ethical assessments before developing, deploying and procuring AI systems, which is likely to involve identifying and eliminating biases and addressing privacy concerns.
Effective risk management and internal governance is another key area to consider, such as the identification and mitigation of potential risks associated with AI deployment, development or procurement, and establishing robust internal processes with appropriate guardrails to ensure the responsible and safe use of AI.
Listed companies would also need to consider their obligations under the UK Corporate Governance Code, which requires listed companies to carry out a robust assessment of the company’s emerging and principal risks and how such risks are being mitigated. The Code’s guidance recognises that, for many companies, cyber/IT security risks will likely be amongst the risks identified, and risks relating to the use of AI may therefore also fall within the risks identified by a listed company in its assessment.
Additionally, the Institute of Directors has developed resources and guidelines to help boards understand and oversee AI initiatives effectively. For example, A Director’s Guide to AI Board Governance presents nine principles to guide boards’ oversight of AI in their organisations. These principles include:
The firm would like to thank the following team members for their contributions to this guide: Victoria McCarron (Solicitor), Mope Akinyemi (Trainee), Yadhavi Analin (Trainee), Emily Fox (Solicitor), Harry Jewson (Senior Associate), Abbie McGregor (Solicitor), Pooja Bokhiria (Solicitor), Alex Fallon (Associate), Alice Gillie (Solicitor), Matthew Loader (Associate), Ebony Ezekwesili (Associate), Ellen Goodland (Associate), Brandon Wong (Associate), Ryan Jenkins (Associate), Rory Trust (Director) and Tom Green (Associate).
One Glass Wharf
Bristol
BS2 0ZX
UK
+44 (0) 117 939 2000
+44 (0) 117 902 4400
www.burges-salmon.com