Artificial Intelligence 2025

Last Updated May 22, 2025

UK

Law and Practice

Authors



Burges Salmon has a multidisciplinary technology team that helps organisations across multiple sectors to embrace, develop and monetise cutting-edge technologies, including AI. Its lawyers combine deep technical knowledge and legal expertise with a keen understanding of the way businesses and public bodies procure, design, develop and deploy new technologies, including AI. The firm provides commercially relevant, pragmatic advice to help clients navigate the regulatory landscape whilst meeting their business requirements. As well as supporting clients who are investing in and deploying AI, the team is regularly called upon to provide expert guidance on technology and data regulation and other developments in the UK, EU and internationally. Clients range from leading global technology businesses to high-growth emerging technology companies, across a range of sectors, including financial services, retail, insurance, healthcare, the built environment, energy and utilities, and the public sector.

The legal landscape around AI is broad and complex, spanning a range of legal areas which professionals and organisations should consider. Issues arise throughout the procurement, development, building, licensing and use of an AI system.

Risks to consider include the following.

  • Compliance with data protection laws – AI systems rely heavily on large volumes of data, which often include personal data; awareness of the risks around profiling and automated decision-making is also essential.
  • Protection of intellectual property rights – for example, in training data, and ownership of copyright in works generated by AI systems.
  • Compliance with consumer protection legislation – for example, in relation to unfair commercial practices and transparency (such as providing consumers with appropriate information).
  • Discrimination – for example, against end users as a result of biases in data, AI system design, or the way AI outputs are used.
  • Liability frameworks along the value chain, both in contract and tort.

Various industries are applying AI and machine learning, including:

  • medical imaging and drug discovery in the healthcare sector;
  • credit scoring and fraud detection in the finance sector; and
  • dynamic pricing in retail and e-commerce.

Some of the AI models developed and deployed across industries are generative, creating new content from patterns learned in existing data (for example, in drug discovery), while others are predictive, making predictions based on historical data (such as models predicting asset prices).

There are a number of industry innovations driving the use of AI and machine learning. These include no-code AI, which allows non-technical users to build AI solutions, and the development of foundation models (such as DeepSeek) faster and at lower cost than previously thought possible, which has shifted perceptions and attitudes towards the use and deployment of AI systems within businesses and by governments. The potential benefits to consumers and businesses are vast, including improved services and products and advancements in research.

One example of a cross-industry initiative is the UK government’s AI Opportunities Action Plan, a government-led plan to scale AI adoption across sectors, including the public sector and healthcare, through measures such as building additional data centres, establishing national data libraries and increasing access to compute.

The UK government is actively involved in promoting the adoption and advancement of AI for industry use, evidenced by its commitment to a “pro-innovation” approach to the regulation of AI. For example, the UK government has launched the AI Opportunities Action Plan and released the AI Playbook and various guidance notes for public sector use of AI.

The UK government has also been looking to integrate AI into public services. Through these initiatives the UK government is aiming to bolster its position as a global leader in AI development. Under the National AI Strategy, the UK government has committed to ensuring long-term support for the AI ecosystem.

The UK government offers certain R&D-related tax credits to incentivise industry innovation in AI, as well as grants and funding opportunities for medium-sized businesses. It also operates the Global Talent Visa, which allows individuals to work in the UK if they are leaders or potential leaders in digital technology, including AI.

The UK’s approach to AI regulation is “pro-innovation” and principles-based, focusing on flexibility and adaptability.

The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology. This differs from the EU approach of creating a standalone regulator (the “European AI Board”) and introducing overarching AI-specific regulation (the “EU AI Act”) to sit above existing regulation.

Industry regulators are expected to interpret and apply these principles within their respective domains using existing powers supplemented with additional funding and regulatory guidance from the UK government. In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

The 2024 King’s Speech indicated the UK government’s intention to regulate frontier AI models; however, no government-proposed regulation or consultation has yet been published.

There is currently no AI-specific legislation governing the development, deployment or use of AI in the UK; see 3.1 General Approach to AI-Specific Legislation. However, various existing laws and regulations do apply to the various stages of the AI life cycle.

Various private members’ bills have been proposed in the House of Lords, such as the Artificial Intelligence (Regulation) Bill. However, private members’ bills often do not become law and are, in any event, subject to change during the legislative process. Nevertheless, they give an indication of political direction and help shape the political debate.

The UK government has issued non-binding AI-specific guidance regarding the use of AI in the UK. One example is the AI Playbook, which was published in early 2025 and offers guidance on using AI safely, effectively and securely for civil servants and people working in government organisations.

In line with the UK government’s “pro-innovation” approach:

  • the government has also published the AI Opportunities Action Plan. Its stated aim is to provide a roadmap for maximising AI’s potential to drive growth and deliver real benefits to people across the UK; and
  • regulators have also issued guidance covering AI in their respective domains. For example, the Information Commissioner’s Office (ICO) has issued guidance on best practice for data protection-compliant AI, and on interpreting data protection law as it applies to AI systems. The National Cyber Security Centre has released guidance explaining the cybersecurity risks posed by AI.

The UK government also requires public sector organisations to comply with the Algorithmic Transparency Recording Standard, which makes specified information publicly available (subject to exceptions) about public sector use of algorithms that: (i) have a significant influence on a decision-making process with direct or indirect public effect; or (ii) directly interact with the general public.

See also 3.1 General Approach to AI-Specific Legislation.

This section is not relevant in this jurisdiction.

This section is not relevant in this jurisdiction.

This section is not relevant in this jurisdiction.

At the time of writing, there have not been any AI-specific amendments to UK data protection legislation or information and content laws. However, the UK regulator for data protection, the ICO, has released specific guidance in respect of the interaction between UK data protection legislation and the use/deployment of AI-based solutions.

In its response to the UK White Paper (regarding the regulation of AI), the UK government confirmed that its working group had been unable to agree a voluntary code of practice for copyright and AI, which had been intended to make licences for text and data mining more readily available. In light of this, the UK government launched a consultation on copyright and AI in December 2024, setting out a number of options for reform but strongly advocating an approach similar to that of the EU. This would allow AI developers to train AI models on large volumes of copyright material under a text and data mining exception, with rights holders given the opportunity to opt out. This approach would be underpinned by supporting measures on transparency. The consultation closed on 25 February 2025, and updates from the UK government are eagerly awaited.

The UK government is taking a pro-innovation approach to the regulation of AI and has released certain non-binding principles, which industry regulators are expected to interpret and apply in line with existing laws. The government’s response to the White Paper consultation confirmed that “future binding measures” are likely to be introduced for highly capable general-purpose AI models (the content of such measures is currently unknown).

An Artificial Intelligence Bill had been proposed under the previous UK government but, following the general election, the new government dropped it. The new government may introduce an AI Bill during 2025, which would likely affect the regulatory landscape for AI in the UK.

The current key trend in the UK in respect of AI regulation is the tension between the extent to which AI should be regulated and a “pro-innovation” approach.

The EU AI Act will also affect UK businesses to the extent that UK-based providers of AI systems place their systems on the market in the EU or otherwise supply into the EU.

Very few UK judicial decisions have dealt with AI directly. The most noteworthy example is the Supreme Court’s decision in Thaler v Comptroller-General of Patents, Designs and Trade Marks, which confirmed that UK patent applications must identify a human “inventor”. However, the case concerned only the formalities of patent applications, not the patentability of inventions created by or with the aid of AI where a human is listed as the inventor on the application form; see 15.2 Applicability of Patent and Copyright Law for more information.

Also notable is Getty Images’ ongoing copyright, trade mark and database right infringement claim against Stability AI in respect of its “Stable Diffusion” text-to-image generative AI model. A trial is not expected until 2025 at the earliest.

In the context of privacy and data protection concerning automated systems, in 2020 the Court of Appeal held that South Wales Police’s use of automated facial recognition technology was unlawful under Article 8 of the ECHR (right to privacy) and the UK Data Protection Acts 1998 and 2018.

The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology.

The regulators expected to play leading roles in the UK regulation of AI include the following, which have publicised strategic updates in response to the White Paper and, in some cases, additional guidance:

  • the ICO (data protection and privacy);
  • the Office of Communications (Ofcom) (communications);
  • the Financial Conduct Authority (FCA) (financial services); and
  • the Competition and Markets Authority (CMA) (competition/antitrust).

The above are members of what is known as the Digital Regulation Cooperation Forum (DRCF). Other regulators include:

  • Medicines and Healthcare products Regulatory Agency (MHRA);
  • Ofsted (education);
  • Ofgem (energy);
  • Equality and Human Rights Commission (EHRC);
  • Health and Safety Executive (HSE);
  • Legal Services Board;
  • Office for Qualifications and Examinations Regulation (Ofqual); and
  • Civil Aviation Authority (CAA).

In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles, as outlined in 3.1 General Approach to AI-Specific Legislation.

Please see 3.1 General Approach to AI-Specific Legislation.

In May 2024, the UK government required regulators to each set out their approach to AI regulation and some regulators have now issued AI-related guidance. Regulators that did produce such updates are listed in 5.1 Regulatory Agencies.

Whilst largely non-binding, reviewing the strategic approach of regulators in relation to AI regulation, and the regulators’ AI guidance, is a helpful step towards understanding compliance with existing UK laws in respect of procurement, development and deployment of an AI solution.

Further, the Department for Science, Innovation and Technology (DSIT), the AI Security Institute and the DRCF have all released other resources to aid the implementation of certain AI-related aspects of the UK government’s industrial strategy. These include the following:

  • Responsible AI Toolkit;
  • AI Assurance Toolkit;
  • Model for Responsible Innovation;
  • Algorithmic Transparency Recording Standard;
  • Code of Practice for Cybersecurity of AI; and
  • AI and Digital Hub.

In May 2022, the ICO fined US-based Clearview AI more than GBP7.5 million for misusing UK residents’ publicly available personal data by scraping images from social media without consent to create a database of 20 billion images. Clearview AI used its database to provide facial recognition services to its customers.

Clearview successfully appealed: in October 2023, the First-tier Tribunal found that the ICO lacked jurisdiction because Clearview provided its services only to law enforcement/national security bodies outside the UK and EU, falling within an exception to the UK GDPR applicable to the acts of foreign governments. On 31 January 2025, the Upper Tribunal granted the ICO permission to appeal, and a date for the hearing is yet to be set.

The ICO issued a preliminary enforcement notice to Snap Inc over potential failure to properly assess the privacy risks posed by its generative AI chatbot “My AI”. The investigation provisionally found that Snap failed to adequately identify and assess the risks to several million “My AI” users in the UK, including children aged 13 to 17. The ICO concluded its investigation into Snap in May 2024, satisfied that Snap had undertaken a risk assessment compliant with data protection law, and issued a warning to the industry to engage with data protection risks of generative AI before products are brought to market.

Separately, the CMA has investigated a number of AI partnerships between large technology companies and AI organisations. In 2024, the CMA concluded that the following partnerships did not qualify for further investigation:

  • Amazon and Anthropic;
  • Microsoft and Mistral AI; and
  • Alphabet and Anthropic.

The CMA is still considering whether Microsoft’s partnership with OpenAI (the creator of ChatGPT) amounts to a de facto merger and, if so, whether it could impact competition.

Standards development organisations such as the International Organization for Standardization (ISO)/the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE) and the British Standards Institution (BSI) have paved the way for consensus-driven standards through multi-stakeholder discussions to promote global alignment. Standards can be grouped as follows:

  • foundational standards – to help build common language and definitions around basic concepts and facilitate dialogue between stakeholders;
  • process standards – to provide guidance on best practices in management, process-design, quality control and governance;
  • measurement standards – to create universal mechanisms and terminologies on measuring various aspects of an AI system’s performance; and
  • performance standards – to assist in setting up benchmarks, prerequisites and expectations that need to be achieved at specific stages for the effective functioning and utilisation of an AI system.

Following on from the “Introduction to AI Assurance” guidance note published in 2024, the DSIT has also announced a new voluntary Code of Practice for the Cyber Security of AI, which it says will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute.

In December 2023, the ISO and the IEC jointly published a new global management standard for artificial intelligence, known as ISO/IEC 42001, which forms part of a broader series of AI standards including:

  • ISO/IEC 23053, which establishes a framework for describing a generic AI system using machine learning technology;
  • ISO/IEC 22989, which establishes terminology for AI and describes concepts in the field of AI; and
  • ISO/IEC 23894, which provides guidance on how organisations that develop, produce, deploy or use products, systems and services that utilise AI can manage the associated risks.

While these standards do not carry the same legal weight as legislation, they are expected to have a significant impact on how organisations demonstrate responsible and transparent use of AI. References to global standards are often included where organisations are providing goods and services on both a business-to-consumer and business-to-business basis. In addition, they are often referenced in commercial contracts as a way of establishing expectations of regulatory guidance and best practice.

As these standards evolve over time in line with emerging trends, risks and opportunities in the AI space, they have the potential to act as the default standard that organisations will be expected to adhere to.

Studies in the UK have indicated that AI usage across the UK public sector remains inconsistent, despite government messaging increasingly encouraging the utilisation of AI in the public sector. Within government, a 2024 study by the National Audit Office found that 70% of government bodies surveyed are piloting or planning the use of AI. February 2025 saw the launch of the Artificial Intelligence Playbook for the UK Government, which aims to provide departments and public sector organisations with technical guidance on the safe and effective use of AI.

The Incubator for AI in Government (i.AI) is an agile technical delivery team within the DSIT that builds AI tools for use across the public sector. To date, i.AI has created a suite of bespoke tools for civil servants as well as a range of tools for wider public sector use.

Current uses of AI across UK government include GOV.UK Chat, a pilot tool that uses relevant website content to generate responses to natural language queries by users, aiming to simplify navigation across more than 700,000 pages on GOV.UK. Other uses include the development of a tool which aims to improve access to user research undertaken across the NHS, and a Crown Commercial Service (CCS) tool which generates relevant agreement recommendations for customers based on spend and customer market segmentation data.

More broadly, facial recognition technology is becoming increasingly common in UK law enforcement, including the use of retrospective facial recognition as part of criminal investigations as well as live facial recognition (eg, at large-scale sporting events).

There are risks when using AI in the public sector and more widely, such as the risks of biases and discrimination flowing from foundation model outputs, as well as the risk of nefarious actors using foundation models to intentionally cause harm (with the added legitimacy of public sector usage).

See 4.1 Judicial Decisions for a discussion of the judicial decision relating to South Wales Police’s use of automated facial recognition technology.

There are currently no relevant pending cases publicly available. This may reflect the early stage of some challenges or the nature of the court system. However, there are indications that the government has received potential challenges to its use of algorithmic decision-making and AI.

AI plays a significant role in national security. Use cases include AI systems deployed to:

  • detect and respond to cyberthreats in real time;
  • assist in decision-making, including in crisis situations;
  • enhance intelligence gathering;
  • monitor international borders and critical infrastructure and provide an early warning system;
  • identify trends and anomalies in data sets, particularly online activity data sets, that aid in detecting potential terrorist threats; and
  • operate autonomous and semi-autonomous systems.

National security considerations will play a pivotal role in shaping future government legislation, regulation and policy on AI systems, with priorities such as:

  • ensuring the ethical use of AI systems;
  • creating flexible regulations to keep pace with the fast-developing AI systems;
  • preventing over-reliance on AI systems and diminishing human expertise, particularly in the context of national security, where human intervention is critical; and
  • ensuring that AI systems can provide clear explanations for their outputs, particularly in the context of national security, where decisions can have life-or-death consequences.

Also, the UK’s “AI Safety Institute” was rebranded as the “AI Security Institute” (AISI) on 15 February 2025 to reflect AISI’s focus on serious AI risks such as “how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks and enable crimes such as fraud and child sexual abuse”. AISI will focus on advancing its understanding of the most serious risks posed by AI “to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops”.

The key issues and risks posed by generative AI are as follows.

  • Its generative nature – it may not be clear to those who are presented with the content that it was produced by an AI system. Also, GenAI tools are based on foundation models that are non-deterministic, meaning that the same inputs could result in different outputs. Further, AI may reproduce and, in some cases, amplify biases in the material on which it was trained, and the way its outputs are used within a wider decision-making framework may not account for the fact that the system lacks necessary context.
  • Explainability – it may be difficult to explain how outputs have been generated, for example, due to the technical complexity and third-party ownership of the models.
  • Cost and environmental impact – not only can it be expensive to train AI models and develop AI systems (although the arrival of DeepSeek is challenging this thinking), which can be a barrier to innovation, but there can also be an environmental impact through electricity and water usage.
  • Reliability – GenAI systems can result in hallucinations (fabricated, erroneous and untrue outputs). Further, some generative AI models are static, meaning they have not been trained on and cannot accurately include up-to-date information in their outputs. Some AI systems lack citation and source referencing, making output verification more difficult.
  • IP considerations – there can be debate surrounding IP ownership where generative AI is concerned.
  • Ethical considerations – AI lacks the moral rationalisation that a human has and therefore may generate results that conflict with human values.
  • Data protection – as discussed further in 8.2 Data Protection and Generative AI, AI has increased the discussion on data ownership and how the traditional rights of data protection can be applied to these technologies.

Where AI models are used to process personal data, at a fundamental level those systems need to comply with the principles of data protection law by design and by default.

Under data protection law, individuals have various rights regarding their personal data, and these rights apply wherever that personal data is being processed by an AI system.

Where an individual exercises their right to the “rectification” or “erasure” of their personal data and it is not possible to separate that individual’s data from the AI model, the model may need to be deleted entirely in order to avoid breaching data protection law and facing potential enforcement action by the regulator.

The challenge for AI developers is designing AI systems that can automatically comply with the law. Stipulating data protection principles, such as “purpose limitation” and “data minimisation”, as part of the design architecture and engineering process is key to achieving successful compliance outcomes.
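
By way of illustration, the short Python sketch below shows one way a development team might build “purpose limitation” and “data minimisation” into a training-data pipeline, by retaining only an allow-list of fields needed for the stated purpose and pseudonymising direct identifiers. The field names and allow-list are hypothetical assumptions, and the sketch is not a definitive or prescribed approach.

```python
# Illustrative sketch only (not a prescribed method): applying "purpose
# limitation" and "data minimisation" by design before records reach an AI
# training pipeline. Field names, the allow-list and the pseudonymisation
# step are hypothetical assumptions.
import hashlib

# Only fields genuinely needed for the stated purpose (eg, credit-risk
# scoring) are retained; anything else is dropped rather than kept "just in case".
ALLOWED_FIELDS = {"customer_id", "income", "existing_debt", "repayment_history"}


def minimise(record: dict) -> dict:
    """Keep only allow-listed fields and pseudonymise the direct identifier."""
    reduced = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "customer_id" in reduced:
        # Pseudonymisation is not anonymisation, but it means the raw
        # identifier never enters the training data.
        reduced["customer_id"] = hashlib.sha256(
            str(reduced["customer_id"]).encode()
        ).hexdigest()[:16]
    return reduced


raw_record = {
    "customer_id": 1042,
    "income": 41000,
    "existing_debt": 5200,
    "repayment_history": "good",
    "marital_status": "single",   # not needed for the purpose – dropped
    "home_address": "1 High St",  # not needed for the purpose – dropped
}
print(minimise(raw_record))
```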

AI is anticipated to have a significant impact on the legal profession due to the knowledge-intensive and sometimes highly repetitive nature of legal work. Use cases include:

  • document summarisation;
  • data analytics, such as identifying trends in legal operations or identifying inconsistencies between documents;
  • transcription; and
  • disclosure (also known as discovery) when prioritising documents for review, identifying concepts, automatically tagging documents for relevance and issue, and producing related documents such as privilege logs.

The Law Society has released (and recently updated) guidance on how to manage the risks of adopting AI (which include IP, cybersecurity and data protection issues). The Bar Council has also released guidance on considerations when using ChatGPT and generative AI software based on large language models (LLMs). At the time of writing, the Law Society is expected to publish research on the impacts of AI in specific areas of law.

Of the risks already identified, ethical issues include the risks that those who use AI are not accountable or responsible for their actions, that AI is not used in the best interests of clients, and that appropriate transparency is not provided to stakeholders (such as clients, colleagues and courts) about how and when AI was used.

The UK does not have a specific liability framework applicable generally to harm or loss resulting from the use of AI. Therefore, individuals or businesses who suffer loss or damage caused by AI must generally seek redress under existing laws (eg, contract law, tort law or consumer protection legislation). However, the UK passed the Automated and Electric Vehicles Act 2018 and Automated Vehicles (AV) Act 2024, pursuant to which liability for damages caused by an insured automated vehicle when driving itself lies with the insurer (subject to exceptions).

To claim damages under contract, a claimant needs to prove that there was a valid contract, that the defendant breached it, and that the breach caused loss. Whilst this may be straightforward with simple products and services, establishing causation in relation to an AI-based product or service may be more difficult: for example, demonstrating who caused the loss claimed in a complex, multi-stakeholder value chain.

In terms of trends, businesses are assessing whether or not they are sufficiently protected against liability risks arising from such emerging technologies, be it as operators, users or manufacturers. This is typically addressed through technical mitigations (such as system design and verification), ensuring that contractual arrangements with suppliers and/or customers are appropriate, or by obtaining appropriate insurance coverage (albeit AI systems may not neatly align with typical insurance principles).

The UK government’s stated approach continues to be pro-innovation, encouraging regulators to tackle AI opportunities and risks within their remits. As discussed in 10.1 Theories of Liability, with the exception of the Automated and Electric Vehicles Act 2018 and the AV Act 2024, liability must be established under existing frameworks, and for the most part existing laws will apply to the allocation of liability in respect of AI.

The DSIT’s response to the UK AI White Paper confirms that regulation and binding measures on “highly capable general-purpose AI” are likely to be required in the future. The UK government has confirmed it will not “rush to regulate”, as “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation”. The UK’s prime minister confirmed in March 2025 that the UK government would regulate in a way that is “pro-growth and pro-innovation”.

Bias in predictive and generative AI systems can arise from biased training data, biases in the training and verification processes, and biased model choice and system design. Legally, there are concerns regarding discrimination, privacy and accountability, to name a few. Current legislation such as the Equality Act 2010 and data protection legislation aim to mitigate these risks. Examples of UK regulators’ proposals to mitigate risks include:

  • the ICO guidance on how to address the risks of bias and discrimination in AI systems with regard to personal data; and
  • the FCA’s Consumer Duty, which impacts algorithmic decision-making in financial services. The Consumer Duty sets out clear expectations that firms address biases or practices that hinder consumers from achieving good outcomes.

Key consumer areas at risk of bias from the use of AI systems include:

  • finance;
  • healthcare; and
  • employment.

Businesses may find themselves subject to liability from individual claims and regulatory fines if found in breach of legislation such as the Equality Act 2010, data protection legislation and the FCA’s Consumer Duty requirements.

Businesses can take certain measures to address bias, such as:

  • implementing contractual frameworks, including warranties;
  • bias testing during system development and deployment (see the illustrative sketch following this list); and
  • appropriate human oversight in respect of the AI model’s output prior to reliance on such output.
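
By way of illustration of what such bias testing might involve, the short Python sketch below compares favourable-outcome rates across two groups and flags a large disparity for human review. The group labels, the sample decisions and the 0.8 threshold (borrowed from the US “four-fifths” rule of thumb) are assumptions for illustration only; real-world testing would be considerably more extensive and context-specific.

```python
# Illustrative sketch only: a very simple disparity check of the kind bias
# testing might include. Group labels, sample decisions and the 0.8 threshold
# are assumptions for illustration, not a complete or prescribed methodology.
from collections import defaultdict

# Each tuple: (protected group label, model decision: True = favourable outcome)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome  # True counts as 1, False as 0

rates = {group: favourable[group] / totals[group] for group in totals}
disparity_ratio = min(rates.values()) / max(rates.values())

print(f"Favourable-outcome rate by group: {rates}")
if disparity_ratio < 0.8:
    print(f"Disparity ratio {disparity_ratio:.2f} – flag for human review")
```

In practice, checks of this kind would typically be repeated across multiple protected characteristics and at each stage of development and deployment, alongside the human oversight described above.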

An overarching issue in this area is the lack of a clear and consolidated regulatory response to facial recognition technology (FRT). At present, the UK approach is a combination of human rights law, data protection law, equality law and, in the context of law enforcement, criminal justice legislation. For completeness, it should be noted that the EU AI Act, while not directly effective in the UK, does directly address the processing of biometric data, and the UK will likely be influenced by the approach taken in the EU AI Act. In this regard, it is noted that “untargeted scraping to develop facial recognition databases” has been included in the EU Commission’s Guidelines (dated 4 February 2025) on prohibited AI practices in connection with Article 5 of the EU AI Act.

FRT relies on individuals' personal data and biometric data, and its use raises a number of challenges from a data protection perspective. Biometric data is intrinsically sensitive and the use of FRT therefore gives rise to challenges around the necessity and proportionality of processing, as well as the need to identify a lawful basis for processing.

This is particularly true where FRT involves the automatic and indiscriminate collection of biometric data in public places, which is becoming increasingly common, whether for law enforcement purposes or for commercial purposes such as targeted advertising in the retail sector. In this context, issues include:

  • consent and transparency;
  • the necessity and proportionality of processing;
  • statistical accuracy;
  • the risk of algorithmic bias and discrimination; and
  • the processing of children's data without necessary additional safeguards.

Companies utilising FRT should be cognisant of the risks associated with its use, particularly in relation to potential violations of data protection legislation and equality and anti-discrimination laws.

Automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on:

  • factual data;
  • digitally created profiles; or
  • inferred data.

Article 22 of the UK GDPR restricts organisations’ abilities to make solely automated decisions that result in a legal or similarly significant effect on an individual. In this context, “solely automated” means that the decision is totally automated without any human influence on the outcome. It is worth noting that the Public Authority Algorithmic and Automated Decision-Making Systems Bill aims to regulate the use of automated and algorithmic tools in decision-making processes in the public sector by requiring public authorities to conduct impact assessments and adopt transparency standards. However, it remains to be seen whether this Bill will become law in the UK.

Decisions taken in this manner can potentially have a significant adverse effect, which is particularly concerning as there can often be a lack of understanding around how the decision-making process works. There is also a risk that inherent biases in the AI-based decision-making tools may lead to discriminatory outcomes.

Organisations that fail to comply with Article 22 may be subject to significant fines and liability to affected individuals, who may exercise their right to object under Article 21 of the UK GDPR as well as bringing a legal claim against the company.

Currently, there is no UK-wide regulatory scheme specific to AI. As noted in 3.1 General Approach to AI-Specific Legislation, existing laws continue to apply, as well as specific AI-related guidance issued by regulators.

For example, UK data protection legislation will almost always apply to the development, deployment and/or procurement of AI. One of the key principles under UK data protection legislation is transparency. The UK regulator for data protection, the ICO, has released certain AI-related guidance, in which it makes it clear that businesses must be transparent about how they process personal data in an AI system, stressing the importance of “explainability” in AI systems.

In addition, UK data protection legislation includes rules around the profiling of individuals and automated decision-making. If permitted, transparency with individuals is key.

It is also worth noting the Algorithmic Transparency Recording Standard which “helps public sector organisations provide clear information about the algorithmic tools they use, and why they’re using them”.

The UK government has published the Guidelines for Public Procurement of AI to support contracting authorities when engaging with suppliers of AI solutions, and therefore the following steps should also be considered by contracting authorities prior to procuring an AI solution:

  • establish a clear responsibility record to define who has accountability for the different areas of the AI model;
  • determine a clear governance approach to meet requirements;
  • ensure there is regular model testing so issues of bias within the data may be addressed;
  • define acceptable model performance (service levels);
  • ensure knowledge is transferred through regular training;
  • ensure that appropriate ongoing support, maintenance and hosting arrangements are in place;
  • address IP ownership and confidentiality;
  • allocate risk/apportion liability to the parties best able to manage it;
  • factor in future regulatory changes; and
  • include appropriate end-of-life processes (define end-of-contract roles and processes).

Whilst these steps are focused on public sector organisations, they are also helpful for private sector organisations procuring AI solutions.

The Society for Computers and Law (SCL) AI Group has also produced sample clauses for transactions involving AI systems, which serve as a useful checklist of issues to consider when procuring AI solutions both in the private and public sector.

Businesses are increasingly turning to AI to drive efficiencies throughout the recruitment process, including when sourcing, screening and scoring potential candidates. However, there are a number of risks: for example, the AI solution could inaccurately screen candidates and a suitable candidate may be lost in the process.

The ICO released specific guidance in November 2024 on AI tools in recruitment. A number of risks were highlighted by the ICO’s audit (November 2024) of AI recruitment tool providers, which found significant room for improvement. In particular, the audit showed that:

  • certain AI tools failed to process personal information fairly – eg, filtering out candidates with particular protected characteristics; and
  • some technologies allowed for bias to be built into the recruitment process, skewing the candidate selection pool in favour of certain groups.

The ICO has prepared a helpful list of six key questions for businesses thinking of adopting AI recruitment technologies, as a starting point.

  • Has a Data Protection Impact Assessment (DPIA) been completed?
  • What is the lawful basis for processing personal information?
  • Has the employer documented responsibilities and set clear data processing instructions?
  • Has the employer checked the provider has mitigated bias?
  • Is the AI tool being used transparently?
  • How much will the employer limit unnecessary processing?

Used properly, AI technologies can help make recruitment processes more transparent and cost-efficient. However, employers need to do their due diligence before taking steps in the AI direction.

With an increase in homeworking since the COVID-19 pandemic, there has also been an increased use of monitoring by employers, which may include CCTV, attendance logs, email and telephone monitoring, keystroke-logging, browser monitoring, flight risk analysis and CV analysis (to consider potential new skills). Whilst these developments in technology could be considered as a new and untapped way of providing an employer invaluable information in relation to their workforce, some of these methods are highly intrusive.

Monitoring of the workforce has UK data protection legislation implications, and businesses will need to consider, amongst other things, whether such monitoring can be justified and whether it has an appropriate lawful basis.

Separately, information acquired from such technology could create broader employee relations issues, such as the employer gaining information that increases the risk of a potential discrimination claim (eg, where information comes to light about an employee’s health and the employer then has a proactive legal duty to make reasonable adjustments).

From an employment law perspective, if a digital platform is used (eg, car services and food delivery), there can be challenges when assessing the service provider’s employment status.

Where the platform is more sophisticated and exerts greater control over the service provider, there is a greater risk that the service provider will not be considered genuinely self-employed, which may be contrary to the service provider’s intention. This has broader employment law implications, for example in relation to:

  • entitlement to holiday pay;
  • the national minimum wage;
  • discrimination protection; and
  • dismissal protection.

This issue has been tested recently in the Supreme Court in relation to the employment status of Uber drivers. Uber argued that its drivers were self-employed with flexibility to pick up work at their discretion. However, the Supreme Court found the drivers to be “workers”, and one of the reasons for this was the way Uber’s technology platform controlled its drivers, meaning they could not be considered genuinely self-employed.

AI is increasingly used by financial services firms, particularly in the following areas:

  • customer engagement and support (in applications such as chatbots);
  • detection of financial crime and fraud;
  • assisting decision-making, including in the insurance, credit, investment and desktop management sectors;
  • driving efficiencies, particularly in compliance; and
  • advice-related tools.

There is a broad range of existing regulations potentially applicable to firms using AI. The FCA takes a technology-agnostic approach, regulating firms’ activities (including those utilising AI) rather than AI technology itself. Therefore, the rules currently applicable to firms generally remain relevant in the context of AI, including the FCA’s Principles for Businesses, the Handbook and the Consumer Duty. Other key areas for firms to consider when integrating AI into their business operations include the Senior Managers and Certification Regime and applicable data protection regulations.

However, the benefits of AI use come with associated risks, including risks to customers and to the markets. Risks include those related to:

  • bias;
  • poor decision-making; and
  • unlawful discrimination.

Broader risks include those related to the following:

  • governance;
  • accountability;
  • hallucination;
  • cybersecurity;
  • third-party dependencies and concentration; and
  • a lack of sufficient skills and experience (both in financial services firms and throughout the consumer demographic).

Financial services firms must ensure that they adequately eliminate or mitigate these risks and avoid customer detriment.

AI systems continue to be used widely across healthcare in the UK, both in relation to back office administrative functions as well as the delivery of healthcare services to patients. There are a large number of Medical Devices with regulatory approval that use AI systems, principally in radiology but also in other areas. The use of AI systems to structure and review electronic health records (EHR) is a growing area where there is continued debate about the extent to which such systems are Medical Devices or not. This is an area of scrutiny for regulatory authorities in the UK and overseas, with classification as a Medical Device depending on the intended use of the AI system in relation to the EHR.

There is currently no specific legislation in the UK that exclusively governs AI or its use in healthcare. A variety of existing regulations apply, including:

  • the Data Protection Act 2018;
  • the Medical Device Regulations 2002; and
  • guidance issued by the Medicines and Healthcare products Regulatory Agency, including regulatory guidance on Software as a Medical Device and AI as a Medical Device.

The MHRA is consulting on reforms to the UK Medical Devices regulatory regime, with changes expected in 2025, including in relation to requirements for, and the definitions of, AI as a Medical Device.

Data use and sharing is a key consideration where AI is used in healthcare, so organisations must comply with UK data protection legislation. For example, in order to provide training data for machine learning, operators must:

  • obtain patient consent;
  • put systems in place to ensure data is anonymous; and
  • implement robust security measures to protect privacy and confidentiality.

Following ethical guidelines is essential for the responsible use of data in healthcare AI.

Patient medical data is vulnerable to cyber-attacks and data breaches. As has been seen in the past, healthcare systems are open to attack and are often targeted by hackers due to the sensitive information they store. Ensuring that these areas have bolstered cybersecurity measures is vital for maintaining the security of patient data.

The Sudlow Review, published in November 2024, looked at how the UK can better utilise health data, and recommended a UK-wide system for standards and accreditations for any environment used to store health data.

Given the prevalence of AI systems in healthcare, providers should ensure that environments storing sensitive health data are secure by design and adhere to the highest security standards.

The AV Act 2024 received royal assent on 20 May 2024; however, it does not come into force until relevant statutory instruments are made by the Secretary of State.

Under the Act, if an automated vehicle is authorised (following a successful self-driving test), an “authorised self-driving entity” will be legally responsible for the automated vehicle. For “user-in-charge” vehicles, the Act provides immunity from liability in certain circumstances and establishes when the “user-in-charge” will be liable (where the user is legally regarded as a driver).

It is recognised that automated vehicles are likely to have access to and store personal data. Once the Act is fully implemented in UK law, other laws will continue to apply, such as UK data protection legislation.

The UK has participated in the World Forum for Harmonisation of Vehicle Regulations (a working party within the framework of the United Nations). The UK government has also published principles of cybersecurity for connected and automated vehicles, and has launched the AV Act Implementation Programme to secure the safe deployment of automated vehicles on roads in Great Britain.

In the UK, the legal framework for addressing product safety and liability largely remains as retained EU law post-Brexit, which broadly requires products to be safe in their normal or foreseeable usage.

Sector-specific legislation (such as for automated vehicles, electrical and electronic equipment and medical devices) may apply to some products that include integrated AI. However, on the whole, it is widely considered that existing rules do not comprehensively address the new and substantial risks posed by AI at the manufacturing stage.

In response, the UK government introduced the Product Regulation and Metrology Bill in September 2024, which gives the UK government powers to recognise the EU’s new Product Liability Directive (PLD). The PLD came into force on 8 December 2024, and (among other changes) expands the definition of a “product” to include software (encompassing computer programs and AI systems). The purpose of this update was to assist consumers in bringing damages claims against developers of AI systems and their liability insurers when something goes wrong with the operation of an AI system.

Professionals using AI in their services must adhere to existing professional standards and regulations, including adherence to sector-specific regulations and guidelines, such as those issued by the FCA and the Solicitors Regulation Authority.

Professionals must ensure that AI systems are designed and used in a manner that upholds professional integrity, competence and ethical conduct, which involves safeguarding client confidentiality through compliance with data protection laws, respecting intellectual property rights and, where necessary, obtaining client consent when using AI systems. Liability issues may arise if:

  • in the course of exercising their professional duties, the professional fails to use AI to the applicable reasonable standard of skill and care – whether by using AI inappropriately or, where AI systems can perform better than humans, by failing to use AI at all;
  • client confidential information, or personal data, is inputted into a generative AI system, breaching professional and/or confidentiality obligations or data privacy legislation; and
  • harmful outcomes are produced by AI systems, as professionals may be held accountable for the actions of AI systems they employ.

The use of generative AI raises issues of both IP protection and IP infringement. In the case of copyright (the primary IP right for the protection of literary, dramatic, musical or artistic works), protection will only arise if a work meets the criteria of originality. Originality implies a degree of human creative input that, in the context of generative AI, may be minimal, absent or difficult to prove.

If copyright does exist in works produced by generative AI, it may not be clear who the “author” is, and therefore who owns the copyright.

Users of AI tools should not assume they will automatically own any copyright, and should check the provider’s terms and conditions, which may assign ownership to the provider or give the provider a licence to use the works and/or materials the user inputs into the AI tool. Users should also be mindful of the potential for generative AI content to infringe third-party IP rights and, again, review the provider’s terms to check for appropriate protections.

In addition to the output of AI systems, organisations will need to be mindful of IP rights in the data used to train such systems. Although organisations may be aware of IP rights such as trade marks, copyright and patents, it is also paramount that they are aware of database rights as well.

In the case of copyright, Section 9(3) of the Copyright, Designs and Patents Act (CDPA) 1988 states that the author of a computer-generated literary, dramatic, musical or artistic work is the person who undertook the arrangements necessary for the creation of that work. Applying this to the example of an image created by a text-to-image generative AI system, and assuming copyright exists in the work (see 15.4 AI-Generated Works of Art and Works of Authorship), it is unclear whether the author would be the user who entered the prompt or the operator of the AI system. Although the position may vary from one work to another, the author must be a human being.

In the case of patents, the UK Supreme Court confirmed in Thaler v Comptroller-General of Patents, Designs and Trade Marks that only a human can be recorded as the “inventor”, and not an AI machine such as Dr Thaler’s “DABUS”. It is important to note that Dr Thaler’s case concerned the formalities of patent registration, not the wider question of the patentability of inventions created or aided by AI systems more generally. Had Dr Thaler recorded himself as the inventor on the registration form (in his capacity as the creator/owner of DABUS), the application may well have succeeded.

A trade secret is a piece of information that is treated as confidential by its owner and has commercial value because it is secret. UK law protects trade secrets against unjustified use and disclosure, both through the equitable doctrine of confidence and under the Trade Secrets Regulations 2018. Trade secret protections could be used to protect the underlying source code of an AI system as well as training sets, algorithms and data compilations. These elements are essential for AI systems but may not always qualify for patent protection.

The immediacy of trade secret protection and the broad scope of coverage mean that this is an increasingly common method of protection in the UK. The use of trade secret protection in this area, however, must be balanced with the need for transparency and accountability.

AI-generated works pose a new challenge to copyright law. Section 9(3) of the CDPA 1988 provides that the author of copyright in computer-generated works is the person who undertakes the necessary arrangements to create the work. Putting aside the difficulty of determining who this person is (see 15.2 Applicability of Patent and Copyright Law), Section 9(3) is only engaged if copyright subsists in the computer-generated work in the first place, which may not be the case for works created by generative AI.

Works are only protected by copyright – which subsists automatically – if they are “original”, broadly meaning the expression of human creative freedom. Unlike more traditional means of using computers to create art (typically requiring at least some level of human skill and effort), generative AI is capable of creating art from simple – even minimal – user prompts. There is a very significant question mark over whether such works can be “original” and therefore benefit from copyright protection. In the absence of specific legislation on the issue, it is possible that the courts will not come to a blanket conclusion and that the answer will vary from work to work, depending on the extent and manner of human creativity involved in its creation.

In its December 2024 consultation, the UK government recognised the lack of clarity around ownership of AI outputs and has sought the views of stakeholders (see 3.6 Data, Information or Content Laws). 

The creation of works through OpenAI tools such as ChatGPT raises a number of IP issues. Such models are trained on vast amounts of data from a number of sources. As a result, determining the ownership of AI-generated works is challenging.

The question to be asked for IP purposes is whether the original creator is the AI model (which currently, in the UK, is not recognised as having a separate legal personality), the developer of the AI model, the owner of the underlying information, or the user providing input. Furthermore, the use of pre-trained models and datasets may infringe upon existing IP rights.

Beyond legal IP issues, there are ethical IP concerns about the use of OpenAI tools (particularly in creative industries), such as the potential for such models to replicate the works of human creators without proper disclosure, credit or compensation.

In addition, the rights and limitations concerning generated content are governed by OpenAI’s licensing agreements and terms of use, which should be reviewed carefully as they may have restrictions on commercial use.

New Powers Under the Digital Markets, Competition and Consumers Act 2024 (DMCCA)

The Competition and Markets Authority (CMA) has confirmed its intent to use new powers under the DMCCA (which came into force on 1 January 2025) to prioritise investigations into digital activities where choice and competition in AI foundation model services could be restricted, and to set targeted conduct requirements for firms that have “Strategic Market Status”.

The CMA has confirmed that AI and its deployment by firms will be relevant to its selection of Strategic Market Status (SMS) candidates, particularly where AI is deployed in connection with other more established activities.

New merger control thresholds have also been introduced under the DMCCA, which are designed to target so-called acqui-hires or killer acquisitions, where new, innovative companies are acquired by large, well-established entities in the technology sector. The CMA now has jurisdiction to review mergers where just one of the parties has UK turnover of GBP350 million or more combined with a share of 33% or more in the supply of goods or services in the UK.

CMA Approach – AI Outlined in Strategic Update Paper

The CMA’s AI strategic update published in April 2024 highlighted the potential negative effects of AI systems that affect choices offered to customers and how they are presented, in particular where algorithms give undue prominence to a particular supplier or platform, rather than the best option for the customer. The CMA has confirmed that it is continuing to monitor developments in this space and also to invest in its own technological capabilities in order to combat the use of AI to facilitate anti-competitive behaviour, though more specific or targeted action to deal with these issues is yet to be announced.

The key cybersecurity legislation that applies to AI within the UK includes the following:

  • the Network and Information Systems (NIS) Regulations 2018, which impose obligations on companies operating in AI-driven critical sectors to boost the overall level of security resilience;
  • the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, which both include requirements mandating that AI developers and AI users implement technical and organisational security measures to protect personal data; and
  • the Computer Misuse Act 1990, which criminalises unauthorised access to computer systems and cyber-attacks.

The UK has aimed to tackle the use of AI systems by cybercriminals by:

  • proposing updates to key legislation, such as the NIS Regulations (to strengthen requirements on organisations to ensure resilience against AI-enabled cyber-attacks) and the Computer Misuse Act 1990 (in respect of AI-assisted cybercrime);
  • hosting the AI Safety Summit, which brought together international governments, leading AI companies and experts to tackle the risks of AI, including through the ongoing work of the AI Security Institute;
  • working with business, through the National Cyber Security Centre to combat AI-enhanced cybercrime; and
  • using AI to detect and counter cyberthreats through the National Crime Agency.

There are a number of ESG reporting requirements in the UK which may indirectly require reporting in relation to AI. For example, the UK has adopted frameworks including the Task Force on Climate-Related Financial Disclosures and UK Sustainability Reporting Standards.

Many organisations are using or looking to use AI to streamline their ESG compliance processes. AI systems can automate data collection across multiple, real-time sources and identify compliance gaps. Such systems can also automate ESG monitoring and trend-analysis, providing a more transparent overview of an organisation’s ESG performance.

Although AI can drive sustainability and improve resource efficiency, its energy-intensive nature (particularly in training large models and operating data centres) can contribute to significant carbon emissions and water usage. As noted, the UK government has emphasised a “pro-innovation” approach but has also encouraged responsible AI use to mitigate environmental impacts.

Implementing AI best practices will require consideration of a number of key issues, such as regulatory compliance, ethical considerations, effective risk management and robust data governance.

The key for organisations in the UK is ensuring adherence to the UK government’s five cross-sectoral principles and related regulator guidance (see 3.1 General Approach to AI-Specific Legislation).

Ethical considerations are also significant. For example, businesses should consider conducting thorough ethical assessments before developing, deploying and procuring AI systems, which is likely to involve identifying and eliminating biases and addressing privacy concerns.

Effective risk management and internal governance are also key considerations; this includes identifying and mitigating the potential risks associated with AI deployment, development or procurement, and establishing robust internal processes with appropriate guardrails to ensure the responsible and safe use of AI.

Listed companies will also need to consider their obligations under the UK Corporate Governance Code, which requires a robust assessment of the company’s emerging and principal risks and of how those risks are being mitigated. The Code’s guidance recognises that, for many companies, cyber/IT security risks are likely to be amongst the principal risks identified, and risks relating to the use of AI may therefore also feature in a listed company’s assessment.

Additionally, the Institute of Directors has developed resources and guidelines to help boards understand and oversee AI initiatives effectively. For example, A Director’s Guide to AI Board Governance presents nine principles to guide boards’ oversight of AI in their organisations. These principles include:

  • taking action from a strategic perspective;
  • categorising and addressing risks;
  • building board capability;
  • overseeing AI use and data governance; and
  • proactively building trust.

The firm would like to thank the following team members for their contributions to this guide: Victoria McCarron (Solicitor), Mope Akinyemi (Trainee), Yadhavi Analin (Trainee), Emily Fox (Solicitor), Harry Jewson (Senior Associate), Abbie McGregor (Solicitor), Pooja Bokhiria (Solicitor), Alex Fallon (Associate), Alice Gillie (Solicitor), Matthew Loader (Associate), Ebony Ezekwesili (Associate), Ellen Goodland (Associate), Brandon Wong (Associate), Ryan Jenkins (Associate), Rory Trust (Director) and Tom Green (Associate).

Burges Salmon

One Glass Wharf
Bristol
BS2 0ZX
UK

+44 (0) 117 939 2000

+44 (0) 117 902 4400

www.burges-salmon.com

Trends and Developments


Authors



Burges Salmon has a multidisciplinary technology team that helps organisations across multiple sectors to embrace, develop and monetise cutting-edge technologies, including AI. Its lawyers combine deep technical knowledge and legal expertise with a keen understanding of the way businesses and public bodies procure, design, develop and deploy new technologies, including AI. The firm provides commercially relevant, pragmatic advice to help clients navigate the regulatory landscape whilst meeting their business requirements. As well as supporting clients who are investing in and deploying AI, the team is regularly called upon to provide expert guidance on technology and data regulation and other developments in the UK, EU and internationally. Clients range from leading global technology businesses to high-growth emerging technology companies, across a range of sectors, including financial services, retail, insurance, healthcare, the built environment, energy and utilities, and the public sector.

Introduction

Artificial Intelligence (AI) remains a prevailing disruptive force in the modern economy. Recent years have seen AI technologies and use cases evolve at breakneck speed, driving the rapid transformation of industries and presenting both opportunities and challenges.

One of the most notable recent developments has been the release of the DeepSeek AI chatbot. The large language model (LLM) powering DeepSeek has sent shockwaves through the industry with its apparent ability to operate at a fraction of the cost of models from other providers, marking a pivotal moment in AI accessibility. Crucially, DeepSeek is an open-source model, meaning its architecture and model weights are available for researchers and competitors to modify and improve. This has lowered barriers to AI access and raised questions about how such models can be regulated, given the inherent difficulties in regulating open-source models.

In the UK, several key initiatives promise to shape the AI landscape of 2025 and beyond. In particular, the AI Opportunities Action Plan, commissioned by the Department for Science, Innovation and Technology (DSIT), aims to accelerate AI adoption across various sectors and generally develop sufficient, secure and sustainable AI infrastructure.

While unveiling the details of the AI Opportunities Action Plan, the UK government has “set out a blueprint to turbocharge AI in the UK”, including the creation of dedicated AI Growth Zones to speed up planning for AI infrastructure and a GBP14 billion investment commitment to build AI infrastructure in the UK.

UK’s Regulatory Approach Towards AI-Specific Legislation

Globally, there is an increasing appreciation for nuanced solutions to AI regulation, which has led to interesting and divergent regulatory approaches.

Recent updates

The UK government’s 2023 White Paper sets out a proposal for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, with regulators considering updates to those regulations and remits, rather than through additional AI-specific regulation applying to all AI. The UK’s then Conservative government said in the White Paper that it would not “rush to regulate” AI, a message that has continued under the successive Labour government.

Whilst this remains true, with numerous AI-focused action plans and working groups being launched ahead of any formal AI regulations, the UK’s approach to a regulatory framework is gradually coming into focus. For instance, July 2024 saw the delivery of the King’s Speech, which proposed a set of binding measures on AI; the key messaging revolved around establishing appropriate legislation to place requirements on those developing the most powerful AI models. Subsequent press commentary suggested that a consultation on potential AI regulations would be announced, but this has not been forthcoming, potentially due to geopolitical changes such as the evolving UK–US relationship.

It is worth noting that other domestic laws will govern and impact AI, though they have received less commentary than AI-focused legislation. These include the Digital Information and Smart Data Bill, which will be accompanied by reforms to data-related laws to support the safe development and deployment of new technologies, including AI.

Calls for regulation

Although the UK has been approaching AI regulation with a stated “pro-innovation” mindset, there have been calls for more stringent regulation of AI in areas such as automated decision-making and transparency. On 9 September 2024, Lord Clement-Jones, a life peer in the House of Lords, introduced the Public Authority Algorithmic and Automated Decision-Making Systems Bill (HL) (the “Bill”). One of the primary objectives of the Bill is to ensure that algorithmic and automated decision-making systems (AADMs) are deployed in a manner that:

  • accounts for and mitigates risks to individuals, public authorities, groups and society as a whole; and
  • leads to efficient, fair, accurate, consistent and interpretable decisions.

Similarly, the Artificial Intelligence (Regulation) Bill, proposed by Lord Holmes, has recently been reintroduced in the House of Lords after initially failing to pass following the dissolution of Parliament in May 2024. The Bill remains in the same form as when it was first introduced in November 2023. In his AI Regulation report, Lord Holmes sets out eight sets of circumstances in which he argues regulatory intervention is required and states that the Bill addresses these risks.

Recommendations

Furthermore, the Parliamentary Office of Science and Technology released a detailed briefing on 7 October 2024, expressing concerns about the use of AI, including the limited understanding of how large AI models make decisions and the lack of transparency in AI models, both of which raise liability and safety concerns. Similarly, HM Treasury’s Technology Working Group published its third and final report in October 2024, making key AI recommendations to allow the industry to transform responsibly whilst encouraging economic growth. Its recommendations include improving regulatory clarity and consistency by advancing international regulatory co-ordination and alignment on AI, so that AI developers and users feel confident to plan and invest in AI.

Council of Europe Framework

The UK is also a signatory to the Council of Europe Framework Convention on Artificial Intelligence (the “AI Convention”), the first legally binding international treaty aiming to ensure that AI systems are developed and used in ways that respect human rights, democracy and the rule of law. The Convention seeks to uphold the ethical development and regulation of AI and is intended to provide a global legal framework that each signatory can implement through its existing international and domestic legal obligations. The Lord Chancellor, Shabana Mahmood, stated: “this convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law”.

Intellectual property considerations

In a key case this year, Getty Images Inc and other companies v Stability AI Ltd [2025] EWHC 38, Getty claimed that Stability infringed its intellectual property rights by using Getty’s images as data inputs to develop and train its AI model, and that the outputs generated by the model are synthetic images that reproduce Getty’s copyright works in substantial part and bear Getty’s branding. Cases like these highlight the tension between protecting right-holders and enabling AI development. The UK government has been taking steps to set out guidelines here: on 17 December 2024, it launched a consultation to clarify copyright law for AI developers and the creative industries, aiming to support innovation and growth in both sectors. Currently, the application of UK copyright law to the training of AI models and the ownership of outputs is disputed. The three main objectives of the consultation are:

  • supporting right-holders’ control of their content and ability to be remunerated for its use;
  • supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data; and
  • promoting greater trust and transparency between the sectors.

The government is seeking to deliver these objectives through a package of interventions, which includes proposals to establish an exception to copyright law for text and data mining, mechanisms allowing right-holders to reserve their rights (ie, an “opt-out” approach), and the ability for right-holders to license and receive remuneration for the use of their works in training AI. This broadly aligns with the EU’s position, where a text and data mining exception subject to right-holders’ reservation of rights is already provided by Article 4 of the Digital Single Market Copyright Directive (Directive (EU) 2019/790). The response to the consultation from the Culture, Media and Sport Committee and the Science, Innovation and Technology Committee signals that there is already opposition to the proposals: it calls for tougher requirements on the transparency of data used to train AI models and rejects the “opt-out” approach, particularly because the technical measures needed to enforce such opt-outs do not yet exist.

Summary

At this stage, retaining regulatory flexibility in relation to AI remains the prevailing strategy within the UK. The UK government’s approach has focused on striking a balance between its pro-innovation strategy and ensuring AI develops safely and fairly. Examples include guidelines for the public procurement of AI, summarising best practice when buying AI technologies in government, and the AI Playbook for government. There is still lingering concern amongst some voices in the industry, such as Lord Holmes (author of the Artificial Intelligence (Regulation) Bill that fell before the General Election), that, should the UK fail to develop appropriate legislation for the governance of AI, businesses and organisations will instead align themselves with the provisions of the EU AI Act. Despite these concerns, the UK government remains committed to creating a robust regulatory framework that simultaneously fosters technological advancement and ensures the safe and ethical use of AI.

AI Within the Public Sector

AI is increasingly transforming the UK’s public sector. The government has implemented AI initiatives across various public organisations and departments, highlighting the UK’s commitment to leveraging AI to modernise the public sector and to make services more accessible, efficient and responsive to the public. This includes the development of the Incubator for AI in Government (i.AI), an agile technical delivery team, now within DSIT, that builds AI tools for use across the public sector. To date, i.AI has created a suite of bespoke tools for civil servants as well as a range of tools for wider public sector use.

The Algorithmic Transparency Recording Standard (ATRS) is another example of the government’s evolving approach to ensuring transparent use of AI in the public sector. ATRS was created in 2022 and, after a slow start, there are now 55 records of where, when, how and why algorithmic tools, including AI, are used in the public sector.

The Cabinet Office has also released its updated Procurement Policy Notice on “Improving Transparency of AI use in Procurement”, otherwise known as PPN 017. PPN 017 was refreshed to align with the new Procurement Act 2023 and the Procurement Regulations 2024. The PPN recognises that AI is a rapidly growing and evolving market which assists both bidders for procurement contracts and those providing the services being procured. Contracting authorities have been asked to take steps to understand the risks associated with the use of AI tools during the bidding process, including implementing controls to ensure that confidential or non-publicly available information is not used to train AI systems.

Current uses of AI across UK government include GOV.UK Chat, a pilot tool that uses relevant website content to generate responses to users’ natural language queries, aiming to simplify navigation across more than 700,000 pages on GOV.UK. Other uses include the development of a tool which aims to improve access to user research undertaken across the NHS, and a Crown Commercial Service (CCS) tool which generates relevant agreement recommendations for customers based on spend and customer market segmentation data.

The government’s innovative approach to AI adoption is evident in its AI Playbook, released in February 2025. This playbook expands on the Generative AI Framework for HMG and is designed to support public sector bodies looking to procure, develop and deploy AI. Its principles, built upon the 2023 White Paper’s pro-innovation approach to AI regulation, are intended to guide the safe, responsible and effective use of AI in government organisations. These principles include:

  • using AI lawfully, ethically and responsibly (Principle 2);
  • understanding how to manage the AI life cycle (Principle 5); and
  • having the skills and expertise needed to implement and use AI (Principle 9).

Data Protection and AI

Given the extensive data sets used to train AI models and their diverse applications, it is likely that personal data will be involved at some stage in the AI value chain, meaning businesses must ensure compliance with UK data protection legislation. For example, “data scraping”, in which large amounts of data (which can include personal data) are collated from a wide range of sources, is a fundamental part of how certain AI models are trained.
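For illustration only, the following is a minimal sketch of the kind of collection step described above. The URLs, the collect_text function and the use of the requests and BeautifulSoup libraries are hypothetical assumptions rather than a description of any particular provider’s pipeline; a real deployment would also need to consider robots.txt, website terms and, where personal data is collected, UK GDPR requirements such as lawful basis, transparency and data minimisation.

```python
# Illustrative sketch only: a simple web "data scraping" step of the kind
# described above. The URLs below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

SOURCE_URLS = [
    "https://example.com/articles/1",  # placeholder source
    "https://example.com/articles/2",  # placeholder source
]

def collect_text(urls):
    """Fetch each page and extract its visible text for a training corpus."""
    corpus = []
    for url in urls:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # The extracted text may incidentally contain personal data, which is
        # what brings this activity within the scope of the UK GDPR.
        corpus.append(soup.get_text(separator=" ", strip=True))
    return corpus

if __name__ == "__main__":
    documents = collect_text(SOURCE_URLS)
    print(f"{len(documents)} documents collected")
```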

On an EU level, the European Data Protection Board (EDPB) issued an Opinion on 18 December 2024 addressing the use of personal data in AI model development and deployment. The Opinion aims to harmonise the application of data protection rules to AI across Europe and to support ethical AI innovation while ensuring compliance with the GDPR. The key points include:

  • evaluating AI models for anonymity;
  • using legitimate interest as a legal basis for data processing; and
  • addressing the consequences of using unlawfully processed personal data.

The EDPB also emphasises that AI models must be assessed on a case-by-case basis to ensure they do not identify individuals directly or indirectly and provides guidelines for demonstrating anonymity. Whilst not directly applicable to the UK, given the significant similarities between the UK GDPR and the EU GDPR, the Opinion is a useful resource for those subject to the UK GDPR.

Similarly, the UK’s data protection regulator, the ICO, has continued to stress the importance of data protection in the development of AI. It has welcomed the UK government’s approach of building on the strengths of existing regulators, which are already well placed to tackle the AI risks that emerge within their remits. The ICO’s strategy includes providing guidance and tools to help organisations mitigate AI-related risks, as well as taking enforcement action against private organisations such as Clearview AI Inc, which was fined GBP7.5 million and ordered to delete its UK data. The ICO is also focusing on the application of AI in biometric technologies in 2025 and expects to consult on updates to its guidance on AI and data protection and on automated decision-making and profiling, to reflect forthcoming changes to UK data protection legislation.

Developments in this area include the following.

  • In November 2024, the ICO published its guidance on AI tools in recruitment, making key recommendations to improve data protection compliance and the management of privacy risks in AI. The ICO’s audit highlighted areas for improvement, including tools with search functions that allowed recruiters to filter out job candidates based on certain protected characteristics, and tools collecting far more information than was necessary. The ICO also raised concerns about AI providers incorrectly describing themselves as data processors.
  • In early 2024, the ICO also launched a consultation series on how aspects of data protection law should apply to the development and use of generative AI models. It released a series of chapters outlining its evolving thinking on how it interprets specific requirements of UK data protection legislation, including the appropriate lawful basis for training generative AI models and how the purpose limitation principle operates in the context of generative AI development and deployment.

AI – Cases About AI and Cases Using AI

There is some UK case law relating to the use of AI, such as the Getty case (see above) and Thaler, but otherwise there is very little litigation within public knowledge, reflecting the fact that disputes may be at the pre-action stage or before courts for which data is of limited availability (such as administrative or lower courts).

In the UK, there are cases of parties relying on fictitious case law that has apparently been created using AI. Examples include Olsen and another v Finansiel Stabilitet A/S (2025).

In Oakley v Information Commissioner [2024] UKFTT 315 (GRC) (18 April 2024), the First-tier Tribunal had to consider whether evidence generated by ChatGPT, concerning potential keywords for document searches, could be relied on to suggest that the keywords actually used were too narrow. The Tribunal concluded that little weight should be placed on ChatGPT’s evidence because “there is no evidence before us as to the sources the AI tool considers when finalising its response nor is the methodology used by the AI tool”. This case illustrates the caution with which the courts will treat AI-generated evidence and emphasises the need for care when using AI systems in court proceedings.

Official guidance has been published by the regulatory bodies for solicitors and barristers, respectively, and by His Majesty’s Courts and Tribunals Service about the use of AI in litigation. By way of example, the HMCTS guidance states that the use of AI in litigation by judicial office holders, parties and representatives is not prohibited. However, it identifies key risks and issues, along with suggestions to mitigate them, including understanding AI and its applications and upholding confidentiality and privacy. This guidance was developed in consultation with the Lady Chief Justice, the Master of the Rolls, the Senior President of Tribunals and the Deputy Head of Civil Justice, and is intended as a first step in proposed future work to support the relationship between the judiciary and AI. The speed at which AI is developing, in terms of both technology and use cases, means the guidance is likely to evolve over time.

AI and Financial Services

Adoption of AI technologies within financial services firms continues to increase, with the Bank of England reporting in November 2024 that 75% of firms are now utilising AI, a significant increase from 58% in 2022. This growth has largely been driven by the increasing integration of complex machine learning models, particularly foundation models, which now account for 17% of all AI use cases. These models, trained on extensive datasets, can be deployed for tasks that traditionally required extensive human input, such as fraud detection and risk assessment, and are increasingly capable of performing complex financial decision-making. However, these developments have also brought challenges, including:

  • heightened concerns over data privacy;
  • third-party dependencies;
  • lack of transparency within AI models; and
  • reliance on inaccurate information from AI models.

The financial services industry largely remains focused on a principles-based, outcome-focused regulatory approach in alignment with the UK White Paper, allowing firms to optimise customer and business outcomes while creating a framework for risk management. This was demonstrated in April 2024 by the responses of the Financial Conduct Authority (FCA), the Bank of England (BoE) and the Prudential Regulation Authority (PRA) to the government’s White Paper on AI. The key messaging from these regulators was support for the UK’s current regulatory approach, with the aim of maintaining financial stability, trust and confidence while enabling innovation in the financial services sector. The FCA’s response, aligned with those of the BoE and PRA, emphasises strong accountability and the need for agile, proportionate regulation.

This has also been addressed in the context of investment management: in October 2024, the Technology Working Group of HM Treasury’s asset management taskforce, in collaboration with the Investment Association (IA), published a report on AI. While supporting the current direction of AI regulation, the report emphasised the need for clear and consistent regulation in the financial services industry to enable confident planning and investment, as well as to combat fraud, cybercrime and misinformation.

More recently, in November 2024, the FCA launched a questionnaire to gather insights on the current and future uses of AI in the UK financial sector, which it will use to shape its future regulatory framework. Key aspects of the questionnaire include:

  • the exploration of AI use cases;
  • identifying barriers to adoption;
  • assessing the sufficiency of current regulations; and
  • suggesting necessary changes or clarifications to the regulatory regime.

Headlines around artificial intelligence in financial services have recently revolved around balancing innovation with consumer protection. The Treasury Committee launched an inquiry on 3 February 2025 which aims to understand how AI can be used in banking, pensions and other financial services while safeguarding consumers against potential risks. Recent discussions have also focused on “agentic” AI, which uses autonomous reasoning to handle complex tasks and adapt to new situations based on context and objectives, and which could transform areas such as fraud detection and personalised financial advice.

The firm would like to thank the following team members for their contributions to this guide: Victoria McCarron (Solicitor), Mope Akinyemi (Trainee), Yadhavi Analin (Trainee), Emily Fox (Solicitor), Harry Jewson (Senior Associate), Abbie McGregor (Solicitor), Pooja Bokhiria (Solicitor), Alex Fallon (Associate), Alice Gillie (Solicitor), Matthew Loader (Associate), Ebony Ezekwesili (Associate), Ellen Goodland (Associate), Brandon Wong (Associate), Ryan Jenkins (Associate), Rory Trust (Director) and Tom Green (Associate).

Burges Salmon

One Glass Wharf
Bristol
BS2 0ZX
UK

+44 (0) 117 939 2000

+44 (0) 117 902 4400

www.burges-salmon.com
