Artificial Intelligence 2024

Last Updated May 28, 2024

UK

Law and Practice

Authors



Burges Salmon has a multidisciplinary technology team that helps organisations across multiple sectors to embrace, develop and monetise cutting-edge technologies, including AI. Its lawyers combine deep technical knowledge and legal expertise with a keen understanding of the way businesses and public bodies procure, design, develop and deploy new technologies, including AI. The firm provides commercially relevant, pragmatic advice to help clients navigate the regulatory landscape whilst meeting their business requirements. As well as supporting clients who are investing in and deploying AI, the team is regularly called upon to provide expert guidance on technology and data regulation and other developments in the UK, EU and internationally. Clients range from leading global technology businesses to high-growth emerging technology companies, across a range of sectors, including financial services, retail, insurance, healthcare, the built environment, energy and utilities and the public sector.

The legal landscape around AI is broad and complex, spanning a range of legal areas which professionals and organisations should consider. Issues arise throughout the development, licensing and use of an AI system. As AI relies heavily on large volumes of data, compliance with data protection laws will be important, particularly when processing personal data, but having awareness of the risks around profiling and automated decision-making is also essential.

Key questions also arise in respect of the protection of intellectual property rights that exist in training data, and ownership of copyright in works generated by AI systems without human intervention. AI systems must also comply with consumer protection legislation in relation to unfair commercial practices and transparency (for example, providing consumers with appropriate information).

Other concerns include protecting against the possibility of discrimination in AI decision-making, especially in an employment context, and ensuring certainty on attributing product liability for any harm caused.

Various industries are applying AI and machine learning, including:

  • medical imaging and drug discovery in the healthcare sector;
  • credit scoring and fraud detection in the finance sector; and
  • dynamic pricing in retail and e-commerce.

Some of the AI models developed and deployed across industries are generative, creating new content learned from existing data (for example, in drug discovery), while others are predictive, making predictions based on historical data (for example, models predicting share prices).

There are a number of industry innovations driving the use of AI and machine learning, such as no-code AI, which allows non-technical users to build AI solutions, and the launch of ChatGPT and similar models, which has shifted perceptions and attitudes towards the use and deployment of AI systems within businesses. The potential benefits to consumers and businesses are vast, including improved services and products and advancements in research.

One example of a cross-industry initiative is the National AI Research and Innovation Programme, which aims to improve co-ordination and collaboration amongst researchers in the field of AI.

In the UK, the government is actively involved in promoting the adoption and advancement of AI for industry use, evidenced by its commitment to a “pro-innovation” approach to the regulation of AI. Through initiatives like the AI Action Plan and the National AI Strategy, the UK government is aiming to bolster its position as a global leader in AI development. Under the National AI Strategy, the UK government has committed to ensuring long-term support for the AI ecosystem.

The UK government offers certain R&D-related tax credits to incentivise industry innovation of AI.

The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology. This differs from the EU approach of creating a standalone regulator (the European AI Board) and introducing overarching AI-specific regulation (the EU AI Act) to sit above existing regulation.

Industry regulators are expected to interpret and apply the White Paper’s principles within their respective domains, using existing powers supplemented by additional funding and regulatory guidance from the UK government. In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

The UK government has acknowledged the need for targeted legislative interventions in the future, particularly concerning highly capable general-purpose AI systems.

There is currently no AI-specific legislation governing the development, deployment or use of AI in the UK; see 3.1 General Approach to AI-Specific Legislation.

A private member’s bill – the “Artificial Intelligence (Regulation) Bill” – has been proposed and is currently progressing through the UK legislative process, but there is no guarantee it will be adopted into law.

The UK government has provided initial guidance for UK regulators on how to interpret and apply the cross-sectoral principles (see 3.1 General Approach to AI-Specific Legislation), with further updates to be provided in summer 2024. The guidance discusses each of the five principles in more depth, sets out the key considerations that regulators should be aware of, and suggests technical standards and best practices. Although the guidance is not intended to be prescriptive, its objective is to ensure a level of coherency across different sectors.

In line with the UK’s government’s regulatory approach, some regulators have already issued guidance covering AI in their respective domains. For example, the Information Commissioner’s Office (ICO) has issued guidance on best practice for data protection-compliant AI, and on interpreting data protection law as it applies to AI systems. The National Cyber Security Centre has released guidance explaining the cybersecurity risks posed by AI. Further guidance is planned covering AI assurance and the use of AI within HR and recruitment.

This section is not relevant in this jurisdiction.

This section is not relevant in this jurisdiction.

This section is not relevant in this jurisdiction.

At the time of writing, there have not been any AI-specific amendments to UK data protection legislation or to information and content laws. However, the UK regulator for data protection, the ICO, has released specific guidance on the interaction between UK data protection legislation and the use/deployment of AI-based solutions.

In its response to the UK White Paper (regarding the regulation of AI), the UK government confirmed that its working group had failed to develop a voluntary code of practice for copyright and AI, the intention of which was to make licences for text and data mining more available. In light of this, certain government departments will commence a period of engagement with stakeholders to seek to implement an approach that is workable for both AI and the creative sectors, so there may be guidance on this topic in the future.

The UK government is taking a pro-innovation approach to the regulation of AI and has released certain non-binding principles, which industry regulators are expected to interpret and apply in line with existing laws. It has been confirmed that “future binding measures” are likely to be introduced for highly capable general-purpose AI models (the content of such measures is currently unknown). An Artificial Intelligence (Regulation) Bill has also been proposed, with the main purpose of establishing an “AI Authority” to oversee the regulatory approach to AI. The Bill is progressing through the UK parliamentary process and is not yet in force.

The key trend in the UK in respect of AI regulation is the tension between the extent to which AI should be regulated and a “pro-innovation” approach.

The EU AI Act will also affect UK businesses to the extent that UK-based providers of AI systems place their systems on the market in the EU or otherwise supply into the EU.

Very few UK judicial decisions have dealt with AI directly. The most noteworthy example is the Supreme Court’s decision in Thaler v Comptroller-General of Patents, Designs and Trade Marks, which confirmed that UK patent applications must identify a human “inventor”. However, the case concerned the formalities of patent registration only, and not the patentability of inventions created by or with the aid of AI, where a human is listed as the inventor on the application form; see 15.1 Applicability of Patent and Copyright Law for more information.

Also notable is Getty Images’ ongoing copyright, trade mark and database right infringement claim against Stability AI in respect of its “Stable Diffusion” text-to-image generative AI model. A trial is not expected until 2025 at the earliest.

In the context of privacy and data protection concerning automated systems, in 2020 the Court of Appeal held that South Wales Police’s use of automated facial recognition technology was unlawful under Article 8 of the ECHR (right to privacy) and the UK Data Protection Acts 1998 and 2018.

There is not yet an all-encompassing definition of AI under UK law.

The National Security and Investments Act 2021 (NSIA) provides a framework for government screening of proposed foreign direct investment into the UK and defines AI as follows:

“'Artificial intelligence' means technology enabling the programming or training of a device or software to –

(i) perceive environments through the use of data;

(ii) interpret data using automated processing designed to approximate cognitive abilities; and

(iii) make recommendations, predictions or decisions;

with a view to achieving a specific objective.”

The NSIA definition is narrower than the definitions adopted by the OECD and the EU, which, amongst other things, recognise generative AI. Given the UK’s stated aim of international interoperability, future definitions of AI adopted by the UK may more closely mirror the OECD/EU definitions.

Whether the courts will need to consider what is meant by AI remains to be seen and, in any event, the answer may turn on the particular contract or set of circumstances.

The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology.

The regulators expected to play leading roles in the UK regulation of AI are the four members of the Digital Regulation Cooperation Forum (DRCF):

  • the ICO (data protection and privacy);
  • the Office for Communications (Ofcom) (communications);
  • the Financial Conduct Authority (FCA) (financial services); and
  • the Competition and Markets Authority (CMA) (competition/antitrust).

Each operates across England and Wales. In addition, the Equality and Human Rights Commission is Britain’s independent equality and human rights regulator, enforcing the Equality Act 2010.

In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles, as outlined in 3.1 General Approach to AI-Specific Legislation.

Each of the DRCF regulators describes AI differently in its external publications; none of them has adopted a singular definition of AI in a regulatory context.

The ICO’s online “Guidance on AI and data protection” is indicative of the approach currently adopted by UK regulators, reflecting the absence of AI-specific regulation. It notes that data protection legislation does not use the term “AI” and that one’s legal obligations under such legislation therefore do not turn on exactly how the term is construed. It goes on to explain what the ICO means when it refers to “AI” in its guidance, differentiating between how the term is used in the research community (“various methods for using a non-human system to learn from experience and imitate human intelligent behaviour”) and how it is used by the ICO in the context of data protection regulation (“the theory and development of computer systems able to perform tasks normally requiring human intelligence”).

It remains to be seen whether regulators will adopt their own definitions of AI (whether in isolation or collectively – eg, via the DRCF).

The UK’s current approach to AI regulation is for existing regulators to deal with AI within their own remits. There are over 90 regulatory bodies in England and Wales, and four regulators are expected to play a key role in regulation affecting AI (see further commentary under 5.1 Regulatory Agencies).

ICO

The ICO upholds information and data privacy rights for individuals in the UK. The ICO’s guidance advocates a risk-based approach to AI and identifies specific AI risks for each foundational principle of data protection, with a particular focus on transparency, accuracy, fairness/anti-discrimination, security, data minimisation and the safeguarding of individual rights.

FCA

The FCA regulates the UK’s financial services industry. It has identified increased productivity, improved financial modelling, better tailoring of financial advice, hyper-personalised products and the ability to tackle fraud and money laundering more quickly and accurately at scale as potential benefits of AI in financial services. On the risk front, it has highlighted increased dependence on “big tech”, consumer/identity fraud, misinformation and market manipulation, and the potential for bias/discrimination in consumer banking.

CMA

The CMA’s focus is on protecting consumers from anti-competitive activities. In September 2023, it published a report on foundation models (FMs), highlighting the benefits that could arise from FMs if used well, including new and better products, technological breakthroughs, easier access to information, lower prices and increased competition. The report also warns of potential harms, including false and misleading information, AI fraud and the risk of a small number of firms using FMs to entrench their positions of market power, harming competition and increasing prices.

Ofcom

Ofcom is the UK’s communications regulator, operating across telecoms, post, broadcasting and on-demand video services. Like the FCA and CMA, it seeks to protect consumers and promote competition. Its “Strategic approach to AI 2024/25” published in March 2024 identifies a number of potential outcomes from the use of AI in communications, including benefits such as improved safety technologies, better visual effects, spam filtration and enhanced speech recognition/generation, and risks such as the easy creation and dissemination of illegal or harmful content, misinformation and disinformation, and more convincing phishing scams.

In May 2022, the ICO fined US-based Clearview AI more than GBP7.5 million for misusing UK residents’ publicly available personal data by scraping images from social media without consent to create a database of 20 billion images. Clearview AI used its database to provide facial recognition services to its customers.

Clearview successfully appealed: in October 2023, it was found that the ICO lacked jurisdiction because Clearview only provided its services to law enforcement/national security bodies outside the UK and EU, falling within an exception to the UK GDPR applicable to the acts of foreign governments. The ICO has sought permission to appeal.

The ICO issued a preliminary enforcement notice to Snap Inc. over potential failure to properly assess the privacy risks posed by its generative AI chatbot “My AI”. The investigation provisionally found that Snap failed to adequately identify and assess the risks to several million “My AI” users in the UK, including children aged 13 to 17.

Separately, the CMA is investigating whether Microsoft’s partnership with OpenAI (the creator of ChatGPT) amounts to a de facto merger and, if so, whether it could impact competition.

Standards development organisations such as ISO/IEC, IEEE and BSI have paved the way for consensus-driven standards through multi-stakeholder discussions to promote global alignment.

The four key types of technical standards for AI governance and risk management in the UK are as follows:

  • foundational standards – to help build common language and definitions around basic concepts and facilitate dialogue between stakeholders;
  • process standards – to provide guidance on best practices in management, process design, quality control and governance;
  • measurement standards – to create universal mechanisms and terminologies on measuring various aspects of an AI system’s performance; and
  • performance standards – to assist in setting up benchmarks, prerequisites and expectations that need to be achieved at specific stages for the effective functioning and utilisation of an AI system.

The UK's Department for Science, Innovation and Technology has published an introduction to AI assurance, with the objective of helping businesses to develop AI tools responsibly through assurance mechanisms and global technical standards.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission jointly published a new global management standard for artificial intelligence, known as ISO/IEC 42001. While this standard does not carry the same legal weight as legislation, it is expected to have a significant impact on how organisations demonstrate responsible and transparent use of AI. References to global standards are often included where organisations are providing goods and services on both a business-to-consumer and business-to-business basis. In addition, they are often referenced in commercial contracts as a way of establishing expectations of regulatory guidance and best practice.

As the standard evolves over time in line with emerging trends, risks and opportunities in the AI space, ISO/IEC 42001 has the potential to act as the default standard that organisations will be expected to adhere to.

Studies in the UK have indicated that AI usage across the UK public sector remains inconsistent, despite government messaging increasingly encouraging the utilisation of AI in the public sector. Within government, a study by the National Audit Office has found that 37% of government bodies surveyed had actively deployed AI, while a further 37% had not deployed AI but were piloting AI use.

Current uses of AI across UK government include the use of document comparison software by HM Land Registry, HMRC use of chatbots and Natural England's use of AI for habitat analysis. More broadly, facial recognition technology is becoming increasingly common in UK law enforcement, including the use of retrospective facial recognition as part of criminal investigations as well as live facial recognition (eg, at large-scale sporting events).

There are risks when using AI in the public sector and more widely, such as the risk of bias and discrimination flowing from foundation model outputs, as well as the risk of nefarious actors using foundation models to intentionally cause harm (with the added legitimacy of public sector usage).

See 4. Judicial Decisions for a discussion of the judicial decision relating to South Wales Police’s use of automated facial recognition technology.

There are currently no relevant pending cases.

AI plays a significant role in national security, including in cybersecurity, intelligence monitoring, automated defence systems and counterterrorism. Use cases include AI systems deployed to:

  • detect and respond to cyberthreats in real time;
  • monitor international borders and critical infrastructure, providing an early warning system;
  • identify trends and anomalies in data sets, particularly online activity data sets, that aid in detecting potential terrorist threats; and
  • identify targets as part of defensive weapon systems.

National security considerations will play a pivotal role in shaping future government legislation, regulations and policies on AI systems, such as ensuring ethical use.

The key issues and risks posed by generative AI are:

  • its generative nature – AI has a tendency to reproduce and, in some cases, amplify biases, depending on the material it was trained on;
  • transparency – it often cannot be determined how conclusions have been reached, which in turn makes it harder to attribute fault for erroneous conclusions;
  • cost and environmental impact – not only is AI incredibly expensive to train, creating a barrier to innovation, but there is also a large environmental impact;
  • reliability – hallucinations (fabricated, erroneous and untrue outputs) have shown that generative AI will often attempt to bridge data gaps by fabricating information, and some generative AI models are static (lacking up-to-date information);
  • IP considerations – as discussed further in 8.2 IP and Generative AI, there is debate surrounding IP ownership where generative AI is concerned;
  • ethical considerations – AI lacks the moral rationalisation that a human has and therefore may generate results that conflict with human values; and
  • data protection – as discussed further in 8.3 Data Protection and Generative AI, AI has increased the discussion on data ownership and how the traditional rights of data protection can be applied to these technologies.

The use of generative AI raises issues of both IP protection and IP infringement. In the case of copyright (the primary IP right for the protection of literary, dramatic, musical or artistic works), protection will only arise if a work meets the criteria of originality. Originality implies a degree of human creative input that, in the context of generative AI, may be minimal, absent or difficult to prove.

If copyright does exist in works produced by generative AI, it may not be clear who the “author” is, and therefore who owns the copyright.

Users of AI tools should not assume they will automatically own any copyright, and should check the provider’s terms and conditions, which may assign ownership to the provider or give the provider a licence to use the works and/or materials the user inputs into the AI tool. Users should also be mindful of the potential for generative AI content to infringe third-party IP rights and, again, review the provider’s terms to check for appropriate protections.

In addition to the output of AI systems, organisations will need to be mindful of IP rights in the data used to train such systems. They may already be aware of IP rights such as trade marks, copyright and patents, but they should be aware of database rights as well.

Where AI models are used to process personal data, at a fundamental level those systems need to comply with the data protection law principles of data protection by design and by default.

Under data protection law, individuals have various rights regarding their personal data, and these rights apply wherever that personal data is being processed by an AI system.

Where an individual exercises their right to the “rectification” or “erasure” of their personal data, and it is not possible to separate the individual’s data from the AI model in order to comply with those rights, the model may need to be deleted entirely to avoid a breach of data protection law and potential enforcement action by the regulator.

The challenge for AI developers is designing AI systems that can automatically comply with the law. Stipulating data protection principles, such as “purpose limitation” and “data minimisation”, as part of the design architecture and engineering process is key to achieving successful compliance outcomes.

AI is currently used in the legal profession for a number of purposes, including risk and AML compliance, administration and support services (eg, legal chatbots), precedent document generation (eg, producing template real estate leases) and text generation, particularly through predictive text or generative systems such as Microsoft Copilot to assist with contract drafting and content creation.

AI can be used in litigation to identify and summarise precedents or carry out automated search functions that are useful for document discovery purposes.

The Law Society has released guidance on how to manage the risks of adopting AI (which include IP, cybersecurity and data protection issues), and the Bar Council has also released guidance on considerations when using ChatGPT and generative AI software based on large language models (LLMs).

Of the risks already identified, ethical issues include the risk of bias in the training data leading to the perpetuation of harmful stereotypes, and hallucinations in LLMs leading to factually incorrect responses.

The UK does not have a specific liability framework applicable to harm or loss resulting from the use of AI and therefore existing laws apply. As an exception, the UK has passed the Automated and Electric Vehicles Act 2018, pursuant to which liability for damage caused by an insured automated vehicle when driving itself lies with the insurer.

There are many factors to consider where AI products cause harm, such as whether the defect was attributable to the design of the product, its programming or the way it was used. These factors can affect the liability position.

To claim damages under contract, the claimant needs to prove that the defendant breached a term of the contract and that said breach caused loss. Whilst this may be straightforward with simple products, establishing causation in an AI product may be more difficult.

In terms of trends, businesses are assessing whether or not they are sufficiently protected against liability risks arising from such emerging technologies, be it as operators, users or manufacturers. This is typically tackled by ensuring that contractual arrangements with suppliers and/or customers are sufficient, or by implementing appropriate insurance coverage.

The UK government’s approach continues to be pro-innovation and encouraging regulators to tackle AI regulation within their remits. As discussed in 10.1 Theories of Liability, with the exception of the Automated and Electric Vehicles Act 2018, liability must rely upon existing frameworks, and for the most part existing laws will apply to the allocation of liability in respect of AI.

The Department for Science, Innovation and Technology's response to the UK AI White Paper confirms that regulation and binding measures on “highly capable general-purpose AI” are likely to be required in the future. The UK government has confirmed it will not “rush to regulate”, as “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation”.

Bias in predictive and generative AI systems can arise from biased training data, algorithmic biases and biased design choices. Legally, there are concerns regarding discrimination, privacy and accountability, to name a few. Current legislation such as the Equality Act 2010 and data protection legislation aim to mitigate these risks.

Consumer areas at risk of bias from the use of AI systems include finance, healthcare and employment. Businesses may find themselves subject to liability from individual claims and regulatory fines, if found in breach of legislation such as the Equality Act 2010 and data protection legislation.

Businesses can take certain measures to address bias, such as ensuring appropriate processes are in place for verifying that data used to train the AI model is appropriate, and ensuring human oversight in respect of the AI model’s output prior to relying on any such output.

The UK ICO may take regulatory action where companies breach data protection legislation, by issuing significant fines and requiring companies to take certain steps to rectify non-compliance.

Protecting personal data with AI technology and business practices has both risks and benefits. While AI enables efficient data processing and personalised services, and helps to drive innovation, it also raises concerns relating to important issues such as privacy, bias and security.

Purely automated decision-making from processed personal data without human supervision is restricted at law, and if conducted at scale by AI systems poses challenges when it comes to bias, accountability and other fundamental rights. Individuals may be faced with uncertain decision-making processes, unintended outcomes and very real practical difficulties in challenging and rectifying automatic decisions made without nuance.

Data security measures used in relation to AI systems can benefit from complex data integrity, confidentiality and anonymisation protocols being built-in to the design of the AI system itself. However, vulnerabilities in these systems, and in the security protocols used to protect them, can lead to privacy breaches, regulatory non-compliance and serious reputational damage for businesses; where such vulnerabilities are exposed by a rogue AI system, businesses risk being exposed to a “point of no return” – cybersecurity has never been more important.

An overarching issue in this area is the lack of a clear and consolidated regulatory response to facial recognition technology (FRT). At present, the UK approach is a combination of human rights law, data protection law, equality law and, in the context of law enforcement, criminal justice legislation. For completeness, it should be noted that the EU AI Act, while not directly effective in the UK, does directly address the processing of biometric data, and the UK will likely be influenced by the approach taken in the EU AI Act.

FRT relies on individuals' personal data and biometric data, and its use raises a number of challenges from a data protection perspective. Biometric data is intrinsically sensitive and the use of FRT therefore gives rise to challenges around the necessity and proportionality of processing, as well as the need to identify a lawful basis for processing.

This is particularly true where FRT involves the automatic and indiscriminate collection of biometric data in public places, which is becoming increasingly common, whether for law enforcement purposes or for commercial purposes such as targeted advertising in the retail sector. In this context, issues include:

  • consent and transparency;
  • the necessity and proportionality of processing;
  • statistical accuracy;
  • the risk of algorithmic bias and discrimination; and
  • the processing of children's data without necessary additional safeguards.

Companies utilising FRT should be cognisant of the risks associated with its use, particularly in relation to potential violations of data protection legislation and equality and anti-discrimination laws.

Automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on factual data and on digitally created profiles or inferred data.

Article 22 of the UK GDPR restricts organisations' ability to make solely automated decisions that result in a legal or similarly significant effect on an individual. In this context, “solely automated” means that the decision is totally automated without any human influence on the outcome.

Decisions taken in this manner can potentially have a significant adverse effect, which is particularly concerning as there can often be a lack of understanding around how the decision-making process works. This lack of understanding can arise from the perspective of both the impacted individual and the individual dealing with the consequences of the relevant decision. There is also a risk that inherent biases in the AI-based decision-making tools may lead to discriminatory outcomes.

Organisations that fail to comply with Article 22 may be subject to significant fines as well as liability to affected individuals who may exercise their right to object under Article 21 of the UK GDPR as well as bringing a legal claim against the company.

Currently, there is no UK-wide regulatory scheme specific to AI, and therefore existing laws continue to apply, as well as specific AI-related guidance issued by regulators.

For example, UK data protection legislation will almost always apply to the development, deployment and/or procurement of AI. One of the key principles under UK data protection legislation is transparency. The UK regulator for data protection, the ICO, has released certain AI-related guidance, in which it makes it clear that businesses must be transparent about how they process personal data in an AI system, stressing the importance of “explainability” in AI systems.

In addition, UK data protection legislation includes rules around the profiling of individuals and automated decision-making. If permitted, transparency with individuals is key.

There is currently little regulation or case law that specifically considers the application of UK competition law to AI. In recent years, commentators and the CMA have discussed concerns around the potential for passive collusion between undertakings when using price-setting algorithms.

In September 2023, the CMA published its initial report into AI Foundation Models (FMs – machine learning models trained on vast datasets) and their impact on competition and consumer protection. The CMA also published an update paper to this report in April 2024. In these documents, the CMA sets out six overarching principles on which it will base its response to the future development and deployment of FMs:

  • access – maintaining ongoing, ready access to key inputs;
  • accountability – ensuring FM developers and deployers are accountable for outputs provided to consumers;
  • diversity – ensuring sustained diversity of business models;
  • choice – ensuring sufficient choice for businesses so they can decide how to use FMs;
  • fair dealing – preventing anti-competitive conduct, including self-preferencing, tying or bundling; and
  • transparency – ensuring that consumers and businesses are given information about the risks and limitations of FM-generated content.

The Digital Markets, Competition and Consumers Bill (DMCC), which is expected to come into force later this year, will further enhance the CMA’s enforcement powers in respect of digital activity. In particular, the DMCC is expected to grant the CMA the ability to set targeted conduct requirements on firms found to have strategic market status (SMS) in respect of a digital activity. The CMA notes in its report that it is likely that FMs and their deployment will be relevant to the CMA’s selection of SMS candidates, particularly where FMs are deployed in connection with other, more established activities.

Furthermore, the CMA indicates in its update paper that it intends to take a proactive approach to enforcement by:

  • prioritising certain digital activities for investigation using new powers granted by the DMCC, such as critical inputs and access points;
  • monitoring current and emerging partnerships closely; and
  • stepping up its use of merger control to determine whether partnerships between FM developers and/or deployers fall within the current rules.

Human oversight, circuit breakers and IP ownership are some of the unique areas to consider when procuring AI solutions, and the contractual documents should reflect these areas.

The UK government has published a set of guidelines for AI procurement to support contracting authorities when engaging with suppliers of AI solutions, and therefore the following steps should also be considered by contracting authorities prior to procuring an AI solution:

  • establish a clear responsibility record to define who has accountability for the different areas of the AI model;
  • determine a clear governance approach to meet requirements;
  • ensure there is regular model testing so issues of bias within the data may be addressed;
  • define acceptable model performance (service levels);
  • ensure knowledge is transferred through regular training;
  • ensure that appropriate ongoing support, maintenance and hosting arrangements are in place;
  • address IP ownership and confidentiality;
  • allocate risk/apportion liability to the parties best able to manage it;
  • factor in future regulatory changes; and
  • include appropriate end-of-life processes (define end-of-contract roles and processes).

Whilst these steps are focused on public sector organisations, they are also helpful for private sector organisations procuring AI solutions.

The Society for Computers and Law (SCL) AI Group has also produced sample clauses for transactions involving AI systems, which serve as a useful checklist of issues to consider when procuring AI solutions.

Very few UK employment tribunal decisions have dealt with AI directly.

Employers may use tools to make recruitment practices more efficient and to seek to identify the best candidates – eg, through CV screening tools and one-way AI video interviews. Using AI to assist during the recruitment process can aid the consistency of decision-making and improve the efficiency of internal processes.

However, one example of a risk is that the AI solution could inaccurately screen candidates, with a suitable candidate being lost in the process.

AI has not fully removed the risk of bias in how candidates are assessed. For example, an AI solution could rank hobbies listed in CVs, treating hobbies generally associated with men (such as football) more favourably and therefore inadvertently putting women at a disadvantage.

As the risk of bias in AI tools still exists, there is a risk that the criteria AI is applying to rank candidates could inadvertently result in a discriminatory outcome, with an increase in litigation risk for businesses. Therefore, it is important that businesses implement appropriate human oversight where AI solutions are used for these purposes, to ensure decisions are appropriate and fair.

With an increase in homeworking since the COVID-19 pandemic, there has also been an increased use of monitoring by employers, which may include CCTV, attendance logs, email and telephone monitoring, keystroke-logging, browser monitoring, flight risk analysis and CV analysis (to consider potential new skills). Whilst these developments in technology could be seen as a new and untapped way of providing an employer with invaluable information about its workforce, some of these methods are highly intrusive.

Monitoring of the workforce has UK data protection legislation implications, and businesses will need to consider, amongst other things, whether such monitoring can be justified and whether it has an appropriate lawful basis.

Separately, information acquired from such technology could create broader employee relations issues, such as the employer gaining information that increases the risk of a potential discrimination claim (eg, information coming to light about an employee’s health, which triggers the employer’s proactive legal duty to make reasonable adjustments).

From an employment law perspective, if a digital platform is used (eg, car services and food delivery), there can be challenges when assessing the service provider’s employment status.

Where the platform is more sophisticated and presents as exercising more control over the service provider, there is a greater risk that the service provider will not be considered genuinely self-employed, which may be contrary to the service provider’s intention. This has broader employment law implications – eg, an entitlement to holiday pay, the national minimum wage, discrimination protection and dismissal protection.

This issue was recently tested in the Supreme Court in relation to the employment status of Uber drivers. Uber argued that its drivers were self-employed, with flexibility to pick up work at their discretion. However, the Supreme Court found the drivers to be “workers”, one of the reasons being the way Uber’s technology platform controlled its drivers, meaning they were not considered genuinely self-employed.

AI is increasingly used by financial services firms, particularly in the following areas:

  • customer engagement (such as chatbots);
  • decision-making, including in the credit and investment management sectors;
  • driving efficiencies, particularly in compliance; and
  • advice tools.

There is a broad range of regulations potentially applicable to firms using AI. The FCA takes a technology-agnostic approach, regulating firms’ activities (including those utilising AI) rather than AI technology itself. Therefore, the rules currently applicable to firms generally remain relevant in the context of AI, including the FCA’s Principles for Businesses, Handbook and the Consumer Duty. Other key areas for firms to consider when integrating AI into their business operations include the Senior Managers and Certification Regime and applicable data protection regulations.

With benefits of AI use, however, come risks – such as unintended bias, poor decision-making and unlawful discrimination (including due to poor data quality in the model), as well as broader governance-related risks. Financial services firms must ensure that they adequately eliminate or mitigate these risks and avoid customer detriment.

There is currently no specific legislation in the UK that exclusively governs AI or its use in healthcare; a variety of existing regulations apply, including the Data Protection Act 2018, the Medical Device Regulations 2002 and guidance issued by the Medicines and Healthcare products Regulatory Agency.

Data use and sharing is a key consideration where AI is used in healthcare, so organisations must comply with UK data protection legislation. For example, in order to provide training data for machine learning, operators must obtain patient consent, put systems in place to ensure data is anonymised and implement robust security measures to protect privacy and confidentiality. Following ethical guidelines is essential for the responsible use of data in healthcare AI.

Patient medical data is vulnerable to cyber-attacks and data breaches. As has been seen in the past, healthcare systems are open to attack and are often targeted by hackers due to the sensitive information they store. Ensuring that these areas have bolstered cybersecurity measures is vital for maintaining the security of patient data.

The Automated Vehicles Bill has recently been published and is progressing through the UK’s legislative process (it is not yet in force). The Bill seeks to “set the legal framework for safe deployment of self-driving vehicles in Great Britain”.

Under the Bill, if an automated vehicle is authorised (following a successful self-driving test), an “authorised self-driving entity” will be legally responsible for the automated vehicle. For “user-in-charge” vehicles, there will be immunity from liability in certain circumstances, and the Bill establishes when the “user-in-charge” will be liable (where the user is legally defined as a driver).

It is recognised that automated vehicles are likely to have access to/store personal data. If the Bill is successfully implemented in UK law, other laws will continue to apply, such as UK data protection legislation.

The UK has participated in the World Forum for Harmonisation of Vehicle Regulations (a working party within the framework of the United Nations). The UK government has also published principles of cybersecurity for connected and automated vehicles.

In the UK, the legal framework for addressing product safety and liability largely remains as retained EU law post-Brexit, which broadly requires products to be safe in their normal or foreseeable usage.

Sector-specific legislation (such as for automated vehicles, electrical and electronic equipment and medical devices) may apply to some products that include integrated AI but on the whole it is widely considered that existing rules do not comprehensively address the new and substantial risks posed by AI at the manufacturing stage.

The UK government has conducted a review of the UK’s product safety and liability regimes with a view to introducing a new framework fit for the modern age.

Professionals using AI in their services must adhere to existing professional standards and regulations, including adherence to sector-specific regulations and guidelines, such as those issued by the FCA and the Solicitors Regulation Authority.

Professionals must ensure that AI systems are designed and used in a manner that upholds professional integrity, competence and ethical conduct, which involves safeguarding client confidentiality through compliance with data protection laws, respecting IP rights and, where necessary, obtaining client consent when using AI systems. Liability issues may arise if AI systems produce harmful outcomes, and professionals may be held accountable for the actions of AI systems they employ.

In the case of copyright, Section 9(3) of the Copyright, Designs and Patents Act 1988 states that the author of a computer-generated literary, dramatic, musical or artistic work is the person who undertook the arrangements necessary for the creation of that work. Applying this to the example of an image created by a text-to-image generative AI system, and assuming copyright exists in the work (see 15.3 AI-Generated Works of Art and Works of Authorship), it is unclear whether the author would be the user who entered the prompt or the operator of the AI, and the position may vary from one work to another, but the author can only be a person.

In the case of patents, the UK Supreme Court recently confirmed in Thaler v Comptroller-General of Patents, Designs and Trade Marks that only a human can be recorded as the “inventor”, and not an AI machine such as Dr Thaler’s “DABUS”. It is important to note that Dr Thaler’s case concerned the formalities of patent registration, not the wider question of the patentability of inventions created or aided by AI systems more generally. Had Dr Thaler recorded himself as the inventor on the registration form (in his capacity as the creator/owner of DABUS), the application may well have succeeded.

A trade secret is a piece of information that is treated as confidential by its owner and has commercial value because it is secret. UK law protects trade secrets against unjustified use and disclosure, both through the equitable doctrine of confidence and under the Trade Secrets Regulations 2018. Trade secret protections could be used to protect the underlying source code of an AI system as well as training sets, algorithms and data compilations. These elements are essential for AI systems but may not always qualify for patent protection.

The immediacy of trade secret protection and the broad scope of coverage mean that this is an increasingly common method of protection in the UK. The use of trade secret protection in this area, however, must be balanced with the need for transparency and accountability.

AI-generated works pose a new challenge to copyright law. Section 9(3) of the Copyright, Designs and Patents Act 1988 provides that the author of copyright in computer-generated works is the person who undertakes the necessary arrangements to create the work. Putting aside the difficulty of determining who this person is (see 15.1 Applicability of Patent and Copyright Law), Section 9(3) is only engaged if copyright subsists in the computer-generated work in the first place, which may not be the case for works created by generative AI.

Works are only protected by copyright – which subsists automatically – if they are “original”, broadly meaning the expression of human creative freedom. Unlike more traditional means of using computers to create art (typically requiring at least some level of human skill and effort), generative AI is capable of creating art from simple – even minimal – user prompts. There is a very significant question mark over whether such works can be “original” and therefore benefit from copyright protection. In the absence of specific legislation on the issue, it is possible that the courts will not come to a blanket conclusion and that the answer will vary from work to work, depending on the extent and manner of human creativity involved in its creation.

The creation of works through OpenAI tools such as ChatGPT raises a number of IP issues. Such models are trained on vast amounts of data from a number of sources. As a result, determining the ownership of AI-generated works is challenging.

The question to be asked for IP purposes is whether the original creator is the AI model, the owner of the underlying information, or the user. Furthermore, the use of pre-trained models and datasets may infringe upon existing IP.

In addition, the rights and limitations concerning generated content are governed by OpenAI's licensing agreements and terms of use, which should be reviewed carefully.

The Institute of Directors is a professional organisation for company directors, and has released a “reflective checklist” outlining the following 12 principles that are intended to provide guidance to boards of directors on the use of AI within their organisations:

  • monitor the evolving regulatory environment;
  • continually audit and measure what AI is in use;
  • undertake impact assessments that consider the business and the wider stakeholder community;
  • establish board accountability;
  • set high-level goals for the business aligned with its values;
  • empower a diverse, cross-functional ethics committee that has the power to veto the use of AI;
  • document and secure data sources;
  • train people to get the best out of AI;
  • comply with privacy requirements;
  • comply with secure by design requirements;
  • test and remove AI from use if bias and other impacts are discovered; and
  • regularly review the organisation's use of AI.

The key issue for corporate boards to consider is the ongoing and rapidly changing legal landscape in relation to AI and its applicability within their organisations.

Implementing AI best practices will require consideration of a number of key issues, such as regulatory compliance, ethical considerations, effective risk management and robust data governance.

The key for organisations in the UK is ensuring adherence to the UK government’s following five cross-sectoral principles (and related regulator guidance):

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

Ethical considerations are equally significant. For example, businesses should conduct thorough ethical assessments before developing, deploying and procuring AI systems, which is likely to involve identifying and eliminating biases and addressing privacy concerns.

Effective risk management and internal governance is another key area to consider, such as the identification and mitigation of potential risks associated with AI deployment, development or procurement, and establishing robust internal processes with appropriate guardrails to ensure the responsible and safe use of AI.

Burges Salmon

One Glass Wharf
Bristol
BS2 0ZX
UK

+44 (0) 117 939 2000

+44 (0) 117 902 4400

www.burges-salmon.com

Trends and Developments


Authors



Burges Salmon has a multidisciplinary technology team that helps organisations across multiple sectors to embrace, develop and monetise cutting-edge technologies, including AI. Its lawyers combine deep technical knowledge and legal expertise with a keen understanding of the way businesses and public bodies procure, design, develop and deploy new technologies, including AI. The firm provides commercially relevant, pragmatic advice to help clients navigate the regulatory landscape whilst meeting their business requirements. As well as supporting clients who are investing in and deploying AI, the team is regularly called upon to provide expert guidance on technology and data regulation and other developments in the UK, EU and internationally. Clients range from leading global technology businesses to high-growth emerging technology companies, across a range of sectors, including financial services, retail, insurance, healthcare, the built environment, energy and utilities and the public sector.

Artificial Intelligence in the UK: an Introduction

The world of artificial intelligence (AI) is expanding rapidly. There have been a number of developments in this area, the most notable being the recent launches of OpenAI’s GPT-4 and Google DeepMind’s Gemini, which can understand text, images and audio.

Alongside the launch of these platforms, there has been an exponential increase in the development, deployment and procurement of AI across the UK.

Big tech companies like OpenAI and Google DeepMind still lead in AI but face pressure to balance product development and innovation with ethical concerns.

The key legal developments in AI in the UK are discussed below.

UK’s Regulatory Approach to AI

Pro-innovation and sector-led regulatory approach

The UK government's approach to regulating AI is “pro-innovation”. Its White Paper published on 29 March 2023 sets out a plan for AI to be regulated in the UK through the application of existing laws by current regulators within their respective remits, rather than applying blanket regulations to all AI technology. UK sectoral regulators are starting to issue AI-specific guidance in line with the principles-based approach set out in the White Paper, covering:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

This differs from the EU’s approach of creating a standalone regulator (the European AI Board) and introducing overarching AI-specific regulation (the EU AI Act). Whilst the UK may have its own approach to AI regulation, the extraterritorial effect of the EU AI Act means that UK businesses looking to deploy an AI model in the EU will also have to comply with the EU AI Act.

There are, however, calls for more stringent regulation of AI in the UK. A member of the House of Lords has proposed the Artificial Intelligence (Regulation) Bill as a Private Member’s Bill, which has completed its second reading in the House of Lords and is currently at committee stage. If it were to come into force, the Bill would create a new AI regulator in the UK, along with the potential for further regulation. As most Private Members’ Bills do not become law, whether this Bill will come into force remains to be seen.

General-purpose AI systems

Whilst the UK government has committed to a “pro-innovation” approach, it has also recognised the major risks of highly capable AI models and, in its response to the White Paper consultation, highlights that specific regulation for high-risk AI systems may be required in the future, although it will not “rush to regulate”. So, whilst presently focused on existing law, voluntary commitments and stakeholder consultation, the UK government does leave room for future legislative action to comprehensively address emerging AI issues.

The UK government’s approach to AI regulation reflects a balanced and nuanced view, backing growth of the UK AI industry while providing for sectoral oversight. Guidance for regulators and plans to address advanced AI systems demonstrate a commitment to an overarching framework supporting responsible AI development and use in the UK.

Impact of the Automated Vehicles (AV) Bill

Once it becomes law, the AV Bill (introduced in November 2023) will also have an impact on the regulation of AI, as it provides a legal framework for the roll-out of automated vehicles in the UK.

The Department for Transport's forthcoming Transport AI Strategy further highlights the government's intention to advance AI technology in the transportation and AV sector. Overall, the Bill signals the UK's proactive yet regulated approach to integrating AI into its infrastructure.

International approach to AI

In November 2023, the UK hosted the AI Safety Summit at Bletchley Park. The summit assembled global leaders, tech executives, academics and civil society figures, and led to an international declaration (the “Bletchley Declaration”) to address AI-associated risks. The key takeaways of the Bletchley Declaration are as follows:

  • international co-operation is required to address the international risks posed by AI;
  • the potential of AI is far-reaching across many areas of society;
  • there are significant risks posed by AI, particularly in relation to potential intentional misuses, unintended issues of control, cybersecurity and biotechnology;
  • risks could be mitigated through systems for safety testing, evaluations and other appropriate measures; and
  • the agenda going forward will be to identify AI safety risks of shared concern and build a shared scientific and evidence-based understanding of the risks (in particular, as the capabilities of AI models continue to increase) and collaborate as appropriate to ensure safety in light of the risks identified.

Key developments in the UK following the AI Safety Summit at Bletchley Park include the following:

  • a newly formed partnership between the UK AI Safety Institute and the US AI Safety Institute to develop a shared approach to AI safety testing, collaborate on research and undertake joint testing; and
  • plans for South Korea and the UK to co-host an AI Safety Summit in May 2024, which aims to spearhead discussions on AI safety and address the potential capabilities of the most advanced AI models, building on the Bletchley Declaration.

This landmark collaborative international approach to AI safety has been welcomed.

Intellectual Property Considerations in AI

AI raises unique questions around IP rights, particularly copyright and patents. There have been key developments in this area in the UK, with recent case law emphasising the complexity around IP rights and AI-generated content and inventions.

Copyright – Getty Images v Stability AI

This case illustrates the wider debate around the use of copyright-protected data to train AI models. AI systems (particularly generative AI) require vast training data sets, which are normally sourced from publicly available content. However, using copyright-protected material without appropriate licensing arrangements in place raises questions of legality. The creative industries argue that, absent agreed licensing terms, such use constitutes copyright infringement. AI developers, on the other hand, argue that it falls within fair use or equivalent copyright exceptions and is vital for innovation.

The case is ongoing but highlights the need for clarity around the rights and obligations of AI developers and content creators in the age of AI.

The UK government recently announced (in its White Paper response) that its working group was unable to agree a voluntary code of practice on copyright and AI, which had been intended to make licences for text and data mining more readily available. Government departments will now commence a period of engagement with stakeholders, with a view to implementing an approach that is workable for all parties involved.

Patents – Thaler v Comptroller-General

Another key development is the outcome of the recent case of Thaler v Comptroller-General, in which the UK Supreme Court confirmed that an “inventor” for the purposes of a patent application must be a human, not an AI model. This followed the UK Intellectual Property Office's rejection of patent applications for inventions generated by the DABUS AI system.

This case was welcomed as a key point of clarity for businesses seeking patent protection. However, it should be noted that the case concerned the formalities of patent registration only, and not the patentability of inventions created by or with the aid of AI, where a human is listed as the inventor on the application form.

Data Protection and AI

Given the large data sets used to train AI models and the use cases of such models, personal data will likely be used at some point within the AI value chain, so businesses will need to consider compliance with UK data protection legislation.

In line with the UK’s “pro-innovation” approach of regulating AI through the application of existing laws by existing regulators, the Information Commissioner's Office (ICO – the supervisory authority for data protection in the UK) has released its AI and data protection guidance. This is a key development in the UK, as the guidance gives businesses a helpful steer on data protection compliance when procuring, developing or deploying an AI system.

The guidance is structured around the data protection principles set out in the UK GDPR and emphasises:

  • fairness in AI to align with the ICO's commitment to safeguarding individuals and vulnerable groups;
  • accountability and governance, and that data protection impact assessments should be conducted as part of the process of developing, deploying or procuring an AI system;
  • transparency, stressing the importance of explainability in AI systems;
  • lawfulness – namely, that businesses need to ensure they have an appropriate lawful basis to process personal data throughout the AI value chain (noting that such lawful basis may differ when, for example, a business is training an AI model versus deploying an AI model); and
  • accuracy in data produced by the AI model.

The ICO recognises that AI is an area of rapid technological advancement and anticipates further updates to its guidance.

Overview of consultation series announced

Earlier this year, the ICO also launched a consultation series aiming to address the legal implications of using generative AI, focusing on:

  • the lawful basis for training generative AI models on web-scraped data;
  • purpose limitation in the generative AI lifecycle; and
  • the accuracy of training data and model outputs.

The ICO is “moving fast to address any risks and enable organisations and the public to reap the benefits of generative AI”, and the consultation is expected to lead to further updates to the guidance, which businesses should review and implement to ensure data protection compliance.

Procurement of AI

There has been a rapid increase in businesses procuring AI tools for use within their organisation, with aims such as improving efficiency and enhancing internal offerings to employees and/or external product or service offerings.

Examples of the key areas of risk that businesses are currently focusing on during their AI procurement processes include:

  • the origin of the training data and whether the supplier has the necessary licences and consents to use such data, and ensuring that the contract has sufficient protection in this regard;
  • the liability position where output data produced by the AI model infringes third-party IP rights;
  • whether personal data will be involved; and
  • the confidentiality of the customer’s proprietary information and the extent to which the model learns from such information inputted by the customer.

The Department for Science, Innovation and Technology (DSIT) has published an introduction to AI assurance with the objective of helping businesses develop AI tools responsibly, through assurance mechanisms and global technical standards. Specifically, AI assurance should help businesses gather data on how an AI system functions and assess the risks and impacts of AI systems. The DSIT guidance is a useful resource both for businesses procuring AI models, helping them understand what is considered a responsible AI system, and for businesses developing and deploying AI models, helping them understand how to build and deploy one responsibly.

Public sector procurement

A Procurement Policy Note (PPN) on the use of AI in public sector procurement was released in March 2024. The PPN is relevant to the buying and use of AI, and primarily to suppliers’ use of AI in generating tenders. It sets out best practice guidance for Central Government Departments, their Executive Agencies and Non-Departmental Public Bodies (other public sector authorities are asked to consider applying the approach outlined in the PPN).

The PPN makes it clear that suppliers’ use of AI is not prohibited during the commercial process but that “steps should be taken to understand the risks associated with the use of AI tools in this context, as would be the case if a bid writer has been used by the bidder”. The PPN suggests steps for contracting authorities to understand such risks, such as:

  • requiring suppliers to disclose AI usage;
  • implementing controls to prevent the misuse of confidential information;
  • conducting due diligence;
  • planning for increased activity due to AI streamlining processes;
  • potentially extending procurement timelines; and
  • aligning with internal teams for expertise on AI implications.

The PPN states that contracting authorities could also add a disclosure question to the Invitation to Tender, requiring suppliers to disclose their use of AI when responding to the tender questions or as part of their proposed delivery of the service. Such questions should not be scored or taken into account in tender evaluation and should be used for information only. However, contracting authorities can continue to ask and evaluate any further relevant questions about the use of AI as part of their award process, provided those questions are specific to their requirements and compliant with procurement law.

Considering AI Alongside Consumer Issues

In May 2023, the Competition and Markets Authority (CMA) launched an investigation into the expanding market for AI Foundation Models and their potential impacts on both competition and consumer protection. It released a report summarising its initial findings in September 2023, and published an update paper in April 2024.

These documents set out the details of notable developments that have occurred in the AI sector, and outline the CMA’s key concerns, following its own in-depth research and engagement with stakeholders. As rapid growth continues in this area, the report and update paper also establish a set of core principles which the CMA considers necessary to ensure fair, open and effective competition, to protect consumers, and to shape positive market outcomes, and upon which it will base its response to future developments in the sector.

More specifically, the CMA considers the trickle-down effects of Foundation Models on markets, highlighting the importance of safeguarding consumer interests as AI technology develops. A key aspect is the emphasis on governance measures and enforcement tools, drawing upon existing competition and consumer protection laws, as well as the new powers expected to be granted to the CMA by the Digital Markets, Competition and Consumers Bill, to tackle emerging challenges.

The CMA also indicates that it intends to carefully scrutinise arrangements between large incumbent firms in the sector, to determine whether they would be subject to the UK’s merger control regime, in order to ensure that powerful partnerships and integrated firms cannot reduce others’ ability to compete or steer markets away from diverse business models and model types. Overall, the report and update paper both stress the importance of the CMA taking proactive steps to consider the impacts of AI in the consumer sphere and how best to tackle these.

AI in Litigation

HM Courts and Tribunals Service has released guidance for judicial office holders on the use of AI technology. The guidance addresses the main risks associated with the use of AI and offers mitigation strategies, covering understanding the implications of AI, confidentiality, ensuring accuracy, and awareness of biases and security concerns. The Bar Council has also provided similar guidance for barristers on navigating the use of AI, encouraging sensible use and an understanding of potential risks such as hallucination and biases in training data.

Burges Salmon

One Glass Wharf
Bristol
BS2 0ZX
UK

+44 (0) 117 939 2000

+44 (0) 117 902 4400

www.burges-salmon.com
