Artificial Intelligence 2024 Comparisons

Last Updated May 28, 2024

Contributed By Burges Salmon

Law and Practice

Authors



Burges Salmon has a multidisciplinary technology team that helps organisations across multiple sectors to embrace, develop and monetise cutting-edge technologies, including AI. Its lawyers combine deep technical knowledge and legal expertise with a keen understanding of the way businesses and public bodies procure, design, develop and deploy new technologies, including AI. The firm provides commercially relevant, pragmatic advice to help clients navigate the regulatory landscape whilst meeting their business requirements. As well as supporting clients who are investing in and deploying AI, the team is regularly called upon to provide expert guidance on technology and data regulation and other developments in the UK, EU and internationally. Clients range from leading global technology businesses to high-growth emerging technology companies, across a range of sectors, including financial services, retail, insurance, healthcare, the built environment, energy and utilities and the public sector.

The legal landscape around AI is broad and complex, spanning a range of legal areas which professionals and organisations should consider. Issues arise throughout the development, licensing and use of an AI system. As AI relies heavily on large volumes of data, compliance with data protection laws will be important, particularly when processing personal data, but having awareness of the risks around profiling and automated decision-making is also essential.

Key questions also arise in respect of the protection of intellectual property rights that exist in training data, and ownership of copyright in works generated by AI systems without human intervention. AI systems must also comply with consumer protection legislation in relation to unfair commercial practices and transparency (for example, providing consumers with appropriate information).

Other concerns include protecting against the possibility of discrimination in AI decision-making, especially in an employment context, and ensuring certainty on attributing product liability for any harm caused.

Various industries are applying AI and machine learning, including:

  • medical imaging and drug discovery in the healthcare sector;
  • credit scoring and fraud detection in the finance sector; and
  • dynamic pricing in retail and e-commerce.

The AI models developed and deployed across industries may be generative, creating new content based on patterns learned from existing data (such as in drug discovery), or predictive, making predictions based on historical data (such as models forecasting share prices).

There are a number of industry innovations driving the use of AI and machine learning, such as no-code AI, which allows non-technical users to build AI solutions, and the launch of ChatGPT and similar models, which have shifted perceptions and attitudes towards the use and deployment of AI systems within businesses. The potential benefits to consumers and businesses are vast, including improved services and products and advancements in research.

One example of a cross-industry initiative is the National AI Research and Innovation Programme, which aims to improve co-ordination and collaboration amongst researchers in the field of AI.

In the UK, the government is actively involved in promoting the adoption and advancement of AI for industry use, evidenced by its commitment to a “pro-innovation” approach to the regulation of AI. Through initiatives like the AI Action Plan and the National AI Strategy, the UK government is aiming to bolster its position as a global leader in AI development. Under the National AI Strategy, the UK government has committed to ensuring long-term support for the AI ecosystem.

The UK government offers certain R&D-related tax credits to incentivise industry innovation of AI.

The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology. This differs from the EU approach of creating a standalone regulator (the European AI Board) and introducing overarching AI-specific regulation (the EU AI Act) to sit above existing regulation.

Industry regulators are expected to interpret and apply these principles within their respective domains using existing powers supplemented with additional funding and regulatory guidance from the UK government. In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

The UK government has acknowledged the need for targeted legislative interventions in the future, particularly concerning highly capable general-purpose AI systems.

There is currently no AI-specific legislation governing the development, deployment or use of AI in the UK; see 3.1 General Approach to AI-Specific Legislation.

A private member's bill – the “Artificial Intelligence (Regulation) Bill” – has been proposed and is currently progressing through the UK legislative process, but there is no guarantee it will be adopted into law.

The UK government has provided initial guidance for UK regulators on how to interpret and apply the cross-sectoral principles (see 3.1 General Approach to AI-Specific Legislation), with further updates to be provided in summer 2024. The guidance discusses each of the five principles in more depth, sets out the key considerations that regulators should be aware of, and suggests technical standards and best practices. Although the guidance is not intended to be prescriptive, its objective is to ensure a level of coherency across different sectors.

In line with the UK government's regulatory approach, some regulators have already issued guidance covering AI in their respective domains. For example, the Information Commissioner's Office (ICO) has issued guidance on best practice for data protection-compliant AI, and on interpreting data protection law as it applies to AI systems. The National Cyber Security Centre has released guidance explaining the cybersecurity risks posed by AI. Further guidance is planned covering AI assurance and the use of AI within HR and recruitment.

This section is not relevant in this jurisdiction.

This section is not relevant in this jurisdiction.

This section is not relevant in this jurisdiction.

At the time of writing, there have not been any amendments to UK data protection legislation or information and content laws. However, the UK regulator for data protection, the ICO, has released specific guidance in respect of the interaction between UK data protection legislation and the use/deployment of AI-based solutions.

In its response to the UK White Paper (regarding the regulation of AI), the UK government confirmed that its working group had failed to develop a voluntary code of practice for copyright and AI, which was intended to make licences for text and data mining more readily available. In light of this, certain government departments will commence a period of engagement with stakeholders to seek to implement an approach that is workable for both AI and the creative sectors, so there may be guidance on this topic in the future.

The UK government is taking a pro-innovation approach to the regulation of AI and has released certain non-binding principles, which industry regulators are expected to interpret and apply in line with existing laws. It has been confirmed that “future binding measures” are likely to be introduced for highly capable general-purpose AI models (the content of such measures is currently unknown). The Artificial Intelligence (Regulation) Bill has also been proposed, with the main purpose of establishing an “AI Authority” to oversee the regulatory approach to AI. The Bill is progressing through the UK parliamentary process and is not yet in force.

The key trend in the UK in respect of AI regulation is the tension between the extent to which AI should be regulated and the government's “pro-innovation” approach.

The EU AI Act will also affect UK businesses to the extent that UK-based providers of AI systems place their systems on the market in the EU or otherwise supply into the EU.

Very few UK judicial decisions have dealt with AI directly. The most noteworthy example is the Supreme Court’s decision in Thaler v Comptroller-General of Patents, Designs and Trade Marks, which confirmed that UK patent applications must identify a human “inventor”. However, the case concerned the formalities of patent registration only, and not the patentability of inventions created by or with the aid of AI, where a human is listed as the inventor on the application form; see 15.1 Applicability of Patent and Copyright Law for more information.

Also notable is Getty Images' ongoing copyright, trade mark and database right infringement claim against Stability AI in respect of its “Stable Diffusion” text-to-image generative AI model. A trial is not expected until 2025 at the earliest.

In the context of privacy and data protection concerning automated systems, in 2020 the Court of Appeal held that South Wales Police’s use of automated facial recognition technology was unlawful under Article 8 of the ECHR (right to privacy) and the UK Data Protection Acts 1998 and 2018.

There is not yet an all-encompassing definition of AI under UK law.

The National Security and Investments Act 2021 (NSIA) provides a framework for government screening of proposed foreign direct investment into the UK and defines AI as follows:

“'Artificial intelligence' means technology enabling the programming or training of a device or software to –

(i) perceive environments through the use of data;

(ii) interpret data using automated processing designed to approximate cognitive abilities; and

(iii) make recommendations, predictions or decisions;

with a view to achieving a specific objective.”

The NSIA definition is narrower than the definitions adopted by the OECD and the EU, which, amongst other things, recognise generative AI. Given the UK’s stated aim of international interoperability, future definitions of AI adopted by the UK may more closely mirror the OECD/EU definitions.

Consequently, whether or not the courts will need to consider what is meant by AI remains to be seen and, in any event, the answer may depend on the particular contract or set of circumstances.

The UK government White Paper sets out a plan for AI to be regulated in the UK through the application of existing laws by existing regulators to the use of AI within their respective remits, rather than applying blanket regulation to all AI technology.

The regulators expected to play leading roles in the UK regulation of AI are the four members of the Digital Regulation Cooperation Forum (DRCF):

  • the ICO (data protection and privacy);
  • the Office for Communications (Ofcom) (communications);
  • the Financial Conduct Authority (FCA) (financial services); and
  • the Competition and Markets Authority (CMA) (competition/antitrust).

Each operates across England and Wales. In addition, the Equality and Human Rights Commission is Britain's independent equality and human rights regulator, enforcing the Equality Act 2010.

In dealing with AI issues, regulators are encouraged to apply the government’s five cross-sectoral principles, as outlined in 3.1 General Approach to AI-Specific Legislation.

Each of the DRCF regulators describes AI differently in its external publications; none of them has adopted a single definition of AI in a regulatory context.

The ICO’s online “Guidance on AI and data protection” is indicative of the approach currently adopted by UK regulators, reflecting the absence of AI-specific regulation. It notes that data protection legislation does not use the term “AI” and that one’s legal obligations under such legislation therefore do not turn on exactly how the term is construed. It goes on to explain what the ICO means when it refers to “AI” in its guidance, differentiating between how the term is used in the research community (“various methods for using a non-human system to learn from experience and imitate human intelligent behaviour”) and how it is used by the ICO in the context of data protection regulation (“the theory and development of computer systems able to perform tasks normally requiring human intelligence”).

It remains to be seen whether regulators will adopt their own definitions of AI (whether in isolation or collectively – eg, via the DRCF).

The UK’s current approach to AI regulation is for existing regulators to deal with AI within their own remits. There are over 90 regulatory bodies in England and Wales, and four regulators are expected to play a key role in regulation affecting AI (see further commentary under 5.1 Regulatory Agencies).

ICO

The ICO upholds information and data privacy rights for individuals in the UK. The ICO’s guidance advocates a risk-based approach to AI and identifies specific AI risks for each foundational principle of data protection, with a particular focus on transparency, accuracy, fairness/anti-discrimination, security, data minimisation and the safeguarding of individual rights.

FCA

The FCA regulates the UK's financial services industry. It has identified increased productivity, improved financial modelling, better tailoring of financial advice, hyper-personalised products and the ability to tackle fraud and money laundering more quickly and accurately at scale as potential benefits of AI in financial services. On the risk front, it has highlighted increased dependence on “big tech”, consumer/identity fraud, misinformation and market manipulation, and the potential for bias/discrimination in consumer banking.

CMA

The CMA's focus is on protecting consumers from anti-competitive activities. In September 2023, it published a report on foundation models (FMs), highlighting the benefits that could arise from FMs if used well, including new and better products, technological breakthroughs, easier access to information, lower prices and increased competition. The report also warns of potential harms, including false and misleading information, AI fraud and the risk of a small number of firms using FMs to entrench their positions of market power, harming competition and increasing prices.

Ofcom

Ofcom is the UK’s communications regulator, operating across telecoms, post, broadcasting and on-demand video services. Like the FCA and CMA, it seeks to protect consumers and promote competition. Its “Strategic approach to AI 2024/25” published in March 2024 identifies a number of potential outcomes from the use of AI in communications, including benefits such as improved safety technologies, better visual effects, spam filtration and enhanced speech recognition/generation, and risks such as the easy creation and dissemination of illegal or harmful content, misinformation and disinformation, and more convincing phishing scams.

In May 2022, the ICO fined US-based Clearview AI more than GBP7.5 million for misusing UK residents’ publicly available personal data by scraping images from social media without consent to create a database of 20 billion images. Clearview AI used its database to provide facial recognition services to its customers.

Clearview successfully appealed: in October 2023, it was found that the ICO lacked jurisdiction because Clearview only provided its services to law enforcement/national security bodies outside the UK and EU, falling within an exception to the UK GDPR applicable to the acts of foreign governments. The ICO has sought permission to appeal.

The ICO issued a preliminary enforcement notice to Snap Inc. over potential failure to properly assess the privacy risks posed by its generative AI chatbot “My AI”. The investigation provisionally found that Snap failed to adequately identify and assess the risks to several million “My AI” users in the UK, including children aged 13 to 17.

Separately, the CMA is investigating whether Microsoft’s partnership with OpenAI (the creator of ChatGPT) amounts to a de facto merger and, if so, whether it could impact competition.

Standards development organisations such as ISO/IEC, IEEE and BSI have paved the way for consensus-driven standards through multi-stakeholder discussions to promote global alignment.

The four key types of technical standards for AI governance and risk management in the UK are as follows:

  • foundational standards – to help build common language and definitions around basic concepts and facilitate dialogue between stakeholders;
  • process standards – to provide guidance on best practices in management, process-design, quality control and governance;
  • measurement standards – to create universal mechanisms and terminologies on measuring various aspects of an AI system’s performance; and
  • performance standards – to assist in setting up benchmarks, prerequisites and expectations that need to be achieved at specific stages for the effective functioning and utilisation of an AI system.

The UK's Department for Science, Innovation and Technology has published an introduction to AI assurance, with the objective of helping businesses develop AI tools responsibly through assurance mechanisms and global technical standards.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have jointly published a new global management standard for artificial intelligence, known as ISO/IEC 42001. While this standard does not carry the same legal weight as legislation, it is expected to have a significant impact on how organisations demonstrate responsible and transparent use of AI. Global standards are often referred to where organisations provide goods and services on both a business-to-consumer and business-to-business basis, and are often referenced in commercial contracts as a way of establishing expectations of regulatory guidance and best practice.

As the standard evolves over time in line with emerging trends, risks and opportunities in the AI space, ISO/IEC 42001 has the potential to act as the default standard that organisations will be expected to adhere to.

Studies have indicated that AI usage across the UK public sector remains inconsistent, despite government messaging increasingly encouraging its use. Within government, a study by the National Audit Office found that 37% of government bodies surveyed had actively deployed AI, while a further 37% had not yet deployed AI but were piloting its use.

Current uses of AI across the UK government include the use of document comparison software by HM Land Registry, HMRC's use of chatbots and Natural England's use of AI for habitat analysis. More broadly, facial recognition technology is becoming increasingly common in UK law enforcement, including the use of retrospective facial recognition as part of criminal investigations as well as live facial recognition (eg, at large-scale sporting events).

There are risks when using AI in the public sector and more widely, such as the risk of bias and discrimination flowing from foundation model outputs, as well as the risk of nefarious actors using foundation models to intentionally cause harm (with the added legitimacy of public sector usage).

See 4. Judicial Decisions for a discussion of the judicial decision relating to South Wales Police’s use of automated facial recognition technology.

There are currently no relevant pending cases.

AI plays a significant role in national security, including in cybersecurity, intelligence monitoring, automated defence systems and counterterrorism. Use cases include AI systems deployed to:

  • detect and respond to cyberthreats in real time;
  • monitor international borders and critical infrastructure, providing an early warning system;
  • identify trends and anomalies in data sets, particularly online activity data sets, that aid in detecting potential terrorist threats; and
  • identify targets as part of defensive weapon systems.

National security considerations will play a pivotal role in shaping future government legislation, regulations and policies on AI systems, such as ensuring ethical use.

The key issues and risks posed by generative AI are:

  • its generative nature – AI tends to reproduce and, in some cases, amplify biases in the material on which it was trained;
  • transparency – it often cannot be determined how conclusions have been reached, which in turn makes it harder to attribute fault for erroneous conclusions;
  • cost and environmental impact – AI is extremely expensive to train, creating a barrier to innovation, and training also has a significant environmental impact;
  • reliability – hallucinations (fabricated, erroneous or untrue outputs) have shown that generative AI will often attempt to bridge gaps in its data by fabricating information, and some generative AI models are static (lacking up-to-date information);
  • IP considerations – as discussed further in 8.2 IP and Generative AI, there is debate surrounding IP ownership where generative AI is concerned;
  • ethical considerations – AI lacks the moral rationalisation that a human has and therefore may generate results that conflict with human values; and
  • data protection – as discussed further in 8.3 Data Protection and Generative AI, AI has increased the discussion on data ownership and how the traditional rights of data protection can be applied to these technologies.

The use of generative AI raises issues of both IP protection and IP infringement. In the case of copyright (the primary IP right for the protection of literary, dramatic, musical or artistic works), protection will only arise if a work meets the criteria of originality. Originality implies a degree of human creative input that, in the context of generative AI, may be minimal, absent or difficult to prove.

If copyright does exist in works produced by generative AI, it may not be clear who the “author” is, and therefore who owns the copyright.

Users of AI tools should not assume they will automatically own any copyright, and should check the provider’s terms and conditions, which may assign ownership to the provider or give the provider a licence to use the works and/or materials the user inputs into the AI tool. Users should also be mindful of the potential for generative AI content to infringe third-party IP rights and, again, review the provider’s terms to check for appropriate protections.

In addition to the output of AI systems, organisations will need to be mindful of IP rights in the data used to train such systems. They may already be aware of IP rights such as trade marks, copyright and patents, but they should be aware of database rights as well.

Where AI models are used to process personal data, at a fundamental level those systems need to comply with the principle of data protection by design and by default.

Under data protection law, individuals have various rights regarding their personal data, and these rights apply wherever that personal data is being processed by an AI system.

Where an individual exercises their right to the “rectification” or “erasure” of their personal data, and it is not possible to separate the individual's data from the AI model in order to comply with those rights, the model may need to be deleted entirely to avoid a breach of data protection law and potential enforcement action by the regulator.

The challenge for AI developers is designing AI systems that can automatically comply with the law. Stipulating data protection principles, such as “purpose limitation” and “data minimisation”, as part of the design architecture and engineering process is key to achieving successful compliance outcomes.
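By way of illustration only, the following is a minimal Python sketch of what “data minimisation” built into the engineering process might look like in practice: non-essential fields are dropped and direct identifiers are pseudonymised before the training data reaches the model. The field names and the choice of what counts as “necessary” are hypothetical assumptions for this example and would need to be assessed against the specific purpose in each case; this is not a statement of what the legislation requires.

import hashlib

# Hypothetical field names for illustration only; what is "necessary"
# depends on the stated purpose and must be assessed case by case.
NECESSARY_FIELDS = {"age_band", "region", "transaction_amount"}
DIRECT_IDENTIFIERS = {"customer_id"}

def minimise_record(record: dict, salt: str) -> dict:
    """Keep only the fields needed for the stated purpose and
    pseudonymise direct identifiers before the data is used for training."""
    minimised = {k: v for k, v in record.items() if k in NECESSARY_FIELDS}
    for field in DIRECT_IDENTIFIERS:
        if field in record:
            # One-way hash so the training set no longer holds the raw identifier.
            digest = hashlib.sha256((salt + str(record[field])).encode()).hexdigest()
            minimised[field] = digest
    return minimised

raw = {"customer_id": "12345", "name": "Jane Doe", "age_band": "30-39",
       "region": "South West", "transaction_amount": 42.50}
print(minimise_record(raw, salt="example-salt"))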

AI is currently used in the legal profession for a number of purposes, including risk and AML compliance, administration and support services (eg, legal chatbots), precedent document generation (eg, producing template real estate leases) and text generation, particularly with the use of predictive text or generative systems like Microsoft Copilot to assist with contract drafting and content creation.

AI can be used in litigation to identify and summarise precedents or carry out automated search functions that are useful for document discovery purposes.

The Law Society has released guidance on how to manage the risks of adopting AI (which include IP, cybersecurity and data protection issues), and the Bar Council has also released guidance on considerations when using ChatGPT and generative AI software based on large language models (LLMs).

Of the risks already identified, the key ethical issues include bias in training data leading to the perpetuation of harmful stereotypes, and hallucinations in LLMs leading to factually incorrect responses.

The UK does not have a specific liability framework applicable to harm or loss resulting from the use of AI and therefore existing laws apply. As an exception, the UK has passed the Automated and Electric Vehicles Act 2018, pursuant to which liability for damage caused by an insured automated vehicle when driving itself lies with the insurer.

There are many factors to consider where AI products cause harm, such as whether the defect was attributable to the design of the product, its programming or the way it was used. These factors can affect the liability position.

To claim damages under contract, the claimant needs to prove that the defendant breached a term of the contract and that said breach caused loss. Whilst this may be straightforward with simple products, establishing causation in an AI product may be more difficult.

In terms of trends, businesses are assessing whether or not they are sufficiently protected against liability risks arising from such emerging technologies, be it as operators, users or manufacturers. This is typically tackled by ensuring that contractual arrangements with suppliers and/or customers are sufficient, or by implementing appropriate insurance coverage.

The UK government's approach continues to be pro-innovation, encouraging regulators to tackle AI regulation within their remits. As discussed in 10.1 Theories of Liability, with the exception of the Automated and Electric Vehicles Act 2018, liability must rely upon existing frameworks, and for the most part existing laws will apply to the allocation of liability in respect of AI.

The Department for Science, Innovation and Technology's response to the UK AI White Paper confirms that regulation and binding measures on “highly capable general-purpose AI” are likely to be required in the future. The UK government has confirmed it will not “rush to regulate”, as “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation”.

Bias in predictive and generative AI systems can arise from biased training data, algorithmic biases and biased design choices. Legally, there are concerns regarding discrimination, privacy and accountability, to name a few. Current legislation such as the Equality Act 2010 and data protection legislation aim to mitigate these risks.

Consumer areas at risk of bias from the use of AI systems include finance, healthcare and employment. Businesses may find themselves subject to liability from individual claims and regulatory fines, if found in breach of legislation such as the Equality Act 2010 and data protection legislation.

Businesses can take certain measures to address bias, such as ensuring appropriate processes are in place for verifying that data used to train the AI model is appropriate, and ensuring human oversight in respect of the AI model’s output prior to relying on any such output.
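For illustration only, the following minimal Python sketch (using hypothetical data and an illustrative threshold) shows one simple first-pass check of the kind a business might build into such processes: comparing selection rates across groups in the model's output and flagging large disparities for human review. It is not a substitute for a full fairness or equality impact assessment.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs from the model's output.
    Returns the selection rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

def flag_for_review(rates, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate (an illustrative cut-off, not a legal test)."""
    highest = max(rates.values())
    return {group: rate for group, rate in rates.items() if rate < threshold * highest}

# Hypothetical model outcomes for illustration.
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(outcomes)
print(rates, flag_for_review(rates))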

The UK ICO may take regulatory action where companies breach data protection legislation, by issuing significant fines and requiring companies to take certain steps to rectify non-compliance.

Protecting personal data with AI technology and business practices has both risks and benefits. While AI enables efficient data processing and personalised services, and helps to drive innovation, it also raises concerns relating to important issues such as privacy, bias and security.

Purely automated decision-making from processed personal data without human supervision is restricted at law, and if conducted at scale by AI systems poses challenges when it comes to bias, accountability and other fundamental rights. Individuals may be faced with uncertain decision-making processes, unintended outcomes and very real practical difficulties in challenging and rectifying automatic decisions made without nuance.

Data security measures used in relation to AI systems can benefit from complex data integrity, confidentiality and anonymisation protocols being built into the design of the AI system itself. However, vulnerabilities in these systems, and in the security protocols used to protect them, can lead to privacy breaches, regulatory non-compliance and serious reputational damage for businesses; where such vulnerabilities are exposed by a rogue AI system, businesses risk being exposed to a “point of no return” – cybersecurity has never been more important.

An overarching issue in this area is the lack of a clear and consolidated regulatory response to facial recognition technology (FRT). At present, the UK approach is a combination of human rights law, data protection law, equality law and, in the context of law enforcement, criminal justice legislation. For completeness, it should be noted that the EU AI Act, while not directly effective in the UK, does directly address the processing of biometric data, and the UK will likely be influenced by the approach taken in the EU AI Act.

FRT relies on individuals' personal data and biometric data, and its use raises a number of challenges from a data protection perspective. Biometric data is intrinsically sensitive and the use of FRT therefore gives rise to challenges around the necessity and proportionality of processing, as well as the need to identify a lawful basis for processing.

This is particularly true where FRT involves the automatic and indiscriminate collection of biometric data in public places, which is becoming increasingly common, whether for law enforcement purposes or for commercial purposes such as targeted advertising in the retail sector. In this context, issues include:

  • consent and transparency;
  • the necessity and proportionality of processing;
  • statistical accuracy;
  • the risk of algorithmic bias and discrimination; and
  • the processing of children's data without necessary additional safeguards.

Companies utilising FRT should be cognisant of the risks associated with its use, particularly in relation to potential violations of data protection legislation and equality and anti-discrimination laws.

Automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on factual data and on digitally created profiles or inferred data.

Article 22 of the UK GDPR restricts organisations' ability to make solely automated decisions that result in a legal or similarly significant effect on an individual. In this context, “solely automated” means that the decision is totally automated without any human influence on the outcome.

Decisions taken in this manner can potentially have a significant adverse effect, which is particularly concerning as there can often be a lack of understanding around how the decision-making process works. This lack of understanding can arise from the perspective of both the impacted individual and the individual dealing with the consequences of the relevant decision. There is also a risk that inherent biases in the AI-based decision-making tools may lead to discriminatory outcomes.

Organisations that fail to comply with Article 22 may be subject to significant fines, as well as liability to affected individuals, who may exercise their right to object under Article 21 of the UK GDPR or bring a legal claim against the company.

Currently, there is no UK-wide regulatory scheme specific to AI, and therefore existing laws continue to apply, as well as specific AI-related guidance issued by regulators.

For example, UK data protection legislation will almost always apply to the development, deployment and/or procurement of AI. One of the key principles under UK data protection legislation is transparency. The UK regulator for data protection, the ICO, has released certain AI-related guidance, in which it makes it clear that businesses must be transparent about how they process personal data in an AI system, stressing the importance of “explainability” in AI systems.

In addition, UK data protection legislation includes rules around the profiling of individuals and automated decision-making. Where such processing is permitted, transparency with individuals is key.

There is currently little regulation or case law that specifically considers the application of UK competition law to AI. In recent years, commentators and the CMA have discussed concerns around the potential for passive collusion between undertakings when using price-setting algorithms.

In September 2023, the CMA published its initial report into AI Foundation Models (FMs – machine learning models trained on vast datasets) and their impact on competition and consumer protection. The CMA also published an update paper to this report in April 2024. In these documents, the CMA sets out six overarching principles on which it will base its response to the future development and deployment of FMs:

  • access – maintaining ongoing, ready access to key inputs;
  • accountability – ensuring FM developers and deployers are accountable for outputs provided to consumers;
  • diversity – ensuring sustained diversity of business models;
  • choice – ensuring sufficient choice for businesses so they can decide how to use FMs;
  • fair dealing – preventing anti-competitive conduct, including self-preferencing, tying or bundling; and
  • transparency – ensuring that consumers and businesses are given information about the risks and limitations of FM-generated content.

The Digital Markets, Competition and Consumers Bill (DMCC), which is expected to come into force later this year, will further enhance the CMA’s enforcement powers in respect of digital activity. In particular, the DMCC is expected to grant the CMA the ability to set targeted conduct requirements on firms found to have strategic market status (SMS) in respect of a digital activity. The CMA notes in its report that it is likely that FMs and their deployment will be relevant to the CMA’s selection of SMS candidates, particularly where FMs are deployed in connection with other, more established activities.

Furthermore, the CMA indicates in its update paper that it intends to take a proactive approach to enforcement by:

  • prioritising certain digital activities for investigation using new powers granted by the DMCC, such as critical inputs and access points;
  • monitoring current and emerging partnerships closely; and
  • stepping up its use of merger control to determine whether partnerships between FM developers and/or deployers fall within the current rules.

Human oversight, circuit breakers and IP ownership are some of the areas unique to procuring AI solutions, and the contractual documents should reflect them.

The UK government has published a set of guidelines for AI procurement to support contracting authorities when engaging with suppliers of AI solutions; the following steps should therefore be considered by contracting authorities prior to procuring an AI solution:

  • establish a clear responsibility record to define who has accountability for the different areas of the AI model;
  • determine a clear governance approach to meet requirements;
  • ensure there is regular model testing so issues of bias within the data may be addressed;
  • define acceptable model performance (service levels);
  • ensure knowledge is transferred through regular training;
  • ensure that appropriate ongoing support, maintenance and hosting arrangements are in place;
  • address IP ownership and confidentiality;
  • allocate risk/apportion liability to the parties best able to manage it;
  • factor in future regulatory changes; and
  • include appropriate end-of-life processes (define end-of-contract roles and processes).

Whilst these steps are focused on public sector organisations, they are also helpful for private sector organisations procuring AI solutions.

The Society for Computers and Law (SCL) AI Group has also produced sample clauses for transactions involving AI systems, which serve as a useful checklist of issues to consider when procuring AI solutions.

Very few UK employment tribunal decisions have dealt with AI directly.

Employers may use tools to make recruitment practices more efficient and to seek to identify the best candidates – eg, through CV screening tools and one-way AI video interviews. Using AI to assist during the recruitment process can aid the consistency of decision-making and improve the efficiency of internal processes.

However, one example of a risk is that the AI solution could inaccurately screen candidates, with the result that a suitable candidate is lost in the process.

AI has not fully removed the risk of bias in how candidates are assessed. For example, an AI solution could rank hobbies listed in CVs, giving more favourable treatment to hobbies generally associated with men (such as football) and therefore inadvertently putting women at a disadvantage.

As the risk of bias in AI tools still exists, there is a risk that the criteria AI is applying to rank candidates could inadvertently result in a discriminatory outcome, with an increase in litigation risk for businesses. Therefore, it is important that businesses implement appropriate human oversight where AI solutions are used for these purposes, to ensure decisions are appropriate and fair.

With an increase in homeworking since the COVID-19 pandemic, there has also been an increased use of monitoring by employers, which may include CCTV, attendance logs, email and telephone monitoring, keystroke-logging, browser monitoring, flight risk analysis and CV analysis (to consider potential new skills). Whilst these developments in technology could be considered as a new and untapped way of providing an employer invaluable information in relation to their workforce, some of these methods are highly intrusive.

Monitoring of the workforce has UK data protection legislation implications, and businesses will need to consider, amongst other things, whether such monitoring can be justified and whether it has an appropriate lawful basis.

Separately, information acquired from such technology could create broader employee relations issues, such as the employer gaining information that increases the risk of a potential discrimination claim (eg, information coming to light about an employee's health, triggering the employer's proactive legal duty to make reasonable adjustments).

From an employment law perspective, if a digital platform is used (eg, car services and food delivery), there can be challenges when assessing the service provider’s employment status.

Where the platform exerts more control over the service provider, there is a greater risk that the service provider will not be considered genuinely self-employed, which may be contrary to the service provider's intention. This has broader employment law implications – eg, an entitlement to holiday pay, the national minimum wage, discrimination protection and dismissal protection.

This issue has been tested recently in the Supreme Court in relation to the employment status of Uber drivers. Uber argued that its drivers were self-employed, with flexibility to pick up work at their discretion. However, the Supreme Court found the drivers to be “workers”, one of the reasons being the degree of control Uber's technology platform exercised over its drivers, meaning they could not be considered genuinely self-employed.

AI is increasingly used by financial services firms, particularly in the following areas:

  • customer engagement (such as chatbots);
  • decision-making, including in the credit and investment management sectors;
  • driving efficiencies, particularly in compliance; and
  • advice tools.

There is a broad range of regulations potentially applicable to firms using AI. The FCA takes a technology-agnostic approach, regulating firms’ activities (including those utilising AI) rather than AI technology itself. Therefore, the rules currently applicable to firms generally remain relevant in the context of AI, including the FCA’s Principles for Business, Handbook and the Consumer Duty. Other key areas for firms to consider when integrating AI into their business operations include the Senior Managers’ and Certification Regime and applicable data protection regulations.

With benefits of AI use, however, come risks – such as unintended bias, poor decision-making and unlawful discrimination (including due to poor data quality in the model), as well as broader governance-related risks. Financial services firms must ensure that they adequately eliminate or mitigate these risks and avoid customer detriment.

There is currently no specific legislation in the UK that exclusively governs AI or its use in healthcare; a variety of existing regulations apply, including the Data Protection Act 2018, the Medical Device Regulations 2002 and guidance issued by the Medicines and Healthcare products Regulatory Agency.

Data use and sharing is a key consideration where AI is used in healthcare, so organisations must comply with UK data protection legislation. For example, in order to provide training data for machine learning, operators must obtain patient consent, put systems in place to ensure data is anonymised and implement robust security measures to protect privacy and confidentiality. Following ethical guidelines is essential for the responsible use of data in healthcare AI.

Patient medical data is vulnerable to cyber-attacks and data breaches. As has been seen in the past, healthcare systems are open to attack and are often targeted by hackers due to the sensitive information they store. Ensuring that these areas have bolstered cybersecurity measures is vital for maintaining the security of patient data.

The Automated Vehicles Bill has recently been published and is progressing through the UK's legislative process (it is not yet in force). The Bill seeks to “set the legal framework for safe deployment of self-driving vehicles in Great Britain”.

Under the Bill, if an automated vehicle is authorised (following a successful self-driving test), an “authorised self-driving entity” will be legally responsible for the automated vehicle. For “user-in-charge” vehicles, there will be immunity from liability in certain circumstances, and the Bill establishes when the “user-in-charge” will be liable (where the user is legally defined as a driver).

It is recognised that automated vehicles are likely to have access to/store personal data. If the Bill is successfully implemented in UK law, other laws will continue to apply, such as UK data protection legislation.

The UK has participated in the World Forum for Harmonisation of Vehicle Regulations (a working party within the framework of the United Nations). The UK government has also published principles of cybersecurity for connected and automated vehicles.

In the UK, the legal framework for addressing product safety and liability largely remains as retained EU law post-Brexit, which broadly requires products to be safe in their normal or foreseeable usage.

Sector-specific legislation (such as for automated vehicles, electrical and electronic equipment and medical devices) may apply to some products that include integrated AI but on the whole it is widely considered that existing rules do not comprehensively address the new and substantial risks posed by AI at the manufacturing stage.

The UK government has conducted a review of the UK’s product safety and liability regimes with a view to introducing a new framework fit for the modern age.

Professionals using AI in their services must adhere to existing professional standards and regulations, including adherence to sector-specific regulations and guidelines, such as those issued by the FCA and the Solicitors Regulation Authority.

Professionals must ensure that AI systems are designed and used in a manner that upholds professional integrity, competence and ethical conduct, which involves safeguarding client confidentiality through compliance with data protection laws, respecting IP rights and, where necessary, obtaining client consent when using AI systems. Liability issues may arise if AI systems produce harmful outcomes, and professionals may be held accountable for the actions of AI systems they employ.

In the case of copyright, Section 9(3) of the Copyright, Designs and Patents Act 1988 states that the author of a computer-generated literary, dramatic, musical or artistic work is the person that undertook the arrangements necessary for the creation of that work. Applying this to the example of an image created by a text-to-image generative AI system, and assuming copyright exists in the work (see 15.3 AI-Generated Works of Art and Works of Authorship), it is unclear whether the author would be the user who entered the prompt or the operator of the AI, and the position may vary from one work to another, but the author can only be a person.

In the case of patents, the UK Supreme Court recently confirmed in Thaler v Comptroller-General of Patents, Designs and Trade Marks that only a human can be recorded as the “inventor”, and not an AI machine such as Dr Thaler's “DABUS”. It is important to note that Dr Thaler's case concerned the formalities of patent registration, not the wider question of the patentability of inventions created or aided by AI systems more generally. Had Dr Thaler recorded himself as the inventor on the registration form (in his capacity as the creator/owner of DABUS), the application may well have succeeded.

A trade secret is a piece of information that is treated as confidential by its owner and has commercial value because it is secret. UK law protects trade secrets against unjustified use and disclosure, both through the equitable doctrine of confidence and under the Trade Secrets Regulations 2018. Trade secret protections could be used to protect the underlying source code of an AI system as well as training sets, algorithms and data compilations. These elements are essential for AI systems but may not always qualify for patent protection.

The immediacy of trade secret protection and the broad scope of coverage mean that this is an increasingly common method of protection in the UK. The use of trade secret protection in this area, however, must be balanced with the need for transparency and accountability.

AI-generated works pose a new challenge to copyright law. Section 9(3) of the Copyright, Designs and Patents Act 1988 provides that the author of copyright in computer-generated works is the person who undertakes the necessary arrangements to create the work. Putting aside the difficulty of determining who this person is (see 15.1 Applicability of Patent and Copyright Law), Section 9(3) is only engaged if copyright subsists in the computer-generated work in the first place, which may not be the case for works created by generative AI.

Works are only protected by copyright – which subsists automatically – if they are “original”, broadly meaning the expression of human creative freedom. Unlike more traditional means of using computers to create art (typically requiring at least some level of human skill and effort), generative AI is capable of creating art from simple – even minimal – user prompts. There is a very significant question mark over whether such works can be “original” and therefore benefit from copyright protection. In the absence of specific legislation on the issue, it is possible that the courts will not come to a blanket conclusion and that the answer will vary from work to work, depending on the extent and manner of human creativity involved in its creation.

The creation of works through OpenAI tools such as ChatGPT raises a number of IP issues. Such models are trained on vast amounts of data from a number of sources. As a result, determining the ownership of AI-generated works is challenging.

The question to be asked for IP purposes is whether the original creator is the AI model, the owner of the underlying information, or the user. Furthermore, the use of pre-trained models and datasets may infringe upon existing IP.

In addition, the rights and limitations concerning generated content are governed by OpenAI's licensing agreements and terms of use, which should be reviewed carefully.

The Institute of Directors is a professional organisation for company directors, and has released a “reflective checklist” outlining the following 12 principles that are intended to provide guidance to boards of directors on the use of AI within their organisations:

  • monitor the evolving regulatory environment;
  • continually audit and measure what AI is in use;
  • undertake impact assessments that consider the business and the wider stakeholder community;
  • establish board accountability;
  • set high-level goals for the business aligned with its values;
  • empower a diverse, cross-functional ethics committee that has the power to veto the use of AI;
  • document and secure data sources;
  • train people to get the best out of AI;
  • comply with privacy requirements;
  • comply with secure by design requirements;
  • test and remove AI from use if bias and other impacts are discovered; and
  • regularly review the organisation's use of AI.

The key issue for corporate boards to consider is the ongoing and rapidly changing legal landscape in relation to AI and its applicability within their organisation.

Implementing AI best practices will require consideration of a number of key issues, such as regulatory compliance, ethical considerations, effective risk management and robust data governance.

The key for organisations in the UK is ensuring adherence to the UK government's five cross-sectoral principles (and related regulator guidance), as follows:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

Ethical considerations are equally significant. For example, businesses should conduct thorough ethical assessments before developing, deploying and procuring AI systems, which is likely to involve identifying and eliminating biases and addressing privacy concerns.

Effective risk management and internal governance is another key area to consider, such as the identification and mitigation of potential risks associated with AI deployment, development or procurement, and establishing robust internal processes with appropriate guardrails to ensure the responsible and safe use of AI.

Burges Salmon

One Glass Wharf
Bristol
BS2 0ZX
UK

+44 (0) 117 939 2000

+44 (0) 117 902 4400

www.burges-salmon.com