Artificial Intelligence 2023

Last Updated May 30, 2023

UK

Trends and Developments


Authors



Travers Smith is a full-service international law firm headquartered in London. It provides clients with business-focused solutions in relation to a wide variety of technology transactions, advisory work and disputes. Acting for IT suppliers (including household names and fast-growing tech innovators) and customers alike, Travers Smith’s expertise spans a broad range of sectors – from financial services, insurtech, mobility, medtech and realtech to retail, consumer and beyond. The firm’s team of highly responsive technology lawyers combines technical excellence with pragmatism to assist UK and multinational clients with a full spectrum of technology, data and IP matters, including data protection and privacy compliance, direct marketing, e-commerce, IP contracts and disputes, AI and machine learning, software and cybersecurity, as well as large-scale technology and M&A transactions. The team is well-versed in guiding clients through the complex and constantly evolving digital regulatory landscape as they seek to develop new technologies and exploit data.

The UK’s Pro-innovation Approach to the Regulation of AI

From the outset, this article on AI trends and developments in the UK bucks a recent trend, in that it was written by humans rather than by ChatGPT. Nevertheless, when asked “What is the best approach to AI regulation?”, ChatGPT provided a plausible response by explaining that there is “no one-size-fits-all answer” and that any approach must “balance the benefits of innovation with the need to protect human rights, safety and privacy”. The chatbot went on to propose six general principles for a regulatory framework based around proportionality, transparency, human oversight, privacy, ethics and international co-operation. Sounds sensible, does it not?

Aside from this technology being startlingly sophisticated, perhaps one reason why ChatGPT returned such a coherent answer is that the core principles guiding the approach of lawmakers – whether in the UK, the EU, the USA or China – to the governance of AI are very similar. And yet, although there may be consistency across these jurisdictions in terms of core principles, there is no “one-size-fits-all” approach to the regulation of AI. Indeed, the approach taken by the UK government – as outlined in its March 2023 White Paper, A Pro-innovation Approach to AI Regulation (the “White Paper”) – is quite different from that taken by the European Commission and set out in the EU’s draft AI Act. What is clear is that the UK is determined to forge its own path and, post-Brexit, it now has the option to do so.

The government funding being pumped into AI is considerable. The ministerial foreword to the White Paper declares that GBP2.5 billion has been invested in AI since 2014, including:

  • GBP110 million in an AI Tech Missions Fund;
  • GBP900 million to establish a new AI Research Resource and to develop an exascale supercomputer capable of running large AI models;
  • GBP8 million in an AI Global Talent Network; and
  • GBP117 million of existing funding to create hundreds of new PhDs for AI researchers.

At the end of April 2023, Prime Minister Rishi Sunak announced a further GBP100 million for a taskforce to help build the UK’s capability when it comes to “safe and reliable” foundation models.

This article looks at:

  • the UK’s approach to AI regulation as set out in the White Paper, including why and how the UK government is choosing to take a different approach from the EU;
  • recent developments concerning AI and data protection;
  • recent developments regarding the interaction between AI and IP; and
  • what businesses can do to prepare.

The White Paper

In July 2022, the UK government delivered a policy paper setting out its approach to the regulation of AI in broad terms. A White Paper was due to follow by the end of the year but was delayed by the political turmoil as Prime Ministers came and went. Meanwhile, excitement surrounding the technology – in particular, large language models such as ChatGPT – continued to surge. 

Sir Patrick Vallance’s report, Pro-innovation Regulation of Technologies Review – Digital Technologies (the “Vallance report”), arrived in March 2023. The White Paper draws on a number of recommendations set out in that report. 

Five overarching principles of the UK approach to AI regulation

In the White Paper, the government sets out the following five principles to guide the regulation of AI in the UK.

  • Safety, security and robustness – applications of AI should function in a secure, safe and robust way, in which risks are carefully managed.
  • Transparency and explainability – organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of the AI.
  • Fairness – AI should be used in compliance with the UK’s existing laws (eg, equality and data protection laws) and not discriminate against individuals or create unfair or anti-competitive commercial outcomes.
  • Accountability and governance – there must be appropriate oversight of the way AI is being used and clear accountability for the outcomes.
  • Contestability and redress – people should be provided with clear routes to dispute harmful outcomes or decisions generated by AI.

Sectoral approach to regulation based on the use of AI

In contrast to the EU, the UK government intends to rely on existing regulators and regulatory structures. It is not proposing to introduce any broadly applicable AI-specific regulations, akin to the EU’s AI Act, nor to establish a dedicated AI regulator. The rationale is that the existing regulators have the expertise to apply the overarching principles outlined in the White Paper to AI use cases that fall within their sector(s). The regulators will be responsible for issuing relevant guidance to explain how the principles link in with existing legislation and to illustrate what good compliance looks like.

The key to the UK’s proposed framework is that it is “context-specific”. As such, the UK government will regulate the use of AI as opposed to the technology itself – ie, it will regulate based on the outcomes AI is likely to generate in particular applications, rather than apply rules or risk levels to entire sectors or technologies.

Unlike the EU, the UK government will not put the definition of AI on a statutory footing. The White Paper nevertheless introduces a “common understanding” of AI, defining it by reference to two characteristics – namely, adaptivity and autonomy – as these are the characteristics that make it hard to explain, predict or control the outputs of an AI system and difficult to allocate responsibility for those outputs. The hope is that this will help future-proof the regime.

Regulatory co-ordination and central government support

It is recognised that such an approach can only work with regulatory co-ordination. In the absence of this, businesses may face an even more confusing matrix of guidance and rules than at present. Some regulators already co-ordinate (the Digital Regulation Cooperation Forum between the Information Commissioner’s Office (ICO), the Competition and Markets Authority, the Financial Conduct Authority (FCA) and Ofcom, for example) but the UK government has said that it will assist further with that co-ordination. Joint regulatory guidance is encouraged and will likely become more common.

In terms of the kind of guidance regulators can be expected to provide, the White Paper offers a case study of an AI system that shortlists candidates based on application forms. If the framework works as it should, the intention would be for the Equality and Human Rights Commission, the ICO and the Employment Standards Inspectorate to issue joint guidance on the use of AI systems in recruitment and employment. The White Paper suggests that the guidance should:

  • clarify the types of information that businesses need to provide when implementing AI systems in recruitment and employment;
  • identify appropriate supply chain management processes (such as due diligence or AI impact assessments);
  • propose proportionate measures for bias detection, mitigation and monitoring (referring to relevant technical standards – one simple bias-screening heuristic is sketched after this list); and
  • make suggestions for the provision of contestability and redress routes.
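
To give the bias-detection limb some colour, the sketch below shows a simple selection-rate comparison of the kind a bias audit of an AI shortlisting tool might start from. It is a minimal illustration only, written in Python: the function names and the “four-fifths” screening threshold are our own choices rather than anything mandated by the White Paper or by UK regulators.

    # Illustrative sketch only: a selection-rate comparison of the kind a bias
    # audit of an AI shortlisting tool might start from. The "four-fifths"
    # threshold is a common screening heuristic, not a legal test.

    def selection_rate(outcomes: list[bool]) -> float:
        """Proportion of candidates in a group who were shortlisted."""
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
        """Ratio of the lower selection rate to the higher (1.0 = parity)."""
        rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
        high, low = max(rate_a, rate_b), min(rate_a, rate_b)
        return low / high if high else 1.0

    # Hypothetical shortlisting outcomes for two demographic groups.
    group_a = [True, True, False, True, False, True]    # 4 of 6 shortlisted
    group_b = [True, False, False, False, True, False]  # 2 of 6 shortlisted

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the four-fifths screening threshold
        print("Potential adverse impact - investigate further")

In practice, a metric of this kind would be only a starting point, sitting alongside the mitigation and monitoring measures that any joint guidance would be expected to contemplate.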

As well as co-ordination between regulators, the White Paper recognises that central support from government – including a central monitoring and evaluation framework, cross-sectoral risk function and risk register, and multi-regulator AI sandbox (following another recommendation from the Vallance report) – will be essential to the success of the framework. By way of example, centralised oversight is required to:

  • identify new AI risks (particularly those that cut across the sectors) and advise if they require government intervention;
  • broker agreement on which regulator addresses which risks and prioritise between potentially conflicting principles (eg, between fairness principles and privacy principles, given that it will be difficult to assess an algorithm’s fairness without access to special category data about the subjects of the processing); and
  • identify measures to plug gaps where an AI use case falls between regulators’ respective remits – for example, the UK government has previously had to step in to plug gaps in relation to autonomous vehicles because the existing regulatory structures were set up only for human drivers.

Reasons behind the regulatory approach to AI in the UK

So, why is the UK government taking this approach and is it a positive step? Michelle Donelan, Secretary of State for Science, Innovation and Technology, introduced the White Paper as follows: “A heavy-handed and rigid approach can stifle innovation and slow AI adoption. That is why we set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed. This enables us to take a balanced approach to weighing up the benefits versus the potential risks.”

The UK government’s objective is to drive growth – as well as ensuring that the UK remains attractive to investors – by making responsible innovation easier and avoiding unnecessary burdens for businesses and regulators. The government is therefore only legislating where it absolutely needs to, while recognising the need to address risks – albeit in a proportionate way – given that trust is a critical driver for AI adoption.

If it can achieve the requisite level of co-ordination between (and oversight of) regulators, then the UK government’s light-touch approach to regulation should be welcomed. Of course, this is a sizeable “if”. Nonetheless, the UK’s approach is a positive step in that it aims to avoid the type of overlap and potential conflict between AI-specific regulations and existing regulations, for which the EU’s approach has been criticised.

The UK’s “context-specific” approach to regulating the use of AI, rather than the technology itself, could also work better than the more rigid risk-level categories proposed in the EU’s draft AI Act. Take, for example, an AI-powered chatbot or emotion-recognition tool. In a retail, customer-satisfaction context, certain uses of that technology may be relatively anodyne. Use the same technology in a medical diagnostic context and the risk that it presents leaps up the scale.

The UK government’s approach has given rise to the following qualms.

  • There is a concern among regulators that this approach will not be enforceable unless it is all placed on some form of statutory footing. The UK government is therefore contemplating imposing a statutory duty on regulators to have “due regard” to the principles (but only if it is still considered necessary after an initial implementation period).
  • Although regulators such as the ICO and FCA are equipped for the task, it is possible that other regulators will not have the requisite resources or expertise to see through their roles effectively.
  • The UK government says that it is “too soon” to make decisions about the liability regime for AI – given that “it is a complex, rapidly evolving issue that must be handled properly to ensure the success of the wider AI ecosystem” – and thus does not propose to make changes at this stage. Nonetheless, the White Paper recognises that there are areas where the lack of clarity around liability may prove to be an issue. It provides a case study on automated healthcare triage systems, noting that there is “unclear liability” where such a system provides incorrect medical advice and that this may affect the patient’s ability to seek redress. Some might say that this is a case of the government “kicking the can down the road” and leaving regulators with the important task of allocating responsibility between actors in the supply chain. Once again, this contrasts with the approach taken by the EU, which has proposed a new law (the AI Liability Directive) to address liability for harms arising from the use of AI.
  • There is also nervousness that the UK government’s iterative, “suck-it-and-see” approach may take too long at a time when the capability of the technology is advancing at a terrific pace – particularly in the case of foundation models such as ChatGPT.

Recent data protection developments

ICO guidance

Data is the lifeblood of AI and it comes as no surprise that AI is a priority area for the ICO, given the high risk to individuals. The ICO has identified the following as being particular areas of focus:

  • fairness in AI;
  • dark patterns;
  • AI-as-a-service;
  • AI and recommender systems;
  • biometric data and biometric technologies; and
  • privacy and confidentiality in explainable AI.

In terms of regulatory guidance for AI, the ICO has led the way in the UK, having issued various pieces of AI-specific guidance over the past few years.

Although not specific to AI, the ICO’s draft guidance on anonymisation, pseudonymisation and privacy-enhancing technologies will also be of great interest to businesses developing or using AI, given that these techniques can, for example, help mitigate the privacy risks involved in training AI.
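
By way of illustration only, the Python snippet below sketches one very basic pseudonymisation step – replacing a direct identifier with a keyed hash before a record is used in training. The field names are hypothetical, and real-world pseudonymisation in line with the ICO’s draft guidance involves considerably more (not least key management and a re-identification risk assessment), but it gives a flavour of the techniques in question.

    # Minimal illustrative sketch of pseudonymisation before AI training:
    # a direct identifier is replaced with a keyed hash so the training set
    # no longer contains the identifier in the clear. Field names are
    # hypothetical; a real implementation also needs key management and a
    # re-identification risk assessment.
    import hashlib
    import hmac

    SECRET_KEY = b"keep-this-key-out-of-the-training-pipeline"  # held separately

    def pseudonymise(value: str) -> str:
        """Deterministic keyed hash: same input gives the same token, but the
        token cannot be reversed without the key."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    record = {"name": "Jane Doe", "email": "jane@example.com", "tenure_years": 4}
    training_record = {
        "subject_id": pseudonymise(record["email"]),  # stable, non-identifying join key
        "tenure_years": record["tenure_years"],       # non-identifying feature retained
    }
    print(training_record)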

Data protection regulatory reform in the UK

The UK is on course for reform of the UK GDPR via the Data Protection and Digital Information (No 2) Bill (the “DPDI Bill”). Introduced in March 2023, the DPDI Bill could become law in late 2023 or early 2024. The changes introduced by the DPDI Bill evidence the UK government’s more flexible approach to AI, even though they are more modest than those originally mooted. Take as an example profiling and automated decision-making (ADM) (ie, decisions made without any human involvement), both of which are highly relevant to AI. The UK government originally proposed scrapping the requirement for human review altogether, but this was resisted by the vast majority of consultation respondents on the basis that it would destroy public trust in ADM in the UK.

Article 22 of the UK GDPR currently provides that data subjects have the right not to be subject to a decision based solely on automated processing (including profiling) if the decision produces legal effects concerning them or similarly significantly affects them. If the DPDI Bill becomes law, ADM will only be subject to a general prohibition (in the absence of explicit consent, contractual necessity or legal obligation as a basis for processing) if special category data is involved.

ADM that does not involve special category data will be permitted, provided certain safeguards for data subjects – for example, the ability to make representations, contest the decision and require human intervention – are put in place. There may therefore be additional scope to rely on the grounds of “legitimate interests” for this type of processing.

Developments in the interaction of AI and IP

One of the recommendations from the Vallance report is that – in order to increase confidence among innovators and investors – the UK government should announce a clear policy position on the relationship between IP law and Generative AI. This is an area in which the government has made less progress than in the other areas discussed in this article.

As the law stands at present, an AI system that creates copyright works cannot be considered the author. The author of a computer-generated work (ie, work where there is no human author) is the person by whom the arrangements necessary for the creation of the work are undertaken. Generative AI blurs the lines, however, as there is some human contribution – whether through the prompts or the content on which the AI draws. The UK Intellectual Property Office consulted in 2020 and 2021 but decided not to make changes in relation to:

  • the protection of computer-generated works; and
  • the duration of protection of such works.

AI also raises the issue of copyright infringement. Copyright can be infringed when training an AI system by, for example, scraping data from the internet and catching material that is subject to licence restrictions or stated not to be for public use. In practice, however, it is very difficult for rights-holders to trace the AI output back to the training data – so enforcement is challenging. There is a permitted act of text and data mining (TDM) for non-commercial research but, as currently framed, that exception is unlikely to apply to web-scraping for the purpose of training Generative AI. This is both because the exception only applies if access to the material is lawful in the first place and because the restriction to non-commercial research sits awkwardly with the shift towards commercialising foundation models (eg, access to OpenAI’s GPT-4 model is currently paid-for).

In response to a consultation that considered the TDM exception, the UK government announced in July 2022 that it intended to extend the TDM exception to any purpose (including commercial purposes), with rights-holders able neither to opt out nor to impose additional licence fees for TDM. The requirement for lawful access would remain, so as to enable rights-holders to protect their content. Lawful access could involve, for example, subscription-based access or authorisation through website terms and conditions. However, there has been a backlash from the creative industries to the proposal for a TDM exception without an opt-out and there have since been various statements by government ministers suggesting that the changes announced in July 2022 will not be progressed. At the time of writing (May 2023), the direction of travel is therefore unclear.

Although the White Paper said very little about IP rights, the UK government has responded to the Vallance report by promising a code of practice clarifying the parameters around the use of copyright works as training data. This is expected in summer 2023.

Regulatory outlook for use of AI in UK business

The UK government’s consultation on the White Paper is open until 21 June 2023. In the first six months following the White Paper, the government will respond to the consultation, issue the cross-sectoral principles to regulators and publish an AI Regulation Roadmap. In the following six months, the UK can expect to see guidance from key regulators, partnership agreements to deliver the first central functions and proposals for a central monitoring and evaluation framework. Beyond that (ie, more than 12 months from publication), the UK government will:

  • encourage other regulators to publish guidance;
  • deliver a first iteration of all the central functions; and
  • publish its first monitoring and evaluation report, a draft AI risk register for consultation, and an updated AI Regulation Roadmap.

Meanwhile, the EU’s AI Act (unveiled in April 2021) is still making its way through the legislative process and will only apply 24 months after it comes into force. Consequently, while the EU was earlier out of the regulatory starting blocks than the UK, the UK’s approach will effectively be “live” before the AI Act begins to apply.

The game-changing power of foundation models such as ChatGPT has made AI a topic of conversation at all levels of business, whether in the boardroom or at the water-cooler. Many businesses will not just “wait and see” how matters unfold; they want to know now how to prepare. How best to do that when details of the UK government’s approach at this stage are scant?

The EU’s approach has more detail, even if it is not yet finalised. There is nothing in the White Paper to suggest that businesses following the EU’s approach as the “gold standard” would then fall foul of the UK’s approach. In fact, one of the UK government’s overarching objectives is to ensure that its framework remains compatible with other frameworks internationally, including that of the EU. Those UK businesses that also have operations in the EU may therefore choose (for now) to prepare by reference to the EU’s more prescriptive approach, while still paying heed to existing guidance on AI published by UK regulators such as the ICO, as this provides some clear guard rails.

Businesses can start to embed the following as part of their processes:

  • AI risk assessments (in a similar vein to Data Protection Impact Assessments);
  • appropriate record-keeping to maintain an effective audit trail (eg, documentation of the genesis of the data input into the AI tools, the purposes for which the AI is being used, and any decisions taken on the basis of the output – an illustrative sketch of such a record follows this list); and
  • transparency requirements so that people know when AI is being used (not exclusively where personal data is involved).
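
As a purely illustrative sketch of the record-keeping point above, the Python snippet below captures, for each AI-assisted decision, the provenance of the input data, the purpose of the processing and the decision ultimately taken. The schema and field names are our own invention rather than a prescribed or regulator-endorsed format.

    # Illustrative sketch of an AI decision audit-trail entry capturing data
    # provenance, purpose and outcome. The schema is our own invention, not a
    # prescribed or regulator-endorsed format.
    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIDecisionRecord:
        tool_name: str           # which AI system produced the output
        data_sources: list[str]  # genesis of the data input into the tool
        purpose: str             # why the AI was used
        output_summary: str      # what the tool produced
        decision_taken: str      # action taken on the basis of the output
        human_reviewer: str | None = None  # who, if anyone, reviewed the output
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = AIDecisionRecord(
        tool_name="cv-screening-model-v2",
        data_sources=["applicant portal export, 2023-05-01"],
        purpose="shortlisting for graduate recruitment scheme",
        output_summary="ranked 120 applications; top 20 flagged",
        decision_taken="top 20 candidates invited to interview",
        human_reviewer="hr.manager@example.com",
    )
    print(json.dumps(asdict(record), indent=2))  # in practice, append to an audit log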

Businesses can also decide their current risk appetite for testing and using foundation models such as ChatGPT. It is worth noting, in terms of safeguarding their IP and data, that corporate-focused offerings of ChatGPT (with additional governance and compliance features) exist.

Employee and supply chain policies should address permitted AI use (if they do not already), so that businesses are clear about what is – and what is not – acceptable use of AI.

Before using a third-party AI tool, businesses will want to know:

  • where the original data fed into the tool has been collected from, in order to understand whether the tool’s outputs can be trusted; and
  • if and how the AI tool has been tested for bias.

They will also want to review any independent evaluations of the tool’s reliability and accuracy, as well as understand if there will be any human review of the tool’s outputs. The UK can expect to see more protections reflecting these and other AI-related issues in contracts, along with an appropriate allocation of liability for the AI’s outputs.

What will be the regulatory framework in the UK come 2024? ChatGPT said this was difficult to predict, given that the field of AI is rapidly evolving and regulatory frameworks are likely to change as a result. Quite. The UK’s regulatory approach is flexible enough to accommodate such change.

Travers Smith

10 Snow Hill
London
EC1A 2AL
United Kingdom

+44 20 7295 3000

James.longster@traverssmith.com
www.traverssmith.com