The UK’s Pro-innovation Approach to the Regulation of AI
From the outset, this article on AI trends and developments in the UK bucks a recent trend, in that it was written by humans rather than by ChatGPT. Nevertheless, when asked “What is the best approach to AI regulation?”, ChatGPT provided a plausible response by explaining that there is “no one-size-fits-all answer” and that any approach must “balance the benefits of innovation with the need to protect human rights, safety and privacy”. The chatbot went on to propose six general principles for a regulatory framework based around proportionality, transparency, human oversight, privacy, ethics and international co-operation. Sounds sensible, does it not?
Aside from this technology being startlingly sophisticated, perhaps one reason why ChatGPT returned such a coherent answer is that the core principles guiding the approach of lawmakers – whether in the UK, the EU, the USA or China – to the governance of AI are very similar. And yet, although there may be consistency across these jurisdictions in terms of core principles, there is no “one-size-fits-all” approach to the regulation of AI. Indeed, the approach taken by the UK government – as outlined in its March 2023 White Paper, A Pro-innovation Approach to AI Regulation (the “White Paper”) – is quite different from that taken by the European Commission and set out in the EU’s draft AI Act. What is clear is that the UK is determined to forge its own path and, post-Brexit, it now has the option to do so.
The government funding being pumped into AI is considerable. The ministerial foreword to the White Paper declares that GBP2.5 billion has been invested in AI since 2014, including:
At the end of April 2023, Prime Minister Rishi Sunak announced a further GBP100 million for a taskforce to help build the UK’s capability when it comes to “safe and reliable” foundation models.
This article looks at:
The White Paper
In July 2022, the UK government delivered a policy paper setting out its approach to the regulation of AI in broad terms. A White Paper was due to follow by the end of the year but was delayed by the political turmoil as Prime Ministers came and went. Meanwhile, excitement surrounding the technology – in particular, large language models such as ChatGPT – continued to surge.
Sir Patrick Vallance’s report, Pro-innovation Regulation of Technologies Review – Digital Technologies (the “Vallance report”), arrived in March 2023. The White Paper draws on a number of recommendations set out in that report.
Five overarching principles of the UK approach to AI regulation
In the White Paper, the government lays out the following five principles in determining the approach to the regulation of AI in the UK.
Sectoral approach to regulation based on the use of AI
In contrast to the EU, the UK government intends to rely on existing regulators and regulatory structures. It is not proposing to introduce any broadly applicable AI-specific regulations, akin to the EU’s AI Act, nor to establish a dedicated AI regulator. The rationale is that the existing regulators have the expertise to apply the overarching principles outlined in the White Paper to AI use cases that fall within their sector(s). The regulators will be responsible for issuing relevant guidance to explain how the principles link in with existing legislation and to illustrate what good compliance looks like.
The key to the UK’s proposed framework is that it is “context-specific”. As such, the UK government will regulate the use of AI as opposed to the technology itself – ie, it will regulate based on the outcomes AI is likely to generate in particular applications, rather than apply rules or risk levels to entire sectors or technologies.
Unlike the EU, the UK government will not put the definition of AI on a statutory footing. It nevertheless introduces a “common understanding” definition of AI by stating that AI should be defined by reference to two characteristics – namely, adaptivity and autonomy – as these are the characteristics that make it hard to explain, predict or control the outputs of an AI system and difficult to allocate responsibility for such outputs. The hope is that this should help future-proof the regime.
Regulatory co-ordination and central government support
It is recognised that such an approach can only work with regulatory co-ordination. In the absence of this, businesses may face an even more confusing matrix of guidance and rules than at present. Some regulators already co-ordinate (the Digital Regulation Cooperation Forum between the Information Commissioner’s Office (ICO), the Competition and Markets Authority, the Financial Conduct Authority (FCA) and Ofcom, for example) but the UK government has said that it will assist further with that co-ordination. Joint regulatory guidance is encouraged and will likely become more common.
In terms of the kind of guidance regulators can be expected to provide, the White Paper offers a case study of an AI system that shortlists candidates based on application forms. If the framework works as it should, the intention would be for the Equality and Human Rights Commission, the ICO and the Employment Standards Inspectorate to issue joint guidance on the use of AI systems in recruitment and employment. The White Paper suggests that the guidance should:
As well as co-ordination between regulators, the White Paper recognises that central support from government – including a central monitoring and evaluation framework, cross-sectoral risk function and risk register, and multi-regulator AI sandbox (following another recommendation from the Vallance report) – will be essential to the success of the framework. By way of example, centralised oversight is required to:
Reasons behind the regulatory approach to AI in the UK
So, why is the UK government taking this approach and is it a positive step? Michelle Donelan, Secretary of State for Science, Innovation and Technology, introduced the White Paper as follows: “A heavy-handed and rigid approach can stifle innovation and slow AI adoption. That is why we set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed. This enables us to take a balanced approach to weighing up the benefits versus the potential risks.”
The UK government’s objective is to drive growth – as well as ensuring that the UK remains attractive to investors – by making responsible innovation easier and avoiding unnecessary burdens for businesses and regulators. The government is therefore only legislating where it absolutely needs to, while recognising the need to address risks – albeit in a proportionate way – given that trust is a critical driver for AI adoption.
If it can achieve the requisite level of co-ordination between (and oversight of) regulators, then the UK government’s light-touch approach to regulation should be welcomed. Of course, this is a sizeable “if”. Nonetheless, the UK’s approach is a positive step in that it aims to avoid the type of overlap and potential conflict between AI-specific regulations and existing regulations, for which the EU’s approach has been criticised.
The UK’s “context-specific” approach to regulating the use of AI, rather than the technology itself, could also work better than the more rigid risk-level categories proposed in the EU’s draft AI Act. Take, for example, an AI-powered chatbot or emotion-recognition tool. In a retail, customer-satisfaction context, certain uses of that technology may be relatively anodyne. Use the same technology in a medical diagnostic context and the risk that it presents leaps up the scale.
The UK government’s approach has given rise to the following qualms.
Recent data protection developments
ICO guidance
Data is the lifeblood of AI and it comes as no surprise that AI is a priority area for the ICO, given the high risk to individuals. The ICO has identified the following as being particular areas of focus:
In terms of regulatory guidance for AI, the ICO has led the way recently in the UK by issuing various recent AI-specific guidance, including the following examples from the past few years:
Although not specific to AI, the ICO’s draft guidance on anonymisation, pseudonymisation and privacy-enhancing technologies will also be of great interest to businesses developing or using AI – given that these techniques can be used to, for example, help mitigate the privacy risks involved in training AI.
Data protection regulatory reform in the UK
The UK is on course for reform of the UK GDPR via the Data Protection and Digital Information (No 2) Bill (the “DPDI Bill”). Introduced in March 2023, the DPDI Bill could become law in late 2023 or early 2024. The changes introduced by the DPDI Bill evidence the UK government’s more flexible approach to AI, even though they are more modest than those that were originally mooted. Take as an example profiling and automated decision-making (ADM) – ie, decisions made without any human involvement – both of which are highly relevant to AI. The UK government originally proposed scrapping the requirement for human review altogether, but this was resisted by the vast majority of consultation respondents on the basis that it would destroy public trust in ADM in the UK.
Article 22 of the UK GDPR currently provides that data subjects have the right not to be subject to a decision based solely on automated processing (including profiling) if the decision produces legal effects concerning them or similarly significantly affects them. If the DPDI Bill becomes law, ADM will only be subject to a general prohibition (in the absence of explicit consent, contractual necessity or legal obligation as a basis for processing) if special category data is involved.
ADM that does not involve special category data will be permitted, provided certain safeguards for data subjects – for example, the ability to make representations, contest the decision and require human intervention – are put in place. There may therefore be additional scope to rely on the grounds of “legitimate interests” for this type of processing.
Developments in the interaction of AI and IP
One of the recommendations from the Vallance report is that – in order to increase confidence among innovators and investors – the UK government should announce a clear policy position on the relationship between IP law and Generative AI, as the government has made less progress in this area than in other areas discussed in this article.
As the law stands at present, an AI system that creates copyright works cannot be considered the author. The author of a computer-generated work (ie, work where there is no human author) is the person by whom the arrangements necessary for the creation of the work are undertaken. Generative AI blurs the lines, however, as there is some human contribution – whether through the prompts or the content on which the AI draws. The UK Intellectual Property Office consulted in 2020 and 2021 but decided not to make changes in relation to:
AI also raises the issue of copyright infringement. Copyright can be infringed when training an AI system by, for example, scraping data from the internet and catching material subject to licence restrictions or stated not to be for public use. Practically, however, it is very difficult for rights-holders to trace the AI output back to the training data – so enforcement is challenging. There is a permitted act of text and data mining (TDM) for non-commercial research but, as currently framed, that exception is unlikely to apply to web-scraping for the purpose of training Generative AI. This is because the exception only applies if access to the material is lawful in the first place and, with regard to the restriction to non-commercial research, there has been a shift towards commercialising foundation models (eg, OpenAI’s GPT-4 model is currently paid-for).
In response to a consultation that considered the TDM exception, the UK government announced in July 2022 that it intended to allow the TDM exception to apply for any purpose (including commercial purposes), with rights-holders unable to opt out or to impose additional licence fees for TDM. The requirement for lawful access would still remain, so as to enable rights-holders to protect their content. Lawful access could involve, for example, subscription-based access or authorisation through website terms and conditions. However, there has been a backlash from the creative industries against a TDM exception without an opt-out, and there have since been various statements by government ministers suggesting that the changes announced in July 2022 will not be progressed. The current direction is, at the time of writing (May 2023), therefore unclear.
Although the White Paper said very little about IP rights, the UK government has responded to the Vallance report by promising a code of practice clarifying the parameters around the use of copyright works as training data. This is expected in summer 2023.
Regulatory outlook for use of AI in UK business
The UK government’s consultation on the White Paper is open until 21 June 2023. In the first six months following the White Paper, the government will respond to the consultation, issue the cross-sectoral principles to regulators and publish an AI Regulation Roadmap. In the following six months, the UK can expect to see guidance from key regulators, partnership agreements to deliver the first central functions and proposals for a central monitoring and evaluation framework. Beyond that (ie, more than 12 months from publication), the UK government will:
Meanwhile, the EU’s AI Act (unveiled in April 2021) is still making its way through the legislative process and will only apply 24 months after it comes into force. Consequently, while the EU was earlier out of the regulatory starting blocks than the UK, the UK’s approach will effectively be “live” before the AI Act begins to apply.
The game-changing power of foundation models such as ChatGPT has made AI a topic of conversation at all levels of business, whether in the boardroom or at the water-cooler. Many businesses will not just “wait and see” how matters unfold; they want to know now how to prepare. How best to do that when details of the UK government’s approach at this stage are scant?
The EU’s approach has more detail, even if it is not yet finalised. There is nothing in the White Paper to suggest that businesses following the EU’s approach as the “gold standard” would then fall foul of the UK’s approach. In fact, one of the UK government’s overarching objectives is to ensure that its framework remains compatible with other frameworks internationally, including that of the EU. Those UK businesses that also have operations in the EU may therefore choose (for now) to prepare by referring to the EU’s more prescriptive approach but still pay heed to existing guidance on AI published by UK regulators such as the ICO, as this provides some clear guard rails.
Businesses can start to embed as part of their processes:
Businesses can also decide their current risk appetite for testing and using foundation models such as ChatGPT. It is worth noting, in terms of safeguarding their IP and data, that corporate-focused offerings of ChatGPT (with additional governance and compliance features) exist.
Employee and supply chain policies are likely to address permitted AI use (if they do not already). This will ensure businesses are clear about what is – and what is not – acceptable use of AI.
Before using a third-party AI tool, businesses will want to know:
They will also want to review any independent evaluations of the tool’s reliability and accuracy, as well as understand if there will be any human review of the tool’s outputs. The UK can expect to see more protections reflecting these and other AI-related issues in contracts, along with an appropriate allocation of liability for the AI’s outputs.
What will be the regulatory framework in the UK come 2024? ChatGPT said this was difficult to predict, given that the field of AI is rapidly evolving and regulatory frameworks are likely to change as a result. Quite. The UK’s regulatory approach is flexible enough to accommodate such change.
10 Snow Hill
London
EC1A 2AL
United Kingdom
+44 20 7295 3000
James.longster@traverssmith.com
www.traverssmith.com