Government consults on AI regulation

The government is consulting until 21 June 2023 on its plans for implementing a pro-innovation approach to AI regulation, having finally published its long-awaited white paper. Our Regulatory & Compliance and Technology & Digital experts Jeanette Burgess, Andrew Northage and Sally Mewies summarise the proposals and offer their initial views.


What’s the background?

The government’s white paper, which was originally expected in late 2022, follows on from a July 2022 policy paper which set out the government’s “overall pro-innovation direction of travel on regulating AI”.

If you hadn’t guessed already, pro-innovation is very much the focus here. In the government’s announcement introducing the white paper, it says it wants to turbocharge growth and unleash AI’s potential across the economy, while creating the right environment for the technology to flourish safely. This all forms part of the government’s National AI Strategy. We already know that the goal is for the UK to become a science and technology superpower by 2030.

What are the key proposals for AI regulation?

Perhaps surprisingly, and unlike the EU and US, the UK doesn’t plan to introduce new legislation to regulate AI. The government says that rushing to regulate too early would risk placing undue burdens on businesses. Instead, the approach relies on collaboration between government, regulators and business. The proposed framework is deliberately designed to be flexible, so that the regulatory approach can be adjusted as the technology evolves. It will be “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”.

Rather than creating a new, single regulator responsible for AI regulation, the plan is to empower existing regulators (such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority) to come up with tailored, context-specific approaches that suit the way AI is being used in their sectors.

The government’s view is that the existing regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. It says creating a new AI-specific, cross-sector regulator would introduce complexity and confusion. This would likely undermine and conflict with the existing regulators’ work.

The proposed framework

The framework will be built around the following four key elements:

  • Defining AI by reference to its unique characteristics (‘adaptable’ and ‘autonomous’ products and services) to support coordination between regulators. The government says that because it’s not creating blanket new rules for specific technologies or applications of AI, like facial recognition or large language models (think ChatGPT), it doesn’t need to use rigid legal definitions.
  • Adopting a context-specific approach: Regulating the use, not the technology. The government won’t assign rules or risk levels to entire sectors or technologies.
  • Providing a set of cross-cutting principles to guide regulator responses to AI risks and opportunities (see below). This will allow the framework to be agile and proportionate.
  • Delivering new central functions to support regulators to deliver the AI regulatory framework, maximising the benefits of an iterative approach and ensuring that the framework is coherent.

The cross-cutting principles

The five principles that the existing regulators will need to consider are:

  • Safety, security and robustness: Applications of AI should function in a secure, safe and robust way where risks are carefully managed.
  • Transparency and explainability: Organisations developing and deploying AI should be able to communicate when and how it’s used, and to explain a system’s decision-making process at a level of detail appropriate to the risks posed by the use of AI.
  • Fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes.
  • Accountability and governance: Measures are needed to make sure there’s appropriate oversight of the way AI’s being used and clear accountability for the outcomes.
  • Contestability and redress: People need to have clear routes to dispute harmful outcomes or decisions generated by AI.

The government says that not every new AI-related risk will require a regulatory response and there’s a growing ecosystem of tools for trustworthy AI (described in part 4 of the white paper) that can support the application of the cross-cutting principles.

Initially, the principles will be issued on a non-statutory basis. Feedback from some businesses and regulators suggests, however, that government should go beyond a non-statutory approach to make sure the principles have the desired impact. Some regulators have also expressed concerns that they lack the statutory basis to consider the application of the principles. The government says it’s committed to an approach that leverages collaboration with the regulators but agrees it may need to intervene further to make sure the framework is effective.

Following a period of non-statutory implementation, the government anticipates it will want to strengthen and clarify regulators’ mandates by introducing a new duty requiring them to have due regard to the principles.

How will AI regulation work in practice?

We won’t know the precise details until we see the government’s response to the consultation and the work that follows. The government says further details about implementation of the regulatory framework will be provided through an AI regulation roadmap to be published alongside the response.

In implementing the new framework, the government expects that regulators will:

  • Assess the cross-cutting principles and apply them to AI use cases that fall within their remit.
  • Issue relevant guidance on how the principles interact with existing legislation to support industry to apply the principles. That guidance should also explain and illustrate what compliance looks like.
  • Support businesses operating within the remits of multiple regulators by collaborating and producing clear and consistent guidance, including joint guidance where appropriate.

Annex 1 of the white paper sets out the factors the government believes regulators may wish to consider when issuing guidance on, and implementing, each principle.

The government expects regulators to collaborate proactively to achieve the best outcomes for the economy and society, and will intervene to drive stronger collaboration if needed.

Where prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions, which may include adapting existing legislation if necessary.

In terms of oversight, the government intends to put mechanisms in place to coordinate, monitor and adapt the framework as a whole. It will deliver a range of central functions, including horizon scanning and risk monitoring, to identify and respond to situations where prioritised risks are not adequately covered by the framework, or where gaps between regulators’ remits are negatively impacting innovation.

The white paper sets out the government’s commitment to engaging internationally to support interoperability across different regulatory regimes, challenging barriers which may stand in the way of businesses operating internationally and helping to ease the burden on business.

Accountability and legal responsibility

While the government recognises that the clear allocation of accountability and legal responsibility is important for effective AI governance, it says it’s not yet clear how responsibility and liability for demonstrating compliance with the AI regulatory principles will be, or should ideally be, allocated to existing supply chain actors within the AI life cycle.

The government’s not proposing to intervene and make changes to life cycle accountability at this stage. It says it’s too soon to make decisions about liability as it’s a complex, rapidly evolving issue which must be handled properly to ensure the success of the wider AI ecosystem. It will engage a range of experts, including technicians and lawyers, to further its understanding of this topic.

In relation to large language models, the government says it’s mindful of the rapid rate of advances in their power and application, and the potential creation of new or previously unforeseen risks. As a result, these models will be a core focus of the government’s monitoring and risk assessment functions. But the government says it would be premature at this stage to take specific regulatory action which would, among other things, risk stifling innovation.

Next steps towards AI regulation

Part 7 of the white paper sets out the planned next steps, ranging from the first six months after publication to 12 months and beyond.

In the first six months, for example, we can expect to see: the government’s consultation response; an AI regulation roadmap with plans for establishing the central functions; and initial government guidance to regulators for implementation of the principles.

This will be followed by, among other things, agreeing partnership agreements to deliver the first central functions and encouraging regulators to publish guidance on how the principles apply to their remits.

In the longer term, the plan is to: deliver the first iteration of all the central functions; publish a draft central, cross-economy AI risk register for consultation; develop a regulatory sandbox; publish the first monitoring and evaluation report; and publish an updated roadmap.

Initial views

The UK’s plans for AI regulation appear to have drawn a mixture of both praise and criticism. As always, the devil will be in the detail, and there’s some time to go before we have a clearer view of how things will actually work in practice.

Key to the success of the government’s proposals will be how well the existing regulators are able to collaborate and coordinate their efforts, and how agile they are able to be in reacting and adapting to situations that arise. There is a danger that, with so many different parties involved – regulators, government, industry, academia, other stakeholders – the whole process could become cumbersome and unwieldy, potentially causing the complexity and confusion that the government says it’s keen to avoid.

What businesses need is clarity. While there are obvious advantages to a flexible approach, if the law isn’t clear and there are grey areas or gaps in coverage, businesses are likely to be left uncertain about where they stand. On a related note, it’s not yet clear exactly how AI regulation will be enforced. That will be essential in building trust and, again, providing certainty.

Finally, it’s interesting to consider the UK’s proposals in light of what’s happening now in the tech world. On the same day that the white paper was released, key figures in AI signed an open letter, saying that the race to develop AI systems is out of control. They’re calling for the training of AI systems above a certain capability to be halted for at least six months, or for governments to step in and impose a moratorium. The government highlights in the white paper both the need to act quickly and the need not to act prematurely in legislating or taking specific regulatory action. With the clock ticking, it remains to be seen whether the government’s proposals will come to fruition in time to be truly ahead of the curve, and whether they will be effective in balancing both the risks and the benefits of AI.

How we can help

If you have any questions about the government’s proposals for AI regulation, or need help with responding to the consultation, please contact Jeanette, Andrew or Sally.

Jeanette Burgess
Managing Partner

Andrew Northage
Partner, Regulatory & Compliance

Sally Mewies
Partner, Head of Technology & Digital