Comment & Opinion

AI regulation: Government confirms position

The Topline

“The government’s long-awaited response to last year’s consultation on AI regulation confirms what we already knew – that the government is averse to regulating AI specifically and will devolve responsibility to existing regulators. The government is of the view that it’s too early to legislate and AI use has to be looked at in context. The next major flurry of activity is in the spring, including the regulators outlining their strategic approach to AI, and updated guidance on the use of AI in HR and recruitment. Watch this space.”

Sally Mewies – Partner, Head of Technology & Digital

What’s the background?

Unlike the European Union, where approval of the AI Act is in its final stages, the UK favours a “wait and see” approach.

It believes that AI regulation should be devolved to existing sector regulators to create bespoke measures tailored to the needs and risks posed by different parts of the economy.

The UK hosted the first global AI Safety Summit in November and is clearly positioning itself to be a global leader, with ambitions to lead on safe AI and be a science and technology superpower by the end of the decade.

UK AI regulation: Key points

  • The overall approach is pro-innovation and pro-safety, combining cross-sectoral principles with a context-specific framework, international leadership and collaboration, and (for now) voluntary measures on developers.
  • There’s new guidance to support regulators in implementing the principles effectively. Key regulators have until 30 April 2024 to publish an update outlining their strategic approach to AI. The government appears to accept that some regulators are much further along than others in understanding how AI may affect the sectors and areas they regulate, and that significant resource will be needed to give them the tools to manage their obligations effectively.
  • This context-based approach may miss significant risks posed by highly capable general-purpose AI systems. The government lays out the case for a set of targeted, binding requirements on developers in the future to ensure that powerful, sophisticated AI develops in a safe way, but it will legislate only when confident that is the right thing to do.
  • There’s a central function to drive coherence in the regulatory approach across government. A steering committee with government representatives and key regulators to support knowledge exchange and coordination on AI governance will be set up by spring 2024.
  • £100 million is being invested to support AI innovation and regulation, including £80 million to launch 9 new AI research hubs, and £10 million for regulators to develop the capabilities and tools they need.
  • The Digital Regulation Cooperation Forum has shared the eligibility criteria for the support to be offered by the AI and Digital Hub pilot, which is launching in spring 2024.
  • There will be updated guidance in spring 2024 to ensure the use of AI in HR and recruitment is safe, responsible, and fair. We urgently need more clarity from the Information Commissioner on the practicalities of how to reconcile the obvious conflicts between how AI operates and data protection principles.
  • A working group convened by the UK Intellectual Property Office on the interaction between copyright and AI was unable to agree an effective voluntary code. Ministers will now lead a period of engagement with the AI and rights-holder sectors. This is unfortunate, as there was an opportunity to provide certainty. Understanding how models can be trained on data that includes third-party IP is critical if businesses are to have the confidence to develop AI tools and use them. The House of Lords, in its recent report on AI, identified uncertainty around IP use in the UK as a blocker to the development of AI businesses here.
  • A call for views in spring 2024 will gather further input on next steps in securing AI models. This includes a potential code of practice for AI cyber security.
  • Work to analyse the life cycle accountability for AI is ongoing.
  • The UK is committed to establishing enduring international collaboration on AI safety. Domestic and international approaches must develop in tandem. The UK’s AI Safety Institute will partner with other countries to facilitate collaboration between governments on AI safety testing and governance, and to develop its own capability.
  • Next steps are set out in an AI regulation roadmap at 5.4 of the white paper response.

AI regulation: How we can support you

Watch out for further briefings looking at different aspects of the government’s response in more detail. We’ll also be releasing an AI Guide to help you navigate how best to manage the risks for your business. In the meantime, if you have any questions about AI regulation, or need help with your approach to AI governance, policies and procedures, please contact Sally, Jeanette or Andrew.

Sally Mewies – Partner, Head of Technology & Digital

Jeanette Burgess – Managing Partner

Andrew Northage – Partner, Regulatory & Compliance