23rd March 2026
“While the market for AI, and the surrounding hype, is growing, what changes have most people experienced in their day-to-day working lives? What are the UK government and regulators doing to speed up, or slow down, this rate of change? In this article we explore these questions, what’s coming next, and how you can prepare.”
The market for AI products and systems continues to grow, but so far many organisations outside the technology sector appear to be using AI only to a limited extent, often in low-impact ways, with little change to day-to-day operations.
In this article, we explore the pace of AI adoption we’re experiencing in the UK and how government and regulators are shaping that change. We give an update on what the latest UK and EU regulatory developments, including proposed amendments to the EU AI Act, mean for you and how you can prepare.
Just over a year ago the UK Government launched its AI Opportunities Action Plan to set a clear direction for how the UK would harness AI.
One year on, now feels like a good time to assess how much progress has been made, and that’s exactly what the UK Government has done.
In January this year the UK Government’s Department for Science, Innovation & Technology (DSIT) published a policy paper entitled “AI Opportunities Action Plan: One Year On”. It paints a rather rosy picture, emphasising that the Government has already met 38 of the plan’s 50 actions, just over three-quarters, with achievements including:
However, DSIT’s recent research on AI adoption strikes a more sombre tone and suggests that (perhaps inevitably) the efforts of the UK Government, and, dare we say it, the broader AI hype, aren’t yet translating into significant changes in the real world of business outside the technology sector.
The research found that:
If these research findings apply across UK industry as a whole, and AI adoption has indeed so far been slower than some originally predicted, to what extent is regulation (more precisely, over-regulation) part of the problem, and are regulators now changing tack?
Whilst the EU AI Act formally came into force on 1 August 2024, the Act is being phased in over several years, with many of the most significant obligations not applying until August 2026 at the earliest. Those timelines are now in question, however, as the EU shifts towards a more pragmatic implementation approach through the proposed Digital Omnibus Regulation.
By way of reminder, the Act adopts a risk‑based system, categorising AI systems into unacceptable, high‑risk, limited‑risk and minimal‑risk tiers. Obligations increase with risk and include risk‑management systems, transparency duties, incident monitoring and, in all cases, the obligation to ensure AI literacy within organisations.
If you’re a UK-based business, you may fall within scope if you place AI systems or general‑purpose models on the EU market, put systems into service there, or if your AI outputs are used within the EU.
The following obligations have been in force since 2 February 2025:
Further provisions apply from 2 August 2026, so you should be reviewing your AI policies and systems now to ensure you’re compliant ahead of the deadline.
The Digital Omnibus on AI Regulation (Digital Omnibus), proposed by the Commission on 19 November 2025, is intended to streamline and soften parts of the AI Act ahead of its full application, presumably to accelerate AI development and adoption. The Commission’s proposal aims to introduce several significant changes, including:
Earlier this month, the Council of the European Union supported the thrust of the Commission’s proposal but introduced several additional changes:
Following the Council’s approval of its mandate, the Digital Omnibus will now be considered by the European Parliament. It remains uncertain whether it will progress in time ahead of the 2 August 2026 implementation date, or indeed whether it will be adopted at all. For now, if you’re looking for regulatory certainty, you may have to wait a little longer.
That said, the general direction of travel appears clear: while the EU remains committed to a comprehensive statutory framework, it is increasingly trying to temper that ambition with a more proportionate, phased and workable rollout.
Despite periodic signals that standalone AI legislation might emerge in the UK, no binding AI regime has materialised. Instead, the Government continues to champion the pro‑innovation, non‑statutory, regulator‑led approach as initially set out in the March 2023 White Paper – AI regulation: a pro-innovation approach. Existing regulators, including the likes of the CMA, ICO and FCA, retain primary oversight within their respective sectors with the authority to deal with AI as they see fit.
The long‑anticipated AI (Regulation) Bill, which proposed (amongst other measures) a centralised AI Authority, has yet to be enacted and is not expected to progress until summer this year at the earliest. Non‑binding guidance, such as Implementing the UK’s AI Regulatory Principles (aimed at regulators but also valuable for businesses seeking to understand regulatory expectations), therefore remains as far as the UK has gone in developing its AI regulatory framework.
For now, the UK’s position remains unchanged: a cautious, principles‑led approach without binding legislation which could stifle innovation. Hopefully over time this approach will be fleshed out with relatively light touch, but sufficiently detailed, rules and guidance to give organisations the clarity they need to confidently develop and adopt AI at scale.
According to Elon Musk, because of AI “probably none of us will have a job”. Yet at the moment he still has about five, and many of us still spend inordinate amounts of time toiling away at our laptops.
We asked a leading AI assistant why AI adoption has been slower than some expected. It answered that the technology has moved faster than data, law, organisations, skills and trust. It also cited legal and regulatory concerns as one of the factors holding back adoption, with slower adoption being observed in more regulated environments. This feels like a very accurate and insightful assessment, rather than an AI hallucination! Perhaps we should be using the assistant more.
The UK Government has done, and is continuing to do, a lot to accelerate the development and adoption of AI. It has challenged the UK regulators to do likewise, and there are signs that the regulators are rising to that challenge. That said, all these efforts, perhaps inevitably, are taking time to bear fruit.
The Digital Omnibus suggests that the EU is moving towards a more pro-innovation stance. Some might cheer “better late than never”, but the proposed changes are not a done deal, and even if adopted they will take time to have a positive impact.
In the long run AI will undoubtedly reshape the way we live and work, but progress thus far suggests it may be a case of incremental and steady evolution rather than the overnight revolution that some predicted.
Ending on a positive note, the slower pace of adoption may ultimately prove to be beneficial if it reflects organisations, governments and regulators taking the time to understand where AI genuinely adds value, how risks should be managed, and how people and processes need to adapt.
Here at Walker Morris, we’re optimistic about AI’s medium and long-term impact on many activities. As expectations reset and hype gives way to experience and incremental gains, we expect to see AI moving from experiment to real-world application, and that is when durable (and potentially transformational) benefits will start to emerge.
If you have any questions about AI regulation, compliance obligations or how to respond to upcoming changes in the UK and EU, please contact Paul Armstrong or Matthew Lingard for tailored advice.