Comment & Opinion

From hype to helpfulness: Is AI there yet?

“While the market for AI, and the surrounding hype, is growing, what changes have most people experienced in their day-to-day working lives? What are the UK government and regulators doing to speed up, or slow down, this rate of change? In this article we explore these questions, what’s coming next, and how you can prepare.”

Paul Armstrong – Director, Commercial & Technology

The market for AI products and systems continues to grow, but so far many organisations outside the technology sector appear to be using AI to only a limited extent, often in low-impact ways, with little change in day-to-day operations.

In this article, we explore the pace of AI adoption we’re experiencing in the UK and how government and regulators are shaping that change. We give an update on what the latest UK and EU regulatory developments, including proposed amendments to the EU AI Act, mean for you and how you can prepare.

UK market update

Just over a year ago the UK Government launched its AI Opportunities Action Plan to set a clear direction for how the UK would harness AI.

One year on, now feels like a good time to assess how much progress has been made, and that’s exactly what the UK Government has done.

In January this year the UK Government’s Department for Science, Innovation & Technology (DSIT) published a policy paper entitled “AI Opportunities Action Plan: One Year On”. It paints a rather rosy picture, emphasising that the Government has already met 38 of the 50 actions (around three-quarters of the plan), with achievements including:

  • Partnering with leading AI companies to deliver over 1 million free AI courses to workers across the UK, and launching the AI Skills Boost platform to provide free AI learning nationwide, with the aim of equipping 10 million workers with key AI skills by 2030.
  • Launching AI Growth Zones – backed by new reforms to planning and energy access, including energy discounts for AI Growth Zones in Scotland and North East England.
  • Backing the AI Security Institute with £240 million to expand work on frontier model testing, foundational safety and societal resilience.
  • Working with regulators to enable safe AI innovation and promote adoption, including by building sandboxes and providing additional funding for regulators.

However, DSIT’s recent research on AI Adoption strikes a more sombre tone, suggesting that (perhaps inevitably) the efforts of the UK Government, and dare we say it the broader AI hype, aren’t yet translating into significant changes in the real world of business outside the technology sector.

The research found that:

  • Adoption of AI remains modest. Around 1 in 6 businesses currently use AI, but most businesses have no active plans to adopt it.
  • Among AI adopters, 30% of staff currently use AI, on average.
  • Most businesses using AI report an increase in workforce productivity. However, most businesses have not yet experienced a change in revenue.
  • There is a notable gap in readiness to adopt and scale AI usage between businesses. While just over half of organisations already using AI feel ready to further scale up their use, only a third of those planning to use AI feel ready to implement it, reiterating the challenge of limited skills and expertise in this area.
  • Lack of identified need and limited AI skills and expertise are the most frequently cited reasons why businesses do not adopt AI. However, among businesses that cited ethical concerns, these are considered the most significant barrier, followed by high costs and unclear or uncertain regulation.

If these research findings apply across UK industry as a whole, and AI adoption has indeed been slower than some originally predicted, to what extent is regulation (or, more precisely, over-regulation) part of the problem, and are regulators now changing tack?

EU regulatory update – Brakes applied on the implementation of the EU AI Act

Whilst the EU AI Act formally came into force on 1 August 2024, the Act is being phased in over several years, with many of the most significant obligations not applying until August 2026 at the earliest. Those timelines are now in question, however, as the EU shifts towards a more pragmatic implementation approach through the proposed Digital Omnibus Regulation.

By way of reminder, the Act adopts a risk‑based system, categorising AI systems into unacceptable, high‑risk, limited‑risk and minimal‑risk tiers. Obligations increase with risk and include risk‑management systems, transparency duties, incident monitoring and, in all cases, the obligation to ensure AI literacy within organisations.

If you’re a UK-based business, you may fall within scope if you place AI systems or general‑purpose models on the EU market, put systems into service there, or if your AI outputs are used within the EU.

Which of the Act’s provisions are already in force?

The following obligations have been in force since 2 February 2025:

  • AI literacy: The requirement for all in‑scope organisations to ensure their staff, and any other persons dealing with the operation and use of AI systems on their behalf, are AI literate.
  • Unacceptable risk systems: The outright prohibition on unacceptable risk AI systems (such as social scoring systems and manipulative AI).

Which provisions are not yet in force?

  • Transparency obligations: The requirement for limited risk AI systems, such as chatbots and deepfake‑generation tools, to meet transparency and disclosure obligations.
  • High‑risk systems: The provisions on high-risk AI systems, such as those used in education, employment, biometrics and essential services, on which the Act imposes its most onerous obligations.

With these provisions applying from 2 August 2026, you should be reviewing your AI policies and systems now to ensure you’re compliant ahead of the deadline.

The Digital Omnibus proposals

The Digital Omnibus on AI Regulation (Digital Omnibus), proposed by the Commission on 19 November 2025, is intended to streamline and soften parts of the AI Act ahead of its full application, presumably to accelerate AI development and adoption. The Commission’s proposal aims to introduce several significant changes, including:

  • Responsibility for AI literacy would shift away from businesses and onto EU Member States and the Commission, which would instead encourage businesses to improve literacy.
  • Extended timelines for high‑risk AI obligations, with implementation dates pushed back and linked to the readiness of supporting guidance, subject to long‑stop dates in late 2027/2028.
  • Extended grace periods for generative AI systems placed on the market before 2 August 2026, including an additional six months to retrofit systems to meet the new transparency duties (i.e. marking artificially generated or manipulated content using watermarks, metadata or other digital tags).
  • Removal of the requirement to register certain high‑risk systems in the EU database where they are only used for internal, ancillary or procedural purposes.

Earlier this month the Council of the European Union supported the thrust of the Commission’s proposal but introduced several additional changes:

  • A new prohibited AI practice regarding the generation of non-consensual sexual and intimate content or child sexual abuse material.
  • Fixed compliance deadlines for high-risk AI systems replacing the more uncertain deadline aligning with the availability of harmonised standards. The dates are 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products.
  • Reinstatement of the obligation for providers to register in the EU database those systems which the provider self-determines are not high-risk. Commentators had criticised the proposed relaxation of this rule as creating a loophole in the protection provided by the Act.

Following the Council’s approval of its mandate, the Digital Omnibus will now be considered by the European Parliament. It remains uncertain whether it will progress in time for the 2 August 2026 implementation date, or indeed whether it will be adopted at all. For now, if you’re looking for regulatory certainty, you may have to wait a little longer.

That said, the general direction of travel appears clear: while the EU remains committed to a comprehensive statutory framework, it is increasingly trying to temper that ambition with a more proportionate, phased and workable rollout.

UK regulatory update (or lack thereof)

Despite periodic signals that standalone AI legislation might emerge in the UK, no binding AI regime has materialised. Instead, the Government continues to champion the pro‑innovation, non‑statutory, regulator‑led approach as initially set out in the March 2023 White Paper – AI regulation: a pro-innovation approach. Existing regulators, including the likes of the CMA, ICO and FCA, retain primary oversight within their respective sectors with the authority to deal with AI as they see fit.

The long‑anticipated AI (Regulation) Bill, which proposed (amongst other measures) a centralised AI Authority, has not been enacted and is not expected to progress until summer this year at the earliest. Therefore, non‑binding guidance, such as Implementing the UK’s AI Regulatory Principles (aimed at regulators but also valuable for businesses seeking to understand regulatory expectations), remains as far as the UK has gone in developing its AI regulatory framework.

For now, the UK’s position remains unchanged: a cautious, principles‑led approach without binding legislation which could stifle innovation. Hopefully over time this approach will be fleshed out with relatively light touch, but sufficiently detailed, rules and guidance to give organisations the clarity they need to confidently develop and adopt AI at scale.

According to Elon Musk, because of AI “probably none of us will have a job”. Yet at the moment he still has about five, and many of us still spend inordinate amounts of time toiling away at our laptops.

We asked a leading AI assistant why AI adoption has been slower than some expected. It answered that the technology has moved faster than data, law, organisations, skills and trust. It also cited legal and regulatory concerns as one of the factors holding back adoption, with slower adoption being observed in more regulated environments. This feels like a very accurate and insightful assessment, rather than an AI hallucination! Perhaps we should be using the assistant more.

The UK Government has done, and is continuing to do, a lot to accelerate the development and adoption of AI. It has challenged the UK regulators to do likewise, and there are signs that the regulators are rising to that challenge. That said, all these efforts, perhaps inevitably, are taking time to bear fruit.

The Digital Omnibus suggests that the EU is moving towards a more pro-innovation stance. Some might cheer “better late than never”, but the new regulations are not a done deal, and even if implemented will take time to have a positive impact.

In the long run AI will undoubtedly reshape the way we live and work, but progress thus far suggests it may be a case of incremental and steady evolution rather than the overnight revolution that some predicted.

Ending on a positive note, the slower pace of adoption may ultimately prove to be beneficial if it reflects organisations, governments and regulators taking the time to understand where AI genuinely adds value, how risks should be managed, and how people and processes need to adapt.

How we can support you

Here at Walker Morris, we’re optimistic about AI’s medium and long-term impact on many activities. As expectations reset and hype gives way to experience and incremental gains, we expect to see AI moving from experiment to real-world application, and that is when durable (and potentially transformational) benefits will start to emerge.

If you have any questions about AI regulation, compliance obligations or how to respond to upcoming changes in the UK and EU, please contact Paul Armstrong or Matthew Lingard for tailored advice.
