23rd January 2026
“As artificial intelligence reshapes the way construction, infrastructure and energy projects are conceived, delivered and managed, its transformative potential is becoming impossible to ignore. If you’re a stakeholder in this area, now is the time to understand how AI could be harnessed to supercharge your projects, while taking steps to stay ahead of the legal and operational challenges it could bring to your business.”
As uptake of generative artificial intelligence and machine-learning technologies (AI) continues to accelerate, long‑standing project workflows are being augmented or replaced by more dynamic, technology‑driven approaches. AI isn’t just speeding up design and site monitoring; it’s beginning to run aspects of projects itself — predicting output, balancing demand, scheduling maintenance and improving quality control.
In this article, we look at how you can capitalise on the efficiencies and competitive advantages AI can offer, whilst managing legal and operational risk. We cover:
AI is becoming embedded in everything from early‑stage design to operations management. Examples include:
Together, applications such as these are reshaping the conception, delivery and management of major projects. This brings significant efficiencies and opportunities. But what are the attendant legal risks?
As algorithmic recommendations increasingly influence (and make) real‑world decisions, the construction and infrastructure industries are seeing changes to their risk profiles in the following areas:
Minimising legal exposure begins with proactive governance. Legal teams, project managers and technical leads should work together to develop an ‘AI use register’, to map how AI is being deployed throughout a project lifecycle. Safeguards can then be built into each stage.
At the same time, accurate and consistent records of data sources, model inputs and decision logs, together with clearly defined points of human oversight in project workflows, will likely prove vitally important in disputes where AI is a factor.
An AI use register should record every AI system, digital twin, drone workflow or automated tool used on the project. For each system, the register should set out its purpose, the data on which it relies, the points where human oversight is required, the model and version in use, and the logging and retention arrangements. This can become a reference point for procurement, assurance, incident response, disclosure and ongoing governance.
Clear and forward‑looking contractual terms are critical. Agreements with contractors, consultants and AI vendors should expressly allocate ownership of AI‑generated outputs, define responsibility for errors or faulty recommendations, and address data rights, confidentiality obligations and liability caps specific to AI‑driven processes. Given the potential for disputes over IP ownership, data provenance or defective outputs, contracts must reflect the reality that AI is now a meaningful contributor to project outcomes, not merely a passive tool, and that its use must therefore be actively managed. Key points to consider for your contracts include:
Liability allocation: Liability should be allocated in a way that reflects how AI is being used, and should be supported by appropriate warranties and indemnities from integrators or software vendors. Provisions should also clarify whether obligations are governed by fitness‑for‑purpose standards or reasonable skill and care. Human sign‑off points must be expressly defined for safety‑critical, operational or market‑sensitive decisions.
AI‑ready contract schedules: Whether you’re working under NEC, JCT, FIDIC or bespoke contractual arrangements, schedules should clearly set out the scope of AI use and specify the level of disclosure required. Appropriate provisions will also differ according to the particular AI tools covered. For example, for advisory tools, you may need “no‑reliance” wording; while automated tools may require defined acceptance tests and agreed fall‑back modes to cater for eventualities such as system failure.
IP and data rights: Ensuring proper safeguards around security, data use, retention and model training will be critical to avoiding or managing data and IP disputes. Specifying ownership and licensing arrangements for AI‑generated outputs (and the datasets that underpin them), restricting the use of confidential datasets for training, defining re‑use rights, and ring-fencing sensitive information can all be effective strategies.
As well as legal safeguards, it’s important to strengthen day‑to‑day operations. In practice, this means having a straightforward plan for what to do if an AI system goes wrong, including a simple way to pause or roll back the system quickly. It also means testing your AI tools regularly to check how they behave under pressure, limiting who can access your AI systems, and keeping training environments separate from the live systems you rely on.
AI is transforming how major projects are designed, delivered and managed. But adopting it without the right contractual, governance and operational safeguards can expose your business to risk, and the legal issues can be complex and multifaceted.
With clear planning, smart allocation of responsibility and robust oversight, AI can become a genuine competitive advantage, and liabilities can be kept to a minimum.
With AI now influencing design, procurement, programme management and on‑site delivery across the UK’s most complex and highly regulated sectors (such as transport, energy, water, defence and major public‑sector infrastructure), project leaders need targeted, cross‑disciplinary support.
Our Construction, Commercial and Technology specialists can provide legal and practice support as you navigate this evolving landscape. We can:
Proactively addressing legal challenges will enable responsible businesses to reap the transformative benefits of AI in construction and infrastructure. Please contact Carly Thorpe or Ryan Doodson for further information or advice.
[1] Currently, the law of England and Wales doesn’t recognise AI as a legal person. However, the Law Commission raised that possibility as a potential area for reform in a discussion paper published in July 2025.