
Landmark EU AI Act finalised

Billed as the world’s first comprehensive legal framework on AI, the landmark European Union AI Act has now been finalised. Our Head of Technology & Digital Sally Mewies, and Regulatory & Compliance Partner Andrew Northage, summarise the headline points.


In-scope organisations – in the EU and elsewhere – will have 2 years to comply with most of the Act’s provisions. Crucially, UK businesses may be subject to the Act if they deploy AI in the EU.

Over the next few months we’ll focus on different aspects of the Act and their practical implications, to help affected organisations prepare. Watch out for our series of snapshot articles on what you need to know.

What’s it all about?

Governments across the globe have been grappling for a while now with how best to regulate AI, as the technology advances at a rapid pace. International collaboration continues, but national approaches are diverging.

While the EU has chosen to legislate specifically, the United Kingdom currently has no plans to do so. Instead, it is empowering existing regulators to produce tailored, context-specific approaches that suit the way AI is being used in their sectors.

Who has to comply with the EU AI Act?

Obligations are placed on various operators in the AI value chain – providers and their authorised representatives, deployers, importers, distributors and product manufacturers.

Note that non-EU-based businesses will still be caught if they put an AI system into service or place it on the EU market, or if the output produced by the AI system is used there. We’ll be covering this in more detail in a future article.

There are additional obligations for providers of so-called ‘general-purpose AI’ models, which include large generative AI models. These models are typically trained on large amounts of data, can be used to perform a wide range of distinct tasks, and may be integrated into a large number of other AI systems. Generative AI can be thought of as a subset of AI that can create content such as text, audio, images or video. ChatGPT is an example.

What’s an AI system?

There’s no universally accepted definition of AI or AI system. It’s essentially a computer system that can perform tasks usually requiring human intelligence. The EU AI Act uses the following definition, which is closely aligned with the approach of international organisations working on AI and is designed to be future-proof:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

So what does this actually mean?

We’re not talking here about simple traditional software systems or programming approaches, or systems where the output is pre-defined based on a set of rules fed in by a human.

A key characteristic is the capability of the system to infer, from inputs or data, how to generate outputs. It goes beyond basic data processing by enabling learning, reasoning or modelling. Autonomy refers to independence from human involvement. Adaptiveness after deployment refers to self-learning capabilities, allowing the system to change while in use.
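To make the distinction concrete, here’s a deliberately simple sketch of our own (illustrative only, not a test drawn from the Act): the first spam filter below is traditional software whose output is pre-defined by human-written rules, while the second infers its decision rule from example data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# 1. Traditional software: the output is pre-defined by rules a human wrote.
def rule_based_filter(message: str) -> str:
    banned = {"prize", "winner", "free"}
    return "spam" if any(word in message.lower() for word in banned) else "ham"

# 2. An inferring system: the decision rule is learned from example data,
#    not written out by a person.
examples = ["you are a prize winner", "meeting moved to 3pm",
            "claim your free gift", "minutes from today's call"]
labels = ["spam", "ham", "spam", "ham"]

vectoriser = CountVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(examples), labels)

print(rule_based_filter("free prize inside"))                      # rule output
print(model.predict(vectoriser.transform(["free prize inside"])))  # inferred output

Only the second system exhibits the inference the definition is concerned with; whether any real system falls within the Act is, of course, a more involved legal question.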

Risk categories and obligations

The EU AI Act categorises AI systems according to risk, with some banned altogether. The nature and extent of obligations will depend on the type of operator and the risk category:

Minimal/no risk: Covers most AI systems, such as AI-enabled video games, recommender systems and spam filters. They can be used freely, and organisations can commit voluntarily to codes of conduct.

Limited/specific transparency risk: Refers to the risks associated with a lack of transparency in AI usage. Users of AI systems such as chatbots must be made aware that they’re interacting with a machine. Deepfakes and other AI-generated content will have to be identifiable and labelled as such, and users must be told when biometric categorisation or emotion recognition systems are being used.

High-risk: AI systems will be high-risk where they’re either (1) products or safety components of products covered by specific EU legislation listed in the Act and subject to a third-party conformity assessment, or (2) listed in Annex III of the Act (except where the system doesn’t pose a significant risk of harm to people’s health, safety or fundamental rights, including by not materially influencing the outcome of decision making). Examples include certain uses in: biometrics; critical infrastructure; educational and vocational training; employment and management of workers; and essential public and private services such as credit scoring.

There are extensive obligations in relation to quality and risk management, documentation and traceability, transparency, human oversight, accuracy, cyber security and robustness.

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. They include harmful uses such as: social scoring; exploitation of vulnerabilities; subliminal techniques; certain biometric categorisation; emotion recognition systems in the workplace and in education; and untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases.

What should you do now?

If the Act applies to your organisation, you should assess the AI systems you use (or are considering using) against the various risk categories so that you can identify and understand your obligations and plan accordingly.
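As a very rough illustration of what such an inventory exercise might look like (a sketch only; the system names below are hypothetical, and mapping any real system to a category is a legal judgement against the Act and its annexes, not a simple lookup):

from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "extensive obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

# Hypothetical inventory: provisional categories to be confirmed by proper
# legal analysis against the Act and its annexes.
inventory = {
    "customer service chatbot": RiskCategory.LIMITED,
    "CV-screening tool": RiskCategory.HIGH,  # employment use (Annex III)
    "email spam filter": RiskCategory.MINIMAL,
    "workplace emotion recognition": RiskCategory.UNACCEPTABLE,
}

for system, category in inventory.items():
    print(f"{system}: {category.name} ({category.value})")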

We’ll be covering the risk categories and associated obligations in more detail in a follow-up article. In the meantime, please get in touch with Sally or Andrew if you have any queries or concerns.

Timing

The EU AI Act is likely to enter into force in August 2024. The ban on systems that pose an unacceptable risk will apply 6 months later and the general-purpose AI rules after 12 months. The majority of provisions, including those on high-risk systems, will apply in 2 years’ time.
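Worked through, assuming for illustration that the Act enters into force on 1 August 2024 (a sketch of the stated offsets only; the Act itself fixes the exact application dates):

from datetime import date

def add_months(start: date, months: int) -> date:
    # Simple month arithmetic; safe here because all dates fall on the 1st.
    month_index = start.month - 1 + months
    return date(start.year + month_index // 12, month_index % 12 + 1, start.day)

entry_into_force = date(2024, 8, 1)  # assumed for illustration

milestones = {
    "Ban on unacceptable-risk systems": 6,
    "General-purpose AI rules": 12,
    "Most provisions, including high-risk obligations": 24,
}

for name, months in milestones.items():
    print(f"{name}: from {add_months(entry_into_force, months)}")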

Organisations are encouraged to participate in the AI Pact to bridge the gap before full implementation.

Fines for non-compliance

These range from up to €7.5 million or 1% of total worldwide annual turnover (whichever is higher) to up to €35 million or 7% of total worldwide annual turnover (whichever is higher), depending on the nature of the infringement.
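To illustrate the ‘whichever is higher’ mechanics with a hypothetical turnover figure (a sketch only; the applicable band depends on the nature of the infringement):

def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # The maximum fine is the greater of the fixed cap and the turnover share.
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical €2bn worldwide annual turnover

print(fine_cap(turnover, 35_000_000, 0.07))  # most serious band: €140m cap
print(fine_cap(turnover, 7_500_000, 0.01))   # lowest band: €20m cap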

The EU AI Act: How we can support you

The EU AI Act is a long and complex piece of legislation and we’ve given you the headline points here. Over the next few months we’ll be delving into more detail on different aspects of the Act and offering our practical tips.

Please contact Sally, Andrew or any member of the Technology & Digital team with any queries or concerns about how the Act may affect you.

Our people

Sally Mewies
Partner, Head of Technology & Digital

Andrew Northage
Partner, Regulatory & Compliance
