
The AI Safety Summit: What we learned and what comes next

“With AI increasingly dominating the global agenda, the AI Safety Summit being hosted by the UK is a helpful step in promoting understanding of the potential risks of the technology, while creating the right conditions to unleash its potential to deliver great benefits across society. The bringing together of minds from across the globe is no mean feat, and the impetus for international collaboration on this hugely important topic is encouraging. With some divergences of approach at a national level, it remains to be seen how effective these efforts will be in producing a global set of principles for managing risk in AI. We’ll continue to monitor developments with great interest.”

Sally Mewies, Partner, Technology & Digital



Over the past two days the UK has been playing host to representatives from international governments, leading AI companies, civil society groups and research experts at the first, much-anticipated global AI Safety Summit. Walker Morris Partner Sally Mewies and Director Luke Jackson, from our multidisciplinary Technology & Digital group, take a look at what we learned and what comes next.

The AI Safety Summit – how did we get here?

First, a bit of background. Particularly over the past year or so, AI has been increasingly dominating the news, with generative AI systems such as ChatGPT bringing the technology into mainstream consciousness. Governments, regulators and other bodies have been grappling for some time with how best to harness the many potential benefits of AI, while putting sufficient guardrails in place to protect against the potential risks (ranging from discrimination and privacy issues, replacement of jobs and exploitation by bad actors, to the extinction of humankind).

We’re already seeing differing approaches to regulation. In the EU, for example, a new AI Act is in the advanced stages. Here in the UK, the government has no plans to introduce specific legislation or a single regulator for AI, preferring instead to empower existing regulators to come up with tailored, context-specific approaches that suit the way AI is being used in their sectors [1].

With concerns rising, including key figures signing an open letter saying that the race to develop AI systems is out of control and calling for a pause, the UK announced that it would host the first global AI Safety Summit to consider the risks of AI, especially at the frontier of development (so-called ‘frontier AI’), and discuss how they can be mitigated through internationally coordinated action.

In the run-up to the Summit, the government published a discussion paper on the capabilities and risks of frontier AI.

The AI Safety Summit – what happened and what did we learn?

Day 1 kicked off with the Bletchley Declaration, with 28 countries from all corners of the globe agreeing to the safe and responsible development of frontier AI. A variety of roundtable discussions were held on both understanding frontier AI risks and improving frontier AI safety. The summaries of the discussions have now been published. These are the key messages we picked out:

  • AI has the capability to improve lives, but there’s also the potential for serious, even catastrophic, harm (deliberate or unintentional).
  • Ensuring the safety of AI is paramount.
  • There are particular concerns about frontier AI – highly capable AI that can perform a range of general tasks, as well as narrower-purpose systems which nonetheless have the potential to be harmful. Frontier AI includes “foundation models”, which can carry out many tasks at once and process unstructured data.
  • More research is urgently needed to understand the implications of frontier AI, because the world doesn’t currently understand it fully.
  • Disinformation is a key concern (for example, you only have to look at the number of elections coming up across the world to realise the impact this could have on many millions of people).
  • Risks are international in nature, so it’s important that countries work together to come up with global risk-based policies, while recognising that approaches may differ because of national circumstances and legal frameworks – a nod to the fact that Europe and China, among others, are already doing their own thing.
  • Inclusion is key – both in terms of engaging not just with experts but with people whose everyday lives have the potential to be affected by AI, and also understanding that AI affects the whole world – so it’s essential that the focus is not concentrated on a handful of companies or confined only to the English-speaking world.
  • The signatories to the Declaration affirm that, for the good of all, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. Again, there is a nod to some of the detail in the EU AI Act here, with the reference to human-centric solutions and transparency in how models are used.

In another major development, both the UK and the US announced the establishment of separate AI safety institutes. In a speech ahead of the Summit, the UK Prime Minister announced an institute whose work will be “available to the world”, while the US Commerce Secretary chose Day 1 of the Summit to announce the US’ own version – an indication already of the national approaches we will see emerging underneath this concerted global effort.

The discussions have continued today, with the PM wrapping up the Summit with a press conference and an evening live stream conversation with Elon Musk on X, formerly Twitter.

What else has been happening?

Developments have been moving at pace, with a flurry of initiatives announced in the run-up to the AI Safety Summit in particular.

On 30 October, the G7 leaders issued a statement on the “Hiroshima AI Process”, with publication of international guiding principles and an international code of conduct for organisations developing advanced AI systems.

The UN Secretary-General launched a high-level advisory body on the risks, opportunities and international governance of AI; while President Biden issued an executive order on safe, secure and trustworthy AI, and the US Vice President announced an array of new initiatives to advance the safe and responsible use of AI.

Here in the UK, we’re: uniting with global partners to accelerate development in the world’s poorest countries using AI; boosting investment in British AI supercomputing; making the country “AI match-fit” with a £118 million skills package; and accelerating the use of AI in life sciences and healthcare with a £100 million investment.

We’ve also seen leading frontier AI companies, including DeepMind, outline their safety policies following a request from government.

So what comes next?

It’s been made very clear that this first AI Safety Summit is just the start of the discussions, and it’s essential that the momentum isn’t lost. The Republic of Korea will co-host a mini virtual summit on AI in the next six months, with France agreeing to host the next in-person AI Safety Summit in a year’s time.

The government is expected to publish the widely awaited response to its AI regulation white paper later this year. We’ll be continuing to monitor developments. You can keep up to date by signing up to our regular Technology & Digital round-up here.

*Don’t miss our webinar with Lexology on 16 November on “Unlocking and controlling AI” – you can sign up here.*

How we can support you

Whatever your technology needs are, we’ve got the expertise to help you. Our multidisciplinary Technology & Digital group offers the full range of services, from contract drafting and competition issues to regulatory compliance and dispute resolution.

Click here to download our recent GC report on digital adoption and the transformative power of in-house legal teams. This series of content tackles some of the crucial tech and legal issues our clients encounter in relation to the development, implementation and operation of technically innovative services and products.

We’re here to help, so please get in touch if you need any advice or assistance.


[1] See our earlier briefing


