
The UK AI Safety Summit and Fringe - Seven things we learned

Artificial Intelligence | 10 November 2023

On 1 and 2 November the UK government hosted its AI Safety Summit, bringing together 28 countries and organisations, including the United States, the European Union and China, to discuss the risks posed by ‘frontier’ artificial intelligence models and how to address them. The Summit made significant progress in a number of areas, including the publication of several research reports, the establishment of an internationally backed AI Safety Institute, and the signing of the Bletchley Declaration to drive a shared understanding of, and approach to, AI’s risks and opportunities.

Alongside the summit, Clifford Chance and a range of organisations held the AI Fringe, bringing together civil society, advocacy groups, research organisations and academics as well as companies and the public.

The week also saw significant developments beyond the Summit, with the announcement of President Biden's Executive Order on AI in the United States, the Spanish presidency of the EU Council of Ministers proposing an updated governance architecture for foundation models in Europe, and the G7 announcing International Guiding Principles on Artificial Intelligence and the voluntary Code of Conduct for AI developers under the Hiroshima AI process.

In this piece, we look at the key themes that emerged from these steps towards greater clarity and unity of purpose on AI regulation, law and policy at a global level.

Take away #1: involving a wide range of stakeholders is important

The UK government brought together head-of-state and ministerial-level officials, including representatives from the EU, with senior leaders from the tech sector and some representatives from civil society.

Alongside Milltown Partners, Google DeepMind, Faculty AI, the Alan Turing Institute, the Partnership on AI and many others, we helped to organise the AI Fringe in response to a desire to include as many views and stakeholders as possible in the conversation around AI safety. The week-long series of events went far beyond the risks associated with frontier models to encompass issues relating to democracy, bias, the future of work, regulatory approaches and access to justice. All of the sessions can be viewed on the AI Fringe YouTube page. The Fringe served to broaden the conversation and create a space for a more inclusive dialogue about the future of AI development, adoption and governance.

Take away #2: the participation of China in the AI summit was a significant milestone

Global consensus on critical safety issues between key AI players is crucial for a safe and sustainable future involving AI. The Summit took an important step in this direction, with a rare snapshot of a U.S. Secretary and a Chinese Minister sharing a backdrop that read “AI Safety”. Elon Musk, in his conversation with UK Prime Minister Rishi Sunak, remarked that had China not been represented, the Summit would have been “pointless”. That is perhaps an overstatement, but China's presence was remarkable in light of the tensions that exist between the U.S. and China across a range of dimensions, including supply chains, and their divergent agendas on artificial intelligence.

One of the outcomes of the Summit was an agreement that further meetings will be held in South Korea and then in France. Those meetings may allow additional areas of consensus to emerge between these countries, and others.

Take away #3: international collaboration is slowly taking shape, with a focus on testing new foundation models

The UK announced the creation of an AI Safety Institute, intended to be an internationally backed organisation that builds the interdisciplinary, sociotechnical infrastructure needed for the study, testing and governance of foundation models. The U.S. announced the parallel creation of its own AI Safety Institute, based out of the U.S. National Institute of Standards and Technology (NIST), which will focus on the creation of industry standards, amongst other things. The U.S. President’s Executive Order also directs and empowers NIST to take a wide range of additional measures in this area. Canada has also said that it is considering the creation of a body similar to the AI Safety Institute.

European Commission President Ursula von der Leyen notably linked this to the European AI Office envisaged under the EU's landmark AI Act, on which negotiations continue, and noted that it should cooperate with similar bodies internationally, including the newly formed AI Safety Institutes.

As the dust settles, it appears that bodies of this kind are likely to be central to governments' approach to evaluating new AI foundation models before they are released.

These initiatives add to several existing bilateral and multilateral instances of cross-border collaboration on AI. One example is the EU-U.S. Trade and Technology Council (TTC), set up in 2021. With AI as one of its key focus areas, it has notably established the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management.

The G7 Hiroshima principles, also announced during the week of the UK Summit, can readily be read alongside the U.S. Executive Order. The Hiroshima principles, as well as the voluntary Code of Conduct, build on the existing OECD AI Principles and are intended to be aligned with leading international human rights standards, such as the UN Guiding Principles on Business and Human Rights (UNGPs) and the OECD Guidelines for Multinational Enterprises (OECD MNE Guidelines). The UNGPs and OECD MNE Guidelines underpin much of the business and human rights-related legislation and non-binding guidance around the world (including in the context of broader trends on human rights and environmental due diligence, or HREDD), so framing the Principles and Code against those standards means that there is scope for wider uptake by, and collaboration with, countries outside the G7.

Although China is not currently a member of the OECD, with both China and the EU part of the conversation we may see more agreement and collaboration around certain fundamental principles emerge in 2024. That is not to underplay the very significant tensions, differing agendas and differences in approach.

Take away #4: the U.S. has taken a significant step forward with its Executive Order on AI

On October 30, 2023, President Joe Biden announced an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO). This was a significant step. While the EO generally directs government agencies as to their implementation, use and management of AI, it will likely serve as a blueprint for how regulators will in turn seek to regulate private enterprises. In addition, this government action will likely shape industry best practices in how AI is implemented, used and managed. The EO could, of course, be repealed as easily as it was signed by an incoming administration with a different agenda, but the changes it is likely to bring will be hard to unwind. Read more in our article: What businesses need to know (for now) about the Biden Executive Order on AI

Take away #5: AI bias is one of the biggest concerns expressed by the public and policymakers

There was palpable urgency around addressing near-term AI risks. AI bias already creates and perpetuates real-world harms for society, particularly for marginalised groups. Yet conversations on AI, bias and the law can lead to generalised recommendations or outcomes (e.g. auditing and testing algorithms and building "diverse" teams prior to deployment), and these recommendations sometimes lack real-world context. Last week's AI events moved the discourse on AI accountability into more nuanced spaces, considering the new power and structural paradigms that AI creates or perpetuates. What is at stake? For whom? Who needs to be involved?

Strategically, this progression in the conversation should inform the practical implementation of AI oversight frameworks. There was renewed energy directed towards data laws and oversight to ensure equitable outcomes. The use of AI and automated systems to mitigate unsafe outcomes (e.g. AI for bias detection) was considered carefully and, ultimately, seen as important but not a panacea.

Take away #6: a broader and more nuanced discourse around future and near-term risks is emerging

One of the roundtables during the Summit was entitled "Risks from Loss of Control over Frontier AI". Part of the summary (available in full here) stated that "[c]urrent models do not present an existential risk and it is unclear whether we could ever develop systems that would substantially evade human oversight and control. There is currently insufficient evidence to rule out that future frontier AI, if misaligned, misused or inadequately controlled, could pose an existential threat…" and that "[i]t may be suitable to take more substantive action in the near term to mitigate this risk. This may include greater restrictions upon, or potentially even a pause in, some aspects of frontier AI development, in order to enjoy the existing benefits of AI whilst work continues to understand safety."

Six months ago, discussions like this were not mainstream in debates amongst policymakers. The summit declaration stated, "[s]ubstantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent."

This broader framing of the risks may lead to a more nuanced conversation that acknowledges the full spectrum of AI's potential impact, without detracting from immediate concerns about disinformation, bias, deepfakes, cybersecurity and other issues.

Take away #7: a more joined-up approach to governance that involves governments, regulators, industry and civil society is taking shape

Common themes are starting to emerge across the deliberations and announcements about what needs to happen in the near term. While a critical piece of the puzzle will be an international, interoperable regulatory framework, it will take time to build, adopt and deploy. Over the next 6 to 12 months we are likely to see regulators, academia and industry working more closely together in areas such as capacity-building, audit, assurance, red-teaming and vulnerability reporting, as well as greater information sharing with each other and with consumers.

Key takeaways for organisations

There will be increased requirements for and scrutiny of internal risk assessments around AI, and compliance with existing laws, particularly in relation to:

  • Transparency with users and regulators
  • Preserving and enforcing privacy rights
  • Access to data and intellectual property concerns
  • Senior management accountability
  • Workers' rights and empowerment
  • Assurance around the safety and robustness of frontier AI

Clifford Chance and Artificial Intelligence

Read more articles on Talking Tech

Clifford Chance is following AI developments closely and will be conducting further seminars and publishing additional articles on new AI laws and regulations. If you are interested in receiving information from Clifford Chance on these topics, please reach out to your usual Clifford Chance contact or complete this preferences form.

Clifford Chance was the only law firm to participate as a partner in the recent AI Fringe, which brought together civil society, academic, commercial, advocacy and government stakeholders in London during the week of the Summit – all the sessions can be found on the AI Fringe YouTube page.

Clifford Chance has also recently published an insightful report on "Responsible AI in Practice", which examines public attitudes to many of the issues discussed in this article.