Global Adversaries and Allies Reach First Agreement on Containing AI Risks
World governments hash out a set of guiding principles for the future of AI.
November 2, 2023
(The Washington Post) -- Governments from six continents agreed Wednesday to a broad road map to limit the risks and harness the benefits of artificial intelligence, coming together at Bletchley Park, the symbolic birthplace of the digital era, for the first clear international declaration on a potentially world-altering technology.
At a time when countries and regions are pushing through varying regulations on AI, the negotiated statement – known as the Bletchley Declaration – saw global adversaries the United States and China hash out a series of guiding principles with the European Union, Britain, and 24 other nations. The countries jointly called for policies across borders to prevent risks ranging from disinformation to the potential for "catastrophic harm either deliberate or unintentional."
They agreed to support "internationally inclusive" research on the most advanced future AI models, and work toward safety through existing international organizations – including the Group of Seven, Organization for Economic Cooperation and Development, Council of Europe, United Nations, and the Global Partnership on AI. They also agreed to work through other "relevant initiatives," a seeming nod to dueling AI safety institutes announced in recent days by Britain and the United States.
The agreement came near the start of the two-day AI Safety Summit that has brought digital ministers, top tech executives, and prominent academics to the once-secret home of the famous World War II code breakers who decrypted Nazi messages. Tesla chief executive and X owner Elon Musk and officials from China, Japan, and European nations were in attendance. Vice President Harris is expected to arrive Thursday, after the White House rolled out a raft of new AI initiatives at a competing London event.
The communiqué amounted to a statement of mission and purpose, and did not contain specifics on how global cooperation could take shape. But organizers announced another summit, six months from now, in South Korea, followed by another in France six months after that.
The declaration comes as the United States, European Union, China and Britain are taking varying approaches on AI regulation, resulting in a patchwork of current or proposed rules with significant differences between them. The statement Wednesday recognized that "risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy, and responsible AI."
As the summit began, US Commerce Secretary Gina Raimondo and Wu Zhaohui, China's vice minister of science and technology, sat next to each other onstage, where they took turns delivering speeches about their responses to AI risk. The summit marked a rare meeting of high-level US and Chinese officials, amid heightened economic tensions and intense technological competition.
Wu called AI governance "a common task faced by humanity," saying the Chinese government was committed to an enhanced dialogue about how to assess the risks of AI and ensure the technology remains under human control.
But not all delegates were pleased China was included in the summit. Michael Kratsios, managing director of Scale AI and chief technology officer of the United States under President Donald Trump, said he was "extremely disappointed" that the Chinese government was included.
"To believe that they're a credible player and that what they say they'll actually do ultimately is a huge mistake," he said.
The decision to issue a joint communiqué at the start – as opposed to the end – of the summit suggested that leaders had reached the limit of what they could agree to before the event, with in-person meetings unlikely to raise the bar significantly.
"Sadly we can't just sit back and relax," Jonathan Berry, the British AI minister, told The Washington Post. "Now we have to move on to: What are the real implications of this?"
Prime Minister Rishi Sunak has focused the summit on the riskiest uses of AI, with a particular emphasis on doomsday scenarios, such as how the technology could be abused to deploy nuclear weapons or create biological agents. At the event, global leaders emphasized the immense power of the technology.
Michelle Donelan, Britain's secretary of state for science, innovation and technology, began the event by telling attendees that they are the "architects of the AI era," who have the power to shape the future of the technology and manage its potential downsides.
King Charles III compared AI advances to humans' "harnessing of fire" in a video statement to the delegates. He likened the need for global cooperation on AI to the fight against climate change: "We must similarly address the risk presented by AI with a sense of urgency, unity and collective strength."
At Wednesday's event, Dario Gil, IBM senior vice president and director of research, criticized use of the phrase "frontier model," a term that signifies advanced systems but is not grounded in AI research.
"As we go forward, we should be more scientific, more rigorous with the language," he said.
As the summit began Wednesday, the White House hosted its own counter-programming about 50 miles away in London, where Harris delivered a speech at the US Embassy on the Biden administration's plans to address AI safety concerns. Attendees included former British prime minister Theresa May and Alondra Nelson, the former acting director of the White House Office of Science and Technology Policy.
As international policymakers – especially in the European Union – rush to develop new AI legislation, the White House is pushing for the United States to lead the world not just in AI development but also regulation. In stark contrast to the Safety Summit agenda, the vice president urged the international community to address a full spectrum of AI risks, not only catastrophic threats such as weapons.
"Let us be clear there are additional threats that also demand our action," she said. "Threats that are currently causing harm and which to many people also feel existential."
Standing at a lectern with the US presidential seal, Harris listed ways AI is already upending people's lives. She raised concerns about how facial recognition leads to wrongful arrests or how fabricated explicit photos can be used to abuse women.
At Bletchley Park, some attendees said they heard echoes of the vice president's remarks in panel sessions, which were closed to the media. Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, said government officials at one of her panels focused on current harms, including the use of automated systems in criminal justice and the risk of misinformation.
"Overall, ministers seem to be agreeing that the frontier risks that the summit was first scoped to focus on are indeed important – but that they must also tackle pressing issues around AI impacting people's lives right now," she said.
Max Tegmark, president of the Future of Life Institute, said there was a "surprising consensus" that attendees could address both current and existential threats of AI. Earlier this year, the Future of Life Institute led an open letter calling for a pause in the training of advanced AI systems, which was signed by Musk and veteran AI scientists.
Harris also touted a new US AI safety institute within the Commerce Department that will develop evaluations known as "red-teaming" to assess the risks of AI systems, just days after Sunak announced a similar organization in Britain. The US institute is expected to share information and research with its British counterpart.
Harris also unveiled a draft of new regulations governing federal workers' use of artificial intelligence, which could have broad implications throughout Silicon Valley.
Harris's speech built on the Biden administration's Monday executive order, which invoked broad emergency powers to put new guardrails on the companies building the most advanced artificial intelligence. The order marked the most significant action the US federal government has taken so far to rein in the use of artificial intelligence, amid concerns that it could supercharge disinformation, exacerbate discrimination and infringe on privacy.
Yet there are limits to how much the Biden administration can accomplish without an act of Congress, and other legislatures around the world are outpacing the United States in developing AI bills. The European Union is expected to reach a deal by the end of the year on legislation known as the EU AI Act.
Asked about Harris's focus on the near-term risks of AI – vs. the summit's apparent focus on the longer-term risks – Matt Clifford, Britain's lead adviser on the summit, insisted the event "is not focused on long-term risk. This summit is focused on next year's models."
Pressed on the decision by the United States to announce its own AI safety institute days after Sunak announced the creation of one in Britain, Clifford said the two bodies would work closely together.
"The US has been our closest partner on this," he said.