The AI Seoul Summit

On May 21–22, 2024, the Republic of Korea and the United Kingdom cohosted the second AI summit following the United Kingdom’s launch of the series in 2023. The “AI Seoul Summit,” as it was billed, gathered leaders from government, industry, and civil society to discuss global collaboration on AI safety, innovation, and inclusivity. This “mini summit” took place both virtually and in Seoul over two days of back-to-back events. Overall, the event succeeded in its goal of sustaining momentum after the landmark 2023 UK AI Safety Summit. Technically there were two conferences, the AI Seoul Summit and the AI Global Forum, but they were held concurrently, in the same location, and with largely the same participants.

Q1: What happened at the first UK AI Safety Summit?

A1: The UK AI Safety Summit was an international convening of senior government officials, tech executives, civil society, and researchers to discuss safety and regulation for advanced AI models. Its objective was to monitor frontier AI models, discuss the risks and benefits of such models, and highlight areas for global collaboration to ensure responsible AI development. To keep pace with the rapidly changing landscape of AI, the participants determined that follow-on summits would be held every six months, with the second in South Korea in May 2024 and the third in France sometime in 2025.

Q2: What were the major outcomes of the 2023 UK AI Safety Summit?

A2: While AI safety has been a key focus of leading AI researchers for more than a decade, senior policymakers have only turned to the issue in the last few years. The UK AI Safety Summit was a “global first” in demonstrating powerful international mobilization around AI safety, marking a distinct shift from the prior focus on “AI ethics” in policy circles. On November 1–2, 2023, high-profile industry executives, civil society representatives, researchers, and government officials from around the world convened at Bletchley Park—the birthplace of the digital, programmable computer—to discuss the safety and policy implications of the world’s most advanced AI models.

The most significant diplomatic outcome of the UK summit was the Bletchley Declaration, a joint commitment signed by leaders from 28 countries and the European Union in what UK prime minister Rishi Sunak called a “landmark achievement” for global AI diplomacy. The document addresses the urgent need for global cooperation on safe, responsible, and inclusive AI development and calls on developers to submit frontier AI models for safety testing. After the Bletchley Declaration, AI safety was firmly entrenched as a top AI policy concern.

More tangibly, the first AI safety summit launched new initiatives to build substantive government capacity for AI safety: both the United States and the United Kingdom announced the creation of AI safety institutes, and the summit established an advisory panel of international AI experts tasked with producing a two-part “State of the Science” report on frontier AI. The interim report was published to mark the beginning of the AI Seoul Summit.

Q3: What was the AI Seoul Summit agenda?

A3: Like the UK AI Safety Summit, the AI Seoul Summit took place over two days and followed the same invitation list, including inviting China only to the ministerial meeting.

Day one (May 21) virtually hosted the leaders’ session, cochaired by South Korean president Yoon Suk Yeol and UK prime minister Rishi Sunak. This event convened the same governments as the first summit as well as a select group of AI industry representatives, who presented the safety measures they had taken as outlined in the Bletchley Declaration.

Day two (May 22) saw an in-person Digital Ministers meeting in Seoul, cohosted by South Korea’s Ministry of Science and Information and Communication Technology (MSIT) and the UK Department for Science, Innovation, and Technology.

Also on May 22, the South Korean government hosted the AI Global Forum, an all-day event organized by the Ministry of Foreign Affairs. The forum ran concurrently with day two of the AI Seoul Summit and in the same location.

Q4: What did the 2024 AI Seoul Summit achieve?

A4: The summit reinforced international commitment to safe AI development and added “innovation” and “inclusivity” to the agenda of the AI summit series. In his speech opening the summit, South Korean president Yoon Suk Yeol said, “the AI Seoul Summit, which will expand the scope of discussion to innovation and inclusivity . . . will offer an opportunity to consolidate our efforts and promote AI standards and governance at the global level.” 

As recently as late April, skeptics had suggested that the AI Seoul Summit was on track to be a diminished and largely irrelevant successor to the original UK AI Safety Summit. One criticism, which has some merit, is that the addition of topics other than AI safety diluted what made the UK AI Safety Summit unique among a crowded landscape of international AI diplomatic initiatives. Attendance was indeed lower and news coverage somewhat reduced, but that is to be expected of an event billed as a “mini summit.”

However, failing to duplicate the global sensation of the first AI summit does not mean that the Seoul summit was not important. In fact, there were at least two substantive outcomes: first, the number of AI safety institutes among advanced democratic countries continues to grow, meaning that global government capacity on AI safety will soon increase dramatically. In addition to the original U.S. and UK institutes, Japan, South Korea, and Canada have now announced that they will establish their own AI safety institutes. For its part, the European Union has suggested that the European Commission AI Office, which was established as part of the EU AI Act, will serve the function of an AI safety institute for the European Union. At the AI Seoul Summit, the Korean and UK organizers secured a statement of intent signed by 10 countries plus the European Union for these institutes to cooperate as a network. If the UK AI Safety Summit’s achievement was establishing the idea of an AI safety institute, the AI Seoul Summit marks the moment that the idea reached significant international scale as a cooperative effort.

Q5: What comes next?

A5: France is slated to hold the successor to the AI Seoul Summit in early 2025 under the new title “AI Action Summit.” While the French government has not yet indicated an official agenda for the event, officials told CSIS that AI safety will be only one of five topics for discussion. Clues to France’s potential agenda come from French president Emmanuel Macron’s recent remarks that France was engaged in an AI “battle” across “five major areas: talent, infrastructure, uses, investment, and governance.” This likely indicates that those issues will feature more prominently than safety, which might be relegated to a subset of the governance conversation.

Gregory C. Allen is director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Georgia Adamson is a research associate with the Wadhwani Center for AI and Advanced Technologies at CSIS.
