Global AI Safety Initiative Launched at Seoul Summit
The Seoul AI Summit 2024, co-hosted by South Korea and the United Kingdom, marked a significant step forward in the global effort to ensure the safe, innovative, and inclusive development of artificial intelligence (AI). Gathering leaders from countries including Australia, Canada, and the United States, alongside the European Union and major AI companies, the summit culminated in the Seoul Declaration, a comprehensive agreement emphasizing international cooperation on AI safety.
The Seoul Declaration, endorsed by leaders from ten nations and the European Union, builds on the groundwork laid at the AI Safety Summit held at Bletchley Park, UK, in November 2023. The declaration underscores the intertwined goals of AI safety, innovation, and inclusivity. It recognizes the importance of interoperability between AI governance frameworks, advocating a risk-based approach that maximizes benefits while addressing potential risks.
The declaration calls for enhanced international cooperation to advance human-centric AI that supports democratic values, human rights, and fundamental freedoms. It also stresses the need to bridge digital divides, promote sustainable development, and foster socio-cultural and gender diversity in AI ecosystems. Additionally, it highlights the importance of cross-border and cross-disciplinary collaboration among governments, the private sector, academia, and civil society.
A pivotal outcome of the summit was the Frontier AI Safety Commitments, signed by 16 leading AI companies from around the world, including Amazon, Google, Meta, Microsoft, and Samsung. The agreement is a notable milestone: companies headquartered in regions as diverse as the US, China, and the UAE pledged to uphold stringent safety measures in AI development.
Under the commitments, these companies will publish safety frameworks detailing how they measure and mitigate AI risks. They have agreed not to develop or deploy AI models whose risks cannot be adequately mitigated, including by implementing “kill switches” to shut systems down in the event of a catastrophic failure, a safeguard aimed at scenarios such as the misuse of AI for automated cyberattacks or bioweapon development.
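The “kill switch” language is shorthand for an operational safeguard rather than any single mechanism, and the commitments do not prescribe an implementation. Purely as an illustration, the minimal Python sketch below pairs a toy serving loop with a monitor thread that halts it once an external risk signal crosses a threshold; every name in it (ModelServer, risk_monitor, the polling design) is invented for this example and reflects no signatory’s actual system.

```python
import threading
import time
from typing import Callable

class ModelServer:
    """Toy stand-in for a deployed AI system (illustrative only)."""

    def __init__(self) -> None:
        self._running = threading.Event()
        self._running.set()

    def is_running(self) -> bool:
        return self._running.is_set()

    def serve(self, prompt: str) -> str:
        # Refuse all requests once the kill switch has been tripped.
        if not self.is_running():
            raise RuntimeError("model halted by kill switch")
        return f"response to: {prompt}"

    def kill(self) -> None:
        # One-way stop: no new requests are served after this point.
        self._running.clear()

def risk_monitor(server: ModelServer,
                 risk_signal: Callable[[], float],
                 threshold: float = 0.9,
                 poll_seconds: float = 1.0) -> None:
    """Poll an external risk estimate and trip the kill switch past a threshold."""
    while server.is_running():
        if risk_signal() >= threshold:
            server.kill()
            break
        time.sleep(poll_seconds)

if __name__ == "__main__":
    server = ModelServer()
    # A constant dummy signal standing in for real risk evaluations.
    monitor = threading.Thread(target=risk_monitor, args=(server, lambda: 0.95))
    monitor.start()
    monitor.join()
    try:
        server.serve("hello")
    except RuntimeError as exc:
        print(exc)  # -> model halted by kill switch
```

In practice, the hard part lies upstream of any such pattern: defining credible risk signals and thresholds, which is what the published safety frameworks are meant to pin down.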
Another significant achievement of the summit was the agreement to establish an international network of AI safety institutes. Modeled after the UK’s AI Safety Institute, this network aims to accelerate AI safety research and promote a unified understanding of AI safety standards and practices. Countries including Australia, Canada, France, Germany, Italy, Japan, Singapore, South Korea, the UK, and the US have signed on to this initiative.
This network will facilitate collaboration on safety research, share best practices, and support the development of policy and governance frameworks. It aims to ensure that AI technologies are developed and deployed in ways that are safe, trustworthy, and aligned with global standards.
Transparency and accountability are core principles of the new commitments. Signatories pledge to maintain accountable governance structures and to publicly disclose their approaches to frontier AI safety. The commitments also call for regular reporting on AI systems’ capabilities, limitations, and risk-management practices, fostering a culture of openness within the AI industry.
Global leaders and AI experts emphasized the importance of these commitments. UK Prime Minister Rishi Sunak highlighted the unprecedented nature of the agreement, noting that it sets a global standard for AI safety and paves the way for future advancements. South Korea’s Minister of Science and ICT, Lee Jong-Ho, stressed the necessity of international cooperation to manage AI risks and maximize its benefits.
The summit also tackled broader issues such as job security, copyright, and inequality. Fourteen companies, including Alphabet’s Google and Microsoft, signed a separate pledge to adopt methods such as watermarking to identify AI-generated content, improving transparency and accountability around AI outputs. The pledge also committed signatories to working to minimize AI’s negative impact on employment and to supporting vulnerable social groups.
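Production watermarking schemes, such as statistical token-biasing methods or Google DeepMind’s SynthID, are far more robust than anything that fits in a few lines. Purely to make the idea concrete, the toy Python sketch below tags generated text with invisible zero-width characters and then recovers the tag; it is trivially strippable and is not how any signatory actually marks content.

```python
from typing import Optional

# Zero-width characters used as invisible 0/1 bit carriers.
ZW0, ZW1 = "\u200b", "\u200c"

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Append the tag as invisible zero-width bits after the visible text."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if bit == "1" else ZW0 for bit in bits)

def detect_watermark(text: str) -> Optional[str]:
    """Recover the tag if the zero-width signature is present and intact."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    if not bits or len(bits) % 8 != 0:
        return None
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    try:
        return data.decode("utf-8")
    except UnicodeDecodeError:
        return None

marked = embed_watermark("A paragraph produced by a language model.")
print(detect_watermark(marked))               # -> AI
print(detect_watermark("Human-written text."))  # -> None
```

A scheme like this survives copy-and-paste but not retyping or text normalization, which is one reason deployed approaches embed the signal statistically in the content itself rather than in invisible characters.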
AI experts at the summit called for further steps to regulate AI, arguing that voluntary commitments, while significant, are not sufficient. Francine Bennett of the Ada Lovelace Institute advocated mandatory safety standards, and Max Tegmark of the Future of Life Institute stressed that AI services should be required to meet rigorous safety benchmarks before market deployment.
The Seoul AI Summit has set a strong foundation for global AI governance. The next summit in the series, the AI Action Summit, scheduled to be held in France in 2025, is expected to build on these discussions and further enhance international cooperation on AI safety.