In a historic move, the UN Security Council convened a session on Tuesday to address the implications of artificial intelligence (AI) on international peace and security for the first time. Chaired by UK Foreign Secretary James Cleverly, the session saw the participation of UN Secretary-General António Guterres, representatives from the 15 member countries, and two AI experts: Jack Clark, co-founder of the prominent AI company Anthropic, and Professor Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance.
Highlighting the urgent need for global oversight, UN Secretary-General António Guterres stressed the necessity of establishing a worldwide body to monitor advances in AI, a technology that has ignited both concerns and hopes. Guterres cautioned against the potential misuse of AI by criminals, terrorists, and other malicious actors, warning of the widespread destruction, trauma, and profound psychological harm that could result.
UN Security Council Voices the Need for a “Global Watchdog” Restricting AI Use
James Cleverly, Britain's Foreign Secretary, underscored the far-reaching impact of artificial intelligence and advocated for the active involvement of diverse international actors from various sectors. Cleverly called for global regulation of AI, tied to principles supporting freedom, democracy, human rights, the rule of law, security (including physical security, protection of property rights, and privacy), and trustworthiness.
To address the challenges posed by AI, Guterres proposed the establishment of a United Nations oversight body, modelled on agencies such as the International Atomic Energy Agency, that would set, monitor, and enforce rules for AI. Comprising experts in the field, the proposed agency would provide valuable knowledge to governments and administrative bodies that lack the technical expertise to address the risks associated with AI effectively.
Guterres also emphasised the significance of concluding a legally binding agreement by 2026 banning the use of AI in autonomous weapons of war. While the path to achieving such governance remains challenging, the majority of diplomats expressed support for establishing a global governing mechanism and a set of international rules.
Insights from the Experts: Opportunities and Warnings
During the session, Jack Clark and Professor Zeng Yi highlighted both the severe threats and significant opportunities associated with AI. Clark warned of the dangers arising from a lack of understanding, likening it to constructing engines without comprehending the science of combustion. He advocated for collective government involvement in AI development to ensure appropriate state capacity.
Professor Zeng Yi emphasised the United Nations’ pivotal role in establishing a comprehensive AI development and governance framework, aiming to safeguard global peace and security. Additionally, he voiced concerns about the risks of human extinction due to AI’s exploitation of human weaknesses and stressed the need for protective measures.
Reactions from Russia and China: Differing Perspectives on AI Governance
Russia, deviating from the prevailing opinion of the Council, argued that not enough was yet known about the risks AI poses to treat it as a source of threats to global stability. It questioned whether the Council, entrusted with the responsibility of upholding international peace and security, should be engaging in discussions about AI at all. According to Russia's Deputy U.N. Ambassador Dmitry Polyanskiy, what is required is a professional and scientific discussion based on expertise, which may take several years, and such discussions are already taking place on specialised platforms.
Describing AI as having both positive and negative aspects, China's UN Ambassador Zhang Jun backed the United Nations assuming a central coordinating role in crafting guiding principles for AI. Nonetheless, he raised concerns about certain "developed countries" seeking to assert control over the technology, and argued that international laws and norms on AI should remain flexible enough to allow countries to set their own regulations at the national level.
The session shed light on AI's growing power and its future implications. Professor Zeng Yi's closing remarks underscored the importance of responsible AI regulation, stressing that AI should assist rather than replace human decision-making, and should never pretend to be human or mislead humans.