Diplomats examine AI risks to peace and potential for global oversight

British diplomats are leading a push at the U.N. that could be a starting point for a multilateral approach to regulating AI.

A girl makes friends with a robot at a market in Osaka, Japan. (AN/Andy Kelly/Unsplash)

Amid warnings that humanity faces new existential dangers similar to the risk of nuclear war, the United Nations' most powerful arm held its first-ever meeting on the potential threats of artificial intelligence to international peace and security.

Russia, however, once again disrupted the U.N. Security Council's work – in the same week that it single-handedly blocked key deals involving Syria and Ukraine – by challenging the premise that the council is an appropriate forum for weighing AI oversight.

The U.K. convened the session on Tuesday as part of its monthlong council presidency in a push for nations to adopt a multilateral approach toward addressing AI’s serious security risks.

The session reflected a belief among top diplomats and scientists that this fast-developing technology, and the vast reams of data it processes, pose serious risks. Diplomats, AI experts and even business leaders told the council that the major technology companies are incapable of regulating the systems they're unleashing.

“We are here today because AI will affect the work of this council,” U.K. Foreign Secretary James Cleverly said.

“It could enhance or disrupt global strategic stability. It challenges our fundamental assumptions about defense and deterrence," he said. "It poses moral questions about accountability for lethal decisions on the battlefield.”

AI and the risk of disinformation could aid “the reckless quest for weapons of mass destruction” by nations and non-government forces alike, Cleverly said. “That’s why we urgently need to shape the global governance of transformative technologies. Because AI knows no borders."

Prompted by the AI-related advances in fields ranging from healthcare to security and agriculture, Britain’s Prime Minister Rishi Sunak plans to hold a U.K.-hosted summit on AI later this year to encourage “a truly global multilateral discussion.”

U.N. Secretary-General António Guterres told the council it's clear that AI will have an impact on every area of our lives and the U.N.'s work, but military and non-military AI applications could seriously disrupt global peace and security.

He noted that AI can be used to turbocharge global development, health and education or to find patterns of violence and boost peacekeeping, but it also can amplify bias, reinforce discrimination, and enable more authoritarian surveillance.

That's because AI simplifies malicious activity. Programs like WormGPT, a version of ChatGPT marketed to hackers without ethical safeguards, make it easier to create and spread malicious software.

The widespread and increasingly cheap use of machine learning could heighten those security risks. Such threats have prompted more and more calls for global regulations and standards governing AI technologies.

"AI models can help people to harm themselves and each other, at massive scale," Guterres said. "The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale."

Regulators falling behind the push for AI development

A growing number of countries are proactively pushing for the advancement of modern technologies, particularly AI, by implementing strategies like subsidies and accelerators.

Yet national regulators can't keep up with the dizzying pace of new AI tools and their mind-boggling implications for virtually everything under the sun, according to experts at the council's session.

Zeng Yi, a professor at the Chinese Academy of Sciences' Institute of Automation, suggested the council create a working group on AI challenges to peace and security.

Generative AI systems are information processing tools that seem intelligent but "are not truly intelligent," said Zeng, who also is deputy director of the Research Center of Brain-inspired Intelligence and co-directs the China-U.K. Research Center for AI Ethics and Governance.

He told the council in a video briefing that humans must maintain control over all AI systems, particularly weapons systems. Both AI development and regulation could seriously affect intergovernmental cooperation, for better or worse, but business leaders said politicians are somewhat powerless to act when the insider know-how and advances are spearheaded by companies.

"Across the world private sector actors are the ones that have sophisticated computers and large pools of data and capital resources to build these systems and, therefore, private sector actors seem likely to define the development of these systems," Jack Clark, co-founder of AI company Anthropic, told the council in a video briefing.

In May, OpenAI's leaders appealed for global oversight of superintelligent AI, calling for an organization modeled after the International Atomic Energy Agency to set rules for the development and use of AI.

Although broader calls for a pause on AI development faded, Guterres picked up on the idea and earlier this month endorsed a proposal to create a new global AI watchdog agency similar to the IAEA.

He said he is appointing a high-level advisory board for AI to determine the best options for global AI governance by the end of the year but supports creating a new U.N. agency “inspired by such models as" the IAEA, International Civil Aviation Organization or Intergovernmental Panel on Climate Change.

Addressing the impact and possibilities of AI is nothing new at the U.N. The International Telecommunication Union held the first annual AI for Good Summit in 2017, and UNESCO began addressing the ethical concerns of AI in 2019.

In June, the European Parliament adopted its negotiating position on the EU's AI Act. In 2021, NATO approved an AI strategy. The U.S. and China, as the two frontrunners in AI technology, are increasingly discussing its regulation.

The problem is acknowledged – with one exception

Most nations emphasized AI as a double-edged sword that requires an international framework for potential regulation.

"While this technology advances at a mind-blowing pace, we are caught between fascination and fear, weighing benefits and worries, anticipating applications that can transform the world but also aware of its other side – the dark side – the potential risks that could impact our safety, our privacy, our economy and our security," Albania's U.N. Ambassador Ferit Hoxha told the council.

Many speakers also highlighted the link to autonomous weapons systems in their statements and spoke out in favor of an international regulatory framework. But here, too, no final compromise is in sight despite Guterres' call for negotiators to work out a legally binding agreement on autonomous weapons systems by 2026.

“Questions of governance will be complex," he acknowledged.

In a week when Russia blocked the re-authorization of Syrian aid deliveries through a Turkish border crossing, then halted a major grain deal for Ukrainian food exports, Moscow once again played the spoiler.

Russia's Deputy U.N. Ambassador Dmitry Polyanskiy singled out AI as "one of the most modern groundbreaking technologies," but dismissed its significance for the council's mandate to deal with matters of international peace and security.

“Its practical significance – the potential of its application, not to mention the hypothetical impact on political processes – is yet to be fully assessed," he said. "Specific arguments in support of the premise of some kind of organic link between AI and issues of international peace and security, at least for now, are not present."
