
Nations back first call for 'responsible' use of military AI

The agreement emerged from high-level political discussions among 80 countries on the sidelines of an international summit this week in the Netherlands' seat of government.

The use of AI has been tested to keep the Boeing V-22 Osprey's complex tilt-rotor aircraft flying longer. (AN/Simon Fitall/Unsplash)

THE HAGUE (AN) – Representatives of more than 60 nations endorsed a call to action on the “responsible” use of artificial intelligence in the military domain. The first-of-its-kind agreement is meant to signal a new era of international negotiations on the existential risks of a global AI-driven arms race.

The agreement emerged Thursday from high-level political discussions among 80 countries on the sidelines of an international summit this week in the Netherlands' seat of government. Signatories include the major military powers of China and the United States, as well as the full roster of the NATO alliance. Russia was not invited to attend because of its nearly yearlong invasion of Ukraine.

The document published by the Dutch foreign ministry is a non-binding statement of intent. Its 25-point call to action emphasizes the need for military AI systems, which are being adopted at a rapid clip and are already ubiquitous on modern battlefields, to be employed in “full accordance” with international legal obligations and in a way that doesn't “undermine international security, stability and accountability."

Military applications of AI extend well beyond weapon systems, into areas such as logistics, surveillance and cybersecurity.

“When it comes to warfare technology, you could say that for the first time in history, we are actually ahead of the future. But only just,” Dutch Foreign Minister Wopke Hoekstra told the conference in his closing speech. “If you compare the speed of development of ChatGPT to our track record of making agreements and reaching decisions, we’ve actually got no time to lose.”

The U.S. Department of Defense issued its own declaration on the use of AI systems in the military arena. It notes that the use of AI in armed conflict must be “in accord” with international humanitarian law and incorporate clear mechanisms for taking responsibility, including “human accountability."

It excludes, however, any language that would ban the use of fully autonomous weapons systems, a key ask of human rights groups.

"You are creating a technology that can encode biases, targeting Black people, Jews and refugees," Amnesty International Director Agnès Callamard said of the dangers of allowing lethal weapons to target people and execute kill commands without human oversight.

"Wars are dirty, and biases are a part of warfare," she said. "I just don't want us to think this is a game. Even if drones are precise, their use is not compliant with international law."

Blank pages, so far

Despite the nonbinding nature of the documents that emerged from the two-day conference, organizers hope the opening of a new forum for international dialogue – set to reconvene in South Korea, a co-host, next year – will provide a shot in the arm to a global regulatory debate that has otherwise stalled.

International mechanisms for regulating the use of AI in the military, particularly so-called lethal autonomous weapons systems (LAWS), have yet to produce any concrete outcomes, leaving blank pages in the law books meant to regulate AI military capabilities.

The primary forum for discussing LAWS is under the auspices of the United Nations Convention on Certain Conventional Weapons, or CCW, a treaty negotiated in Geneva that entered into force in 1983 and aims to ban or restrict the use of some conventional weapons that cause excessive or indiscriminate harm. But the prospects of developing a consensus on their prohibition are seen as far off.

“The main umbrella for us is the CCW. It is really central to the global discussion,” said Marjolijn Van Deelen, head of the European Union’s security policy unit focused on global disarmament and arms control. “But [discussions] are going very slow. Some say too slow, and I think I would agree.” Negotiations under the CCW are set to resume next week.

Autonomous drones are coming

Humanitarian organizations like Amnesty International, Human Rights Watch and the Campaign to Stop Killer Robots have criticized the conference for the heavy presence of arms manufacturers, including an appearance in the opening panel by Lockheed Martin’s vice president Steven Walker.

The lack of any explicit focus on banning LAWS, widely regarded as the starting point for any international agreement and a critical demand of human rights watchdogs, was another “missed opportunity,” the Campaign to Stop Killer Robots said.

Though often discussed in stark futuristic or even sci-fi terms, these automated systems may in reality already be active on the battlefield.

“Machines should not be making life-and-death decisions about humans,” said Wendell Wallach, director of the Artificial Intelligence and Equality Initiative at the Carnegie Council for Ethics in International Affairs, and one of the first to raise the alarm about automated weapons systems.

“If we have not already seen automated weapons systems, we are probably going to see drones in Ukraine over the next year that are autonomous and are being utilized by both sides,” said Wallach.

“We may have already witnessed that, we just don’t know," he said. "The difference between an autonomous weapon system and a non-autonomous weapon system may be nothing more than a little bit of software, or even a switch.”

Among the top priorities of the U.N.'s New Agenda for Peace is U.N. Secretary-General António Guterres' call for a ban on autonomous weapons systems as part of a larger plan to bring disarmament and arms control “back to the center."

“No agenda for peace can ignore the dangers posed by new technologies,” Guterres said. “Human agency must be preserved at all costs.”