Sasha Pearson examines how the use of military AI, operating with limited human oversight, is challenging the rules-based international order and the credibility of multilateralism.
The use of AI in military operations has moved from experimental capability to operational reality. The February 2026 targeted killing of Iran’s supreme leader, Ayatollah Ali Khamenei, in a joint US-Israeli strike, was reportedly based on high-resolution intelligence and real-time location analysis. Yet this was not the first operation to rely on advanced intelligence fusion, persistent monitoring, and precision targeting supported by data-driven systems: the operational significance of AI integration has already been evidenced in both Ukraine and Gaza.
The governance gap in military AI
As states integrate AI into targeting, intelligence, logistics and autonomous systems, the pace and scale of decision-making in warfare are changing in ways that outstrip existing governance frameworks. This makes dedicated state-level efforts to enhance cooperation all the more necessary, particularly given that broader AI governance frameworks have largely sidestepped the military domain. For example, the EU AI Act, the most comprehensive AI regulation to date, explicitly exempts national security and defence applications from its scope.
Under international humanitarian law (IHL), human judgment is mandatory to comply with its principles of distinction, proportionality, precaution, and accountability. This requirement applies to any technology in warfare, including AI, though its application becomes inherently complex when algorithms filter targets, generate strike recommendations or operate in dynamic combat environments with minimal human intervention. Ensuring meaningful human control is increasingly viewed as central, yet its operational definition remains politically contested.
The UN’s signal to military AI governance
The UN recognises the challenge of military AI. In September 2025, UN Secretary-General António Guterres warned that “humanity’s fate cannot be left to an algorithm” and stressed that humans must retain authority over life-and-death decisions. Four months later, the United Nations General Assembly adopted a resolution addressing the use of AI in the military domain and its implications for international peace and security. Though non-binding, it marked a significant political signal: military AI governance is no longer a peripheral arms-control debate but an urgent matter of global stability.
The resolution reflects a broader shift in UN engagement, moving beyond a narrow focus on lethal autonomous weapons systems (LAWS) to promote more inclusive dialogue among states and stakeholders on the wider military applications of AI. In this respect, the emerging governance challenges around AI echo earlier debates surrounding nuclear technology during the early Cold War and the subsequent Strategic Arms Reduction Treaty (START) negotiations. Both involve dual-use technologies and arms race dynamics, though AI differs in important ways: it is already being actively deployed and tested, and its development remains far more opaque than nuclear capabilities.
US and China sideline multilateral consensus
The trajectory of global conflicts risks outpacing multilateral action. Governance efforts face structural limits. Battlefield pressures favour speed, operational advantage and technological edge, while international processes move slowly and rely on political consensus. This widening gap raises concerns about escalation risks, reduced human oversight and the rapid diffusion of increasingly accessible AI-enabled military capabilities.
Crucially, the direction of international governance debates will be shaped by the strategic rivalry between the United States and China. Both view AI dominance as central to long-term military and economic power. Neither power appears willing to cede technological advantage in the name of precautionary restraint, and both are “increasingly detached from major international dialogues” – a fact that became evident at the February 2026 REAIM Summit in Spain, where both the US and China refused to sign the non-binding declaration. While the Biden administration spearheaded the initial global dialogue on military AI governance, the Trump administration has since ‘torpedoed’ such efforts.
In practice, global governance may proceed without the full commitment of the very states whose participation is most essential. This raises the question of whether such multilateral agreements can have any impact on the ground when the most powerful nations openly sideline them.
Looking ahead: The fragile future of global AI governance
For the European Union and its partners, the implications are not only technical or ethical, but geopolitical. Military AI intersects with questions of strategic autonomy, alliance cohesion and technological competitiveness. Used responsibly, AI can strengthen early warning systems, enhance civilian harm mitigation and improve crisis response. Yet Europe faces a delicate balancing act: championing human-centric AI and legal accountability while avoiding strategic dependency or technological marginalisation in an increasingly polarised global order.
Ultimately, the question is no longer simply how to preserve a ‘rules-based international order,’ but whether meaningful multilateral governance can survive in a world increasingly shaped by the zero-sum logic of competing superpowers and their technologically armed allies. If the leading global players are unwilling to bind themselves to common limits, the governance of military AI may reflect power politics more than shared principles. In that case, the possibility to shape AI in accordance with international humanitarian law and collective security may narrow not because norms are absent, but because the will to uphold them is eroding. As affirmed by the UN Secretary-General, “The window is closing to shape AI – for peace, for justice, for humanity. We must act without delay”.