Ukraine’s military has contracted 1.8 million drones worth nearly UAH 147 billion for 2024-2025, a procurement surge that captures a broader global reality: cheap, mass-produced unmanned systems and the AI software guiding them are advancing faster than global negotiations and existing policy frameworks can adapt. From Kyiv’s assembly lines to Pentagon policy offices to stalled United Nations negotiations, the gap between battlefield technology and the rules meant to govern it keeps widening.
Ukraine’s Drone Buildup Sets the Scale
On 29 October 2024, the Ukrainian defence ministry announced that, in collaboration with the Ministry of Digital Transformation, it had contracted 1.8 million drones totaling nearly UAH 147 billion. The order spans deep-strike, FPV kamikaze, and reconnaissance categories, reflecting how a single belligerent state now treats disposable unmanned systems as a core munition rather than a specialty asset. At those volumes the average contract price works out to roughly UAH 82,000 per drone, on the order of $2,000 at late-2024 exchange rates, and the simplest FPV airframes likely cost far less, cheap enough to make drone warfare accessible well beyond traditional military powers. The mix of long-range and short-range platforms, meanwhile, signals an intent to saturate the battlespace with sensors and loitering munitions.
That accessibility is precisely what makes the procurement significant beyond the Ukrainian front lines. When a nation at war can field nearly two million drones across multiple mission types in a single budget cycle, the signal to smaller states and non-state groups is clear: effective airpower no longer requires fighter jets or cruise missiles. The same commercial components powering FPV kamikaze drones, including off-the-shelf flight controllers and machine-vision chips, are available on global markets. The question is no longer whether cheap autonomous weapons will spread, but how fast and to whom, and whether any shared standards will emerge before autonomous swarms and algorithmic targeting become routine features of regional conflicts.
Washington Updates Its Playbook, Slowly
The U.S. Department of Defense has tried to keep pace on the policy side. In an attempt to clarify the rules for machine decision-making in combat, the Pentagon revised DoD Directive 3000.09, Autonomy in Weapon Systems, the military’s primary internal rulebook for when and how machines can apply lethal force. The update is framed as a way to address rapid AI advances and to set policy expectations for human judgment and oversight over the use of autonomous functions, including in targeting-related decisions. It also sets out review processes for new systems and emphasizes testing to ensure that autonomous functions behave predictably under battlefield conditions.
Yet a directive governing American forces does nothing to constrain adversaries or commercial drone suppliers operating outside U.S. jurisdiction, and it cannot resolve broader strategic pressures. A Congressional Research Service brief notes that the U.S. position at the UN Convention on Certain Conventional Weapons (CCW) has consistently rejected a preemptive ban on lethal autonomous weapons systems, arguing that properly designed autonomy could improve compliance with the laws of armed conflict by reducing human error. That stance reflects confidence in American technological safeguards, but it also legitimizes similar programs by other states, which may not share the same testing standards or transparency, and it leaves Washington advocating for responsible use rather than clear red lines.
Ethical Alarms and the Arms-Race Logic
Academic researchers and ethicists are increasingly skeptical that more autonomy will automatically yield more humane warfare. A peer-reviewed article in NanoEthics on the ethical legitimacy of such systems argues that autonomous platforms create powerful incentives for automated arms races, because once one actor deploys fast-reacting, opaque targeting algorithms, rivals feel compelled to respond in kind. The concern is not only that escalation becomes more likely, but that the speed and complexity of machine decision-making will outstrip human capacity to oversee or de-escalate engagements in real time.
This dynamic is not merely theoretical. The proliferation of black-box AI, where neural networks make targeting or navigation decisions that engineers cannot fully reconstruct after the fact, complicates accountability in ways existing international humanitarian law was never built to handle. If a drone strike kills civilians and the algorithm’s decision process cannot be meaningfully audited, responsibility becomes diffuse: software developers may claim they followed specifications, commanders may insist they complied with doctrine, and political leaders can point to legal reviews conducted before deployment. No widely adopted international agreement squarely resolves these attribution problems, and the pace at which new systems are fielded can outstrip efforts to interpret long-standing principles like distinction and proportionality in an era of machine agency.
Europe and the UN Push Back, With Limits
The European Union has taken a different regulatory path, at least for civilian applications of AI. In negotiating its Artificial Intelligence Act, the Council of the EU endorsed a risk-based framework that classifies AI systems and imposes stringent transparency and testing obligations on high-risk uses, including many that affect fundamental rights. This approach could indirectly shape how European firms design and test the perception, planning, and decision modules that later migrate into dual-use or military systems, raising the baseline for safety and documentation even when lethal force is not immediately at issue.
However, the Act also includes broad exemptions for national security and defence, allowing member states to develop or procure autonomous weapons largely outside its formal scope. That carve-out underscores a deeper tension: governments are willing to regulate AI in consumer products, employment, and policing, but when it comes to strategic military capabilities they tend to prioritize freedom of action. The result is a widening gap between the level of scrutiny applied to algorithms that, for example, screen job applications and the comparatively looser oversight applied to algorithms that may select and engage targets, even as both rely on similar machine-learning techniques and data pipelines.
UN Warnings and the Missing Global Framework
At the United Nations, alarm over the trajectory of autonomous weapons has become more explicit. In May 2025, a UN news report highlighted Secretary-General António Guterres’s call for states to move beyond voluntary principles and negotiate binding instruments to regulate and, where necessary, prohibit certain autonomous weapon systems. He framed the issue not only as a legal challenge but as a test of whether the international community is willing to place human dignity and control above the perceived efficiencies of automation, warning that delegating life-and-death decisions to machines risks eroding the moral foundations of humanitarian law.
In a related intervention, Guterres described so-called “killer robots” as politically unacceptable and morally repugnant, urging governments to agree on a global ban before such systems become entrenched in arsenals. Yet negotiations under the CCW have moved slowly, hamstrung by disagreements between states that favor a preventive prohibition and those that prefer non-binding norms or case-by-case regulation. As Ukraine’s mass drone procurement, Washington’s cautious doctrine updates, and Europe’s partial AI rules all demonstrate, technological momentum is firmly on the side of deployment, while the emerging legal and ethical architecture remains fragmented and incomplete.
*This article was researched with the help of AI, with human editors creating the final content.