Artificial intelligence is accelerating how militaries identify, track, and strike targets, compressing decisions that once took days into minutes. From Gaza to Iran, AI-driven systems are generating bombing targets at a pace that outstrips human verification, raising urgent questions about whether existing laws of armed conflict can keep up. The gap between what these tools can do and what legal frameworks were designed to govern is widening fast.
AI on the Battlefield: Speed Over Scrutiny
The clearest window into how AI has changed military operations comes from the Israel Defense Forces’ campaign in Gaza. An investigation by The Washington Post, drawing on interviews and internal documentation, detailed how the IDF built what insiders described as an “AI factory” to generate thousands of potential bombing targets, and how the scale and pace of that targeting reshaped verification practices. The system industrialized a process that previously depended on teams of analysts cross-checking intelligence over hours or days, turning a painstaking, largely manual effort into a semi-automated pipeline.
That acceleration is not limited to one theater. According to reporting in Nature, AI-enabled targeting and decision-support systems have already appeared in earlier conflicts and in strikes on Iran. Researcher Michael Horowitz has cautioned that public knowledge of these deployments extends only to what governments choose to disclose; the full scope of AI-assisted operations likely remains classified. The same analysis suggests that, in theory, better pattern recognition and predictive tools could reduce civilian casualties by improving discrimination and timing, but only if they are implemented with rigorous oversight and conservative rules of engagement.
In practice, militaries are experimenting with a spectrum of AI tools: systems that prioritize satellite images for human review, algorithms that correlate signals intelligence with movement patterns, and software that ranks likely command-and-control nodes for attack. Each of these uses promises operational advantage, yet each also shifts more of the targeting cycle into a black box that commanders and legal advisers may struggle to interrogate under wartime pressure.
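To make the black-box concern concrete, consider a deliberately simplified sketch of the ranking step such a pipeline might contain. Everything here is hypothetical: the feature names, weights, and scoring rule are invented for illustration, and real systems are classified and vastly more complex. But even this toy version shows why a ranked output reveals so little of the reasoning behind it.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    site_id: str
    features: dict[str, float]  # e.g. signal correlation, movement anomaly

# Hypothetical learned weights. In a real system these would emerge from
# training on classified data and carry no human-readable rationale.
WEIGHTS = {"signal_correlation": 0.62, "movement_anomaly": 0.27, "imagery_match": 0.11}

def score(candidate: Candidate) -> float:
    """A weighted sum over features: the model's entire 'reasoning'."""
    return sum(WEIGHTS.get(name, 0.0) * value
               for name, value in candidate.features.items())

def rank(candidates: list[Candidate]) -> list[tuple[str, float]]:
    """Order site IDs by score; the operator sees only this list."""
    return sorted(((c.site_id, score(c)) for c in candidates),
                  key=lambda pair: -pair[1])

candidates = [
    Candidate("site-A", {"signal_correlation": 0.9, "movement_anomaly": 0.4,
                         "imagery_match": 0.2}),
    Candidate("site-B", {"signal_correlation": 0.3, "movement_anomaly": 0.8,
                         "imagery_match": 0.7}),
]
print(rank(candidates))  # e.g. [('site-A', 0.688), ('site-B', 0.479)]
```

An operator or legal adviser sees only the ordered list; nothing in the output explains which feature pushed a site to the top, which is precisely the property that frustrates interrogation under wartime pressure.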
Ethics Pledges vs. Operational Reality
The U.S. military was among the first to try to set formal guardrails. In 2020, the Department of Defense announced a set of ethical principles for AI, committing that any such systems would be responsible, equitable, traceable, reliable, and governable. The framework was meant to ensure that AI tools used in combat would remain lawful, auditable, and subject to meaningful human control, echoing broader concerns raised in international humanitarian law about accountability and foreseeability.
Yet the distance between stated principles and battlefield practice appears to be growing. When an AI system can flag a building as a valid target in seconds, the human operator reviewing that recommendation faces enormous pressure to approve quickly, especially during high-tempo operations with competing demands for attention. The ethical framework assumes a deliberative process in which operators can question the underlying data and assumptions, but the technology incentivizes speed and volume. This tension sits at the heart of the debate: pledging “human-in-the-loop” control means little if the loop is compressed to a near-automatic confirmation step.
Opacity compounds the problem. Most military AI targeting algorithms are proprietary and classified, which means independent auditors, legal reviewers, and even some commanding officers may not fully understand how a system weighted its inputs before recommending a strike. If a proportionality assessment (the legal requirement to weigh expected civilian harm against anticipated military advantage) is partly automated, the reasoning behind that calculation becomes harder to reconstruct after the fact. Research in the International Review of the Red Cross has explored AI and machine learning in armed conflict through a human-centered lens, emphasizing the need for transparency, contestability, and clear lines of responsibility. But much of this scholarship remains theoretical, lacking access to declassified case studies that would show how AI recommendations actually shape real-world targeting decisions.
Accountability mechanisms that exist on paper may struggle to function in this environment. Post-strike investigations rely on logs, communications, and testimony to reconstruct what happened and why. If key steps in the decision chain are encoded in opaque models, it becomes difficult to determine whether a civilian object was misclassified by the AI, misinterpreted by the operator, or approved despite clear warning signs. That uncertainty risks eroding trust in both military investigations and international legal processes.
Europe’s Regulatory Blind Spot
The European Union’s approach to AI regulation offers a telling case study in the limits of civilian governance over military technology. The bloc’s flagship law, the AI Act (Regulation (EU) 2024/1689), classifies systems by risk level and imposes strict obligations on high-risk applications in areas such as policing, employment, and critical infrastructure. It mandates transparency, data governance, and human oversight for many civilian uses that could affect fundamental rights.
But the regulation explicitly excludes military and defense purposes from its scope. AI used in the application of lethal force is instead governed primarily by public international law, including the Geneva Conventions and customary rules on the conduct of hostilities. This carve-out reflects member states’ insistence that defense policy remain a matter of national sovereignty and that existing humanitarian law already regulates targeting decisions. In practice, however, it means the most consequential and dangerous uses of AI fall outside the EU’s most ambitious civilian safeguards.
This gap matters because it leaves compliance almost entirely to individual militaries and their internal legal review processes. No EU-wide audit mechanism exists to verify how member states’ armed forces develop, test, or deploy AI-enabled targeting tools. Analysis from IE Insights argues that AI creates clear advantages in speed, data processing, and potential precision, but simultaneously strains legal frameworks that presuppose human judgment at each stage of the targeting cycle. The more decisions migrate into algorithmic systems, the harder it becomes to ensure that long-standing principles such as distinction and proportionality are genuinely applied rather than nominally affirmed.
Even within EU institutions, oversight tools are fragmented. Systems that support border control, surveillance, or dual-use technologies may fall under certain regulatory or registration requirements, yet once similar tools are adapted for battlefield use, they can slip into a legal gray zone. The contrast is telling: civilian digital infrastructure, such as the EU’s central authentication service, is tightly managed and auditable, while far less is publicly known about how comparable controls operate over experimental military AI projects.
The Arms Race for Cheaper, Faster AI
Cost dynamics are pushing AI deeper into military planning. As models become cheaper to train and deploy, and as commercial off-the-shelf tools grow more capable, the barrier to entry for autonomous or semi-autonomous targeting drops. The technology will not remain confined to a handful of wealthy militaries; it is likely to diffuse to smaller states and, potentially, to non-state actors able to repurpose commercial platforms.
Strategic analysis from organizations such as the RAND Corporation has examined how advances in AI could reshape core military functions, from logistics and maintenance to intelligence analysis and high-level decision support. The concern is not only that major powers will integrate AI into every layer of their command structures, but that rivals will feel compelled to match that pace or risk strategic disadvantage. An arms race dynamic emerges. If one side believes rapid, AI-accelerated targeting will confer decisive battlefield benefits, others may prioritize similar capabilities even if the legal and ethical implications remain unresolved.
Proliferation also complicates deterrence and escalation management. When multiple actors deploy opaque, data-hungry algorithms to monitor borders, control air defenses, or cue strikes, the risk of misinterpretation and inadvertent escalation rises. False positives, data poisoning, or adversarial manipulation of AI systems could trigger responses that humans neither intended nor fully understand. Traditional confidence-building measures and arms control mechanisms were not designed for a world in which software updates can alter the balance of power faster than treaties can be negotiated.
Bridging the Legal and Technological Gap
Closing the gap between AI capabilities and the laws of war will require more than aspirational principles. States can begin by embedding legal and ethical constraints directly into system design, ensuring that models are trained, tested, and deployed with explicit reference to humanitarian law. That includes building robust audit trails, maintaining human-readable explanations for critical recommendations, and setting conservative default rules when data are incomplete or ambiguous.
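What those design measures could look like in code is easiest to show with a small, hypothetical sketch. The review gate below abstains whenever required inputs are missing or model confidence falls below a threshold, and writes every decision, with its inputs and a plain-language reason, to an append-only log. The field names and the 0.95 threshold are assumptions made for illustration, not drawn from any fielded system.

```python
import json
import time

CONFIDENCE_FLOOR = 0.95  # hypothetical conservative threshold
REQUIRED_FIELDS = ("target_class", "confidence", "civilian_presence_estimate")

def review_gate(recommendation: dict, audit_log: list) -> str:
    """Forward a recommendation for human review only when inputs are complete
    and confident; default to abstaining otherwise, and log every outcome."""
    missing = [f for f in REQUIRED_FIELDS if recommendation.get(f) is None]
    if missing:
        decision, reason = "abstain", f"missing fields: {missing}"
    elif recommendation["confidence"] < CONFIDENCE_FLOOR:
        decision, reason = "abstain", (
            f"confidence {recommendation['confidence']} below floor")
    else:
        decision, reason = "forward_for_human_review", "all checks passed"
    # Append-only, human-readable record for post-strike reconstruction.
    audit_log.append({"timestamp": time.time(), "inputs": recommendation,
                      "decision": decision, "reason": reason})
    return decision

log: list = []
print(review_gate({"target_class": "vehicle", "confidence": 0.90,
                   "civilian_presence_estimate": 0.1}, log))  # abstain
print(json.dumps(log[-1], indent=2))  # the full, reconstructable record
```

The specific checks matter less than the architecture: each recommendation carries enough recorded context that an investigator can later distinguish a model misclassification from an operator error.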
Greater transparency, within security limits, is also essential. Carefully anonymized case studies, independent expert reviews, and structured dialogues between militaries, humanitarian organizations, and technical experts could help translate abstract legal norms into concrete engineering requirements. Journals and forums that specialize in humanitarian law and technology are already laying conceptual groundwork, but they need access to real operational data to move beyond theory.
Ultimately, the question is not whether AI will shape future conflicts, but whether the rules of war and related legal and ethical frameworks can evolve quickly enough to keep its most dangerous applications in check. Without deliberate efforts to align code with law, the speed and opacity of algorithmic targeting risk outpacing the very rules meant to protect civilians in war.
This article was researched with the help of AI, with human editors creating the final content.