
Iran war becomes real-time test for AI-powered warfare

The U.S.-Israel military campaign against Iran is functioning as the first large-scale proving ground for artificial intelligence in active combat operations, with AI systems shaping everything from target selection to command-and-control decisions at speeds that outpace traditional human review. The conflict draws on years of Pentagon experimentation and Israeli battlefield AI deployment, but it also exposes a dangerous flip side: AI-generated disinformation is flooding the information space faster than analysts can verify it. What is unfolding is not just a war between nations but a stress test for whether machine-speed warfare can be trusted.

Pentagon Experiments Laid the Groundwork

Long before the first strikes on Iranian air defenses, the U.S. Department of Defense was running controlled tests designed to integrate AI into the kill chain. The DoD’s Chief Digital and Artificial Intelligence Office used its series of Global Information Dominance Experiments, known as GIDE, to explore how data, analytics, and algorithms could improve joint workflows, including targeting and fires. Those exercises were explicitly structured to embed machine intelligence into operational decision loops, treating AI not as a back-office tool but as a front-line participant in how commanders identify, prioritize, and authorize strikes. Planners came away convinced that the side able to fuse intelligence streams fastest would enjoy a decisive advantage in any high-intensity conflict.

A separate line of effort focused on connecting disparate military assets through AI-enabled command and control. DoD officials described advancements in Joint All-Domain Command and Control, or JADC2, built around the concept of linking any available sensor to any suitable shooter, regardless of service or platform. In practice, that means a satellite, drone, or ground radar detecting a threat can have its data triaged by AI and routed directly to the nearest weapon system, without waiting for a human operator to manually push information through multiple headquarters. In a campaign against Iran’s missile barrages and drone swarms, that kind of speed is not a luxury but a requirement, turning milliseconds of processing time into the difference between intercepting an inbound threat and absorbing the hit.
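None of the actual JADC2 software is public, but the core pairing logic can be illustrated in a deliberately simplified way. The sketch below is hypothetical: the Track and Shooter structures, the flat coordinates, and the nearest-ready-shooter rule are illustrative assumptions, not a description of any fielded system.

```python
from dataclasses import dataclass
import math

@dataclass
class Track:
    """A threat detection from any sensor: satellite, drone, or ground radar."""
    track_id: str
    x_km: float
    y_km: float
    threat_class: str  # e.g. "ballistic_missile", "one_way_drone"

@dataclass
class Shooter:
    """Any weapon system that might engage, regardless of service or platform."""
    name: str
    x_km: float
    y_km: float
    max_range_km: float
    ready: bool
    can_engage: frozenset  # threat classes this system is suited to

def route_track(track: Track, shooters: list[Shooter]) -> Shooter | None:
    """The machine-triage step: pair a sensor track with the nearest ready,
    in-range, suitable shooter instead of relaying it through headquarters."""
    candidates = [
        s for s in shooters
        if s.ready
        and track.threat_class in s.can_engage
        and math.dist((s.x_km, s.y_km), (track.x_km, track.y_km)) <= s.max_range_km
    ]
    if not candidates:
        return None  # no valid pairing; fall back to human planners
    return min(
        candidates,
        key=lambda s: math.dist((s.x_km, s.y_km), (track.x_km, track.y_km)),
    )
```

The point of the sketch is the shape of the decision, not its details: everything in it executes in microseconds, which is exactly why the human relay steps it replaces become the bottleneck.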

Israel’s AI Targeting Goes From Gaza to Iran

Israel entered the Iran conflict with more battlefield AI experience than any other military. The Israel Defense Forces publicly acknowledged using an AI-based targeting system known in Hebrew as Habsora and in English as “the Gospel” to generate bombing targets at high speed during operations in Gaza. By the IDF’s own account, the system helped shift intelligence cycles from weeks to near-instantaneous target production, allowing planners to assemble large strike packages in hours rather than days. Israeli officials framed the tool as a way to handle the volume problem in modern warfare: when a military wants to hit dozens or hundreds of sites in a compressed window, human analysts alone cannot keep pace with the torrent of sensor data and surveillance feeds.

That experience appears to have scaled directly into the Iran theater. According to Associated Press reporting, Israel’s intelligence services combined AI analysis with smuggled-in drones to prepare attacks aimed at degrading Iranian air defenses and missile systems from the inside. Small unmanned aircraft reportedly mapped radar sites and communications nodes, feeding imagery and signals back into machine-learning tools that refined target lists before manned aircraft or long-range missiles were committed. The result is an operational template in which algorithms do not merely recommend which coordinates to strike, but actively shape the conditions of the battlefield in advance, leaving human commanders to approve or adjust plans that have already been optimized by software.

Speed That Outpaces Human Oversight

The central tension in AI-powered warfare is not whether the systems function as designed, but whether they work faster than anyone can responsibly check them. Academics examining the Iran conflict told one technology-focused outlet that AI is collapsing the time required for military decision-making, compressing what once took hours or days into minutes or seconds. That acceleration yields an obvious tactical payoff: more responsive air defenses, quicker counterstrikes, and the ability to exploit fleeting intelligence before an adversary can relocate. Yet it also introduces a category of risk that no simulation or exercise has fully captured, namely, the possibility that a machine-generated targeting recommendation is wrong, and the human “in the loop” has only seconds to spot the error before lethal force is unleashed.
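What that compression means for oversight can be shown with a small thought experiment in code. The following is a hypothetical sketch, not any real command-and-control interface: it gates a machine-generated recommendation behind a human review call with a hard deadline, and the uncomfortable design choice is what to return when the deadline expires.

```python
import concurrent.futures
from typing import Callable

def gated_engagement(
    recommendation: dict,
    human_review: Callable[[dict], bool],
    review_seconds: float = 5.0,
) -> bool:
    """Release a machine-generated strike recommendation only if a human
    approves it within the review window."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(human_review, recommendation)
        try:
            return bool(future.result(timeout=review_seconds))
        except concurrent.futures.TimeoutError:
            # Failing safe on timeout preserves human control but gives up
            # the speed advantage; failing "open" would remove the human
            # precisely when the review proved too hard to finish in time.
            return False
```

A five-second window keeps a person nominally in the loop, but only if the reviewer can actually evaluate the recommendation in five seconds, which is the assumption the academics quoted above are questioning.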

Most public discussion of AI in warfare focuses on offensive gains: faster targeting, broader sensor coverage, and the ability to sift more data than any staff of analysts could manage. Far less attention goes to the defensive vulnerabilities that come with delegating so much judgment to software. An adversary that understands how an AI targeting system ingests and weights data can, in theory, feed it false inputs (spoofed radar signatures, decoy infrastructure, manipulated communications traffic) to skew its outputs. In the Iran campaign, no publicly available audit of AI targeting accuracy exists, and neither the Pentagon nor the IDF has released post-strike assessments that would allow independent verification of how many targets were misidentified or mis-prioritized. That gap between capability and accountability is where the gravest danger lies: speed without robust verification mechanisms risks not only civilian casualties but also strategic miscalculation, where a strike based on flawed machine analysis triggers escalation that no algorithm anticipated.
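The spoofing risk is easy to state abstractly, but a toy example makes it concrete. Suppose, purely hypothetically, that a targeting model scored candidate sites as a weighted sum of observed signals; an adversary who can guess the weights can build a cheap decoy that out-scores the real installation. The feature names and numbers below are invented for illustration.

```python
# Toy target-nomination scorer: a weighted sum of observed signal strengths
# in [0, 1]. Real systems are far more complex, but any model that ingests
# sensor data inherits the same exposure to manipulated inputs.
WEIGHTS = {"radar_emissions": 0.5, "comms_traffic": 0.3, "vehicle_movement": 0.2}

def target_score(site: dict, weights: dict = WEIGHTS) -> float:
    return sum(w * site.get(signal, 0.0) for signal, w in weights.items())

real_sam_site = {"radar_emissions": 0.70, "comms_traffic": 0.40, "vehicle_movement": 0.30}
decoy_site    = {"radar_emissions": 0.95, "comms_traffic": 0.90, "vehicle_movement": 0.10}

# Cheap emitters inflate exactly the signals the model trusts most:
print(target_score(real_sam_site))  # 0.53
print(target_score(decoy_site))     # 0.765 -- the decoy now ranks first
```

In a system producing hundreds of nominations an hour, a handful of such decoys do not need to fool every analyst; they only need to survive a review window measured in seconds.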

AI Disinformation Muddies the Battlefield

The same AI tools that are accelerating military operations are simultaneously degrading the information environment around them. Investigators with BBC Verify have been tracking and mapping attacks in Iran and across the wider Middle East since the conflict began, and they report that the volume of AI-generated forgeries has complicated their work at every step. Synthetic videos purporting to show missile strikes, fabricated audio of senior officials, and manipulated satellite images all circulate within minutes of real events, forcing journalists and analysts to spend precious hours on basic authenticity checks. In an environment where militaries are also operating at machine speed, that verification lag means false narratives can shape public perception and diplomatic responses long before the facts are known.
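BBC Verify has not published its internal tooling, so the following is only a sketch of why verification lags generation. The cheapest automated check, fingerprinting a clip against an index of footage already verified or debunked, catches verbatim recirculation and nothing else; a re-encoded or AI-altered clip changes every byte, pushing the work back to slow human methods like geolocation and source tracing.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Exact-match fingerprint of a media file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def triage(clip: Path, known_real: set[str], known_fake: set[str]) -> str:
    """First-pass triage against previously assessed footage. This runs in
    milliseconds, but only recognizes byte-identical copies; anything novel,
    which includes every freshly generated fake, falls through to manual
    verification measured in hours."""
    digest = fingerprint(clip)
    if digest in known_real:
        return "previously verified"
    if digest in known_fake:
        return "previously debunked"
    return "unknown: escalate to manual verification"
```

The asymmetry is stark: generating a plausible fake takes a model seconds, while the "unknown" branch of this function can consume a journalist's entire day, as the example below illustrates.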

One BBC Verify journalist described spending an entire day scrutinizing a single suspicious video that appeared to show a strike on a religious compound believed to be linked to senior Iranian clerics, only to conclude that the footage was likely fabricated or misattributed. That kind of painstaking debunking is resource-intensive and inherently reactive, occurring well after the clip has ricocheted across social media and been picked up by partisan outlets. For civilians trying to assess their own risk, for markets reacting to perceived escalation, and for governments weighing intervention, the fog of AI-generated disinformation becomes a strategic factor in its own right, blurring the line between genuine battlefield developments and engineered illusions.

Testing the Rules of Machine-Speed Warfare

The Iran conflict is therefore doing more than showcasing the latest generation of military technology; it is exposing the inadequacy of existing rules and norms for governing machine-speed warfare. Traditional concepts such as “meaningful human control” over lethal decisions assume that commanders have time to deliberate, question sources, and consult legal advisers before authorizing strikes. In a battlespace where AI-driven systems are constantly proposing new targets and automatically routing firing solutions to available weapons, those assumptions break down. The challenge for policymakers is to define where human judgment must remain non-negotiable, even if that means accepting slower response times or higher tactical risk in exchange for strategic stability and ethical accountability.

At the same time, the information dimension of the conflict suggests that any future framework for responsible AI use in war will have to address disinformation alongside kinetic effects. Just as arms control treaties once grappled with verification mechanisms for missiles and warheads, emerging agreements may need to include transparency measures for algorithmic targeting, independent monitoring of civilian harm, and cooperative efforts to flag and debunk AI-generated forgeries that could inflame crises. The Iran campaign, with its fusion of automated command systems, AI-assisted targeting, and synthetic propaganda, is revealing how quickly the line between combat operations and information manipulation can blur. Whether governments respond by tightening guardrails or by doubling down on speed and secrecy will help determine whether the next AI-enabled war is marginally more controlled, or markedly more dangerous.


*This article was researched with the help of AI, with human editors creating the final content.