The U.S. military has confirmed it deployed artificial intelligence tools during its recent strikes on Iran, with the commander of U.S. Central Command stating publicly that “advanced AI tools” helped process battlefield data at speed. In the same breath, the CENTCOM commander stressed that lethal decisions stayed with human operators, not algorithms. That dual message, AI as accelerant but not decision-maker, is now being tested against a messy political backdrop in which the same administration using AI in combat has moved to ban one of its key AI vendors.
CENTCOM Commander: Humans Decide When to Shoot
Adm. Brad Cooper, the CENTCOM commander, posted a video to X in which he described how the military is using “advanced AI tools” to sift data quickly during the Iran campaign. His language was deliberate. “Humans will always make final decisions on what to shoot and what not to shoot and when to shoot,” Cooper said. The statement was designed to address growing public anxiety about autonomous weapons and to draw a clear line: AI handles information, people handle triggers.
That framing did not appear overnight. Years before the Iran strikes, CENTCOM held a press briefing outlining how it was already integrating AI into operations through computer vision and anomaly detection. Officials described software that could comb through drone feeds, radar tracks, and other sensor data to highlight unusual patterns for human review. Those early applications were pitched as tools to augment watchstanders and operators, not replace them, and they laid the conceptual groundwork for later deployments in combat.
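To illustrate the pattern officials described in that briefing, machines flag unusual activity and humans decide what it means, the following Python sketch scores a batch of sensor track readings and queues statistical outliers for analyst review. It is a minimal, hypothetical example: the field names, thresholds, and data are invented for illustration and are not drawn from any military system.

```python
# Illustrative sketch only: a simple statistical anomaly flagger of the kind
# CENTCOM officials described (machine flags, humans review). All field names,
# thresholds, and data are hypothetical, not taken from any real system.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class TrackReading:
    track_id: str
    speed_kts: float  # observed speed for this track, in knots

def flag_for_review(readings: list[TrackReading], z_threshold: float = 3.0) -> list[TrackReading]:
    """Return readings whose speed deviates sharply from the batch norm.

    Flagged items go to a human analyst; nothing is acted on automatically.
    """
    speeds = [r.speed_kts for r in readings]
    if len(speeds) < 2:
        return []
    mu, sigma = mean(speeds), stdev(speeds)
    if sigma == 0:
        return []
    return [r for r in readings if abs(r.speed_kts - mu) / sigma > z_threshold]

if __name__ == "__main__":
    batch = [TrackReading(f"T{i}", 12.0 + i * 0.1) for i in range(20)]
    batch.append(TrackReading("T99", 95.0))  # an outlier worth a second look
    for reading in flag_for_review(batch):
        print(f"Flag {reading.track_id} for analyst review: {reading.speed_kts} kts")
```

The point of the pattern is the hand-off: the software narrows thousands of routine readings down to a handful worth a watchstander's attention, and the judgment call stays with the person reading the flag.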
The Iran conflict became the first large-scale proving ground for that doctrine, turning a concept briefed in peacetime into a wartime reality. Cooper’s insistence that humans retain control over lethal decisions echoed longstanding Pentagon policy, but the scale and tempo of the campaign raised new questions about how meaningful that control can be when AI is accelerating every stage of the targeting process.
What AI Actually Did in the Iran Strikes
On March 2, the United States launched strikes against Iran with an arsenal that included B-2 bombers and suicide drones modeled after Iranian designs, supported by AI systems reportedly built by Anthropic. The AI’s role was not to fly aircraft or guide munitions but to process intelligence. According to officials and outside analysts, it helped pull data from multiple systems and organize it to give planners clarity, functioning as a high-speed sorting engine for the enormous volume of sensor feeds, signals intelligence, and targeting data that a modern air campaign generates.
The practical effect was speed. Analysts who might spend hours cross-referencing satellite imagery, electronic intercepts, and human intelligence reports could instead receive AI-filtered summaries and anomaly flags. That capability matters in a time-sensitive strike environment where targets move and windows close within minutes. In some cases, AI tools reportedly clustered potential targets, highlighted deviations from normal activity, and suggested which data streams deserved immediate human attention.
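A rough sense of that "high-speed sorting engine" idea can be conveyed with a short sketch that ranks incoming intelligence items so the most anomalous and freshest reach a human analyst first. The scoring weights, source labels, and summaries below are assumptions made purely for illustration, not a description of any fielded tool.

```python
# Hypothetical sketch of the "sorting engine" idea: rank incoming intelligence
# items so the most unusual, most recent ones surface for a human first.
# Scoring weights, source labels, and summaries are invented for illustration.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class IntelItem:
    priority: float
    source: str = field(compare=False)
    summary: str = field(compare=False)

def triage(items: list[tuple[str, str, float, float]]) -> list[IntelItem]:
    """Order items by a blend of anomaly score and recency.

    Each input tuple is (source, summary, anomaly_score, minutes_old).
    Lower priority values are surfaced first, mirroring a min-heap queue.
    """
    queue: list[IntelItem] = []
    for source, summary, anomaly, age_min in items:
        # Higher anomaly and fresher data yield a smaller (better) priority value.
        priority = -(anomaly * 2.0) + (age_min * 0.1)
        heapq.heappush(queue, IntelItem(priority, source, summary))
    return [heapq.heappop(queue) for _ in range(len(queue))]

if __name__ == "__main__":
    ranked = triage([
        ("SIGINT", "new emitter near site A", 0.9, 3),
        ("IMINT", "routine convoy movement", 0.2, 45),
        ("HUMINT", "unusual activity report", 0.7, 10),
    ])
    for item in ranked:
        print(f"{item.source}: {item.summary}")
```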
Researchers at Georgia Tech who examined the campaign concluded that the military leaned heavily on AI for the attack on Iran, but emphasized that the technology did not lessen the need for human judgment in war. The report framed AI as a force multiplier for existing intelligence staffs rather than a replacement for human analysts, warning that overreliance on machine-generated outputs could magnify errors if not carefully checked.
A strike on the Iranian city of Minab has drawn particular scrutiny. The exact role of AI in that specific engagement has not been officially confirmed, with reporting noting a gap between what officials have acknowledged about AI’s broad use and what they have disclosed about individual targets. That gap is where the hardest accountability questions live: if an AI system elevates a faulty data point that contributes to a mistaken strike, the chain of responsibility can be difficult to untangle, even if a human technically “approved” the shot.
The Anthropic Paradox: Used in Combat, Banned by the White House
The political dimension of AI in the Iran campaign is as significant as the operational one. The Trump administration ordered U.S. agencies to halt their use of Anthropic technology in what has been described as a clash over AI safety and influence. The move came after months of mounting criticism from allies and lawmakers who argued that the company’s systems were too powerful, too opaque, or too loosely governed to be trusted at the heart of government decision-making.
The Pentagon then went further, labeling Anthropic a supply chain risk effective immediately. That designation signaled to contractors and program managers across the Defense Department that they should begin disentangling Anthropic software from operational systems and avoid new dependencies, even as those tools were reportedly embedded in platforms already in active use during the Iran conflict.
The timeline creates a stark contradiction. AI tools built by Anthropic were integrated into the intelligence pipeline that supported strikes on Iran. Weeks later, the company that built those tools was designated a risk to the very supply chain it had just helped operate. According to congressional reporting, Anthropic declined to comment on its role in the conflict, while Palantir, another major AI firm, was cited as a continuing partner in military intelligence work, including in Iran.
Administration officials have framed the ban as a necessary step to protect national security and ensure that critical infrastructure is not overly reliant on any single commercial vendor. Yet the abrupt shift has fueled criticism from both hawks and civil libertarians. Defense hawks argue that sidelining a proven system in the middle of a conflict risks degrading battlefield awareness, while civil liberties advocates question why the government embraced a tool for lethal operations before resolving its safety and governance concerns.
The White House has also faced questions about whether its broader AI policy is coherent. In one context, it touts AI as a competitive advantage that can help U.S. forces out-think adversaries and limit collateral damage by improving target discrimination. In another, it warns that the same class of systems represents an unacceptable vulnerability inside federal networks. The Anthropic case has become a focal point for that tension.
Speed, Accountability, and the Future of AI in War
The Iran strikes highlight a core dilemma for modern militaries: AI can compress decision timelines in ways that commanders find operationally irresistible, but democratic societies expect careful, accountable use of force. When AI systems pre-digest intelligence, the human role can shift from active analysis to rapid endorsement or rejection of machine suggestions. That “human on the loop” posture is harder to scrutinize than traditional chains of command, especially when much of the underlying code and training data remains classified or proprietary.
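The distinction between "in the loop" and "on the loop" can be made concrete with a final hedged sketch: the machine only produces a recommendation, and nothing downstream happens without an explicit, logged human decision. The recommendation fields and reviewer identifiers are hypothetical placeholders, not a depiction of any actual targeting workflow.

```python
# Minimal sketch of a "human on the loop" gate: the machine recommends, a
# human approves or rejects, and the decision is logged for accountability.
# All fields and identifiers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_label: str
    confidence: float
    rationale: str

def human_gate(rec: Recommendation, approve: bool, reviewer: str) -> dict:
    """Record a human decision on a machine recommendation.

    Nothing downstream fires unless `approve` is True, and every decision is
    logged with the reviewer's identity for later review.
    """
    return {
        "target": rec.target_label,
        "machine_confidence": rec.confidence,
        "rationale": rec.rationale,
        "approved": approve,
        "reviewer": reviewer,
    }

if __name__ == "__main__":
    rec = Recommendation("cluster-7", 0.84, "pattern deviates from 30-day baseline")
    decision = human_gate(rec, approve=False, reviewer="watch_officer_2")
    print(decision)
```

Even in this toy form, the critics' worry is visible: if the recommendation is the only synthesized view the reviewer sees, approval can become a formality rather than independent judgment.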
Cooper’s insistence that humans retain the final say is meant to reassure both domestic and international audiences that the United States is not delegating life-and-death decisions to algorithms. Yet as AI tools become more deeply woven into targeting, logistics, and battle management, the line between assistance and de facto delegation will be tested. If a commander has only seconds to decide and the AI’s recommendation is the only synthesized view available, critics argue that the machine is effectively calling the shot.
For now, Pentagon leaders are trying to hold two positions at once: that AI is indispensable to managing the complexity of modern warfare, and that human judgment remains the ultimate safeguard against error and abuse. The Iran campaign and the controversy over Anthropic’s role in it suggest that maintaining both claims will require more than assurances. It will demand transparent rules for how AI is designed, tested, audited, and retired, along with clear explanations to the public when those rules are bent in the fog of war.
*This article was researched with the help of AI, with human editors creating the final content.