The U.S. military is relying on artificial intelligence to accelerate its campaign against Iran, compressing planning timelines and processing target volumes at speeds that would have been impossible just a few years ago. The effort combines billion-dollar AI platforms already embedded in Pentagon workflows with commercial generative AI tools, though the relationship with at least one key AI vendor has fractured in real time. The result is a live stress test of whether algorithmic warfare can deliver on its promise while keeping human judgment in the loop.
Billion-Dollar Bets on the Maven Smart System
The operational backbone of the Pentagon’s AI push in Iran traces back to Project Maven, the military’s flagship effort to apply machine learning to intelligence analysis and targeting. That program took a decisive step forward when Palantir USG Inc. secured a prototype contract for the Maven Smart System, designated W911QX-24-D-0012, signaling that the Department of Defense was ready to move AI-assisted targeting from experimental projects into core operational infrastructure. Maven’s software is designed to ingest vast streams of imagery and sensor data, flagging patterns and potential targets for human analysts far more quickly than traditional methods allow.
The Pentagon then doubled down on that bet. A subsequent contract modification worth hundreds of millions funded expanded Maven Smart System software licenses, pushing total investment well past $1.2 billion on a single AI platform. That scale of spending reflects a calculation that AI-enabled workflows are no longer optional for the kind of rapid, high-volume military operations now underway against Iranian targets. When the U.S. deployed suicide drones and Tomahawk missiles in strikes on Iran on March 1, Maven was part of the toolchain processing intelligence and helping commanders prioritize targets, according to Bloomberg’s account of the operations. That account describes AI tools as central to speeding up planning, while final authority over lethal decisions formally remains with the human chain of command.
From Task Force Lima to Live Combat
The Pentagon’s path to deploying generative AI in a shooting war ran through a deliberate bureaucratic pipeline rather than ad hoc experimentation. In 2023, the Department of Defense stood up Task Force Lima under the Chief Digital and Artificial Intelligence Office, with Deputy Secretary Kathleen Hicks describing the initiative as a way to assess and employ generative AI while managing national security risks. Task Force Lima was charged with mapping potential applications across warfighting, business, and support functions, from planning and logistics to intelligence summarization, and with identifying where such tools might introduce unacceptable vulnerabilities.
Those early assessments fed directly into the Pentagon’s next phase. The department’s Chief AI Officer launched an Artificial Intelligence Rapid Capabilities Cell and a set of “Frontier AI” pilots designed to move promising concepts into real-world use. The pilots focused on warfighting and enterprise applications, with an explicit mandate to accelerate the transition from controlled experimentation to scalable deployment. What distinguishes the Iran campaign from these earlier efforts is that many of the capabilities once confined to controlled pilots now appear to be operating under combat conditions, compressing planning cycles and handling target volumes at a pace that suggests the transition from lab to battlefield has happened faster than many defense analysts anticipated.
Anthropic’s Claude and the Pentagon Rift
The most volatile element of the story is the role of Anthropic’s Claude, a large language model that, according to detailed reporting from Washington, became part of the AI-enabled toolchain the military used alongside Maven during Iran operations. Officials described Claude as helping to sift and summarize intelligence, generate structured target lists, and quickly prioritize options for human review, effectively acting as a high-speed analyst that could read and synthesize vast quantities of text and sensor-derived reports. Reuters and other outlets cited by that reporting indicated that Anthropic’s tools were tapped to support what were characterized as massive, time-sensitive operations against Iranian assets.
Yet even as Claude was reportedly integrated into the targeting workflow, the relationship between the Pentagon and Anthropic collapsed in public view. According to subsequent coverage of the fallout, U.S. officials moved to blacklist the company from future defense work after political backlash, with President Trump boasting that he had “fired” Anthropic and denouncing its leadership. That tension between using a commercial AI model in active combat and simultaneously cutting ties with its creator raises a practical question most coverage has only hinted at: if commanders and analysts have come to rely on Claude’s speed for triaging data and shaping target options, what replaces it once the blacklist fully takes effect, and how quickly can alternative models be integrated without disrupting ongoing operations?
Speed Gains and the Human Oversight Gap
The core promise of AI in military operations is speed, and the Iran campaign appears to deliver on that promise in ways that are reshaping expectations inside the Pentagon. Compressed planning timelines and the ability to process large volumes of potential targets simultaneously give commanders options that traditional intelligence analysis cannot match at the same pace, especially when operations span multiple domains and time zones. In accounts of the recent strikes, officials emphasized that AI systems helped generate and refine target packages in hours rather than days, while still routing final decisions through human-led approval chains intended to preserve legal and ethical oversight.
Critics argue that this acceleration blurs the line between human judgment and machine suggestion. Stop Killer Robots, a coalition of 270 human-rights groups cited in Bloomberg’s coverage of AI in the Iran strikes, warns that decision-support systems can erode the meaningful separation between humans and lethal force even if they do not directly pull the trigger. Their concern is that commanders facing immense time pressure may increasingly defer to AI-generated rankings and risk assessments, especially when those outputs are wrapped in technical jargon or statistical confidence scores that are difficult to independently verify. That dynamic, they argue, could make it harder to ensure that civilians are never deliberately or inadvertently targeted, even under policies that formally require human control.
The Future of Algorithmic Warfare After Iran
The Iran campaign is likely to be remembered as a turning point for algorithmic warfare, not only because of the scale of AI integration but also because of the political and commercial backlash it triggered. On one side, the Pentagon’s investments in Maven and the Frontier AI pilots show that senior leaders now view advanced algorithms as indispensable for modern operations, with billion-dollar contracts and dedicated task forces to match. On the other, the rupture with Anthropic underscores how fragile those dependencies can be when military needs collide with public concern over tech companies’ role in lethal force, and when political leaders choose to make examples of specific vendors.
What comes next will test whether the United States can institutionalize AI-enabled targeting in a way that is both operationally sustainable and politically defensible. Replacing or replicating Claude’s capabilities will require either rapid onboarding of alternative commercial models or expanded in-house systems, both of which must be integrated into existing oversight and testing frameworks built through efforts like Task Force Lima and the AI Rapid Capabilities Cell. At the same time, pressure from civil-society groups and some allied governments is likely to intensify calls for clearer rules on how far AI can go in shaping lethal decisions. The Iran operations have demonstrated that algorithmic tools can dramatically compress the time between intelligence collection and kinetic action; the unresolved question is whether democratic institutions can keep pace with that acceleration and impose guardrails robust enough to prevent speed from eroding accountability.
*This article was researched with the help of AI, with human editors creating the final content.