Morning Overview

US used AI to hit 1,000+ targets in first 24 hours of war

The U.S. military struck more than 1,000 targets in the first 24 hours of its campaign against Iran, according to people familiar with the targeting system cited by The Washington Post, with artificial intelligence used to help identify targets, prioritize them, and assign strike coordinates at a pace traditional analyst-driven workflows struggle to match. The system described in that reporting is Project Maven’s targeting software, with Anthropic’s Claude integrated into parts of its architecture. The speed of the opening salvo, together with the commercial AI technology behind it, has exposed a deepening rift between the Pentagon and one of Silicon Valley’s most prominent AI companies over how far military applications of the technology should go.

Claude Inside the Kill Chain

The Pentagon began integrating Anthropic’s Claude into the Maven system in late 2024, building a targeting engine that draws on 179 data sources to generate target suggestions, precise strike coordinates, and threat prioritization rankings. When the Iran campaign launched, that system processed intelligence feeds at machine speed, allowing commanders to cycle through target packages far faster than traditional analyst-driven methods allow. The result was a first-day tempo that dwarfed previous U.S. opening strikes in the region and showcased how deeply commercial AI has been woven into U.S. military operations.

According to people familiar with the system, the Maven Smart System with Claude embedded helped identify and prioritize the 1,000 targets struck in that initial window. The AI does not pull the trigger; human operators retain authorization over each strike, reviewing proposed coordinates and collateral damage estimates before approving weapons release. But the system compresses the targeting cycle, taking raw intelligence to recommended coordinates in a fraction of the time traditional workflows require. That compression is what made the blistering opening pace possible, and it underscores how commercial AI is being used in active U.S. combat operations, turning Claude into a critical node in what military planners call the “kill chain.”
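That human-in-the-loop arrangement is, at bottom, a software pattern: a model proposes, and a logged human decision gates any action. The sketch below is purely illustrative and assumes nothing about Maven’s real interfaces; every class, field, and function name is a hypothetical stand-in for the workflow the reporting describes.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative human-in-the-loop gate. All names here are hypothetical;
# none come from Maven or from the reporting above.

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass(frozen=True)
class Recommendation:
    target_id: str
    coordinates: tuple          # proposed by the model, not yet authorized
    collateral_estimate: float  # model-generated, reviewed by the operator
    rationale: str              # why the model ranked this target highly

def gate(rec: Recommendation, operator: Decision) -> bool:
    """Nothing proceeds downstream without an explicit, logged human
    approval; the model's recommendation alone is never sufficient."""
    print(f"audit: {rec.target_id} -> {operator.value}")  # stand-in for a persistent audit log
    return operator is Decision.APPROVED
```

The pattern’s value lies as much in the audit trail as in the veto: every recommendation and every human decision gets recorded, which is precisely the kind of accountability data that, as noted below, has not been released from the Iran campaign.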

Project Maven’s Road to War

The Iran campaign did not start from a blank slate. Project Maven, originally launched as a Pentagon initiative to apply machine learning to drone surveillance footage, has evolved into a full-spectrum targeting platform spanning land, air, sea, and cyber domains. In early 2024, U.S. officials acknowledged that Maven-based tools were used to help find targets struck in Iraq, Syria, and Yemen during U.S. Central Command operations. Those earlier deployments were limited in scope and tempo compared with the Iran campaign, but they established both the technical feasibility and the policy precedent for embedding AI into live strike workflows.

The system works by fusing multiple data feeds, including signals intelligence, satellite imagery, radar returns, and human reporting, into a unified picture that highlights potential targets and ranks them by military value. As detailed examinations of Maven have shown, the platform is designed to handle target identification and prioritization while keeping human authorization as the final step before weapons release. That distinction matters because it is the line Anthropic and the Pentagon are now fighting over: whether AI should move closer to autonomous decision-making in lethal operations, or whether the current human-in-the-loop model is the ceiling. With Claude woven into Maven’s analytics layer, the Iran strikes became the first large-scale test of how that line holds under wartime pressure.
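To make the fusion step concrete, here is a minimal sketch of weighted multi-source scoring and ranking. It is an assumption-laden illustration, not Maven’s actual logic: the feed names, the weights, and the simple weighted sum are all invented, and the output is advisory, feeding the human review step described above.

```python
# Illustrative multi-source score fusion and ranking. Feed names and
# weights are invented; the reporting does not describe Maven's logic.

FEED_WEIGHTS = {
    "sigint": 0.35,           # signals intelligence
    "imagery": 0.30,          # satellite imagery
    "radar": 0.20,            # radar returns
    "human_reporting": 0.15,  # human-source reporting
}

def fuse(scores: dict[str, float]) -> float:
    """Combine per-feed confidence scores (0.0 to 1.0) into one value;
    feeds with no data on a candidate simply contribute nothing."""
    return sum(FEED_WEIGHTS[f] * s for f, s in scores.items() if f in FEED_WEIGHTS)

def rank(candidates: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Order candidates by fused score, highest first. The ordering is
    advisory; a human analyst reviews it before anything else happens."""
    return sorted(((c, fuse(s)) for c, s in candidates.items()),
                  key=lambda pair: pair[1], reverse=True)

# Two candidates seen by different subsets of feeds:
print(rank({
    "site-a": {"sigint": 0.9, "imagery": 0.7},
    "site-b": {"imagery": 0.8, "radar": 0.6, "human_reporting": 0.4},
}))  # site-a ranks first (~0.525 vs ~0.42)
```

A flat weighted sum is the simplest possible fusion; a production system would weigh source reliability, recency, and corroboration in far more elaborate ways, which is part of why the absence of published accuracy metrics matters.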

Anthropic Blacklisted Over Guardrails

Even as Claude helped power the Iran strikes, the Trump administration moved to punish the company that built it. The Pentagon, backed by the White House, declared Anthropic a security risk, issuing a blacklist and supply-chain-risk designation that could cut the firm off from federal contracts and some key components. The dispute centers on Anthropic’s refusal to remove certain guardrails from Claude, specifically restrictions related to autonomous weapons applications and large-scale domestic surveillance uses that the company considers incompatible with its safety commitments. For the Pentagon, those limits have become flashpoints as planners push for more flexible AI behavior in contested environments.

Defense Secretary Pete Hegseth escalated the pressure publicly, warning Anthropic to let the military adapt its models as the Pentagon sees fit, and officials, according to the reporting, discussed the Defense Production Act as a possible tool to compel deeper cooperation. Anthropic has indicated it will challenge the designation in court, setting up a legal confrontation with no clear precedent in the commercial AI sector. The standoff reveals a tension that most coverage of military AI has treated as theoretical: what happens when a vendor’s safety policies directly conflict with wartime operational demands, and the government decides those policies are obstacles rather than features. For Anthropic, the fight is about keeping Claude from being repurposed into fully autonomous weapons or pervasive domestic monitoring; for the Pentagon, it is about ensuring that critical battlefield software is not constrained by a private company’s ethics charter.

AI at War Speed Changes the Calculus

The 1,000-target opening salvo is not just a military statistic. It signals a shift in how quickly the United States can initiate and sustain large-scale strikes, with direct consequences for adversary planning, civilian risk assessment, and arms-control frameworks that were designed around human decision timelines. When AI compresses the targeting cycle this dramatically, the window for diplomatic intervention, target verification, and collateral damage review shrinks in proportion. Commanders can cycle through potential targets so rapidly that political leaders may have less time to weigh escalation risks, while independent observers struggle to reconstruct how specific sites were selected. No public data on AI accuracy rates or post-strike assessments from the Iran campaign has been released, leaving a significant gap in accountability for a system operating at this scale.

The Pentagon’s growing dependence on AI as a tool to accelerate operations in Iran also sets a precedent that other nations will study and attempt to replicate. China, Russia, and regional powers have their own military AI programs in various stages of development, and the demonstrated effectiveness of Maven with Claude integrated gives those efforts a concrete benchmark to match or exceed. Most analysis of military AI has focused on whether the technology works at all under battlefield conditions. The Iran campaign offers an answer: with enough data and compute, AI can dramatically increase strike tempo. The harder question, which the Anthropic blacklisting fight now forces into the open, is who gets to decide the limits on how far that acceleration should go, and what safeguards must be in place when commercial chatbots become part of the machinery of war.

The Next Phase of the AI–Pentagon Conflict

As the Iran operation continues, the immediate tactical benefits of AI-enabled targeting are colliding with longer-term strategic and legal concerns. Internally, defense officials are debating whether Claude’s role should expand from recommending targets to orchestrating broader campaign planning, such as sequencing strikes across multiple theaters or dynamically reallocating assets based on real-time sensor data. Externally, lawmakers and civil society groups are pressing for clearer rules on when and how systems like Maven can be deployed, especially in densely populated areas where the margin for error is thin. The absence of transparent metrics on false positives, misidentifications, or near-miss incidents makes it difficult to evaluate whether the speed gains justify the potential risks to civilians and regional stability.

For Anthropic and other AI firms watching the confrontation unfold, the stakes go beyond one contract or one conflict. The outcome of the blacklist fight will shape how much leverage U.S. technology companies have to enforce their own safety standards once their products are embedded in critical national-security infrastructure. If the government prevails in compelling looser guardrails, it could normalize a model in which military requirements routinely override corporate policies on autonomy and surveillance. If Anthropic succeeds in defending its constraints, it may establish a precedent for negotiated limits on how far battlefield AI can evolve. Either way, the Iran campaign has ensured that the debate over AI in warfare is no longer hypothetical: it is playing out in real time, with Claude already inside the kill chain and both the Pentagon and its critics racing to define what comes next.

*This article was researched with the help of AI, with human editors creating the final content.