Morning Overview

US military relied on Claude AI for Iran strike planning despite ban

The U.S. military used Anthropic’s Claude AI to help plan strikes against Iran, just hours after the Trump administration moved to ban the company’s technology from defense operations, according to reporting by the Wall Street Journal. The episode exposed a sharp conflict between Pentagon leadership and one of the most prominent AI safety companies over how artificial intelligence should be used in lethal military operations. What followed was a rapid escalation that entangled a $200 million contract, corporate ethics commitments, and the future of AI in warfare.

Claude Entered the Pentagon Through Palantir

The technical pipeline that brought Claude into classified military environments was established well before the current dispute. In November 2024, Anthropic, Palantir, and Amazon Web Services announced a partnership to bring Claude AI models into U.S. government intelligence and defense operations. Under that arrangement, Claude was operationalized within Palantir’s AI Platform (AIP) on AWS, including environments accredited at DISA Impact Level 6, an accreditation level for systems that handle classified national security information up to the Secret level. That integration gave military planners direct access to one of the most capable large language models available, embedded inside tools they were already using for operational planning, as described in a Business Wire release.

The depth of that integration became clear when Claude was used via Palantir in planning and preparation for a raid targeting Venezuelan President Nicolas Maduro, according to reporting in the Washington Post. That deadly operation preceded the broader power struggle between Anthropic and Pentagon leadership, but it demonstrated how thoroughly AI tools had already been woven into the military’s decision-making chain. For readers unfamiliar with how defense AI contracts work, the key point is this: once a model like Claude is deployed inside a classified platform at this level, removing it is not as simple as flipping a switch. The technology becomes part of the operational fabric, which is exactly what made the subsequent ban so difficult to enforce.

The Pentagon Demanded Full Access, Anthropic Refused

Tensions between the Defense Department and Anthropic had been building for months over the terms governing how Claude could be used. The Pentagon pushed for authorization covering “all lawful purposes,” a phrase that would effectively strip away the safety guardrails Anthropic had built into its models. Anthropic drew a line. The company stated publicly that it “cannot in good conscience” allow the Pentagon to remove AI checks from Claude, framing its position as a matter of core safety principles rather than negotiable contract language. Defense Secretary Pete Hegseth responded by threatening to cancel a $200 million contract, turning a policy disagreement into a direct financial confrontation and signaling that the administration was prepared to walk away from a cutting-edge capability rather than accept corporate limits on its use.

This was not an abstract argument about AI ethics. Anthropic had designed Claude with specific restrictions on how the model could participate in decisions involving violence or lethal force, including guardrails intended to prevent the system from directly recommending targets or optimizing for casualty counts. The Pentagon viewed those restrictions as obstacles to operational effectiveness and as an unacceptable precedent in which a private vendor could effectively veto certain categories of military action. Hegseth’s position reflected a broader view within the administration that AI companies should not be able to dictate the terms under which the military uses commercially available technology, particularly during active military operations. The standoff forced both sides into positions that left little room for compromise: either Anthropic relaxed its controls or the Pentagon severed ties and sought alternatives.

A Ban That Did Not Hold

The dispute reached a breaking point when, according to the Wall Street Journal, a deadline passed for Anthropic to agree to terms on how its tools could be used, and Hegseth formally moved to end the relationship with the company. The Journal characterized the split as driven by a “fight about vibes,” a phrase that captured the cultural gulf between a Silicon Valley AI safety firm and a defense establishment eager to deploy every available tool. President Trump formalized the ban on a Friday, issuing an order that barred Anthropic’s technology from defense operations, a move intended to send a message to other AI vendors that the Pentagon would not tolerate contractual limits it regarded as operationally constraining.

Yet hours later, U.S. strikes in the Middle East targeting Iran still relied on Anthropic’s technology, according to a Wall Street Journal live report on the Iran campaign. Months of feuding over how the Pentagon could use the models had preceded the order, but the ban did not immediately sever the connection. This created a contradiction at the heart of U.S. military operations: a tool the president had just prohibited was actively supporting lethal strikes. The most likely explanation is that Claude’s deep integration into Palantir’s classified systems meant that removing it from active operational workflows required time the military did not have in the middle of a live campaign, especially if alternative models needed testing and validation before being trusted with mission-critical analysis.

What This Reveals About AI in Warfare

The dominant assumption in much of the coverage of AI in defense is that governments hold ultimate control over which technologies they deploy. This episode challenges that assumption in a specific and troubling way. Once an AI model is embedded inside classified infrastructure at the IL6 level, operating within a platform like Palantir’s AIP that military planners depend on daily, the practical ability to enforce a ban erodes rapidly. The technology becomes load-bearing. Ripping it out mid-operation carries its own risks, potentially degrading the quality of intelligence analysis or planning at a moment when accuracy matters most. In that setting, operational commanders may treat political directives as aspirational timelines rather than immediate orders, especially when lives and high-stakes missions are on the line.

That dynamic creates a shadow ecosystem where pre-integrated AI tools give the military a degree of operational autonomy from corporate oversight. Anthropic can declare its ethical red lines, and the White House can announce a ban, but the combination of legacy contracts, technical dependencies, and the urgency of ongoing operations can keep the system running anyway. The Iran strikes highlight how quickly formal policy can lag behind reality once AI is woven into the day-to-day fabric of targeting, logistics, and battlefield analysis. They also underscore a deeper structural tension: as militaries become more reliant on complex commercial AI stacks, the line between vendor policy, executive authority, and on-the-ground necessity grows harder to draw, and even harder to enforce in the heat of war.


*This article was researched with the help of AI, with human editors creating the final content.