Pentagon tapped Anthropic’s Claude during Iran strikes hours after Trump ban

The U.S. military relied on Anthropic’s Claude AI for intelligence analysis and target selection support during the February 28, 2026, strikes against Iran, just hours after the Trump administration moved to ban the technology over a dispute about ethical safeguards. The episode exposed a sharp contradiction at the center of Pentagon AI policy: the same tool the Defense Department was threatening to sever ties with was actively embedded in live combat operations. What followed has become one of the most consequential confrontations between a private AI company and the national security establishment.

Claude in the Kill Chain on February 28

On the evening of February 28, 2026, as U.S. and Israeli forces launched a coordinated bombardment of Iranian targets, Anthropic’s Claude was running inside defense workflows. According to live updates from the Wall Street Journal, the AI system was used for intelligence processing and target selection support during the strikes, even though the Trump administration had issued its ban just hours earlier. Follow-up reporting in the British press described Claude assisting with scenario analysis and red-teaming of potential strike packages, underscoring that the model was not a peripheral tool but part of the decision-support fabric surrounding the operation.

The timing raises a basic question about how military technology bans actually function in practice. A prohibition issued at the policy level does not instantly sever software already integrated into classified networks. The fact that Claude remained operational during an active strike suggests that Pentagon procurement and IT systems lack a kill switch for AI tools once they are embedded in secure environments. That gap between policy announcement and technical enforcement is not merely bureaucratic. It means decisions about which AI tools participate in lethal operations can lag behind political directives by hours or even days, creating a gray zone where banned technology continues to shape life-and-death targeting.
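To make that enforcement gap concrete, here is a minimal sketch of what such a policy gate could look like. Every name, date, and function in it is hypothetical; nothing here reflects actual Pentagon systems. The point it illustrates is that a directive recorded in a registry only binds the workflows that actually consult it.

```python
# Hypothetical sketch of a centralized policy gate for AI model access.
# All names and dates are invented for illustration.

from datetime import datetime, timezone

# A directive "takes effect" the moment it is recorded here ...
BANNED_MODELS = {
    "example-model": datetime(2026, 2, 28, 12, 0, tzinfo=timezone.utc),
}

def is_model_authorized(model_id: str, now: datetime | None = None) -> bool:
    """Return False once a ban on model_id has taken effect."""
    now = now or datetime.now(timezone.utc)
    banned_at = BANNED_MODELS.get(model_id)
    return banned_at is None or now < banned_at

def route_request(model_id: str, prompt: str) -> str:
    # ... but enforcement only happens if every workflow routes through
    # this gate. Software wired directly to a model endpoint never hits
    # it, which is the gray zone described above.
    if not is_model_authorized(model_id):
        raise PermissionError(f"{model_id} is barred by current policy")
    return f"[would forward prompt to {model_id}]"
```

Absent a gate like this sitting in front of every integration, a ban is a memo, not a mechanism.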

How Claude Reached Classified Networks

Claude’s presence inside defense systems was not accidental. In April 2025, Anthropic joined Palantir’s FedStart initiative, a move detailed in a Business Wire release describing how the program provides a pathway for deploying AI models in government environments that meet FedRAMP High and DoD IL5 standards. Those certifications sit among the most stringent tiers of federal cloud security, designed for controlled unclassified information and sensitive national security workloads. Through FedStart, Claude was integrated into defense workflows well before the political dispute over its use escalated, giving it a foothold in data pipelines and analytic dashboards used by combatant commands.

That infrastructure pipeline matters because it shows the Pentagon was not improvising when it turned to Claude during the Iran strikes. The system had been deliberately onboarded through an established government cloud program, stress-tested against federal security benchmarks, and woven into operational tools months in advance. By the time the ban arrived, Claude was not a pilot project in a lab. It was load-bearing software inside the military’s analytical stack, which helps explain why it was not simply switched off when the political winds shifted. Disentangling a model from these environments requires code changes, security reviews, and operator retraining, steps that do not fit neatly into the tempo of crisis decision-making.
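A rough illustration of that coupling, using invented names rather than any real defense codebase: when every tool constructs the vendor client directly, removing the model means editing, re-reviewing, and re-accrediting each call site, whereas an abstraction seam concentrates the change in one place.

```python
# Hypothetical illustration only; no real defense code is shown here.
from typing import Protocol

# Tightly coupled: each analytic tool builds the vendor client itself,
# so removing the model means touching every call site.
class VendorClient:  # stand-in for a vendor SDK, name illustrative only
    def complete(self, prompt: str) -> str:
        return f"[vendor response to: {prompt}]"

def summarize_report(report: str) -> str:
    return VendorClient().complete(f"Summarize: {report}")  # call site 1

def rank_options(items: list[str]) -> str:
    return VendorClient().complete(f"Rank: {items}")        # call site 2

# Loosely coupled: one seam where any conforming model can be swapped in.
class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

def summarize_report_v2(model: TextModel, report: str) -> str:
    # Swapping providers now touches one injection point, not N call
    # sites, though security reviews and operator retraining remain.
    return model.complete(f"Summarize: {report}")
```

Load-bearing integrations in production systems tend to look like the first pattern, which is part of why “just switch it off” is rarely an option mid-crisis.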

Pentagon Threats and the Defense Production Act

The ban itself grew out of an escalating pressure campaign. Reporting from the Washington Post describes how the Pentagon demanded broader access to Anthropic’s AI technology and backed that demand with explicit threats: canceling existing contracts, designating Anthropic a supply-chain risk, and invoking emergency authorities to compel compliance. The central demand was that Anthropic relax safeguards limiting Claude’s use in mass surveillance and autonomous weapons applications, effectively giving the department more control over how the model could be fine-tuned and deployed.

Legal experts quoted in that reporting expressed skepticism about whether the Defense Production Act and related authorities could actually be used to force a private company to remove ethical constraints from its software. The relevant supply-chain statute, 10 U.S.C. § 3252, was written to address foreign-manufactured components and compromised vendors, not to override a domestic company’s product safety decisions. Stretching it to cover AI guardrails would set a precedent with no clear legal foundation, effectively allowing the Pentagon to dictate the internal design choices of any technology firm it contracts with. That prospect alarms civil libertarians and some industry executives, who see it as a path toward government-directed redesign of consumer technologies whenever national security is invoked.

Senators Accuse Hegseth of Coercion

The dispute drew direct congressional intervention. Senators Chris Van Hollen and Ed Markey issued a formal demand that Defense Secretary Pete Hegseth halt what they characterized as a coercive campaign against Anthropic. In their letter, which is summarized on the Senate website, the lawmakers recounted specific threats the Department of Defense allegedly made: canceling contracts, labeling Anthropic a supply-chain risk, and leveraging emergency powers, all conditioned on the company dropping its safeguards by a fixed deadline. They framed these actions as an attempt to strong-arm a private entity into enabling mass surveillance and lethal autonomous systems.

The senators’ framing cast the conflict not as a routine procurement disagreement but as a test of whether the executive branch can coerce private companies into building tools for dragnet surveillance and automated warfare. That reframing carries weight because it shifts the debate from contract law to constitutional territory, touching on questions about government compulsion of speech and product design. If the Pentagon can threaten a company into removing safety features from AI software, the same logic could apply to any technology vendor that embeds restrictions the military finds inconvenient—from encryption defaults to content-moderation policies. Civil society groups have warned that such leverage could chill corporate efforts to build in ethical constraints, lest they be treated as obstacles to national security rather than safeguards.

What the Iran Episode Reveals About AI Governance

Most coverage of the Anthropic dispute has focused on the company’s resistance and the Pentagon’s demands as a bilateral negotiation. That lens misses the deeper structural problem the Iran strikes exposed. The real issue is not whether Anthropic should or should not work with the military. It is that the U.S. lacks clear, enforceable rules for how AI systems can be integrated into what military planners call the “kill chain” and how those systems should be governed once they are embedded. As a detailed Wall Street Journal analysis notes, the fight over Claude is ultimately about who gets to decide the boundaries of acceptable risk in national security AI: elected officials, defense bureaucracies, or private firms that design the models.

The Iran episode shows that, in practice, AI governance is being improvised through contract clauses, ad hoc threats, and back-channel negotiations rather than codified law. Claude’s continued use after the ban illustrates how policy announcements can lag behind technical reality, while the pressure campaign against Anthropic highlights how much leverage the Pentagon can exert over suppliers without ever going to court or Congress. Absent statutory limits, these dynamics will likely repeat as new models enter defense workflows. That is why some policy analysts argue for binding rules on AI use in targeting and surveillance, akin to arms control regimes, rather than relying on the informal norms and internal ethics boards that currently dominate the field.

The Role of Public Scrutiny and Civil Society

The standoff has also underscored the importance of independent journalism and civil society in surfacing the stakes of military AI adoption. Outlets that invested early in technology and national security reporting have been central to revealing how Claude was used in the Iran strikes and how the Pentagon sought to reshape its safeguards. That kind of scrutiny requires sustained resources, and newsrooms increasingly depend on subscription and membership revenue to fund investigative work on AI and defense. Without sustained coverage, many of these negotiations would remain opaque, leaving the public with little insight into how lethal technologies are governed.

At the same time, the episode has galvanized advocacy groups and individual citizens to engage more directly with AI policy. Open letters, digital rights organizations, and membership-funded watchdog groups are part of a broader push to build a constituency for responsible AI that can counterbalance state pressure on technology firms. As AI systems like Claude move deeper into national security infrastructure, the ability of journalists, lawmakers, and the public to monitor and contest those deployments may prove as important as any individual company’s ethics policy.

*This article was researched with the help of AI, with human editors creating the final content.