Anthropic’s Claude AI is actively powering targeting operations in the U.S. military campaign against Iran through Palantir’s Maven Smart System, according to reporting from The Washington Post, even as the Trump administration has formally ordered federal agencies to stop using Anthropic technology. The contradiction between battlefield reliance on Claude and a government blacklist of its maker has drawn congressional scrutiny and raised hard questions about who controls AI in wartime. The dispute traces back to Anthropic’s refusal to lift safety restrictions the Pentagon wanted removed, setting off a chain of escalation that now runs from classified intelligence environments to the Senate floor.
How Claude Reached the Battlefield
The technical pathway connecting Anthropic’s AI to military operations was established well before the current feud. In late 2024, Anthropic and Palantir announced a partnership to bring Claude models onto Amazon’s classified cloud, with Palantir becoming the first commercial partner to deploy the system inside Impact Level 6 environments on AWS, as detailed in a joint corporate release. That integration embedded Claude in the existing defense software stack months before the political confrontation erupted, meaning the model is called from within Palantir’s analytic platforms rather than running as a standalone tool that Anthropic could easily switch off.
The Department of Defense separately awarded Palantir USG Inc. a long-term prototype contract worth up to $480 million to build out the Maven Smart System, a next-generation intelligence platform scheduled to run through 2029, according to the Pentagon’s contract announcement. Maven is designed to ingest vast streams of sensor data and quickly surface patterns for human analysts, and with Claude already integrated into Palantir’s classified offerings, the model had a direct pathway into Maven’s operational architecture. That channel, The Washington Post reports, is now being used to accelerate targeting and threat prioritization in the Iran campaign by processing classified intelligence feeds and proposing ranked lists of potential targets for review.
The Blacklist That Did Not Stick
The political timeline is as striking as the technical one. In late February 2026, the Pentagon delivered an ultimatum demanding that Anthropic allow broader, less restricted use of Claude across defense applications. Anthropic refused, declining to remove its prohibitions on mass domestic surveillance and fully autonomous weapons, a stance described in a Washington Post account of the negotiations. Within days, the Trump administration escalated by designating Anthropic a supply-chain risk and labeling it a national security concern, signaling to contractors and agencies that continued dependence on the company could carry political and legal consequences.
That escalation culminated in a directive ordering federal agencies to halt use of Anthropic technology, an instruction that followed the administration’s decision to formally brand the firm a threat to national security, as reported in another Post story. Yet the blacklist appears to have had little practical effect on the most consequential use of Claude in government. The Iran campaign’s Claude-powered targeting tools remained active inside Maven’s classified systems, suggesting either that the Pentagon could not quickly disentangle the AI from its operational stack or that senior officials quietly chose not to. For a government that has publicly labeled Anthropic a security risk, continuing to rely on its software for strike decisions represents a glaring contradiction that no official has yet fully explained.
Anthropic’s Red Lines and the DPA Threat
At the core of the conflict is not a blanket objection to military work but a dispute over limits and control. Anthropic has signaled willingness to support defense and intelligence missions by making Claude available through Palantir’s government platforms, but it has insisted on hard boundaries around particularly sensitive uses. Those red lines include enabling mass surveillance of domestic populations and powering fully autonomous lethal weapons systems, categories the company argues pose unacceptable risks if left solely to military discretion. When Pentagon officials pressed to relax those safeguards in order to expand how Claude could be used, Anthropic declined, triggering the ultimatum and subsequent retaliation.
The backlash in Congress has focused less on the blacklist itself than on the tools the Defense Department reportedly considered to force compliance. Senators Chris Van Hollen of Maryland and Ed Markey of Massachusetts sent a public letter to Defense Secretary Pete Hegseth urging him to halt what they described as a pressure campaign to coerce Anthropic into dropping its guardrails, as laid out in the senators’ formal complaint. In particular, they warned against threats to invoke the Defense Production Act to override the company’s policies. Using the DPA, a Cold War-era law typically reserved for securing access to critical materials and industrial capacity, to compel an AI firm to loosen safety constraints would be unprecedented. The senators framed such a move as an attempt to conscript private-sector ethics into military priorities, and so far neither Hegseth nor the Pentagon has publicly addressed their specific concerns.
Selective Enforcement as Operational Doctrine
The most unsettling aspect of the Anthropic episode is the pattern it suggests about how AI rules are applied in practice. On paper, the Pentagon and the White House have declared Anthropic a risk and ordered its technology out of federal systems. In reality, the same AI remains woven into one of the most sensitive operations the U.S. is currently conducting, helping sort, label, and prioritize potential targets in an active conflict. That divergence between formal policy and operational behavior looks less like bureaucratic confusion than like deliberate selective enforcement, in which blacklists and risk designations function as bargaining chips while mission-critical systems keep running unchanged in the background.
This dynamic has immediate implications for other AI firms weighing whether to work with national security agencies. The implicit message is that insisting on safety constraints can trigger public punishment while leaving the technology itself locked inside classified environments that the original developers cannot see or audit. Anthropic has no direct visibility into how Claude is being used within Maven, and the government has not offered any independent oversight to bridge that gap. As one academic analysis of AI in war and surveillance notes, systems like Claude can be tasked to rapidly sort intelligence, flag priority threats, and recommend strikes, even when their creators intend them only as decision-support tools. That capability raises urgent questions about who ultimately sets and enforces the limits on their use once they are deployed in secret.
Who Controls AI in Wartime?
The unresolved tension in this case is about authority: whether the companies that build powerful AI systems retain any meaningful say over how they are used once they enter the defense ecosystem. By embedding Claude into Palantir’s infrastructure and then into Maven, the Pentagon effectively moved the system behind a classified curtain, turning it into a component of a larger weapons-adjacent platform. From that vantage point, Anthropic’s policy decisions (its red lines on surveillance and autonomy) become advisory at best. The government can denounce the company, threaten legal compulsion, and still quietly depend on its code so long as it resides on government-controlled servers and within contractor-run systems that Anthropic cannot access or shut down.
For lawmakers and the public, the combination of a national security blacklist and ongoing operational reliance on the same technology exposes a gap in current oversight frameworks. Export controls, procurement rules, and supply-chain risk designations are designed to manage who the government buys from and under what conditions, not to resolve ethical disputes once a system is already embedded in classified infrastructure. Absent new mechanisms that tie continued use of AI tools to compliance with negotiated safety standards, and that provide independent verification inside secret programs, the Anthropic-Maven episode may become a template: public condemnation, private dependence, and a widening disconnect between the values AI companies claim to uphold and the realities of how their systems are used in war.
*This article was researched with the help of AI, with human editors creating the final content.