The Pentagon and some of America’s most prominent artificial intelligence companies are locked in an escalating dispute over how far military applications of their technology should go. Defense Secretary Pete Hegseth has directly pressured Anthropic to remove restrictions on how the military uses its AI models, while the White House has moved to cut the company off from federal contracts entirely. The conflict is forcing the AI industry to confront a question it has long tried to defer: whether the companies building the most powerful AI systems get any say in how those systems are used in war.
Hegseth’s Ultimatum to Anthropic
The confrontation between the Pentagon and Anthropic has grown unusually personal. According to the Associated Press, Hegseth personally warned Anthropic’s chief executive and set a deadline for the company to permit unrestricted military use of its AI tools. Officials also discussed designating Anthropic as a supply-chain risk and even explored invoking the Defense Production Act to compel cooperation, per the same AP reporting. These are not routine procurement negotiations. The Defense Production Act, enacted in 1950 during the Korean War, is typically invoked in national emergencies, and wielding it against a domestic AI firm would be a dramatic escalation of state power over a single technology vendor.
The pressure campaign intensified when, according to a separate AP account, Trump ordered federal agencies to halt their use of Anthropic products, casting the move as a response to the company’s safety rules. Anthropic has said its red lines include enabling mass domestic surveillance and fully autonomous weapons, and it has signaled that it would challenge any supply-chain risk designation in court. The result is a paradox: the government is threatening to compel Anthropic into deeper military work even as it moves to bar the firm from federal contracts, turning a policy disagreement over AI safeguards into a constitutional and commercial showdown.
Anthropic’s Defense Work Was Already Expanding
What makes the confrontation so striking is that Anthropic was not refusing all defense work. The company had already partnered with Palantir to make its Claude 3 and Claude 3.5 families of models available on Amazon Web Services for a range of U.S. intelligence and military workflows. The intended tasks included high-volume data operations, pattern identification, and document review, according to the joint announcement. These are the kinds of back-office and analytic roles that AI companies often describe as “non-lethal enablement,” several steps removed from the use of force on the battlefield.
That limited engagement had already translated into significant real-world use. As the New York Times reported, Claude has become a widely used tool inside the Pentagon for collecting and processing intelligence. Analysts have leaned on the system to sift through vast troves of documents and communications, accelerating work that once required large teams of people. Anthropic was therefore not an outsider to the defense establishment; it was already helping power key parts of the national security bureaucracy while trying to draw a firm line against direct involvement in autonomous targeting or mass domestic monitoring.
OpenAI Took a Different Path
While Anthropic drew boundaries, OpenAI moved in the opposite direction. The company quietly removed a longstanding blanket ban on military use of its AI tools and began working with the Pentagon on cybersecurity projects and on tools aimed at preventing veteran suicide, as Bloomberg first detailed. That policy shift cleared the way for a far more ambitious relationship. The Pentagon later announced that it would integrate ChatGPT into its GenAI.mil environment, describing the collaboration with OpenAI as a key part of both an internal acceleration push and the broader White House-backed generative AI rollout across defense systems.
OpenAI’s trajectory illustrates how quickly the competitive landscape for defense AI contracts can change. A company that once positioned itself as wary of military applications now sits at the center of the Pentagon’s generative AI infrastructure, with its tools embedded in everything from training pipelines to operational planning aids. The contrast with Anthropic is stark: both firms emerged from similar research circles and espouse overlapping safety concerns, yet one has chosen to accept military work without public red lines while the other insists on explicit limits. For now, OpenAI’s more flexible stance has spared it from the kind of direct political confrontation Anthropic faces, but it has also left outside observers with few details about what substantive guardrails, if any, govern how its models are used in classified or combat-adjacent settings.
The Pentagon’s Autonomous Systems Push
The urgency behind these corporate negotiations becomes clearer when set against the Pentagon’s broader investment in autonomous warfare. Then-Deputy Defense Secretary Kathleen Hicks unveiled the first tranche of the Replicator initiative, a program aimed at fielding large numbers of “attritable” drones and robotic platforms across every domain, backed by roughly half a billion dollars in first-year funding. In Pentagon jargon, “attritable” means cheap enough to be lost in combat, signaling a shift toward disposable, swarming systems whose effectiveness depends on automation and scale rather than the survivability of any single platform.
For AI companies, Replicator is precisely the kind of initiative that blurs the line between decision-support tools and weapons. Autonomous systems that are designed to be expendable in combat sit uncomfortably close to the “fully autonomous weapons” category that Anthropic has flagged as unacceptable. The Defense Department has tried to reassure critics by updating its policy on autonomy in weapon systems, codified in a revised DoD Directive 3000.09 that emphasizes human judgment and layered oversight, as described in an official announcement on weapon autonomy rules. Yet those internal safeguards do little to resolve the core question for outside technology providers: whether contributing models or software to such programs makes them complicit in deploying machines that can select and engage targets with minimal human intervention.
A Test Case for AI Governance in War
The clash over Anthropic has quickly become a test case for how much control AI developers can retain over the downstream uses of their systems once they enter the national security arena. On one side, Pentagon leaders argue that battlefield requirements and strategic competition demand maximum flexibility, and that private companies should not be allowed to unilaterally veto categories of use that elected officials and military commanders deem necessary. On the other, Anthropic and like-minded firms maintain that they have both a moral obligation and a business imperative to prevent their general-purpose models from being turned into engines of unchecked surveillance or fully autonomous lethality, even if that stance costs them lucrative contracts.
How this standoff is resolved will reverberate far beyond a single vendor. If the government succeeds in labeling Anthropic a supply-chain risk or uses emergency economic powers to override its policies, other AI companies may conclude that the only viable path is quiet acquiescence, following the OpenAI model of engagement without public red lines. If, instead, Anthropic prevails in court or negotiates a compromise that preserves meaningful usage restrictions, it could establish a precedent for what “responsible” participation in defense work looks like. Either outcome will shape not just the future of U.S. military AI, but also the global norms that govern when and how private actors can say no to the militarization of their most powerful technologies.
*This article was researched with the help of AI, with human editors creating the final content.