Anthropic and the U.S. Department of Defense are locked in a public standoff over whether the military can use the company’s Claude AI system without restrictions, including in scenarios involving nuclear threats. The dispute, which has escalated over several days, has drawn congressional intervention, a presidential threat to cut Anthropic off from government contracts entirely, and sharp questions about where the line falls between national security and the ethics of autonomous AI in warfare.
A Nuclear Drill Ignites a Contract Fight
The friction between Anthropic and the Pentagon centers on a contractual clause that would grant the Defense Department authorization to use Claude for “all lawful purposes,” according to Washington Post reporting. Anthropic, which built its brand around AI safety, pushed back and requested limits on how Claude could be deployed in defense settings. The breaking point reportedly involved a hypothetical nuclear attack scenario in which military officials wanted to test whether Claude could assist in shooting down an incoming threat. That simulation, and the broader question of whether an AI company can dictate how a paying government client uses its product, turned a procurement disagreement into a national security flashpoint.
Pentagon spokesperson Sean Parnell disputed characterizations that the Defense Department intended to use Claude for autonomous weapons systems or domestic surveillance, according to additional coverage of the dispute. But the gap between the two sides proved too wide to bridge through negotiation alone. The DoD’s insistence on unrestricted access collided directly with Anthropic’s internal safety policies, and neither party showed signs of yielding before the conflict spilled into Congress and the White House. What began as a disagreement over contract language has become an early test of whether AI developers will be allowed to enforce their own red lines when dealing with the national security establishment.
Senators Accuse Hegseth of Intimidation
Senators Chris Van Hollen and Ed Markey responded to the standoff with a pointed letter to Defense Secretary Pete Hegseth, demanding he halt what they called a pressure campaign against Anthropic for refusing to enable mass surveillance and autonomous warfare. The senators set a deadline of 5:00 p.m. on Friday, February 27, for Hegseth to respond. Their letter framed the dispute in concrete contractual terms: the DoD demanded “all lawful purposes” authorization, Anthropic requested limits, and the Pentagon threatened cancellation when the company would not comply. By casting the disagreement as a matter of coercive leverage rather than routine bargaining, Van Hollen and Markey positioned Anthropic as a test case for whether ethical constraints can survive in defense procurement.
The senators also raised concerns about the legal framework governing this kind of procurement dispute. Federal law under 10 U.S.C. Section 3252 allows the government to restrict procurement from suppliers deemed a supply chain risk. Van Hollen and Markey’s letter suggested that invoking or threatening such a designation against a company for maintaining safety guardrails would represent a dangerous precedent, one that could chill any AI vendor from setting ethical boundaries on military contracts. If companies fear being blacklisted for refusing certain use cases, the senators argued, the market will favor firms willing to offer unqualified access to powerful AI systems, regardless of the long-term risks.
Trump Escalates With a Government-Wide Cutoff
President Trump raised the stakes by stating the government would no longer use Anthropic’s AI, according to reporting from the New York Times. That declaration directly contradicts the terms of a government-wide procurement agreement announced months earlier. The General Services Administration had struck a OneGov deal with Anthropic to make its Claude for Enterprise and Claude for Government offerings available across all federal branches for just $1, as described in the GSA’s own announcement of the agreement. That deal highlighted compliance with FedRAMP High security standards and was designed to streamline AI adoption across the government through existing contract vehicles and shared services.
The contradiction is striking. One arm of the federal government built a procurement pipeline to distribute Anthropic’s AI as widely as possible, while another now seeks to sever the relationship entirely because the company will not remove its safety restrictions. If the OneGov deal is canceled or frozen, the disruption would ripple across agencies that had already begun integrating Claude into their workflows. Federal buyers accustomed to searching opportunities on the SAM.gov portal or relying on centralized tools like the Vendor Support Center could suddenly find one of their flagship AI options politically radioactive, forcing rewrites of solicitations and delaying modernization projects that assumed Claude would be available.
What the Pentagon’s Position Actually Means
The Defense Department’s framing of “all lawful purposes” sounds reasonable in isolation. Military procurement contracts routinely include broad usage rights, and the Pentagon argues its requests align with legitimate defense needs rather than the surveillance and lethal autonomy scenarios critics describe. Parnell’s on-the-record pushback against those characterizations suggests the DoD views the dispute as a standard contract negotiation that Anthropic and its congressional allies have inflated into a political spectacle. From this perspective, allowing a vendor to dictate operational limits could create a patchwork of incompatible tools and constrain commanders in a crisis, including in nuclear or missile-defense scenarios where every second counts.
Anthropic’s position, however, underscores that “lawful” does not always equate to “safe” or “wise” when dealing with rapidly evolving AI systems. The company has argued in public and private that guardrails against autonomous targeting, mass biometric surveillance, and certain forms of disinformation are core to its brand and technical design, not optional features that can be toggled off for a key customer. The standoff with the Pentagon therefore raises a deeper question about whether AI providers will be treated more like traditional defense contractors, expected to adapt their products to mission requirements, or more like publishers and platforms that retain some say over how their technology is used, even after a sale.
Procurement Power, Ethics, and the Future of Military AI
Behind the legal citations and contract clauses lies the raw power of the federal procurement system. Agencies that want to buy AI tools typically work through established frameworks and guidance collected on sites like Acquisition.gov, which aggregate regulations and best practices for contracting officers. Small and midsize firms often depend on standardized instruments and templates that assume vendors will accept broad usage rights, making it difficult for a company like Anthropic to insist on bespoke ethical carve-outs without risking disqualification. If the Pentagon succeeds in punishing Anthropic for its stance, other AI startups may conclude that the only viable way to win large contracts is to remain silent on contested uses such as autonomous weapons.
The shockwaves could extend beyond headline-grabbing defense deals. Federal vendors and small businesses that rely on Small Business Administration contracting programs or sell through platforms like GSA auction channels often build offerings on top of commercial AI services. If Anthropic is effectively blacklisted, these integrators will have to reassess whether using Claude exposes them to reputational or contractual risk, potentially rewriting their technical stacks around competitors more willing to meet “all lawful purposes” demands. In that sense, the Anthropic–Pentagon clash is not just a fight over one company’s principles but an early indicator of how much leverage governments will wield over the ethical boundaries of frontier AI.
*This article was researched with the help of AI, with human editors creating the final content.*