Defense Secretary Pete Hegseth summoned Anthropic’s chief executive to Washington this week and delivered a blunt ultimatum: open the company’s AI models to unrestricted military use by Friday or face severe consequences. The closed-door meeting, confirmed by people familiar with the talks, has triggered the most direct confrontation yet between the Pentagon and a leading AI company over who controls the ethical boundaries of artificial intelligence in warfare.
A Friday Deadline and Three Threats
Hegseth called Anthropic CEO Dario Amodei to the Pentagon on Tuesday and laid out a tight timeline. According to officials briefed on the session, Hegseth gave Anthropic until Friday to drop its internal safeguards limiting how the military can deploy its Claude AI system. The defense secretary did not frame the conversation as a negotiation. He presented it as a compliance demand, according to multiple accounts of the exchange.
The coercive tools Hegseth reportedly put on the table were specific and escalating. He threatened to cancel the company’s existing defense contract, designate Anthropic a “supply chain risk” that would effectively blacklist it from future government work, and invoke the Defense Production Act to compel access to the technology. That last option, if exercised, would represent an extraordinary use of wartime production authority against a private AI firm. Legal experts cited by the Washington Post have expressed skepticism about whether the Defense Production Act can actually override a company’s self-imposed ethical constraints, but the threat alone signals how far Hegseth is willing to push.
Anthropic’s Red Lines on Weapons and Surveillance
The dispute centers on two specific restrictions Anthropic has maintained since entering the defense market. The company has refused to allow Claude to be used for building lethal autonomous weapons or for domestic surveillance operations. These are not vague corporate platitudes; they are contractual conditions Anthropic has embedded into its agreements with defense and intelligence partners. Hegseth views them as unacceptable barriers that prevent military commanders from fully exploiting AI capabilities at a time of rising global competition.
Anthropic’s position reflects a calculation that some applications of its technology carry risks too severe for a private company to absorb. People close to the ongoing discussions told the Washington Post that the company has deep reservations about how its technology could be misused once those guardrails are removed. The concern is not hypothetical: once an AI model is deployed without use restrictions in a classified environment, the company loses visibility into how it is applied. That loss of oversight is precisely what Anthropic’s leadership has tried to prevent, and precisely what Hegseth is demanding.
Millions in Contracts Already at Stake
The confrontation did not emerge from thin air. Anthropic already has significant financial ties to the defense establishment. The Pentagon’s Chief Digital and Artificial Intelligence Office awarded contracts to Anthropic, Google, OpenAI, and xAI, each carrying a $200 million ceiling, to develop agentic AI workflows across national security missions. Separately, Anthropic partnered with Palantir to deploy Claude models on Amazon Web Services infrastructure accredited at DISA Impact Level 6 for classified intelligence and defense operations. The company is not an outsider resisting military work; it is already inside the defense supply chain, which makes Hegseth’s threat to label it a supply chain risk all the more pointed.
That existing footprint also explains why the Pentagon chose pressure over patience. Hegseth’s approach, described by the Economist as a my-way-or-the-highway stance on military AI, suggests the Pentagon believes it already has enough contractual and legal weight to force the issue rather than renegotiate terms. The $200 million contract ceiling gives the Defense Department real financial leverage, and Hegseth appears willing to use it.
Why This Fight Extends Beyond One Company
The standoff between Hegseth and Anthropic is really a test case for the entire relationship between the U.S. military and the AI industry. Every major frontier AI company now holds or is pursuing defense contracts. If Hegseth succeeds in stripping Anthropic’s guardrails through coercion, it sets a precedent that no private company can maintain independent ethical limits on military applications of its technology. Other firms watching this dispute, including Google and OpenAI, would face enormous pressure to preemptively drop their own restrictions rather than risk similar treatment.
The alternative outcome carries its own risks. If Anthropic holds firm and the Pentagon follows through on its threats, the most safety-focused major AI lab could be locked out of defense work just as AI becomes central to planning, logistics, and targeting. Reporting by the Associated Press on military AI programs has documented how deeply automated systems are being woven into everything from drone coordination to battlefield analysis. Removing a key player that has invested heavily in safety research could tilt the balance of influence inside the Pentagon toward vendors more willing to accept opaque or high-risk uses.
The Broader AI Governance and Surveillance Backdrop
Hegseth and his allies argue that the United States cannot afford self-imposed limits when adversaries are racing ahead with unrestrained AI militarization. In their view, Anthropic’s red lines amount to unilateral disarmament in areas like autonomous targeting, real-time intelligence fusion, and persistent monitoring of potential threats. Supporters of the ultimatum point to years of experimentation in military labs and commands, described in recent AP coverage of Pentagon AI initiatives, as evidence that the technology is mature enough to move from pilot projects to fully operational systems. To them, corporate ethics policies look less like responsible stewardship and more like private vetoes over national security strategy.
Critics counter that the same technologies Hegseth wants to unshackle on the battlefield are already testing the limits of civil liberties at home. AI-powered monitoring tools have spread through American schools, where companies sell software that scans students’ messages and documents for signs of self-harm, violence, or misconduct. Civil liberties groups and some parents, cited in AP reporting on classroom surveillance, warn that such systems normalize constant observation and can misinterpret harmless behavior as threats. For technologists inside Anthropic and other labs, those domestic examples are cautionary tales: once powerful models are deployed without strict use constraints, mission creep is hard to reverse, and the line between legitimate security uses and pervasive surveillance quickly blurs.
That tension, between military urgency and democratic oversight, runs through the current showdown. A detailed Washington Post account of the ultimatum notes that Hegseth is not merely seeking technical access but a political victory that would signal the Pentagon, not private firms, sets the boundaries for AI in war. Yet the very fact that a single cabinet official may try to override a company’s ethical commitments using emergency economic powers alarms some lawmakers and legal scholars, who see it as a stress test for how far executive authority can reach into corporate governance when emerging technologies collide with security fears.
What happens by Friday will ripple far beyond Anthropic’s balance sheet. If the company capitulates, internal ethics teams across the industry could find their leverage sharply reduced, knowing that government clients can escalate disagreements into existential threats. If Anthropic resists and Hegseth backs down, it may embolden other firms to codify hard limits on autonomous weapons or mass surveillance into their contracts. And if neither side yields, a protracted legal and political battle over the scope of the Defense Department’s AI ambitions could force Congress to clarify how much control the government can exert over the values embedded in privately built models. In any scenario, the clash has already exposed a core dilemma of the AI age: when algorithms become as strategically important as aircraft carriers, the question of who writes their rules is no longer a technical detail but a defining choice about the future of war and democracy.
This article was researched with the help of AI, with human editors creating the final content.