Morning Overview

Pentagon taps ex-Uber exec as Anthropic AI feud explodes

The Department of Defense has formally designated AI company Anthropic as a federal supply chain risk, effective immediately, cutting one of Silicon Valley’s most prominent AI firms out of defense contracting channels. The action follows weeks of escalating tension between Pentagon leadership under Defense Secretary Pete Hegseth and Anthropic over the company’s refusal to build AI tools for autonomous weapons systems. The designation, which Anthropic has vowed to fight in court, threatens to reshape how the U.S. military sources its artificial intelligence capabilities. It comes at a moment when the Pentagon is simultaneously recruiting private-sector tech talent, including former executives from companies like Uber, to accelerate its AI strategy.

Supply Chain Label Freezes Anthropic Out

The Pentagon formally notified Anthropic of the supply chain risk designation, a move that bars the company from new federal procurement actions and sends a warning signal to any agency or contractor doing business with the firm. The legal authority behind the designation traces to the Federal Acquisition Supply Chain Security Act, which empowers federal agencies to exclude entities deemed to pose risks that cannot be mitigated through standard contract safeguards.

Exclusion and removal orders issued under this framework are normally published through the official FASCSA portal on SAM.gov, the government’s central procurement database. Whether a formal order has appeared there for Anthropic is not confirmed in available reporting, which raises procedural questions about how the designation was executed and whether standard administrative channels were followed. That gap matters because contractors and subcontractors rely on the portal to verify compliance obligations. Without a published order spelling out the scope and duration of the ban, the practical enforcement mechanism remains unclear, even as the political message is unmistakable.

In parallel, Congress has layered on additional authorities to police AI-related national security risks, including a separate Title 10 provision that directs the Defense Department to manage vulnerabilities in its digital supply chain. The Anthropic case now sits at the intersection of these overlapping powers, testing how far the Pentagon can stretch “risk” to encompass a company’s ethical choices rather than traditional security concerns like foreign ownership or hidden backdoors.

Autonomous Weapons Dispute at the Core

The designation did not emerge from a routine security review. It grew directly out of a clash between the Pentagon’s senior technology leadership and Anthropic over the company’s ethical boundaries on military AI. According to a senior defense technology official, internal debates centered on whether Anthropic’s AI models could be applied to lethal systems and battlefield targeting. Anthropic drew a firm line, and the Pentagon escalated.

That friction reflects a deeper structural problem. The Defense Department needs cutting-edge AI to maintain its technological edge, but the companies building the most advanced models often operate under safety policies that restrict military applications. Anthropic, founded by former OpenAI researchers who left partly over safety concerns, has been especially vocal about limiting how its Claude models can be used for surveillance, weapons guidance, and real-time combat analytics. For Pentagon officials pushing to integrate AI into targeting, logistics, and battlefield decision-making, those restrictions look less like responsible engineering and more like an obstacle to national security.

The standoff burst into public view on February 17, 2026, when the dispute moved from behind-the-scenes negotiations to open confrontation. Defense Secretary Pete Hegseth’s team framed Anthropic’s safety stance as an expression of ideological bias, casting the company’s refusal to support autonomous weapons as a kind of “woke” resistance to U.S. military power. That framing turned what might have remained a technical procurement disagreement into a culture-war flashpoint, raising the political costs for any future compromise.

Anthropic Calls the Action Unprecedented

Anthropic has not accepted the designation quietly. The company said it would challenge what it called a legally unsound action “never before publicly applied” to an American company. If that characterization holds, the Pentagon is using a tool originally justified as a shield against foreign technology threats, such as compromised telecom equipment, to punish a domestic AI firm for drawing ethical red lines.

The legal stakes are significant. The FASCSA framework was built to address risks from foreign adversaries infiltrating U.S. government technology systems through hidden hardware, opaque software supply chains, or covert ownership structures. Applying it to a U.S.-headquartered company over a policy disagreement about weapons ethics stretches the statute well beyond its original intent. Anthropic’s lawyers are expected to argue that the designation amounts to retaliation for exercising a business judgment about product use, not a genuine finding of supply chain risk.

Any court challenge will likely probe whether the Pentagon can point to specific vulnerabilities, such as data exfiltration channels, foreign control, or technical backdoors, or whether the record shows only frustration over Anthropic’s refusal to build certain tools. If a judge concludes the latter, the case could set limits on how supply chain authorities may be used against domestic firms, especially those whose products touch politically sensitive domains like AI, encryption, or content moderation.

Beyond the courtroom, the designation sends a chilling signal to the broader AI industry. Startups and established labs alike are watching to see whether declining a particular defense application can be reinterpreted as a security risk. If so, companies may feel pressure either to loosen their own safety policies or to avoid government work altogether, undermining the Pentagon’s stated goal of drawing top AI talent into national security projects.

Congressional Pushback and Oversight Gaps

The political response so far has been pointed but limited. U.S. Sen. Ed Markey, a Massachusetts Democrat, demanded rapid legislative action to reverse the designation, framing it explicitly as retaliation against a company for its safety principles. Markey warned that if the government can punish firms for declining to support autonomous weapons, the chilling effect will reach far beyond a single contractor and could deter responsible AI development across the private sector.

Yet Markey’s call has not, to date, produced public committee hearings, subpoenas for Pentagon documents, or visible bipartisan support for a statutory fix. The absence of hearing records or document requests in the public domain suggests that congressional oversight remains at the press-release stage. Without a formal inquiry, the Pentagon faces no structured requirement to explain how it evaluated Anthropic, what specific risks it identified, or whether it followed each procedural step that FASCSA and related authorities require.

That vacuum benefits the executive branch, which can maintain the designation indefinitely while revealing little about its internal deliberations. It also leaves contractors, civil society groups, and allied governments guessing about the criteria that might trigger similar actions in the future. If ethical constraints on AI use can be recast as security vulnerabilities, other firms that decline to work on surveillance, predictive policing, or offensive cyber tools may wonder whether they, too, could be labeled risks to the federal supply chain.

For now, Anthropic’s fate will likely hinge on a mix of litigation, quiet lobbying, and the broader politics of military AI. The company must persuade courts that the Pentagon overstepped its statutory authority, while convincing lawmakers that allowing the designation to stand would damage both civil liberties and long-term U.S. technological leadership. The Pentagon, for its part, appears intent on signaling that access to federal contracts comes with expectations about how far leading AI labs will go to support the nation’s warfighting capabilities.

However the dispute is resolved, it is already reshaping the boundaries between national security and corporate AI ethics. Future defense contractors will have to navigate not only technical requirements and security clearances, but also the risk that principled limits on weapons development could be reinterpreted as disloyalty to the state. In that emerging landscape, the Anthropic case may become an early test of whether Washington can harness advanced AI without demanding that every leading lab help build the autonomous arsenals of tomorrow.

*This article was researched with the help of AI, with human editors creating the final content.