The Trump administration has ordered every federal agency to stop using Anthropic’s Claude AI, a directive that has already rippled into the defense contracting sector and drawn sharp rebukes from members of Congress. The ban, issued on February 27, 2026, is not simply a procurement decision. It is a direct confrontation over whether the federal government can punish an AI company for maintaining safety guardrails the Pentagon finds inconvenient.
A Government-Wide Purge of Claude
Within hours of the presidential directive, the General Services Administration announced it would immediately cease all use of Anthropic’s technology, removing the company from USAi.gov and from the Multiple Award Schedule, the federal government’s primary vehicle for purchasing commercial products and services. That removal effectively cuts Anthropic off from the federal bid listings that vendors rely on to find and compete for government work.
The speed of the rollout signals intent. Rather than a phased review or a narrowly scoped restriction, the administration chose a blanket prohibition across all agencies. The Pentagon went further, formally designating Anthropic and its products as a supply chain risk, effective immediately. That label, typically reserved for foreign adversary-linked technology, carries serious consequences: it can trigger debarment proceedings and force existing contractors to sever ties with the flagged vendor.
Inside GSA, officials also began scrubbing Anthropic from vendor-facing tools. Guidance circulated to acquisition officers warned that ordering activities could no longer rely on Claude-based tools or services when fulfilling mission needs. For agencies that had begun experimenting with AI assistants to draft reports, summarize documents, or triage public inquiries, the message was clear: shut it off and find something else.
Defense Contractors Begin Pulling Claude
The practical fallout is already visible. Defense contractors including Lockheed Martin are phasing out Anthropic’s AI over a six‑month period, according to reporting from early March 2026. For companies that had integrated Claude into logistics, analysis, or engineering workflows, the transition is not trivial. Replacing an embedded AI tool means retraining staff, validating alternative systems, and absorbing transition costs that the government has not offered to cover.
The ripple effect reaches beyond the prime contractors. Smaller firms that built products on top of Claude’s API and sold them through established GSA channels now face a choice: rebuild on a different model or lose their federal customer base entirely. Many of these companies lack the capital reserves to re-architect software stacks on short notice, especially when federal policy toward any AI vendor could shift just as abruptly.
Consultants who specialize in federal procurement say the message to the broader industry is unmistakable. By treating Anthropic as a security risk rather than a vendor with whom it has a policy dispute, the administration has created a chilling effect. Any company that insists on firm safety constraints, particularly around military use, must now factor in the possibility of being cut off from Washington overnight.
The Legal Authority Is Thinner Than It Looks
The administration has leaned on Title 10 authorities to justify the supply chain risk designation. Those statutes give the Secretary of Defense power to exclude sources or technologies that pose security threats to the defense supply chain. But the law also contains a critical constraint: the Secretary must use the least restrictive means necessary to address the identified risk.
Anthropic has pointed to that requirement as the legal limit on what the government can do. A blanket ban across every federal agency, covering civilian departments that have nothing to do with weapons systems, is difficult to square with provisions designed for targeted supply chain interventions. Legal scholars have begun questioning whether the government’s authority stretches this far, and the gap between the statute’s text and the administration’s actions could become the foundation of a court challenge.
The underlying dispute is not really about supply chain integrity. According to reporting on a meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, the government framed its demand as access to Anthropic’s technology “for all lawful purposes,” including offensive cyber operations and battlefield targeting. The Pentagon also threatened escalation, including the possible invocation of the Defense Production Act. When Anthropic did not comply on the government’s terms, the supply chain risk designation followed within days.
Procurement lawyers note that the administration has other, more tailored tools it could have used: limits on classified environments, conditions on specific contracts, or targeted mitigation plans. By skipping those options and jumping directly to a government-wide purge, the White House opened itself to arguments that national security was a pretext for coercing product design decisions.
Congress Calls It Retaliation
Several lawmakers have rejected the national security framing outright. Senator Kirsten Gillibrand, a member of the Senate Armed Services Committee, called the designation a misuse of a tool intended for adversary‑controlled technology. Senator Edward Markey went further, characterizing the move as direct retaliation for Anthropic’s safety safeguards. Markey, along with Senator Chris Van Hollen, wrote to Secretary Hegseth urging the Department of Defense to drop what they described as an intimidation campaign.
The retaliation framing matters because it shifts the debate from procurement policy to constitutional territory. If the government is using its purchasing power to punish a company for the content of its product design, specifically for building in safety restrictions the administration dislikes, that raises First Amendment and due process questions that go well beyond AI policy. A company’s decision about what its product will and will not do is, at its core, an editorial and engineering choice. Weaponizing federal contracts to override that choice sets a precedent that could reach any technology sector.
Members of Congress are already probing what internal analyses justified the designation. Committees with oversight of the Pentagon and GSA have requested documents explaining how Anthropic’s safety policies translated into a “risk” classification, and why less sweeping alternatives were rejected. Depending on what those records show, lawmakers could move to narrow the statutory authorities the administration relied on, or to bar agencies from using supply chain tools as leverage over product design.
OpenAI Steps Into the Vacuum
The timing of what happened next was hard to miss. Hours after the Anthropic ban took effect, OpenAI announced its own deal with the Pentagon, positioning its systems as a compliant alternative that would support “full-spectrum” military applications. Officials highlighted the new partnership as evidence that the administration was not opposed to AI safety in principle, only to what they portrayed as Anthropic’s inflexibility.
Critics, however, see the sequence differently. By clearing Anthropic out of the federal marketplace just as a rival was expanding its footprint, the government effectively picked a winner in a still‑nascent industry. That kind of intervention would be controversial even if it were grounded solely in performance or cost. When it is tied instead to how aggressively a company is willing to constrain military uses of its technology, the implications for future innovators are stark.
For vendors trying to navigate this landscape, the message from contracting officers has been to stay close to official guidance. GSA has encouraged firms to update their registrations through its Vendor Support Center and to certify that none of their offerings rely on Claude. Companies that once touted Anthropic integrations as a selling point are now quietly reworking marketing materials and scrambling to demonstrate compatibility with the administration’s preferred models.
A Test Case for AI Governance
What began as a dispute over one company’s safety guardrails has become an early test of how far governments will go to shape the behavior of powerful AI systems. The Trump administration has made clear that, in its view, access for military and intelligence uses must be non‑negotiable for any AI supplier that wants federal business. Anthropic, by contrast, has treated limits on certain applications as a core part of its mission, even at the cost of lucrative contracts.
The outcome of this clash will reverberate beyond Washington. Other democracies are watching how the United States balances national security demands against corporate autonomy and civil liberties in the AI domain. If courts bless the administration’s approach, future presidents could wield procurement and supply chain tools to pressure not just AI labs, but cloud providers, chipmakers, and social platforms to align with their policy preferences.
For now, agencies are ripping out Claude, contractors are rewriting code, and Anthropic is weighing its legal options. The ban may succeed in pushing one company to the margins of federal work. But it also crystallizes a larger question that Congress, the courts, and the public will have to answer: who ultimately decides how safe powerful AI systems must be? The companies that build them, or the governments that buy them?