Weeks after President Trump tried to cut the federal government off from one of the most widely used AI systems in Washington, agencies are plugging back in. A federal judge’s late-March ruling blocking enforcement of Trump’s ban on Anthropic technology prompted the General Services Administration to restore the company’s Claude AI to government procurement platforms, reopening access that had been shut down in early March 2026. The result: federal workers who had been using Claude to draft memos, summarize records, and run research workflows can pick up where they left off, at least for now.
The reversal highlights a collision between presidential power and the federal procurement system, a bureaucratic machine that moves slowly by design and resists sudden course changes. It also raises a question with implications well beyond one AI company: Can a president unilaterally sever a technology vendor from government use after contracts have already locked in access across every branch?
How the ban unfolded
The chain of events started on Feb. 27, 2026, when Trump posted a directive on Truth Social ordering agencies to stop using Anthropic technology. He framed the move as a response to disputes over AI safety, though the specific grievance was never detailed in a formal executive order or Federal Register notice. The directive arrived through social media, not through the channels presidents typically use to issue binding policy.
The GSA treated it as an order anyway. Within days, the agency removed Anthropic from USAi.gov and the Multiple Award Schedule, the federal government’s primary catalog for approved commercial products and services. That cut off the standard path agencies use to purchase Claude-based tools.
But pulling Anthropic out of government turned out to be harder than posting about it. By the time the ban landed, Claude was already embedded across federal operations, thanks to a deal the GSA itself had brokered just months earlier.
The $1 deal that spread Claude across government
In August 2025, the GSA announced a government-wide arrangement with Anthropic, under its OneGov purchasing initiative, that offered Claude AI to all branches of government for $1. The deal gave executive, legislative, and judicial agencies a low-cost on-ramp to adopt the technology without new appropriations or lengthy procurement cycles.
The mechanics behind that price are documented in a Government Accountability Office bid protest decision. In the case of Ask Sage, Inc. (B-423827), the GAO detailed how the GSA routed Anthropic access through Carahsoft, a technology reseller, under Federal Supply Schedule contract No. 47QSWA18D008F. The contract was modified to make Claude Enterprise and Claude Government editions available for $1 for one year. Agencies could simply place orders against an already-competed schedule contract, no new bidding required.
That structure meant adoption was fast and frictionless. By February 2026, Claude had become a routine tool in offices across the government. Ripping it out required undoing procurement infrastructure that had been deliberately designed for speed.
The court steps in
On March 26, 2026, a federal judge issued a preliminary injunction blocking the Department of Defense from designating Anthropic a supply chain risk and halting enforcement of the president’s directive, according to Associated Press reporting. The ruling targeted the legal mechanism the administration had used to justify the ban, finding enough grounds to pause it while litigation continues.
The GSA moved quickly in response. In an official statement in early April 2026, the agency confirmed it had restored Anthropic technology to its procurement platforms. Claude products are once again available through federal buying channels, and existing contract vehicles remain active while the injunction holds.
The speed of the GSA’s reversal underscored a practical reality: the underlying contracts that enabled Claude’s rapid adoption were never terminated. They were dormant, waiting for the legal barrier to lift.
What remains unclear
The injunction is preliminary, not permanent, and its boundaries are not fully defined in available public records. The court blocked the Pentagon from branding Anthropic a supply chain risk and stopped enforcement of the directive, but the ruling could be narrowed, reversed, or allowed to expire as the case proceeds. Whether the administration plans to appeal, seek a stay, or issue a revised directive through formal executive order channels has not been publicly disclosed.
No official agency memos have surfaced showing which specific departments have resumed using Anthropic tools since the injunction took effect. The GSA’s statement confirms restoration at the platform level, but individual agencies make their own deployment decisions. Whether the Department of Defense, the agency at the center of the supply chain risk designation, has reactivated Claude-based programs remains unconfirmed.
Anthropic itself has not issued a public statement about the operational impact of the ban or the restoration. The company’s stance on AI safety, which triggered the original clash with the White House, is documented in prior public communications but not in any post-injunction response. Without direct statements from Anthropic or from agencies that were actively using Claude before February, the full picture of what was disrupted and what has resumed remains incomplete.
There is also an open question about how far the injunction reaches beyond the Pentagon. The order clearly restrains Defense officials, but public summaries do not spell out whether other agencies are similarly constrained or whether they could independently launch their own risk reviews of Anthropic. That ambiguity matters because large civilian departments often look to Defense assessments when making vendor decisions, and a chilling effect could persist even with formal access restored.
A procurement system that resists quick shutoffs
The gap between a presidential social media post and formal regulatory action sits at the center of this story. Trump’s directive arrived through Truth Social, not through the Federal Register or a signed executive order. The GSA treated it as binding, but the informal channel created legal vulnerabilities the court’s injunction exploited. Agencies that had already signed contracts through the Carahsoft vehicle or the OneGov deal occupied a gray zone where existing agreements potentially conflicted with new instructions from the top.
For procurement officers navigating the current landscape, the practical situation is straightforward but temporary. The GSA has restored access, and buying channels are open under the same terms that applied before February. But any agency that ramps up Claude deployments now faces the risk that access could be cut again if the administration prevails in further proceedings or issues a more formally structured ban. The prudent move, according to federal contracting specialists, is to verify that existing contracts through the Multiple Award Schedule or the OneGov vehicle remain active and to document current usage in case compliance requirements shift again.
What this episode has already demonstrated is a structural tension in how the federal government adopts and regulates commercial AI. A $1 contract can spread a technology across every branch of government in months. Pulling it back takes longer, costs more, and now faces judicial resistance. The Anthropic case is the first major test of whether a president can unilaterally cut off an AI vendor once procurement infrastructure has locked in access, and the courts’ early answer is that the process is far more constrained than a single social media post might suggest. As the litigation continues through spring 2026, agencies, vendors, and policymakers are watching not just the fate of Claude in federal offices, but the precedent this fight sets for future attempts to rapidly switch off widely adopted AI systems inside government.
*This article was researched with the help of AI, with human editors creating the final content.