Several U.S. federal agencies are quietly evaluating Anthropic’s Mythos artificial-intelligence model even after President Trump ordered every department to stop using the company’s technology, according to contractor accounts and procurement records reviewed in April 2026. The testing has continued in a legal gray zone created by a federal court injunction that, at least temporarily, blocks full enforcement of the ban.
The result is an unprecedented standoff involving the White House, a federal judge in California, and one of the country’s most prominent AI firms. For government technology officers, defense contractors, and Anthropic itself, the stakes go well beyond a single product: the outcome could set the template for how Washington regulates and procures frontier AI for years to come.
How the ban took shape
The conflict traces back to a dispute between the Pentagon and Anthropic over safety requirements for military and intelligence applications. The Associated Press reported that the Defense Department demanded changes to the way Anthropic’s models handle sensitive tasks, and that the company refused on the grounds that the changes would compromise its internal safety guardrails. The specific technical or contractual requirements at issue have not been publicly disclosed by either side.
President Trump responded with a directive ordering all agencies to cease using Anthropic products, framing the company’s refusal as a national-security concern. The General Services Administration moved to enforce the order on February 27, removing Anthropic from the USAi.gov procurement portal and the Multiple Award Schedule, the government’s main catalog of pre-approved commercial contracts.
In a statement posted to its newsroom, GSA said it would “stand with the president” by aligning federal purchasing rules with the administration’s security priorities. The language left little room for ambiguity: agencies were expected to stop buying, renewing, or expanding Anthropic services immediately.
Anthropic fights back in court
Anthropic did not wait long to respond. The company filed suit in the U.S. District Court for the Northern District of California, arguing that the administration overstepped its authority and that GSA’s actions improperly singled out a specific vendor. The lawsuit challenges both the underlying designation and the procurement ban. (The case name reported in the original article referenced the “Department of War,” the secondary designation the administration restored for the Defense Department in 2025. The docket number reported there appeared to be 326-cv-01996, but the correct formatting could not be confirmed from public records available as of May 2026.)
The lawsuit produced a rapid judicial response. A federal judge issued a preliminary injunction that, according to a GSA statement dated April 3, 2026, temporarily affects enforcement of the ban. GSA said it is “reviewing the court’s order” and coordinating with the Department of Justice but did not concede that its earlier removal of Anthropic from procurement systems was unlawful.
The precise scope of the injunction remains difficult for the public to assess. GSA’s statement does not describe which specific government actions the court permitted or prohibited, and no press-facing summary of the operative language has been released by the court or either party. Filings on the Northern District’s electronic docket and PACER may contain those details, but for now the boundaries of what agencies can legally do (sign new contracts, exercise options on existing deals, or run internal pilots) are not fully transparent.
Testing in the gray zone
That ambiguity appears to be exactly the opening some agencies have used. Contractor accounts and descriptions from federal staff, shared on condition of anonymity because of the sensitivity of the dispute, indicate that at least some departments are evaluating Mythos in sandboxed environments. No agency outside GSA has issued a public statement confirming or denying such testing, and no procurement records, inspector-general reports, or internal memos documenting the evaluations have surfaced as of late April 2026.
Key operational questions remain open. It is unclear whether the agencies involved are relying on contracts that predate the ban, experimental other-transaction agreements, or informal access arrangements such as vendor-provided credits. Each pathway carries different compliance risks, particularly while the injunction’s reach is uncertain.
The Department of Defense, the agency at the center of the original safety dispute, has not released any records detailing an evaluation of Mythos. Without that documentation, it is impossible to say whether the testing amounts to limited lab benchmarks, broader pilot programs, or something else entirely.
What Mythos brings to the table
Anthropic has positioned Mythos as a next-generation model designed for high-stakes decision support, with capabilities the company says are suited to complex analytical tasks in government, finance, and scientific research. Public details about the model’s architecture and benchmarks remain limited, but its release came at a moment when federal demand for large language models was accelerating across defense, intelligence, and civilian agencies alike.
That demand is part of what makes the ban so consequential. Anthropic is one of only a handful of companies capable of supplying frontier AI to the federal government. Removing it from the procurement pipeline narrows the field at a time when agencies face pressure from Congress and from the White House’s own AI executive orders to adopt and deploy advanced models quickly. Competitors including OpenAI, Google, and Meta still hold active federal contracts or marketplace listings, giving them a potential advantage if Anthropic’s access remains restricted.
What comes next for federal AI procurement
The legal and policy threads are moving on parallel tracks, and neither is close to resolution. On the judicial side, the preliminary injunction is a temporary measure; a full ruling on the merits of Anthropic’s lawsuit could take months. If the court ultimately sides with the company, GSA would likely be required to restore Anthropic’s procurement access and potentially compensate the company for lost contract opportunities. If the court sides with the government, the ban would be reinforced with judicial backing, signaling that the executive branch has broad authority to exclude AI vendors on national-security grounds.
On the policy side, GSA and the Office of Management and Budget have yet to issue detailed guidance telling agencies how to handle the interim period. Technology officers across the government are left making judgment calls: proceed with Mythos testing under the injunction’s uncertain protection, or pause and risk falling behind on AI adoption timelines that their own leadership has set.
For contractors who supply Anthropic products to federal clients, the uncertainty is more than theoretical. Existing agreements may or may not remain valid, and modifying or extending them while the legal dispute is active carries real financial and compliance exposure.
The documented facts are narrow but firm: the president ordered a halt, GSA carried it out, Anthropic sued, and a federal judge intervened. Everything beyond that, including the scope of ongoing testing, the model’s role in agency operations, and the ultimate resolution of the legal fight, remains in motion. Until the court rules or the executive branch changes course, the standoff over Mythos will serve as the sharpest test yet of where presidential power ends and judicial oversight begins in the fast-moving world of government AI.
This article was researched with the help of AI, with human editors creating the final content.