Morning Overview

U.S. spy agencies want control of AI regulation — and Anthropic got frozen out after reportedly refusing to let the Pentagon surveil Americans

In late February 2026, the federal government abruptly cut one of the world’s most advanced AI companies off from every federal contract. Anthropic, the maker of the Claude AI system, had refused to give the Pentagon unrestricted access to its technology. Within days, the company was scrubbed from the government’s purchasing platforms, locked out of a market worth billions, and cast as a national security liability.

The confrontation did not end there. Anthropic fought back in court and won a preliminary injunction that restored its access. But the episode revealed something larger than a contract dispute: U.S. intelligence agencies are quietly assembling the institutional machinery to become the primary regulators of artificial intelligence, and companies that push back risk being shut out of the federal marketplace entirely.

The ultimatum and the ban

The clash began when Defense Secretary Pete Hegseth delivered a direct warning to Anthropic, according to people familiar with the meeting, as reported by the Associated Press. Hegseth told the company to let the military use its AI tools without restrictions. The warning came with a deadline and explicit threats: the administration could designate Anthropic as a supply-chain risk or invoke the Defense Production Act to compel cooperation.

People familiar with the discussions told the AP that the Pentagon’s demands included potential applications involving surveillance of Americans. No official government statement has confirmed or denied that characterization, and the Pentagon has not publicly detailed the specific military uses it sought. Anthropic did not comply with the demands.

On February 27, 2026, President Trump signed a national security directive ordering federal agencies to stop using Anthropic’s products. The General Services Administration moved quickly, removing Anthropic from USAi.gov and the Multiple Award Schedule, the government’s primary purchasing platform for commercial technology. GSA stated publicly that it stood with the president, framing the action as a national security measure.

The speed was striking. A single company went from active federal vendor to blacklisted supplier in a matter of days, not because of a security breach or a failed audit, but because it declined to accept terms the Pentagon set behind closed doors.

The court steps in

Anthropic challenged the ban in federal court. On April 3, 2026, GSA issued a statement acknowledging that a preliminary injunction had been granted. The agency restored Anthropic’s status on federal procurement platforms while the legal battle continued.

The injunction did not resolve the underlying policy conflict. GSA’s statement confirmed the outcome but did not detail the court’s reasoning. Whether the judge found the procurement ban likely unconstitutional, procedurally defective, or simply harmful to the public interest remains unclear from available public records. The full legal arguments on both sides have not been released in a form that allows independent analysis.

Still, the ruling carried a clear signal: courts are willing to scrutinize how national security justifications are used to reshape the AI marketplace. For Anthropic, the injunction was a reprieve. For the administration, it was a check on the speed and scope of executive action in this space.

Intelligence agencies as AI gatekeepers

The Anthropic dispute played out against a broader institutional shift that has received far less attention. U.S. intelligence agencies are not just buying AI tools. They are building permanent organizations designed to set the rules for how AI is developed, secured, and deployed across the federal government.

The National Security Agency established the Artificial Intelligence Security Center, known as AISC, with a stated mission to “defend the Nation’s AI.” The center operates through partnerships with industry, academia, the Intelligence Community, and other government agencies. The NSA has also published formal guidance on AI system security, positioning itself as a standard-setter for the technology, not merely a consumer of it.

On the civilian side, the National Institute of Standards and Technology runs the Center for AI Standards and Innovation, or CAISI. That program coordinates government-wide AI evaluation methods and works with the Intelligence Community on standards development. Through NIST’s broader cybersecurity and standards work, CAISI also engages with international partners, which means the frameworks it develops could shape how AI is assessed across allied nations.

Together, these bodies are creating a governance architecture in which intelligence agencies define what “secure AI” means, civilian standards bodies translate those definitions into benchmarks, and procurement authorities enforce compliance. The Anthropic case showed what happens when a company falls outside that architecture: it loses market access almost immediately.

What Anthropic has not said

One of the most significant gaps in this story is Anthropic’s own silence. The company has not released detailed public statements about the substance of its refusal. Whether Anthropic objected specifically to domestic surveillance applications, to the breadth of unrestricted military access, or to some narrower technical or ethical concern has not been clarified on the record by any company executive.

Anthropic has published a Responsible Scaling Policy that outlines how it evaluates the risks of its AI systems before expanding their capabilities and access. That policy includes commitments to safety evaluations and deployment limits, but it does not explicitly address the scenario the company apparently faced: a government demand for unrestricted military use with no disclosed guardrails.

This silence leaves room for competing narratives. Was Anthropic defending civil liberties? Protecting proprietary technology? Drawing a line on ethical grounds that other AI companies have not? Without on-the-record statements, outside observers are left to infer the company’s reasoning from its actions and its published policies.

What other AI companies have not said either

Equally notable is the silence from Anthropic’s competitors. OpenAI, Google DeepMind, and Meta all operate large AI systems with potential military and intelligence applications. None have publicly commented on whether they have received similar demands from the Pentagon, or how they would respond to an ultimatum like the one Anthropic reportedly faced.

That silence matters. If the administration’s approach to Anthropic becomes a template, every major AI company selling to the federal government will eventually face the same question: accept unrestricted government access or risk losing its federal business. How the industry responds collectively will shape whether the Anthropic episode remains an outlier or becomes the norm.

The stakes for federal AI procurement

For AI companies, defense contractors, and federal technology buyers, the practical consequences are already here. Any firm selling AI tools to the U.S. government now operates in an environment where intelligence agencies are setting security standards, the executive branch can revoke market access on national security grounds, and judicial intervention may or may not arrive in time to prevent lasting damage.

Companies that depend on federal contracts should be reviewing their terms of service and acceptable-use policies with an eye toward government carve-outs, dispute-resolution mechanisms, and explicit limits on surveillance or weapons applications. The Anthropic case demonstrated that vague or aspirational safety commitments are not enough when the government demands specifics.

Agencies face their own trade-offs. Aggressive use of procurement bans can chill innovation and shrink the pool of vendors willing to challenge expansive security demands. Over time, that could concentrate federal AI work in a smaller set of firms that are either structurally dependent on government contracts or comfortable aligning closely with intelligence priorities. Whether that produces safer systems or simply more secretive ones depends on how much of the emerging governance framework is made public and subject to democratic oversight.

Where this goes next

As of late May 2026, the Anthropic injunction holds, but the underlying legal and policy questions remain unresolved. The court has not issued a final ruling. The administration has not withdrawn its national security rationale. And the institutional machinery that intelligence agencies are building continues to expand regardless of how the Anthropic case is decided.

Congressional oversight has been minimal. No major legislation specifically addressing the use of procurement bans as AI policy tools has advanced, and public hearings on the intelligence community’s expanding role in AI governance have been limited. The gap between the speed of executive action and the pace of legislative response is itself part of the story.

The verified facts paint a clear picture: an administration willing to weaponize procurement to enforce AI policy, an intelligence community formalizing its role as gatekeeper, and a standards ecosystem that increasingly blurs the line between civilian and classified priorities. The unresolved questions are just as important: what exactly was demanded, why Anthropic refused, how judges will ultimately rule, and whether Congress will assert any role at all. Those answers will determine whether this episode is remembered as an early warning or as the moment the rules of AI governance were rewritten behind closed doors.

This article was researched with the help of AI, with human editors creating the final content.