President Donald Trump on February 27, 2026, directed every federal agency to stop using technology built by Anthropic, the AI safety startup whose CEO publicly refused to let the Pentagon deploy its tools without ethical restrictions. The order caps months of escalating tension between the administration and Anthropic over whether military AI should operate under the company’s safety guardrails or be available for “all lawful purposes,” as the Defense Department demanded. The fallout could reshape how Washington buys artificial intelligence and which companies are willing to sell it, especially as agencies scramble to replace Anthropic systems they only recently adopted under governmentwide purchasing agreements.
Hegseth’s Ultimatum and Anthropic’s Refusal
The confrontation traces back to Defense Secretary Pete Hegseth, who, according to an Associated Press account, warned Anthropic to let the military use its AI as the Pentagon sees fit. Hegseth’s message was blunt: accept contract language granting the Defense Department unrestricted access for all lawful purposes, or face consequences. Those consequences, as described in the same reporting, included termination of existing awards, a formal supply-chain-risk designation, and potential pressure under the Defense Production Act to prioritize defense needs. Multiple AI firms held Department of Defense awards valued at up to $200 million, putting real money and future research funding behind the standoff and signaling to the broader tech sector that resistance could carry steep costs.
Anthropic CEO Dario Amodei chose confrontation over compliance. In a public statement, Amodei said his company “cannot in good conscience accede” to the Pentagon’s demands, stressing that its models would not be stripped of the safety constraints the company considers essential to preventing misuse. That line in the sand effectively converted a contract dispute into a clash over who sets ethical boundaries for military AI: private engineers or national security officials. By declining to loosen guardrails for lethal targeting or autonomous decision-making, Anthropic accepted the likelihood of losing federal revenue and access, but it also forced the administration to reveal how far it was willing to go to ensure that defense agencies could deploy AI systems without vendor-imposed limits.
What the Federal Ban Actually Does
Trump’s directive goes well beyond the Pentagon. By ordering all federal agencies to cease using Anthropic’s technology, the administration effectively nullified a deal the General Services Administration had struck just months earlier. That agreement, announced in August 2025, was a governmentwide arrangement that, as the GSA described, made Anthropic’s Claude tools available across the executive, legislative, and judicial branches for a nominal license fee. The contract covered both enterprise-grade and government-tailored offerings and was touted as a way to democratize access to advanced AI while centralizing security and compliance oversight. Trump’s order abruptly reverses that strategy, forcing agencies that had begun pilots or integrations to pause or unwind deployments.
The supply-chain-risk designation the Pentagon floated carries especially sharp teeth. Under 10 U.S. Code Section 3252, a covered agency head must issue a written determination and notify the appropriate congressional committees with a summary of the risk assessment and less-intrusive alternatives considered. If applied to Anthropic, the designation could bar tens of thousands of defense contractors from using the company’s AI when working on government projects, a reach that Reuters reporting notes would extend far beyond direct contracts. That ripple effect would touch consulting firms, weapons manufacturers, logistics providers, and research laboratories, effectively walling off a significant portion of the U.S. AI market from Anthropic’s tools whenever federal dollars are involved.
OpenAI Steps Into the Vacuum
The timing of what came next was difficult to read as coincidence. Hours after Trump issued his directive, rival OpenAI announced a fresh arrangement with the Defense Department that, as Politico reported, positioned its systems as critical to national security. Both Anthropic and OpenAI have publicly said they want ethical safeguards in military AI, yet only one company now holds a defense contract. The contrast sends a clear signal to every AI firm weighing government work: resist the Pentagon’s preferred terms, and a competitor will fill the gap before the news cycle ends, gaining not only revenue but also influence over how battlefield AI evolves.
This dynamic raises a question that most coverage of the ban has glossed over. If the administration can effectively punish a company for maintaining safety restrictions that the government’s own procurement policies previously encouraged, the incentive structure for the entire AI industry tilts sharply toward compliance with military demands. GSA’s internal guidance, laid out in Directive 2185.1B, emphasizes risk management, transparency, and lifecycle accountability, principles closely aligned with the kind of guardrails Anthropic tried to preserve. The ban essentially overrides the procurement framework the administration helped shape, signaling to vendors that national security imperatives will override responsible-AI design whenever the two collide, and that “ethics” clauses may be tolerated only so long as they do not constrain operational flexibility.
Legal Tripwires and Congressional Oversight
The administration’s path to enforcing the ban is not without friction. The supply-chain-risk authority under Section 3252 requires more than a presidential statement; it demands a formal written determination, consultation with acquisition and security officials, and congressional notification that includes both a risk assessment summary and an explanation of less-intrusive measures that were considered. Whether the Pentagon has completed those steps is not yet clear from available public records, leaving open the possibility of procedural challenges. Lawmakers who championed AI oversight may demand to see the underlying analysis, particularly members who supported the Office of Management and Budget’s updated guidance on federal AI use and procurement, which the White House outlined in a public policy explainer last year.
The broader legal question is whether the Defense Production Act, another tool the Pentagon reportedly considered, could be invoked to pressure Anthropic or its infrastructure providers even after the ban. That statute gives the executive branch sweeping powers to prioritize and allocate industrial resources for national defense, but using it to force an AI company to relax safety constraints would likely invite court scrutiny and a fierce civil-liberties debate. Congress also retains leverage through oversight hearings, appropriations riders, and potential amendments to procurement law that could clarify whether agencies may condition eligibility on a vendor’s willingness to remove ethical safeguards. If legislators conclude that the administration has undermined its own AI-governance framework, they could respond by hard-coding certain safety requirements, or limits on executive pressure, into statute, reshaping the balance of power between commercial labs and the national security state.
What Comes Next for Federal AI Procurement
For federal agencies, the immediate challenge is practical: replacing Anthropic tools without derailing modernization projects or violating emerging AI-governance norms. Many departments had only just begun experimenting with large language models for tasks like drafting correspondence, summarizing case files, or assisting with regulatory analysis under the umbrella of the GSA agreement. Now, procurement officers must pivot to alternative vendors while ensuring that any new contracts still satisfy the risk-management expectations embedded in OMB guidance and agency-specific directives. The administration’s own digital-services infrastructure, including centralized portals like the TrumpCard platform, will face pressure to integrate substitute AI capabilities quickly, raising questions about whether speed will come at the expense of thorough security and bias evaluations.
For the AI industry, Trump’s move functions as a stress test of corporate resolve on safety commitments. Some firms may quietly adjust their policies to avoid Anthropic’s fate, softening public rhetoric about limits on military use or carving out broader exceptions for “national security” applications. Others could seek strength in numbers, coordinating around shared standards for acceptable defense work and betting that the government ultimately needs access to top-tier models badly enough to negotiate. The outcome will help determine whether Washington’s AI ecosystem comes to be dominated by companies willing to tailor their systems to government demands with minimal friction, or whether a viable market remains for vendors that insist on retaining independent control over how their technologies can be used, even if that stance means walking away from the largest customer in the world.
More from Morning Overview
*This article was researched with the help of AI, with human editors creating the final content.*