OpenAI lost a staggering share of its U.S. mobile users in a single weekend after announcing a partnership with the Department of Defense, triggering one of the sharpest consumer revolts the AI industry has seen. The backlash, driven by concerns over AI militarization and the timing of the deal, sent ChatGPT uninstalls soaring while rival Anthropic’s Claude app picked up the defectors. What began as a federal procurement dispute between the White House and an AI safety lab has now spilled into the consumer market, forcing OpenAI’s CEO to publicly admit he mishandled the rollout.
How a Pentagon Clash Opened the Door for OpenAI
The chain of events started on February 26, when Anthropic refused the Pentagon’s request to remove safety guardrails from its Claude AI system. The company told defense officials it could not in good conscience allow the military to strip safety checks designed to prevent misuse in surveillance and autonomous-weapons applications. That refusal set off a rapid federal response: by February 27, President Trump ordered U.S. agencies to suspend the use of Anthropic tools entirely, citing the clash over AI safety compliance. Anthropic has said it intends to challenge the order, arguing that its safety standards are a core part of its product, not an optional feature to be negotiated away.
The directive reversed a relationship that had been deepening for months. In August 2025, the General Services Administration struck a OneGov deal with Anthropic to offer Claude AI to all branches of government for $1 per user, embedding the technology across federal procurement channels. That agreement is now effectively frozen; the company’s status is reflected on the federal supply-chain exclusion and removal orders list, which is refreshed daily on SAM.gov. OpenAI moved quickly to fill the vacuum, announcing its own Pentagon partnership on February 28 and presenting ChatGPT as a compliant alternative that would meet defense requirements while, in its telling, still respecting core safety commitments.
295% Uninstall Spike and a Collapsing Download Rate
The consumer response was immediate and severe. According to Sensor Tower data, U.S. app uninstalls of ChatGPT surged by 295% day over day on February 28, the same day the Pentagon deal became public. The spike was not limited to deletions: ChatGPT’s U.S. downloads also fell sharply, with growth reportedly turning negative as the controversy spread across social platforms and tech forums. That combination of accelerating deletions and slowing acquisition represents a two-front problem for OpenAI’s consumer business, undermining both its existing base and its ability to attract new users at the very moment it is trying to showcase strength to government buyers.
Follow-on reporting has underscored how unusual the swing was for a mature app. One analysis described a mass exodus from ChatGPT as the uninstall spike coincided with a visible reshuffling of AI app rankings in multiple countries, including the United States. Another outlet noted that the defense deal fueled a backlash that cut into OpenAI’s mobile momentum just weeks after the company had touted record engagement numbers. For a consumer product that relies on habitual, daily use, the optics of users deleting the app in protest matter almost as much as the raw metrics, signaling to fence-sitters that quitting ChatGPT is both feasible and socially validated.
The QuitGPT Movement Was Already Building
The Pentagon deal did not create anti-OpenAI sentiment from scratch. It accelerated a boycott movement that had been gaining traction for weeks under the QuitGPT banner, which framed OpenAI as too close to political power and too willing to prioritize rapid deployment over caution. Activists had already been urging users to cancel subscriptions and delete the app over a mix of concerns about AI influence on politics, the company’s late-2025 fundraising, and its shifting governance model, arguing that concentrated control over advanced models was inherently risky. Within that context, the defense partnership landed less as a surprising pivot than as confirmation of fears that ChatGPT would become entwined with military and intelligence work.
Coverage of the revolt has repeatedly emphasized how values-driven the shift appears to be. One report on the war over AI in war described users cancelling ChatGPT subscriptions explicitly because they did not want to subsidize tools that might be adapted for battlefield use, even indirectly. In that telling, the 295% uninstall spike is less a fleeting social media campaign than the visible crest of a deeper rethinking of which companies people are willing to trust with their data and daily workflows. The fact that many of those users immediately installed a rival app suggests they were not abandoning AI altogether, but rather trying to align their technology choices with their ethical preferences.
Anthropic’s Gains and the New Competitive Fault Line
The exodus benefited Anthropic directly. The company says its free user base grew by more than 60% during the surge, while downloads of the Claude app climbed as U.S. users switched services. App store rankings shifted visibly in the days following the OpenAI–Pentagon announcement, with Claude jumping several spots in productivity and education charts as ChatGPT slid. For a challenger that has long positioned itself as more cautious and research-driven than its larger rival, the moment provided a rare opportunity to turn an abstract brand promise about safety into concrete user acquisition.
Anthropic’s public stance in its dispute with the Pentagon has amplified that differentiation. By insisting that it would not relax guardrails for military clients, the company effectively invited consumers to see Claude as the “civilian” alternative in a landscape where some leading systems may increasingly be optimized for national security use cases. Analysts quoted in coverage of the uninstall spike have noted that the backlash has given Anthropic an opening to frame its refusal to weaken safeguards as both a moral stand and a competitive advantage. Whether that narrative holds will depend on how consistently the company applies its principles in future government negotiations, but for now the contrast with OpenAI is sharp enough that many users are voting with their downloads.
Altman Admits a “Sloppy” Rollout and What Comes Next
OpenAI CEO Sam Altman has tried to contain the fallout by acknowledging missteps in how the defense partnership was communicated. In interviews and social posts, he has described the announcement as “sloppy” and conceded that he underestimated how strongly users would react to the idea of ChatGPT being embedded in Pentagon workflows. Reporting on the episode notes that Altman’s comments came only after the scale of the uninstall spike became public, suggesting that internal teams may have been caught off guard by the speed and intensity of the backlash. According to one account, even some employees who work on safety and policy were frustrated that they learned key details of the deal at the same time as the general public, limiting their ability to anticipate and prepare for user concerns.
At the same time, OpenAI has defended the substance of the partnership, arguing that engagement with defense agencies is necessary to ensure that powerful AI systems are deployed responsibly in national security contexts. Company representatives have pointed to statements in which Altman emphasized that OpenAI is not yet ready to support lethal use cases and intends to keep strict limits on what its models can be used for in military settings. Whether that reassurance will be enough to stem the user revolt is unclear. For now, the numbers tell a stark story: a partnership meant to solidify OpenAI’s status as a trusted government vendor has instead exposed how fragile its consumer loyalty can be when people feel shut out of consequential decisions about where, and for whom, their favorite AI tools work.
*This article was researched with the help of AI, with human editors creating the final content.*