Morning Overview

Sanders questions Anthropic’s Claude about AI and personal data

Senator Bernie Sanders, the independent from Vermont, has turned an unusual spotlight on AI and privacy by directly questioning Anthropic’s chatbot Claude about the risks artificial intelligence poses to personal data. The exchange, part of Sanders’ broader public campaign on AI governance, arrives at a moment when federal agencies and lawmakers are actively clashing over how AI companies handle sensitive information collected from ordinary people. With the Pentagon recently banning Anthropic from certain operations and at least one other senator demanding contract transparency from major AI firms, the stakes of this debate have moved well beyond the theoretical.

Sanders Puts Claude on the Record

Sanders chose a deliberately unconventional method to press his concerns: asking an AI system itself about the dangers it might create. In a public statement on AI and humanity, the senator laid out a broad critique of how AI development threatens workers, democratic institutions, and civil liberties. His decision to question Claude directly served as a high-visibility tactic, drawing attention to the gap between what AI companies promise about data protection and what their systems can actually do with personal information.

The approach was strategic. By framing the conversation as a direct interrogation of the technology, Sanders bypassed the usual cycle of press conferences and committee hearings. Instead, he placed the AI itself at the center of the public record, forcing a visible demonstration of how these systems respond when asked about their own capacity to analyze, store, and combine user data. No primary transcript of the full exchange has been published, and Anthropic has not released a formal response to Sanders’ questions. That silence has itself become part of the story, underscoring how little the public knows about the internal safeguards and policies that govern powerful AI models.

Sanders’ broader argument ties AI privacy risks to economic concentration. His position is that a small number of corporations now control both the technology and the data pipelines feeding it, creating conditions where personal information can be exploited at scale without meaningful consent or oversight. This framing distinguishes his critique from narrower technical concerns about data breaches or algorithmic bias. He is making the case that the structure of the AI industry, not just its products, is the problem, and that without structural limits on data collection and use, AI will deepen existing inequalities while eroding basic privacy rights.

Pentagon Ban and Wyden’s Contract Demands

Sanders is not operating in isolation. Senator Ron Wyden, the Oregon Democrat, has been pursuing a parallel track focused on federal procurement. After the Pentagon banned Anthropic from certain defense operations, Wyden sent letters to major AI companies seeking information about their federal contracts and any prohibitions placed on their technology by government agencies.

Wyden’s inquiry zeroed in on a specific and alarming capability: the potential for AI systems to analyze and combine commercially obtained location data with web-browsing histories. That combination is particularly dangerous because it can effectively reconstruct a person’s movements, habits, and associations without a warrant or any judicial review. Wyden asked the companies to disclose whether their contracts include safeguards against this kind of data fusion, and whether any agency has restricted their tools because of privacy failures. His questions implicitly acknowledge that once AI systems can cheaply and automatically correlate disparate datasets, traditional legal distinctions between “anonymous” commercial data and regulated surveillance collapse.

The Pentagon’s decision to ban Anthropic signals that at least one arm of the federal government concluded the company’s data practices, technical controls, or both fell short of acceptable standards. That is a significant finding, because defense agencies typically have higher tolerance for surveillance tools than civilian regulators do. If the Department of Defense determined that Anthropic’s approach to personal data was too risky for certain military uses, the implications for commercial applications, where oversight is far weaker and users rarely understand how their information travels, are serious.

Surveillance Roots Run Deep

The concerns Wyden and Sanders are raising did not emerge from nowhere. Reporting from early 2021 revealed that the Defense Intelligence Agency had been purchasing commercially available data on Americans’ movements and online activity, sidestepping the warrant requirements that would normally apply to government monitoring. That practice established a template: intelligence agencies buying what they could not legally collect, using private-sector data brokers as intermediaries.

AI dramatically amplifies this risk. Where earlier surveillance programs relied on human analysts to sift through purchased datasets, modern AI systems can process and cross-reference billions of data points in seconds. An AI model trained on commercially obtained location records and browsing histories could, in theory, identify patterns of behavior, predict movements, and flag individuals for further scrutiny, all without triggering any of the legal protections designed to prevent government overreach. Sanders’ questioning of Claude implicitly asks whether the companies building these systems have any internal limits that would prevent such use, or whether they will simply follow the money wherever it leads.

These historical examples also highlight why lawmakers are now focusing on the intersection of commercial data markets and advanced analytics. The underlying data is already being bought and sold; the new variable is the sophistication and speed with which AI can draw inferences from that data. Without explicit rules, the same tools that power consumer conveniences, like personalized recommendations or traffic predictions, can be repurposed into engines of continuous, opaque surveillance.

Defense Contracts and the AI Gold Rush

The tension between privacy and national security spending is intensifying as AI companies compete for lucrative government work. Reporting from early March 2026 detailed ongoing negotiations between Anthropic, OpenAI, and the U.S. defense establishment over AI contracts. These talks are happening even as the Pentagon’s ban on Anthropic remains in place, suggesting that the relationship between AI firms and military buyers is more complicated than a simple accept-or-reject decision.

For the companies involved, federal contracts represent both revenue and legitimacy. A defense partnership validates an AI system’s reliability and security in ways that commercial adoption alone cannot. It also offers access to unique datasets and mission-critical use cases that can shape the next generation of models. But for privacy advocates and the senators now asking hard questions, these same contracts represent a pipeline through which personal data collected for commercial purposes could flow into government surveillance programs with little public accountability.

The current regulatory environment offers few clear answers. No comprehensive federal law specifically governs how AI companies must handle personal data obtained through commercial channels when that data is later used in government contexts. The result is a patchwork of agency-level policies, executive orders, and voluntary commitments that vary widely in their rigor and enforcement. Wyden’s demand for contract details is, in part, an attempt to map this gap and determine how much personal data is already moving through these channels without public knowledge, and whether any agencies have quietly imposed their own restrictions in response to emerging risks.

What This Means for Ordinary Users

The practical consequences for people who use AI chatbots, search engines, and location-enabled apps are direct. Every interaction with an AI system generates data. Every search query, every location ping from a phone, every browsing session contributes to datasets that can be logged, retained, and analyzed. In isolation, a single query or map request may seem trivial. In aggregate, across months or years and stitched together by powerful models, those interactions can reveal intimate details about a person’s health, politics, relationships, and routines.

Sanders’ decision to question Claude puts this reality into sharp relief. If an AI system can credibly explain the dangers of large-scale data collection (how combining location trails with browsing histories can expose visits to clinics, religious services, or political meetings), then the question becomes why companies are allowed to keep building products that depend on exactly that kind of accumulation. Wyden’s focus on contract terms adds another layer: even if individuals are uneasy about how much data they are sharing, they have almost no visibility into whether that information will ultimately help train systems sold back to the government.

For now, the burden of protection largely falls on users who have limited tools and even less leverage. People can adjust privacy settings, use more restrictive apps, or limit what they share with chatbots, but these steps only go so far in an ecosystem designed to capture as much data as possible. Sanders and Wyden are arguing that meaningful privacy cannot depend on individual vigilance alone. It requires legal boundaries on how data can be collected, combined, and repurposed, especially when powerful AI systems are in the loop.

The emerging clash between senators, AI firms, and the Pentagon is therefore more than a dispute over technical safeguards or contract language. It is a test of whether democratic institutions can keep pace with technologies that make it trivial to turn everyday digital traces into comprehensive dossiers. By dragging these questions into public view (through pointed letters, formal statements, and even direct interrogation of an AI chatbot), lawmakers are forcing a reckoning over who controls the data that defines modern life, and what limits, if any, will be placed on its use.


*This article was researched with the help of AI, with human editors creating the final content.*