The Federal Trade Commission finalized an order in January 2025 prohibiting data brokers Gravy Analytics and Venntel from selling sensitive location data, a move that exposed how phone location data collected without verifiable consent has quietly fed both commercial and government surveillance operations. The enforcement action, paired with separate complaints against firms like Mobilewalla and declassified intelligence community reports, reveals a data pipeline that converts ordinary app usage into tools of institutional power. For anyone who carries a smartphone, the consequences are direct: location records from visits to clinics, houses of worship, and protest sites can end up in federal databases without a warrant.
Phone Pings Become Enforcement Tools
The FTC’s case against Gravy Analytics centered on a straightforward allegation: the company collected and sold location data tied to sensitive places, including medical facilities and religious sites, without obtaining verifiable consent from the people being tracked. In its January announcement, the agency said the companies were barred from sharing or using precise location information linked to sensitive locations, required to delete previously gathered records, and obligated to implement safeguards against future misuse. The FTC framed the order as a necessary response to a market in which sensitive movements were treated as commodities. Separately, the agency alleged that Mobilewalla harvested location signals through real-time bidding and ad exchanges, then packaged that information into marketable audience segments sold to clients interested in behavioral insights or targeted outreach. The complaint against Mobilewalla lays out how the company tapped into programmatic advertising infrastructure to assemble profiles from routine smartphone activity, not from any targeted investigation or court order.
What makes these cases significant beyond consumer protection is where the data traveled next. Venntel, the government-facing subsidiary named in the FTC’s order, held contracts with the Department of Homeland Security to provide analytical services based on location records. FOIA-released documentation confirms that DHS maintained procurement relationships with Venntel, with contract files describing the scope of work, license terms, and performance periods that governed the government’s access to commercially sourced device signals. Customs and Border Protection records obtained through separate FOIA requests, including briefings, user guides, and query screenshots, show the agency used Venntel analytics in operational settings and illustrate how purchased phone pings were layered into immigration enforcement workflows. The commercial data that the FTC concluded was collected and sold without adequate consent was, in parallel, flowing into federal law enforcement systems that could track border crossings, identify patterns near sensitive locations, and inform investigative leads.
Why Purchased Data Sidesteps Traditional Legal Checks
The legal architecture that allows this pipeline to function rests on a distinction the intelligence community itself has codified. The Office of the Director of National Intelligence defines “commercially available information,” or CAI, as data that government entities can purchase from private vendors under rules that differ from those governing compelled collection through subpoenas or warrants. In public guidance, the intelligence community explains that purchasable datasets are treated as a distinct category, often acquired via procurement contracts rather than court orders, even when they contain precise geolocation or other identifiers. That definitional move is not incidental: by classifying these records as ordinary commercial products, agencies can argue that buying them is more akin to acquiring a software license than conducting a search, sidestepping the procedural safeguards that typically accompany direct surveillance.
A declassified report from the ODNI’s Senior Advisory Group highlights the tension embedded in this approach. The panel found that commercially available data can be “revealing, hard to avoid, and capable of enabling large-scale tracking and deanonymization,” acknowledging that modern data brokerage practices generate dossiers that rival or exceed the sensitivity of traditional intelligence sources. The same report notes that government stakeholders view CAI as operationally powerful yet potentially privacy-invasive, emphasizing that location trails, advertising profiles, and other consumer records can be aggregated to reconstruct lives in granular detail. By warning that purchased information can support continuous monitoring across time and space, the advisory group implicitly challenges the idea that commercial data is benign simply because it originated in consumer transactions. Instead, the report suggests that the very ubiquity of digital services makes opting out unrealistic, turning routine participation in the economy into a de facto consent to pervasive observation.
Data Collected for Services, Repurposed for Monitoring
The pipeline does not stop at location tracking or advertising identifiers. Data originally collected to facilitate health care, eligibility determinations, and the administration of public services is increasingly being repurposed by government agencies for surveillance, risk scoring, and other secondary uses that extend far beyond the contexts in which it was first gathered. When someone applies for benefits, visits a clinic, or interacts with a public agency, the records generated in those transactions can travel through data-sharing agreements, vendor relationships, or analytics contracts into broader ecosystems where they support enforcement or intelligence analysis. This repurposing can be subtle: a database created to verify income eligibility might later feed algorithms that flag “fraud risk,” flags that in turn trigger investigations, home visits, or cross-checks with other datasets, effectively transforming social support infrastructure into a mechanism for monitoring the same populations it was designed to assist.
The FTC has framed this broader dynamic as a structural problem, not just a series of isolated violations. In its ongoing commercial surveillance rulemaking, the agency documented concerns about constant collection, downstream harms, manipulation, and security risks that arise when surveillance operates as a business model rather than an incidental byproduct of digital services. The rulemaking notice describes how companies routinely gather more information than they need, retain it indefinitely, and share it widely with third parties, building an environment in which extensive profiling becomes a default feature of everyday technologies. That framing matters because it shifts the regulatory question from whether individual firms broke specific rules to whether the overall data economy is structured in ways that make large-scale surveillance inevitable. Every app permission, every ad auction, and every public-service interaction becomes a potential data point in a system where collection precedes any defined purpose, and where the same information can be repackaged repeatedly for new, increasingly intrusive uses.
Legislative Responses and Their Limits
Congress has begun to respond to the gap between commercial data collection and government acquisition, but current proposals remain modest relative to the scale of the problem. The Purchased Data Inventory Act, introduced as S.2292 in the 118th Congress, would require federal agencies to catalog the datasets they buy that can identify individuals, including the types of information acquired and the purposes for which it is used. The bill’s text describes an obligation for agencies to report their holdings of personally identifiable records obtained from vendors, aiming to provide lawmakers and the public with a clearer picture of how commercial surveillance tools enter government operations. Transparency of this kind would not, by itself, restrict purchases of location feeds or advertising profiles, but it would establish a baseline of visibility that is currently missing, enabling more targeted oversight and debate about specific contracts, such as those involving Venntel or other data brokers.
Yet the limitations of this approach are evident. An inventory requirement does not impose substantive constraints on what types of data agencies may buy, how long they may retain it, or under what conditions it must be deleted or anonymized. It also does not directly resolve the constitutional questions raised when law enforcement acquires information that would typically require a warrant if obtained directly from a phone or service provider. Without broader privacy legislation that limits both commercial collection and government access, agencies can continue to lean on the CAI framework to justify purchases, while companies have strong financial incentives to keep building ever more detailed profiles. The FTC’s enforcement actions and rulemaking efforts signal that regulators recognize the systemic nature of the issue, but absent clear statutory boundaries, the same underlying pipeline, from app to broker to government, will remain intact, reshaped only at the margins by case-by-case settlements and incremental transparency measures.
More from Morning Overview
*This article was researched with the help of AI, with human editors creating the final content.*