The Federal Trade Commission took action against an AI-powered photo app that secretly harvested biometric data from users’ personal images. The case is one of the clearest examples of how apps distributed through major platforms like Google Play can quietly exploit the private photos and videos people trust them to store. The FTC’s complaint against Everalbum, Inc. alleged that the company applied facial recognition technology without proper consent, derived geometric data from faces in users’ photos, and then failed to delete that material when users asked. The enforcement action and its fallout offer a sharp warning about the gap between what AI photo apps promise and what they actually do with sensitive personal data.
How Everalbum Exploited Users’ Photos
Everalbum marketed itself as a convenient cloud storage tool for photos and videos, but the FTC’s complaint told a different story. The agency alleged that the company applied facial recognition without proper consent and extracted biometric information, specifically facial geometric data, from the images users uploaded. This type of data is uniquely sensitive because, unlike a password, a person’s facial geometry cannot be changed once it is compromised. The complaint also alleged that Everalbum derived this biometric data from users’ photos and videos on an ongoing basis, building a growing repository of information that users never agreed to hand over.
The second major allegation was equally troubling. The FTC documented that Everalbum failed to delete users’ photos and videos even after those users deactivated their accounts and requested removal. In practice, this meant that people who decided they no longer trusted the app had no real way to reclaim their data. The company retained images and the biometric information extracted from them, creating a stockpile of sensitive material that persisted well beyond the user relationship. The FTC’s Decision and Order required Everalbum to destroy the biometric data it had collected and to implement stronger safeguards around data retention and consent going forward, signaling that regulators view deceptive handling of AI-generated biometric profiles as a serious violation rather than a technical oversight.
Why Google Play Distribution Amplifies the Risk
The Everalbum case highlights a structural problem with how AI apps reach consumers. Google Play serves as the primary distribution channel for Android apps worldwide, and users generally treat its presence in the store as a signal of baseline trustworthiness. But the FTC’s action against Everalbum shows that an app can pass through platform distribution while still engaging in practices that violate federal standards for data protection and consent. The gap between platform listing requirements and actual privacy enforcement creates a blind spot that AI-driven apps, which often require broad access to photos, contacts, and device sensors, are well positioned to exploit.
What makes AI photo apps particularly dangerous is the nature of the permissions they request. An app that promises smart organization or automatic tagging needs access to a user’s entire photo library, which may contain images of children, government-issued IDs, medical documents, and location-tagged snapshots of homes and workplaces. When an app like Everalbum then applies facial recognition to that library without clear disclosure, the scope of the privacy breach extends far beyond a single data point. It touches identity, family relationships, and physical whereabouts, all derived from images users believed were simply being stored in the cloud. The combination of broad permissions, opaque AI processing, and the trusted aura of a major app store turns what appears to be a simple photo backup tool into a powerful engine for covert biometric surveillance.
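To make that permission dynamic concrete, here is a minimal Kotlin sketch of how an Android app typically requests blanket photo library access. The class and function names are hypothetical; the permission APIs shown are standard Android. A single “Allow” tap on the resulting dialog is the only gate between a backup tool and every image on the device, and nothing in the prompt indicates whether the app will run facial recognition on what it reads.

```kotlin
import android.Manifest
import android.os.Build
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity name for illustration; the permission must also
// be declared in AndroidManifest.xml before it can be requested here.
class PhotoBackupActivity : AppCompatActivity() {

    // Android 13+ gates photo access behind READ_MEDIA_IMAGES; older
    // versions use the even broader READ_EXTERNAL_STORAGE permission.
    private val libraryPermission: String =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU)
            Manifest.permission.READ_MEDIA_IMAGES
        else
            Manifest.permission.READ_EXTERNAL_STORAGE

    // One "Allow" tap grants read access to every image on the device:
    // IDs, medical documents, and location-tagged photos included.
    private val requestLibraryAccess =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                // From here the app can enumerate the full photo library
                // through MediaStore; the user has no per-photo control.
            }
        }

    fun backUpPhotos() = requestLibraryAccess.launch(libraryPermission)
}
```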
Biometric Data and the Identity Theft Connection
Facial geometric data sits in a category of personal information that carries outsized risk. Unlike email addresses or even Social Security numbers, which can at least in principle be changed or reissued, biometric identifiers are permanent. Once extracted and stored by a third party, they can be used to train facial recognition models, sold to data brokers, or leaked in a breach with no practical remedy for the affected individual. The FTC’s complaint against Everalbum specifically alleged the company derived biometric data from users’ photos and videos, treating personal images as raw material for machine learning without meaningful user awareness or a clear opportunity to opt out.
This kind of data misuse feeds directly into broader fraud and identity theft risks. The FTC maintains a dedicated portal, ReportFraud.ftc.gov, where consumers can report fraud they encounter, and a separate resource, IdentityTheft.gov, for people who believe their identity has been stolen. When biometric data enters circulation through unauthorized collection, it can be combined with other leaked personal information to defeat authentication systems that rely on facial verification. A criminal who gains access to a trove of facial templates, along with names and contact details, can attempt to impersonate victims in systems that use face-based logins or remote identity checks, raising the stakes far beyond conventional password theft.
What Users Can Do to Protect Themselves
The most direct lesson from the Everalbum case is that app permissions deserve far more scrutiny than most people give them. When an AI photo app requests access to an entire camera roll, users should ask whether that level of access is necessary for the features being offered. A simple photo filter does not need to scan every image on a device, and a basic backup tool should not need to analyze faces to function. Reading the privacy policy, while tedious, can reveal whether an app claims the right to use uploaded images to train machine learning models or to share data with unnamed third parties. If the policy is vague on these points, that ambiguity is itself a red flag that the app may be doing more with users’ photos than it openly admits.
Users who suspect their data has already been misused have options, even if those options cannot fully reverse unauthorized biometric collection. The FTC’s identity theft resource provides step-by-step guidance for people who believe their personal information has been compromised, including instructions for placing fraud alerts with credit bureaus, creating written recovery plans, and documenting communications with affected institutions. Revoking app permissions through Android’s settings menu can limit future data collection, and deleting accounts can help reduce the volume of information an app retains, though neither step can undo extraction that has already occurred. The harder truth is that once biometric data leaves a user’s device, the user loses meaningful control over it. That is precisely why the FTC’s settlement with Everalbum demanded deletion of all collected biometric information and imposed limits on how the company can deploy facial recognition in the future.
Enforcement Gaps Leave Users Exposed
The Everalbum case, while significant, also exposes the limits of the current regulatory approach. The FTC acts on a case-by-case basis, investigating individual companies after harm has already occurred. There is no pre-market review process for AI apps that handle biometric data, and no federal law specifically requires app stores to verify that listed apps comply with biometric consent standards before distribution. This means that for every Everalbum that gets caught, other apps with similar practices may continue operating without scrutiny. The enforcement model is reactive, and the speed at which new AI photo tools appear on Google Play far outpaces the agency’s capacity to identify and investigate each potential violator in real time.
These gaps leave users in a precarious position. People must navigate a marketplace where AI-powered apps can quietly turn intimate photo libraries into training sets for commercial facial recognition, with regulatory intervention arriving only after patterns of abuse surface. Until there are stronger, proactive safeguards, such as clearer federal rules for biometric consent, more rigorous oversight of how apps describe their data practices, and greater accountability for platforms that host high-risk tools, the burden will continue to fall on individuals to read fine print, manage permissions, and monitor for signs of fraud. The Everalbum enforcement action underscores that regulators recognize the danger, but it also illustrates how much of the work of protecting biometric privacy still happens too late, after the data has already been taken and the trust has already been broken.
*This article was researched with the help of AI, with human editors creating the final content.