The Federal Trade Commission finalized a settlement with Everalbum, a California-based photo app developer, over allegations that the company deceived users about its use of facial recognition technology and retained their photos and videos even after they deactivated their accounts. The case, which concluded in May 2021, offers a concrete example of why privacy researchers have long warned that granting apps unrestricted access to photo libraries carries risks most users never anticipate. Those risks extend well beyond simple image viewing, touching on biometric data extraction, covert dataset creation, and the persistence of personal data long after a user believes it has been deleted.
How a Photo App Became a Biometric Pipeline
Everalbum marketed itself as a convenient way to store and organize personal photos, but the FTC’s complaint told a different story. The agency alleged that Everalbum applied facial recognition technology to users’ photos without adequately disclosing the practice. According to an FTC press release, the company extracted facial images from user photos and combined them with publicly available datasets to build its own facial recognition training set. Users who had simply wanted cloud photo storage found their faces repurposed as raw material for machine learning development they never agreed to.
The settlement also addressed data retention failures. Everalbum allegedly kept photos and videos after users deactivated their accounts, contradicting reasonable expectations that deletion means deletion. The final FTC order required the company to delete models and algorithms derived from users’ photos and to obtain express consent before using facial recognition on any content going forward. The case demonstrated a pattern that privacy experts had been flagging for years. Once an app has full access to a photo library, the developer controls what happens to those images, and users have limited visibility into downstream uses.
Research Shows Broad Permissions Create Blind Spots
Academic research supports the concern that photo access permissions are poorly understood by the people granting them. A study of smartphone photo protections, described in the PhotoSafer research, found that a large share of popular mobile apps possessed complete access to photo libraries along with network capabilities. That combination is significant because it means an app can not only read every image on a device but also transmit data to external servers. The same work included user surveys that revealed widespread awareness gaps: many people did not fully grasp what “photo access” allowed an app to do or how far that permission reached into their personal archives.
The practical consequence is straightforward. When a user taps “Allow Access to All Photos” on a permission prompt, they are not simply letting an app display thumbnails. They are granting programmatic access to every image file, including screenshots of banking apps, photos of identification documents, and images shared in private conversations. For apps that also hold network permissions, those files can be uploaded, analyzed, or stored remotely without any additional user approval. The all-or-nothing permission model that dominated mobile operating systems for years gave developers broad latitude, and users had little practical way to audit what was happening behind the scenes or to detect quiet shifts from benign features to more invasive data collection.
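The mechanics described above can be made concrete with a platform example. The following is an illustrative Android manifest fragment for a hypothetical app (the app itself is an assumption, not one named in this article); it shows why photo access plus network access required no extra approval under the older model:

```xml
<!-- Illustrative AndroidManifest.xml fragment for a hypothetical app. -->
<!-- Before Android 13, a single permission covered read access to ALL
     shared images and other files on external storage: -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- INTERNET is a "normal" install-time permission: the user is never
     shown a runtime prompt for it, so an app that can read photos can
     also transmit them to its own servers without further approval. -->
<uses-permission android:name="android.permission.INTERNET" />
```

The asymmetry matters: the storage permission triggers a user-facing prompt, while the network permission is granted silently at install time, so the combination the PhotoSafer research flagged never appears to the user as a single decision.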
Social Engineering Turns Permissions Into Attack Surfaces
Even when permission prompts are technically accurate, the way apps request access can manipulate user decisions. Research on mobile device access controls, presented in the Aware framework, showed that users can be socially engineered into authorizing access to sensitive device capabilities, including cameras and storage. The study found that once authorization was granted, apps could misuse those capabilities in ways the user did not intend or expect, such as accessing data in the background or at times unrelated to visible features. A photo editing app, for instance, might request camera access for a filter feature but then use that same permission to capture images or scan files when the user is not actively taking pictures.
This dynamic creates what amounts to a trust gap between what users believe they are approving and what they are actually enabling. The permission dialog box on a phone screen is a consent gate, not a technical safeguard. It does not restrict how an app uses the access it receives, only whether it receives access at all. For privacy researchers, this is the core problem. The permission system treats authorization as binary, while the risks of misuse exist on a spectrum. A simple game and a data broker's tool receive the same level of photo access if the user taps the same button, even though their intentions and data practices differ enormously. That asymmetry turns every permission request into a potential attack surface, where persuasive design can override cautious instincts.
Biometric Data Persists Beyond App Deletion
The Everalbum case illustrates a risk that most users do not consider when they delete an app or close an account. Even after a user removes a photo app from their phone, the data that app collected, including biometric information derived from facial recognition, may continue to exist on the developer’s servers. Everalbum allegedly retained user content after account deactivation, and the facial recognition models it built from user photos represented a separate asset that would not disappear simply because someone uninstalled the app. The FTC required Everalbum to delete those models, but that remedy came only after a federal investigation and a formal order. In the absence of such scrutiny, similar datasets at other companies could remain intact and commercially valuable.
This persistence problem is compounded by the growing sophistication of AI tools. A photo library that once served as a simple collection of memories can now be processed to extract biometric identifiers, location patterns, social connections, and behavioral data. When that processing happens server-side, the user’s phone settings become irrelevant. Revoking photo access on a device does not recall data already transmitted or undo models already trained. For users, this means the decision to grant full photo access carries consequences that outlast the app itself, and those consequences are difficult to reverse once biometric data enters a developer’s pipeline. In effect, a single tap on a permission prompt can seed a long-lived profile that persists across products, business models, or even corporate ownership changes.
What Users Can Do to Limit Exposure
Both Apple and Google have introduced more granular photo permission options, with Apple adding a limited "Selected Photos" mode in iOS 14 and Google adding a comparable "Select photos" option in Android 14, allowing users to share chosen images rather than their entire library. These controls represent a meaningful improvement over the earlier all-or-nothing model, but they require active user engagement. Most people still tap through permission dialogs quickly, and many apps still request full access even when their core functionality requires only a handful of images. Privacy researchers recommend reviewing app permissions periodically and defaulting to the most restrictive setting that still allows the service to function. When an app insists on broad access for a narrow task, that mismatch can be a useful red flag.
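On Android, the more granular model is visible directly in an app's manifest. This is an illustrative fragment for a hypothetical app, not a required configuration:

```xml
<!-- Illustrative AndroidManifest.xml fragment for a hypothetical app. -->
<!-- Android 13 (API 33) split the old broad storage permission into
     per-media-type permissions: -->
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<!-- Android 14 (API 34) adds partial access: declaring this permission
     lets the system offer a "Select photos" choice, so the app sees
     only the images the user explicitly picks, not the whole library. -->
<uses-permission android:name="android.permission.READ_MEDIA_VISUAL_USER_SELECTED" />
```

An app that declares only the broad `READ_MEDIA_IMAGES` permission for a feature that needs a handful of images is exactly the mismatch described above, and it is one a user can spot in the system's per-app permission settings.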
Users can also adopt simple habits to reduce the impact of a worst-case scenario. Storing highly sensitive images (such as documents, medical information, or children’s identifying photos) in separate, encrypted apps or offline storage limits what a compromised or deceptive photo service can see. Regularly pruning old screenshots and uploads reduces the volume of data available for analysis if an app quietly expands its data collection. And when closing an account, users can look for explicit deletion options, send written requests for data removal, and keep records of those requests. None of these steps eliminate the structural risks highlighted by the Everalbum settlement and related research, but they can narrow the attack surface while regulators and platform providers work to align mobile permissions with the realities of biometric and AI-driven data use.
*This article was researched with the help of AI, with human editors creating the final content.