The UK’s Information Commissioner’s Office (ICO) has written to Meta seeking answers about how recordings from the company’s Ray-Ban smart glasses are handled, after the BBC reported that audio and video clips captured by the devices can be reviewed by offshore workers in Kenya. The inquiry centers on whether Meta’s data practices align with its own Privacy Policy and whether users fully understand who may see their recordings. The case exposes a growing tension between the rapid adoption of AI-powered wearables and the low-visibility labor systems that train the artificial intelligence behind them.
UK Regulator Flags Meta’s Data Practices
The ICO confirmed it had contacted Meta about the reports, with its letter seeking clarity on the specific protocols governing how user-generated recordings are processed, stored, and shared with third-party workers. The move signals that privacy authorities are no longer content to treat AI training pipelines as an internal corporate matter, particularly when personal footage is involved.
What makes this inquiry different from routine data protection complaints is the product at its center. Ray-Ban Meta glasses look and function like ordinary sunglasses, but they carry built-in cameras and microphones capable of capturing high-fidelity audio and video of everyday life. Meta has promoted the glasses’ AI features, including in public appearances by its founder Mark Zuckerberg, according to the BBC. The ICO’s intervention suggests that the privacy infrastructure behind those features has not kept pace with the marketing.
How Recordings Reach Offshore Workers
The core concern is straightforward: when a user activates the AI assistant on their Ray-Ban Meta glasses, the device captures audio or video and uploads it to Meta’s servers for processing. Some of those clips are then flagged for human review, a standard industry practice meant to improve the accuracy of AI models. But the human reviewers in question are reportedly based in Kenya, working under outsourcing arrangements that place them far from the regulatory reach of European or American data protection agencies.
This offshore review model is not unique to Meta. For years, major technology companies have relied on contract workers in lower-wage countries to label data, moderate content, and evaluate AI outputs. The difference here is the intimacy of the data involved. Smart glasses capture what a person sees and hears in real time, from private conversations to the interiors of homes and workplaces. When those recordings land on the screens of workers thousands of miles away, the gap between user expectation and corporate practice becomes difficult to dismiss as a procedural detail.
The reported working conditions add another layer of concern. The BBC’s reporting and other investigative accounts have raised questions about pay and training for some Kenyan data workers and about what safeguards exist in the facilities handling sensitive recordings. Meta has not publicly released detailed protocols for any offshore review operations related to the Ray-Ban glasses.
What Users Are Told vs. What Happens
Meta’s Privacy Policy covers the collection and use of data from its hardware products, including the Ray-Ban glasses. But privacy policies are legal documents written for compliance, not for clarity. The average buyer of a pair of smart glasses is unlikely to read the full policy, and even those who do may not grasp that their recordings could be reviewed by human workers in another country. The ICO’s inquiry appears to focus on this gap between disclosure and meaningful informed consent.
Most privacy frameworks, including the UK's Data Protection Act 2018 and the EU's General Data Protection Regulation, require that individuals understand how their data will be used before they agree to share it. If Meta's policy language is too vague to convey that recordings may be sent to offshore workers for AI training, the company could face enforcement action regardless of whether the practice itself is technically lawful. The question is not only whether the data is protected, but whether users were given a genuine choice.
This matters beyond the UK. Regulatory decisions by the ICO often influence enforcement strategies in other jurisdictions, particularly across Europe. If the ICO determines that Meta’s disclosure practices fall short, it could prompt scrutiny from other data protection authorities as well. Meta’s European operations are primarily regulated through Ireland’s Data Protection Commission, but the ICO retains authority over UK-specific data processing.
The Shadow Economy Behind AI Training
The broader pattern here is worth examining on its own terms. The AI features that make products like the Ray-Ban Meta glasses appealing depend on massive volumes of labeled data. Someone has to listen to audio clips and confirm what the AI heard. Someone has to watch video segments and verify what the AI identified. That someone is often a contract worker in East Africa, Southeast Asia, or Latin America, hired through outsourcing firms that operate with minimal public scrutiny.
This arrangement creates what might be called a shadow economy of AI training. The workers who refine these systems are invisible to the end user, poorly compensated relative to the value they generate, and largely unprotected by the privacy laws that govern the data they handle. Their labor is essential to the product, yet it exists in a regulatory gray zone where neither the country of origin nor the country of processing has clear oversight authority.
For Meta, the cost savings are obvious. For users, the trade-off is less visible but no less real. Every time someone asks their Ray-Ban glasses to identify a landmark or summarize a conversation, there is a chance that the resulting recording will be reviewed by a human being whose working conditions and data security environment are unknown to the person who made the recording. That asymmetry of knowledge is precisely what privacy regulators are designed to address.
Why This Case Sets a Wider Precedent
Smart glasses are still a niche product compared to smartphones, but the trajectory is clear. Meta has invested heavily in wearable AI, and competitors including Google, Apple, and several Chinese manufacturers are developing their own versions. If the ICO’s inquiry results in binding guidance or enforcement action against Meta, it will shape how every company in the sector handles AI training data from wearable devices.
The stakes extend beyond corporate compliance. Wearable cameras and microphones capture not just the user’s data but the data of everyone around them. A person wearing Ray-Ban Meta glasses in a coffee shop, a train carriage, or a workplace meeting can record bystanders who have no idea they are being filmed, let alone that their images and voices might be transmitted to overseas reviewers. Those bystanders have not seen Meta’s Privacy Policy, and they have not been given any mechanism to opt out.
That raises difficult questions about collective privacy in public and semi-public spaces. Traditional data protection law is built around identifiable relationships between a data controller and a data subject: a company collects information from a customer, who can then exercise rights of access, correction, or deletion. With wearable devices, the people captured in recordings may never know the data exists, making those rights effectively impossible to exercise. Regulators are being pushed to decide whether existing consent and transparency rules are fit for this new reality.
The ICO’s engagement with Meta could therefore become a test case for how far regulators are willing to stretch current law to cover ambient data collection. One possible outcome is stricter requirements for on-device indicators and social cues, such as more prominent recording lights or audible alerts when AI features are activated. Another is a demand for clearer, simpler explanations of when human review occurs and where those reviewers are located, written in language that ordinary buyers can understand.
There is also a geopolitical dimension. When personal data captured in the UK is routed to workers in Kenya, questions arise about cross-border data transfers, contractual safeguards, and the responsibilities of outsourcing firms. If regulators conclude that existing contractual clauses are not enough to protect people whose lives are recorded, they may push for direct oversight of the offshore facilities themselves or limit the kinds of data that can be exported for training.
For Meta and its competitors, the lesson is that AI innovation cannot be cleanly separated from the human and legal systems that sustain it. The promise of frictionless, voice-driven computing rests on an infrastructure of human labor and cross-border data flows that is anything but frictionless. As the ICO and other regulators probe that infrastructure, companies may find that the real challenge is not building smarter glasses, but convincing the public that the invisible people and processes behind those glasses deserve their trust.
This article was researched with the help of AI, with human editors creating the final content.