
Artificial intelligence is often sold as a frictionless, automated upgrade to daily life, yet behind the glossy dashboards and “smart” cameras are human beings working long hours for low pay to keep the systems running. When those systems are aimed at tracking cars, workers, and entire neighborhoods in the United States, the gap between the marketing and the labor reality becomes impossible to ignore. The story of one AI surveillance startup that quietly leaned on digital sweatshops to monitor Americans is not an outlier; it is a window into how the industry is being built.
I see a pattern emerging in which the same tools that watch drivers on U.S. streets and workers in U.S. warehouses are trained and maintained by people in the Philippines, Africa, and other low-income regions who have little power to challenge the conditions they face. That pattern is not just a labor issue; it is a civil liberties problem and, as some experts now argue, a national security risk.
How a “safety” startup turned into a global surveillance machine
The surveillance startup at the center of this controversy built its brand on a simple promise: use automated cameras to make communities safer by tracking vehicles and flagging suspicious activity. In practice, its network of license plate readers constantly scans for car movements, feeding a vast database that local police can query in seconds. Reporting shows that in a growing number of cases, those local agencies are also using the system to assist federal immigration enforcement, with Flock data flowing into the hands of Immigration and Customs Enforcement agents who are looking for people they want to detain.
What the company did not advertise to city councils or residents was that the supposedly automated pipeline depended on low-paid human workers overseas. Instead of a clean, fully machine-driven process, the system relied on people in the Philippines to review images, correct errors, and keep the data usable for police and for ICE. The result is a two-tiered surveillance machine in which Americans are watched by cameras on their own streets while workers in another country, with far fewer protections, quietly sustain the infrastructure that makes that watching possible.
Digital sweatshops in the Philippines and the “race to the bottom”
The Philippines has become a hub for this kind of hidden AI labor, with contractors recruiting people to label data, moderate content, and clean up the messy outputs of machine learning systems. Filipino AI ethicist Dominic Ligot has described these workplaces as “digital sweatshops,” arguing that the outsourcing model is creating a “race to the bottom” in wages and protections. In his view, the combination of low pay, withheld earnings, and opaque contracts leaves workers exposed while companies and their clients, including U.S. law enforcement, benefit from the cheap labor that keeps their tools running, a dynamic he has warned about in detail when discussing these outsourcing arrangements.
In the case of the surveillance startup feeding data to ICE, that “race to the bottom” is not an abstraction. Workers in the Philippines were reportedly tasked with reviewing license plate images and other sensitive material tied directly to Americans’ movements, yet they had little leverage to demand better conditions or to question how the data they handled would be used. The outsourcing model allowed the company to present itself as a lean, high-tech operation to U.S. police departments while offloading the human cost to a workforce that is largely invisible to the people being tracked.
Invisible humans in the AI loop
Researchers who study AI supply chains have been blunt about the extent to which human labor props up supposedly autonomous systems. One analysis of global AI production networks describes how people are hired to label images, transcribe audio, and correct model outputs, often through fragmented gig platforms that keep them hidden from end users. The authors argue that the industry has much to learn from “old school” physical supply chains, and that the same kind of scrutiny once applied to garment factories should now be turned on the invisible workers who make AI function; as one account of the research puts it, “there may be some learning to be had by studying the world of old-school physical supply chains.”
Professor Mark Graham has made a similar point about how the labor in AI production networks is “almost always hidden from view,” comparing it to the way consumers once bought coffee or clothing without seeing the plantations or factories behind them. He argues that when people use AI-powered services, they are often “complicit in this subterfuge,” because the design of the platforms keeps the workers out of sight and out of mind. That critique applies directly to the surveillance startup’s model, where Americans see a sleek interface for tracking cars while the people who make the system work, including those in the Philippines, remain obscured, a pattern Graham has described in his work on the hidden cost of AI.
From data labeling to national security risk
Some in the industry now warn that this dependence on low-paid, poorly protected workers is not only unethical but dangerous. Peter Kant, CEO of Enabled Intelligence, has argued that AI data “sweatshops” are bad news for national security because they put sensitive information in the hands of contractors who may not be vetted, trained, or bound by strong confidentiality rules. He points to recent media reports in the Washington Post and elsewhere that describe how Silicon Valley firms send large volumes of classified or sensitive data to overseas vendors, and he asks whether the United States can really claim to be building “world class AI technology” if it rests on such fragile foundations, a concern he lays out explicitly in an opinion piece published under his own byline.
When the data in question involves Americans’ movements, license plates, and potential immigration status, the stakes are even higher. The surveillance startup’s decision to route that information through digital sweatshops in the Philippines raises obvious questions about data security, chain of custody, and the potential for abuse. It also underscores a broader pattern, documented in reporting on the industry, in which Silicon Valley AI firms rely on cheap overseas labor and task those workers with the grueling, poorly paid jobs that make their products viable.
“Digital sweatshops” are not just offshore
The phrase “digital sweatshop” often conjures images of cramped offices in Manila or Nairobi, but the logic behind it is increasingly visible inside the United States as well. Warehouse workers describe being tracked, timed, and disciplined by AI systems that monitor every movement, turning the job into a kind of high-tech assembly line where the algorithm decides who is too slow. In one analysis of warehouse surveillance, researchers note that artificial intelligence can be used by employers like Amazon “to essentially have 24/7 unregulated and algorithmically driven surveillance of workers,” a description that captures how the same tools used to monitor productivity can also erode privacy and autonomy, as detailed in a study of how these systems shape warehouse labor.
That kind of surveillance is not limited to warehouses. The same company that runs one of the world’s largest e-commerce platforms has also become a symbol of how AI can be used to squeeze more output from workers while keeping them under constant watch. The experience of scanning items, racing to meet quotas, and knowing that every pause is logged by a system designed in Seattle but deployed on the warehouse floor is a domestic echo of the digital sweatshops abroad. The difference is that in the United States the brand is familiar, whether through Amazon’s retail site or its cloud services, while the offshore workers who train and maintain the AI remain largely anonymous.
When AI bosses berate workers by number
The same mentality that treats offshore data labelers as interchangeable parts is now surfacing in startups that promise to “optimize” factory and warehouse labor. Earlier this year, a Y Combinator-backed company drew outrage after a video showed its co-founder, Baid, using a dashboard to monitor individual workers on a factory line. In the clip, Baid identifies a worker causing a bottleneck as “No. 17” and berates the person through the interface for underperforming, reducing a human being to a number on a screen in front of potential investors, a moment that was captured in coverage of how Baid pitched the product.
The backlash was swift enough that Y Combinator later pulled support for the startup, but the underlying idea has not gone away. Another report described the company’s sales pitch as “dehumanizing,” noting that the software was marketed as a way to squeeze more productivity out of workers by tracking their every move and flagging those who fell behind; the piece opened with the line “It’s a dehumanizing sales pitch” and closed with the observation that “Unfortunately for workers around the world, public backlash only goes so far.” Outrage over one viral video, in other words, does little to slow the broader trend of AI tools that treat people as data points to be optimized.
Workers describe the grind of scanning and sifting
For the people inside these digital sweatshops, the work is often monotonous, psychologically draining, and poorly compensated. One account of AI data work describes how people spend hours scanning through vast amounts of text and images, flagging harmful content, labeling objects, or checking whether an AI output is accurate. As that reporting puts it, “Scanning through vast amounts of text and images is draining whatever the content,” yet many workers also have to sift through graphic violence, hate speech, and other disturbing material, and if they push back or miss a deadline, they risk losing the contract or having their pay withheld.
Those conditions are not an accident; they are the result of a business model that treats human labor as a flexible, disposable input. Contracts are often short-term, pay is tied to piecework rather than hours, and workers are expected to be available at odd times to meet the demands of clients in different time zones. But when the client is a U.S. surveillance startup feeding data to police and ICE, the ethical stakes rise sharply. The people doing the work are not just cleaning up spam or labeling cats in photos; they are helping to build a system that can determine whether someone is stopped, questioned, or detained on an American street, even as they themselves have little say in how that system is governed. The same reporting notes that workers who raise concerns about the harmful material they sift through often struggle to get the work back, a dynamic that has alarmed human rights advocates.
Public backlash and the limits of outrage
When the surveillance startup’s labor practices came to light, the reaction on social media was sharp and immediate. One post that circulated widely framed the issue bluntly: “So the company tracking Americans ‘for safety’ is secretly paying offshore workers in digital sweatshops,” a line that captured the sense of betrayal among people who had been told the system was about community protection. The post, shared by Robert Morton, drew 12 likes and 434 views, modest numbers by viral standards but enough to show that the story resonated with an audience already skeptical of AI surveillance.
Yet as with the Y Combinator surveillance startup, outrage alone has not forced a fundamental change in how these systems are built. Contracts with police departments remain in place, ICE continues to tap into the data, and the offshore labor pipelines that support the technology keep humming in the background. The pattern is familiar from other industries: a scandal briefly exposes the hidden labor, companies issue statements or tweak their messaging, and then the status quo resumes. In the AI context, that cycle is especially troubling because each new deployment of surveillance tools deepens the dependence on digital sweatshops, making it harder to unwind the model later.
AI’s dirty secret: sweatshops, carbon, and the race to the bottom
Behind the specific case of one surveillance startup lies a broader structural problem. AI’s rapid advancement comes with a hidden human cost, not just in the number of jobs that may be eliminated but in the way new jobs are created under conditions that look a lot like old-fashioned sweatshops. Analysts warn that the industry is locked in a race to the bottom for workers and the environment, with companies competing to find the cheapest labor and the least regulated jurisdictions to host energy-hungry data centers.
That “dirty secret” is not limited to offshore data labeling or to one company’s partnership with ICE. It runs through the entire AI ecosystem, from the content moderators who shield users from the worst of the internet to the warehouse workers whose every move is tracked by algorithms. As Professor Graham and others have argued, the labor is deliberately kept out of sight, and consumers are encouraged to think of AI as a purely technical achievement rather than a socio-economic system built on human effort. Until that illusion is broken, the incentives will continue to favor companies that cut corners on labor and privacy, whether they are selling smart cameras to police departments or promising faster delivery on a shopping app.
What accountability could look like
If the United States is serious about protecting both civil liberties and workers’ rights, the model exposed by the surveillance startup’s use of digital sweatshops cannot stand. One starting point would be to require full transparency about the human labor behind AI systems used by government agencies, including clear disclosures about where data is processed, who has access to it, and under what conditions they work. Another would be to treat AI supply chains more like traditional manufacturing, with audits, labor standards, and enforcement mechanisms that reach all the way down to the contractors in places like the Philippines, echoing the call from researchers who argue that the AI industry should learn from the scrutiny once applied to garment factories and other global supply chains.
There is also a role for procurement rules that bar agencies from buying tools built on exploitative labor, and for civil society groups to push for stronger protections for both the people being watched and the people doing the watching. The same energy that has gone into exposing how Silicon Valley AI firms rely on cheap overseas labor could be channeled into concrete standards that make it harder for companies to hide behind the language of “automation” while quietly running digital sweatshops. Without that kind of structural change, the next scandal will look a lot like this one: a startup promising safety and efficiency, a hidden workforce bearing the brunt of the work, and Americans’ lives quietly funneled through a system they never really agreed to.