
A fast-growing AI startup has found a way to turn unemployment into training data, recruiting jobless professionals to teach algorithms how to perform the very roles they once held. The arrangement offers short-term income and a sense of purpose, but it also raises a blunt question about whether people are being paid to accelerate their own obsolescence.
I see in this model a compressed version of the broader AI economy: a handful of companies racing to automate white-collar work, while displaced workers are pulled back in as temporary tutors for the systems that may replace them again.
How Mercor turned layoffs into a training pipeline
The startup at the center of this story is Mercor, an AI company that has built a business around hiring unemployed and underemployed professionals as contractors to train its models. According to reporting on the company, Mercor hired over 30,000 contractors in the past year, a scale that turns individual hardship into a structured labor pool for algorithmic training. The company pitches itself as a bridge between job seekers and automation, offering gigs that range from annotating data to walking AI systems through the step-by-step logic of tasks those workers once did full time.
Not just anyone can work for Mercor. Applicants have to demonstrate their abilities in the industries they came from, completing tests and trial projects that can stretch for weeks or even months before they see a steady flow of assignments. The company advertises a vast list of subject areas and pay that can reach as much as $150 an hour for highly specialized work, although many contractors report lower effective earnings once unpaid screening and downtime are factored in. One report notes that Mercor hired tens of thousands of contractors after signing partnerships with established AI firms, turning the platform into a kind of outsourced training arm for larger players in the sector, with some gigs now involving work on Mercor's own systems as well.
The workers teaching AI to do their old jobs
For the people on the other side of the screen, the arrangement is both lifeline and reminder of what they have lost. Many of the contractors are job seekers who were laid off from software engineering, marketing, customer support, or operations roles and now find themselves explaining those same workflows to an AI system. They are asked to break down complex tasks into granular instructions, label edge cases, and correct the model’s mistakes, effectively encoding their professional judgment into a dataset that can be reused at scale.
One account describes job seekers who had been out of work for weeks or months before turning to these gigs as a new source of income, often juggling multiple platforms while they search for a permanent role. The irony is hard to miss: the better they are at translating their expertise into machine-readable form, the more capable the AI becomes at performing similar tasks for considerably less pay. An MIT study cited in coverage of Mercor’s work found that AI systems can already handle a significant share of routine office work, and the contractors’ contributions are designed to make that performance even more reliable. Asked whether he felt responsible for the consequences of this automation, one founder, identified in reports as Jan, replied, “I didn’t invent AI and I’m not going to uninvent it.”
Mercor’s defense: inevitability and opportunity
Mercor’s leadership frames the model as a pragmatic response to forces that would reshape the labor market regardless of any single startup’s choices. Jan has argued that AI’s advance is inevitable and that his company is simply giving displaced workers a way to participate in, and profit from, that transition. In his telling, if he stopped building these systems, someone else would step in, and the workers he hires would lose a rare chance to monetize their expertise while they search for something more stable.
The company also emphasizes that its contractors are not just passive data points but active collaborators in making the technology safer and more accurate. By recruiting people who have actually done the jobs being automated, Mercor claims it can avoid some of the brittle, error-prone behavior that plagues generic models. That pitch has helped the startup secure partnerships with AI industry stalwarts and scale its contractor base into the tens of thousands, according to reporting on the tech startup. From the company’s perspective, the more the tech “gets even better,” the more valuable its platform becomes to corporate clients that want finely tuned automation.
The ethical tension: short-term pay, long-term risk
I see the core ethical tension here in the mismatch between short-term benefits and long-term risks. On one hand, contractors gain flexible work, sometimes at relatively high hourly rates, and a way to keep their skills sharp while they navigate a brutal job market. On the other, they are contributing to systems that could reduce the number of roles available in their fields, or at least push wages down as employers compare human salaries to the cost of an AI subscription. The fact that Mercor’s workers are often “desperate unemployed people,” as one report bluntly put it, raises questions about how voluntary this trade-off really is when rent is due.
There is also the issue of power and transparency. Contractors typically have little visibility into how their contributions will be used, how long the data will be retained, or whether they will share in any upside if the models they help train underpin lucrative products. The broader AI ecosystem, including companies like Anthropic, has started to talk more about responsible development and alignment, but the labor conditions behind that alignment work often remain opaque. When a startup like Mercor signs deals with larger AI firms, the human tutors at the bottom of the chain may have no say in how their knowledge is packaged, priced, or deployed.
What this model signals about the future of work
Mercor’s approach is not an isolated curiosity; it is an early template for how white-collar automation may unfold. Instead of a clean break between human jobs and machine replacements, I expect to see more hybrid arrangements where displaced workers cycle back in as trainers, evaluators, and safety checkers for the systems that undercut their old roles. That pattern is already visible in the way job seekers are turning to AI training gigs as a stopgap, as described in coverage of people who now find a “new source of income” by teaching models to perform their former responsibilities.
For policymakers and employers, the lesson is that reskilling cannot just mean nudging workers toward the very AI platforms that threaten to compress wages and reduce headcount. There is a difference between using AI as a tool and being hired to make oneself redundant. As more startups follow Mercor’s lead, regulators will have to decide whether this kind of labor should be treated like any other freelance work or whether it deserves special scrutiny, given its systemic impact on employment. The uncomfortable reality is that the same economic logic driving Mercor’s partnerships with AI industry stalwarts, documented in reports on the company’s rapid hiring of tens of thousands of contractors, will push other firms to copy the model. I suspect the question is not whether this approach spreads, but how quickly and under what rules, a point underscored by detailed accounts of how desperate workers are already being drawn into the training loop.