Andrej Karpathy, a co-founder of OpenAI, briefly published a ranking of U.S. jobs most exposed to artificial intelligence on his X account before quietly removing it. The post, which categorized occupations by their vulnerability to AI-driven automation, drew immediate attention and criticism from workers, researchers, and fellow technologists. Its rapid deletion raises a pointed question: why would someone at the center of AI development pull back from a public statement about the technology’s effects on employment?
What Karpathy Posted and Why It Disappeared
Karpathy’s now-deleted post reportedly featured a visualization ranking U.S. occupations by how susceptible they are to displacement or significant change from AI tools. Roles heavy in routine data processing, text generation, and customer interaction appeared near the top of the risk list. The post circulated widely through screenshots before its removal, but Karpathy has not offered a public explanation for taking it down.
Without a direct statement from Karpathy or an official response from OpenAI, the reasons behind the deletion are a matter of informed speculation. One plausible reading is that the post, coming from someone so closely tied to AI development, carried an implicit authority that could amplify public anxiety about job losses. Another possibility is that the underlying methodology or data presentation drew private criticism for oversimplifying a complex problem. Both interpretations point to a growing tension among AI leaders between transparency about the technology’s effects and the risk of stoking fear.
The Federal Data Behind Job Risk Rankings
Any serious attempt to rank U.S. occupations by AI exposure relies on standardized employment data. The primary government resource for this kind of analysis is the Occupational Outlook Handbook, published by the U.S. Bureau of Labor Statistics. The handbook catalogs hundreds of occupations, providing job counts, pay data, growth projections, and descriptions of typical duties for each role.
This dataset is not a niche academic tool. It forms the backbone of workforce planning across federal agencies, universities, and private employers. The broader employment framework maintained by the Labor Department underpins policy development, apprenticeship programs, and worker training initiatives that depend on reliable occupational categories.
Supporting data infrastructure from the Bureau of Labor Statistics, including tools for custom queries and detailed series reports, allows analysts to pull time-series data on specific job categories. These datasets make it possible to weight occupations by actual employment volume rather than treating every job title as equally significant. A ranking that flags “data entry clerk” as high-risk, for example, carries different weight depending on whether that category employs tens of thousands or millions of workers.
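The employment-weighting idea can be sketched in a few lines of Python. The occupation names, headcounts, and exposure scores below are invented for illustration, not drawn from BLS data; the point is only that multiplying a risk score by employment volume can reorder a ranking.

```python
# Hypothetical illustration: weighting occupation-level AI-exposure
# scores by employment counts. All figures below are invented.

occupations = {
    # name: (employment_count, exposure_score in [0, 1])
    "Data entry clerks": (150_000, 0.9),
    "Customer service reps": (2_800_000, 0.7),
    "Electricians": (700_000, 0.2),
}

def weighted_workers_at_risk(data):
    """Total employment weighted by each occupation's exposure score."""
    return sum(count * score for count, score in data.values())

def rank_by_impact(data):
    """Rank occupations by workers-at-risk, not raw score alone."""
    return sorted(data, key=lambda k: data[k][0] * data[k][1], reverse=True)

print(rank_by_impact(occupations))
```

Note that data entry clerks carry the highest raw score but rank last by total impact, because the invented category is small; a large, moderately exposed occupation like customer service dominates instead.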
For researchers constructing exposure scores, the ability to drill down into occupation-level counts is crucial. It lets them distinguish between small, specialized roles and broad categories like office support or customer service. In turn, this supports policy conversations about which groups of workers might need reskilling support first if AI adoption accelerates.
Why AI Insiders Keep Hedging in Public
Karpathy’s decision to post and then retract a job-risk ranking fits a pattern among senior AI figures who oscillate between candor and caution. The commercial incentives are obvious: companies building AI products benefit from excitement about the technology’s capabilities, but they face backlash when that excitement translates into concrete fears about unemployment. A ranking list, by its nature, names winners and losers. That specificity is far harder to walk back than a vague statement about “AI changing work.”
The episode also highlights a gap in how AI companies communicate about labor market effects. OpenAI and its competitors regularly publish research on model capabilities, safety benchmarks, and alignment techniques. What they rarely produce is detailed, occupation-level analysis of which jobs their products are most likely to reshape or eliminate. That analytical work is largely left to outside researchers, government agencies, and journalists, who must piece together implications from capability announcements and employment data.
This asymmetry matters because AI developers hold information that outside analysts do not. They know which tasks their models perform well, which industries are adopting their tools fastest, and where the next generation of products is headed. When an insider like Karpathy shares a risk ranking, it carries a signal value that a third-party academic study does not, precisely because it suggests access to proprietary knowledge about the technology’s trajectory.
What Standard Employment Data Can and Cannot Tell Us
The Bureau of Labor Statistics tracks employment statistics and occupational characteristics across hundreds of job categories, but the data was not designed to measure AI exposure. The handbook describes what workers in each occupation do, how much they earn, and how fast the field is growing or shrinking. It does not assign an “automation risk score” or predict which roles AI will affect first.
Researchers who build AI-exposure indexes typically overlay the handbook’s task descriptions with assessments of current AI capabilities. A job that involves primarily routine text processing, for instance, might score high on exposure because large language models already perform that work at commercial scale. A job that requires complex physical manipulation in unpredictable environments would score low. But these overlays involve judgment calls about what AI can and cannot do, and those judgments shift as the technology improves.
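The overlay approach described above can be made concrete with a minimal sketch. The task lists and capability scores here are hypothetical judgment calls, exactly the kind that shift as the technology improves; real exposure indexes use far richer task taxonomies.

```python
# Sketch of overlaying occupation task descriptions with assessments
# of current AI capability. All tasks and scores are hypothetical.

ai_capability = {  # judgment calls, revised as models improve
    "routine text processing": 0.9,
    "drafting correspondence": 0.8,
    "physical manipulation": 0.1,
    "client negotiation": 0.3,
}

occupation_tasks = {
    "Paralegal": ["routine text processing", "drafting correspondence",
                  "client negotiation"],
    "Plumber": ["physical manipulation", "client negotiation"],
}

def exposure(tasks):
    """Mean AI-capability score across an occupation's tasks."""
    return sum(ai_capability[t] for t in tasks) / len(tasks)

for job, tasks in occupation_tasks.items():
    print(job, round(exposure(tasks), 2))
```

Under these assumed scores the text-heavy role comes out far more exposed than the physical one, but changing a single capability judgment would move the numbers.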
The raw numbers and classifications provided through tools like the BLS “top picks” interface are stable; the uncertainty lies in how they are interpreted. Two research teams using the same employment data can reach different conclusions about which occupations face the greatest risk, depending on how they define “exposure,” whether they account for partial automation versus full replacement, and how they weight the speed of adoption in different industries.
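How two teams reach different conclusions from identical data comes down to definitional choices like these. A toy example with invented task-level scores shows how merely switching from average exposure to peak-task exposure flips which occupation looks riskier:

```python
# Two teams, same task-level data, different "exposure" definitions.
# Scores are invented; the point is that the definition flips rankings.

tasks = {
    "Occupation A": [0.9, 0.1, 0.1],   # one highly automatable task
    "Occupation B": [0.5, 0.5, 0.5],   # uniformly moderate tasks
}

# Definition 1: exposure is the mean score across tasks.
mean_exposure = {k: sum(v) / len(v) for k, v in tasks.items()}

# Definition 2: exposure is the single most automatable task.
peak_exposure = {k: max(v) for k, v in tasks.items()}

print(max(mean_exposure, key=mean_exposure.get))  # Occupation B
print(max(peak_exposure, key=peak_exposure.get))  # Occupation A
```

A mean-based index emphasizes broad, partial automation; a peak-based one emphasizes whether any core task is already automatable, roughly the partial-automation-versus-full-replacement distinction the paragraph above describes.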
In practice, most occupations sit somewhere between full displacement and complete insulation. AI systems may automate specific tasks within a role, such as drafting routine emails for paralegals or triaging customer inquiries for support agents, without eliminating the job category altogether. Standard employment datasets can show where these workers are and how many there are, but they cannot specify how job content will evolve as AI tools are integrated.
The Silence That Speaks Loudest
Most coverage of AI and employment focuses on what companies and researchers say. Karpathy’s deleted post is notable for the opposite reason: it draws attention to what AI leaders choose not to say, or choose to unsay. The act of publishing a job-risk ranking and then removing it within hours communicates something distinct from never publishing it at all. It suggests that the analysis existed, that someone with deep technical knowledge thought it was worth sharing, and that something (whether internal pressure, public reaction, or second thoughts about accuracy) changed the calculus.
For workers in occupations that appeared on the list, the deletion does not erase the underlying concern. If anything, it underscores how little direct guidance they receive from the people building the tools most likely to reshape their jobs. Public agencies can use official statistics and occupational categories to map where workers are, and independent analysts can estimate where AI might land hardest. But only insiders like Karpathy can connect those maps to the concrete capabilities of frontier systems and the product roadmaps that determine how quickly those capabilities will reach workplaces.
Karpathy’s brief foray into public ranking, and his decision to pull back, captures a broader moment in the AI economy. The infrastructure for understanding work (federal data, occupational taxonomies, long-running surveys) was built for a slower era of technological change. The infrastructure for understanding AI (capability demos, benchmark papers, model cards) is still evolving and remains largely under corporate control. Where those two worlds overlap, in the day-to-day reality of workers wondering whether their job is next, the loudest message right now may be the quietest one: a chart that flashed across social media, then vanished, leaving behind more questions than answers.
*This article was researched with the help of AI, with human editors creating the final content.