Anthropic, the maker of the Claude AI assistant, has published research that maps millions of real AI conversations onto the U.S. labor market’s official task classifications, producing a detailed picture of which jobs face the greatest exposure to automation. The findings shift the debate from theoretical guesses about AI’s reach to observed patterns of use, and the occupations at the top of the exposure list, including software developers, writers, and financial analysts, may surprise workers who assumed their roles were safe from disruption.
From Theory to Observed AI Usage
Most prior attempts to measure AI’s effect on jobs relied on expert judgment about which tasks a large language model could, in principle, perform. A widely cited 2023 working paper by Eloundou et al., with authors from OpenAI, OpenResearch, and the University of Pennsylvania, introduced a theoretical exposure framework built on the U.S. Department of Labor’s O*NET task taxonomy. That framework scored each occupation by the share of its tasks that a language model could feasibly handle, often summarized as the percentage of workers with at least 50% of their tasks exposed. Anthropic’s new research adopts that task-level exposure framework but adds a critical empirical layer: evidence of what people actually ask AI to do.
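The arithmetic behind that kind of exposure score is straightforward to sketch. The snippet below is a toy illustration of the two quantities the framework summarizes, not Eloundou et al.’s actual pipeline; the occupations, task flags, and worker counts are invented for the example.

```python
# Illustrative sketch of task-share exposure scoring. All data here is
# invented; a real analysis would use O*NET task statements and BLS
# employment counts.
from dataclasses import dataclass

@dataclass
class Occupation:
    name: str
    task_exposed: list[bool]  # one flag per task: could an LLM feasibly assist?
    workers: int

def exposure_share(occ: Occupation) -> float:
    """Share of an occupation's tasks judged feasible for a language model."""
    return sum(occ.task_exposed) / len(occ.task_exposed)

def share_of_workers_highly_exposed(occs: list[Occupation], cutoff: float = 0.5) -> float:
    """Fraction of all workers in occupations with >= `cutoff` of tasks exposed."""
    total = sum(o.workers for o in occs)
    exposed = sum(o.workers for o in occs if exposure_share(o) >= cutoff)
    return exposed / total

occs = [
    Occupation("technical writer", [True, True, True, False], 50_000),  # 75% of tasks
    Occupation("electrician", [False, False, True, False], 700_000),    # 25% of tasks
]
print(share_of_workers_highly_exposed(occs))  # only the writers clear the 50% cutoff
```

With these made-up numbers, only the writers cross the 50% threshold, so the headline statistic counts 50,000 of 750,000 workers as highly exposed.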
The Anthropic team’s preprint, titled “Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations,” carries arXiv ID 2503.04761. Rather than speculating about capability, the paper classifies real Claude usage sessions against the same O*NET task codes that the federal government uses to describe every occupation in the economy. The result is what the authors call an Economic Index, a measure that reflects not just what AI can do but what workers are already doing with it.
Software and Writing Dominate the Usage Map
The concentration of AI activity is sharply uneven. According to the Anthropic preprint, software-related and writing-related task domains account for a disproportionate share of real Claude conversations. That finding tracks with anecdotal reports from employers and freelancers but now carries empirical weight because it draws on millions of sessions rather than surveys or self-reports.
This skew matters for how we read the chart. A job can be theoretically exposed to AI across many of its tasks yet see little actual AI adoption if the tasks are physical, interpersonal, or context-dependent. Conversely, an occupation with only a handful of AI-eligible tasks might still face heavy disruption if those tasks, such as drafting code or producing marketing copy, represent a large share of billable hours. The gap between theoretical feasibility and real-world adoption is the core tension the Anthropic data tries to resolve, and the chart makes that gap visible at the occupation level.
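That tension between feasibility and adoption can be made concrete by differencing the two measures per occupation. The numbers below are invented purely to illustrate the pattern the article describes; neither score comes from the paper.

```python
# Invented scores contrasting theoretical exposure with observed usage.
# A negative gap means adoption lags what capability alone would predict;
# a positive gap means usage is running ahead of the theoretical score.
theoretical = {"paralegal": 0.70, "marketing coordinator": 0.40, "electrician": 0.05}
observed    = {"paralegal": 0.10, "marketing coordinator": 0.55, "electrician": 0.01}

def adoption_gap(theory: dict[str, float], usage: dict[str, float]) -> dict[str, float]:
    """Observed usage minus theoretical exposure, per occupation."""
    return {occ: usage[occ] - theory[occ] for occ in theory}

for occ, gap in sorted(adoption_gap(theoretical, observed).items(), key=lambda kv: kv[1]):
    print(f"{occ:>22}: {gap:+.2f}")
```

In this toy data the paralegal row shows a large negative gap (high feasibility, low adoption) while the marketing coordinator row is positive, which is exactly the kind of divergence the observed-usage data is meant to surface.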
How Government Labor Data Anchors the Analysis
Anthropic did not build its framework in isolation. The study draws on multiple layers of official federal data. The O*NET task taxonomy, maintained by the Labor Department, provides the standardized descriptions of what each occupation involves. To test whether AI exposure is already showing up in hiring and unemployment trends, the researchers turned to the Current Population Survey, the official household survey that produces U.S. unemployment statistics. CPS microdata and public-use files are published by the Bureau of Labor Statistics and offer granular, occupation-level detail on job-finding rates and labor force transitions.
Because CPS is a recurring survey, it can be connected to other BLS tools that describe the labor market from different angles. Summary tables in the agency’s Top Picks portal provide headline indicators such as unemployment rates and payroll employment, while series-level views via the series report interface let researchers track specific occupation codes over time. These datasets supply the backbone that allows Anthropic’s AI usage measures to be compared against the same benchmarks that policymakers use.
By linking conversation-level AI usage data to these government benchmarks, the Anthropic team can ask a question that pure theory cannot answer: are workers in high-exposure occupations already experiencing different labor market outcomes? The CPS data serves as the test bed for that question, grounding what could otherwise be a speculative exercise in the same survey infrastructure that policymakers rely on for monthly jobs reports.
Why Theoretical Exposure Alone Misleads
One of the sharpest takeaways from the Anthropic chart is that theoretical and observed exposure do not always line up. The Eloundou et al. framework, which Anthropic’s appendix uses as an input for defining whether tasks are theoretically feasible, was designed before widespread consumer and enterprise adoption of tools like ChatGPT and Claude. It estimated exposure based on what language models could plausibly do, not what users were choosing to delegate. That distinction is not academic. It determines which retraining programs, corporate strategies, and policy responses make sense.
Consider an occupation like paralegal work. On paper, many paralegal tasks, including document review, summarization, and legal research, fall within AI capability. But if actual usage data shows that paralegals are not yet offloading those tasks to AI at scale, the urgency of disruption looks different than the theoretical score suggests. The reverse is also true: marketing coordinators or data analysts whose roles include heavy writing or spreadsheet manipulation may face faster displacement than a simple task count would predict, because those specific tasks are exactly what millions of Claude users are already performing.
The distinction also matters for policymakers. If theoretical exposure scores are treated as destiny, resources might be poured into reskilling workers in occupations that, in practice, see slow AI adoption while neglecting roles where a smaller set of highly automatable tasks accounts for most of the economic value. Observed usage data helps narrow that gap by indicating where AI is already embedded in workflows, not just where it could be.
Early Labor Market Signals Worth Watching
The Anthropic researchers use CPS data to look for early statistical signals of AI’s effect on employment. Because CPS is a survey-based dataset with rotating panels, it can track whether individuals in high-exposure occupations are finding new jobs at different rates than those in low-exposure fields. The paper describes this as testing for early impacts on unemployment and job-finding patterns, an approach that treats AI disruption as something measurable in real time rather than a future scenario.
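The shape of such a test can be sketched with synthetic panel records. This is a simplified stand-in, not the paper’s method: real CPS microdata involves rotation groups, sampling weights, and many more controls, and every record below is invented.

```python
# Synthetic sketch of comparing job-finding rates between high- and
# low-exposure occupation groups across two survey waves. All records are
# invented; real CPS analysis requires weights and rotation-group logic.
from dataclasses import dataclass

@dataclass
class PanelRecord:
    occupation: str
    high_exposure: bool   # flagged by an exposure measure (assumed given)
    unemployed_t0: bool   # unemployed at the first interview
    employed_t1: bool     # employed at the next interview

def job_finding_rate(records: list[PanelRecord], high: bool) -> float:
    """Share of initially unemployed workers in the group employed next wave."""
    pool = [r for r in records if r.high_exposure == high and r.unemployed_t0]
    return sum(r.employed_t1 for r in pool) / len(pool)

records = [
    PanelRecord("software developer", True, True, False),
    PanelRecord("copywriter", True, True, True),
    PanelRecord("plumber", False, True, True),
    PanelRecord("nurse", False, True, True),
]
gap = job_finding_rate(records, high=True) - job_finding_rate(records, high=False)
print(f"job-finding gap (high minus low exposure): {gap:+.2f}")
```

A persistently negative gap in real data, after proper controls, would be the kind of early statistical signal the researchers describe looking for.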
Other official datasets help put those signals in context. Detailed occupation and industry series accessible through the BLS time series catalog let analysts compare AI-exposed roles to adjacent fields that rely less on language-heavy tasks. If, over time, employment growth or wage trends diverge between these groups, it will strengthen the case that AI is reshaping demand for certain skills rather than simply moving workers around within the same broad categories.
Separately, long-run occupation projections from BLS offer a longer time horizon. These projections estimate which job categories will grow or shrink over the coming decade, and they provide a useful cross-check against the Anthropic findings. If the occupations flagged as high-exposure in the Claude conversation data also appear in BLS projections as flat or declining, the case for near-term disruption strengthens considerably. Conversely, if projected growth remains strong in a high-usage occupation, that may indicate that AI is augmenting productivity rather than eliminating roles outright.
What the Chart Means for Workers and Employers
The practical value of Anthropic’s Economic Index lies in how it reframes planning. For workers in software, writing, and analytical roles, the message is not that their jobs will vanish overnight, but that the core tasks that justify their paychecks are already being shared with AI systems at scale. That reality increases the premium on skills that are complementary to AI (such as problem framing, domain expertise, client communication, and integration across tools), rather than on routine code writing or boilerplate drafting.
Employers, meanwhile, can use observed task exposure as a guide for redesigning roles and training programs. Instead of generic “AI literacy” workshops, companies can focus on the specific O*NET tasks that Anthropic’s data shows are most frequently delegated to AI, building playbooks and guardrails around those workflows. This targeted approach can accelerate productivity gains while reducing the risk that individual employees are left to improvise with powerful tools and little oversight.
For educators and workforce agencies, the combination of theoretical exposure measures and real usage data offers a more nuanced map of where to invest. Training programs can prioritize pathways that blend technical familiarity with AI tools and durable human capabilities that remain hard to automate. Policymakers can monitor CPS and related BLS indicators for widening gaps between high-usage and low-usage occupations, treating those gaps as early warning signs that may call for adjustments in social insurance, job placement services, or regional development strategies.
The Anthropic research does not settle the debate over AI and jobs, but it does move the conversation onto firmer empirical ground. By tying millions of Claude conversations to the same task and occupation codes that underpin official labor statistics, the study shows where AI is already woven into day-to-day work and where its presence remains mostly hypothetical. For now, the heaviest exposure is concentrated in software and writing-heavy roles, yet the methodology could be extended as AI tools spread into more domains. As that happens, the gap between what AI could do and what workers actually use it for will remain a critical space for analysis, and for choices about how to shape the future of work.
This article was researched with the help of AI, with human editors creating the final content.