Morning Overview

OpenAI researchers’ AI exposure scores show high earners most at risk

Researchers linked to OpenAI have produced occupation-level exposure scores showing that the workers most vulnerable to large language model disruption are not low-wage laborers but higher-earning professionals in law, finance, and education. The finding inverts a common assumption about automation, which has historically threatened factory floors and service counters first. With roughly 80 percent of U.S. workers facing at least some task-level exposure to LLMs, the scores raise hard questions about who benefits from AI advancement and who absorbs the cost.

What the Exposure Scores Actually Measure

The core research behind the headline comes from a 2023 paper by Tyna Eloundou and colleagues, including OpenAI researchers, that built a task-based, occupation-level rubric for measuring LLM exposure using both human annotators and GPT-4 classification. Rather than asking whether a job can be fully replaced, the rubric evaluates what share of an occupation’s individual tasks could be performed or significantly accelerated by a large language model. The result is a granular score for each occupation, not a binary “safe or doomed” label.

That rubric produced a striking topline estimate: approximately 80 percent of U.S. workers have at least 10 percent of their tasks exposed to LLMs, and roughly 19 percent of workers face exposure on 50 percent or more of their tasks. These numbers do not predict layoffs. They measure technical feasibility, the degree to which current LLM capabilities could handle a given task if deployed. The distinction matters because exposure is a precondition for disruption, not proof of it. A task can be technically automatable while remaining embedded in workflows, norms, or regulations that delay or prevent full automation.
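To make the arithmetic behind those figures concrete, the sketch below aggregates task-level exposure flags into occupation-level shares and then weights occupations by employment. The occupations, task flags, and worker counts here are invented for illustration; the actual rubric scores real O*NET task lists using human annotators and GPT-4.

```python
# Toy illustration of the task-level aggregation behind the topline numbers.
# All data is hypothetical; the real rubric uses O*NET tasks scored by
# human annotators and GPT-4.

# Hypothetical per-occupation task exposure flags (1 = exposed, 0 = not),
# paired with worker counts used to weight occupations by workforce share.
occupations = {
    # name: (task_exposure_flags, workers)
    "paralegal": ([1, 1, 1, 0, 1], 350_000),
    "financial_analyst": ([1, 1, 0, 1], 300_000),
    "welder": ([0, 0, 1, 0, 0, 0], 400_000),
}

def exposed_share(flags):
    """Fraction of an occupation's tasks an LLM could perform or speed up."""
    return sum(flags) / len(flags)

total_workers = sum(w for _, w in occupations.values())
for threshold in (0.10, 0.50):
    covered = sum(w for flags, w in occupations.values()
                  if exposed_share(flags) >= threshold)
    print(f">= {threshold:.0%} of tasks exposed: {covered / total_workers:.0%} of workers")
```

Even on this toy data, the shape of the finding emerges: nearly all workers clear a low exposure threshold, while a smaller employment-weighted share clears the 50 percent bar.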

Why Higher Wages Correlate With Greater Risk

A separate paper by Ed Felten, Manav Raj, and Robert Seamans reported a clear positive link between wages and AI exposure. Their analysis ranked specific occupations and industries by vulnerability. Legal services, securities and commodities trading, and various categories of post-secondary teachers all scored near the top. So did telemarketers, a notable outlier at the lower end of the pay scale, but the broader pattern tilted clearly toward well-compensated knowledge work.

The logic is straightforward. High-wage occupations tend to revolve around language-intensive tasks: drafting contracts, writing reports, synthesizing research, preparing lectures, and analyzing financial disclosures. These are precisely the capabilities where large language models like GPT-4 perform best. Manual labor, spatial reasoning, and hands-on caregiving, which dominate many lower-wage jobs, remain largely outside the reach of text-based AI systems. The result is a reversal of the classic automation story, where robots replaced assembly-line workers. This time, the technology targets the office.

Felten and coauthors built their exposure measures by mapping AI-relevant skills and tasks onto detailed occupational categories. That approach echoes earlier work on computerization risk but focuses on generative models that can produce fluent text, code, and images. The outcome is a ranking in which many of the most exposed roles (lawyers, financial analysts, marketing professionals, and professors) also sit near the top of the income distribution.
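One simple way to picture that mapping is an importance-weighted average: each occupation is a bundle of skills, each skill has some degree of relatedness to generative AI, and the exposure score blends the two. The sketch below is a minimal illustration with invented skills, weights, and relatedness values, not Felten et al.'s actual data or formula.

```python
# Illustrative only (not Felten et al.'s measure): occupation exposure as an
# importance-weighted average of how AI-relevant each constituent skill is.
ai_relatedness = {  # hypothetical 0-1 scores for generative AI relevance
    "writing": 0.9, "analysis": 0.8, "negotiation": 0.4, "manual_dexterity": 0.1,
}

occupation_skills = {  # hypothetical skill-importance weights per occupation
    "lawyer": {"writing": 0.5, "analysis": 0.3, "negotiation": 0.2},
    "welder": {"manual_dexterity": 0.8, "analysis": 0.2},
}

def exposure(skills):
    """Weighted average of AI relatedness over an occupation's skill mix."""
    return sum(w * ai_relatedness[s] for s, w in skills.items()) / sum(skills.values())

for occ, skills in occupation_skills.items():
    print(occ, round(exposure(skills), 2))
```

The toy weights already reproduce the pattern in the rankings: the language-heavy occupation scores far higher than the manual one.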

Federal Reserve Analysis Ties Exposure to Education

The Federal Reserve Board of Governors extended this line of research by combining Felten et al.’s generative AI occupational exposure scores with data from the National Survey of College Graduates, a dataset that complements labor information from the long-running Current Population Survey. The resulting analysis maps exposure patterns by education level, college major, and institution type, and it covers both language modeling and image generation dimensions of generative AI.

In their note, Fed economists use the survey’s detailed information on degrees and fields of study to show how education channels workers into more or less exposed occupations. By linking graduates’ majors to the exposure scores attached to their current jobs, the analysis traces a pipeline from classroom to career that shapes who faces the sharpest AI-related shifts.
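In spirit, that linkage is a join: attach each graduate's occupation-level exposure score, then average by field of study. The sketch below shows the shape of the computation with made-up records and scores; the column names and data are hypothetical, not the Fed's actual files.

```python
# Hypothetical sketch of the major-to-exposure linkage: join graduates'
# fields of study to the exposure score of their current occupation,
# then average exposure by major. All records and scores are illustrative.
import pandas as pd

graduates = pd.DataFrame({
    "major": ["law", "law", "mech_eng", "business", "business"],
    "occupation": ["lawyer", "paralegal", "welder", "financial_analyst", "lawyer"],
})
exposure_scores = pd.DataFrame({
    "occupation": ["lawyer", "paralegal", "welder", "financial_analyst"],
    "exposure": [0.77, 0.74, 0.24, 0.70],
})

linked = graduates.merge(exposure_scores, on="occupation")
print(linked.groupby("major")["exposure"].mean().sort_values(ascending=False))
```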

The Fed’s findings reinforce the wage-exposure link through an educational lens. Workers with advanced degrees, particularly in fields like business, engineering, and the social sciences, tend to land in occupations with higher exposure scores. That creates an uncomfortable pattern: the credentials that command premium salaries also channel graduates into the jobs most susceptible to LLM-driven change. A law degree, for instance, leads to legal services, one of the highest-ranked industries on the exposure index. An MBA funnels graduates into analytical and managerial roles where report generation and data synthesis are daily tasks. The Fed’s educational exposure analysis underscores that AI risk is not randomly distributed; it is structured by the higher-education system itself.

Exposure Is Not Replacement, but the Gap May Narrow

Most coverage of AI and jobs conflates exposure with displacement. The research itself is more careful. The rubric developed by Eloundou and colleagues measures whether a task could technically be handled by an LLM, not whether employers will actually automate it. Institutional friction, regulatory barriers, client preferences, and the cost of integration all slow the path from “technically feasible” to “actually deployed.” A Pew Research Center study of U.S. government occupation and task data draws a similar distinction, ranking occupations by AI exposure while cautioning that exposure does not equal replacement.

Still, the gap between exposure and displacement may shrink faster than previous automation waves suggest. The GPT-4 technical report documented the model’s ability to pass standardized exams in law, medicine, and business at or near expert human levels, providing concrete evidence of high-stakes test performance. Each new model generation expands the set of tasks that cross the feasibility threshold. When exposure scores were first published, GPT-4 was the benchmark. Successive models have only widened the range of professional tasks within reach, which means the 80 percent figure from the original rubric likely understates current technical exposure.

Moreover, once firms invest in integrating AI into workflows (building interfaces, setting policies, and training staff), the marginal cost of automating additional tasks falls. That dynamic could accelerate the transition from partial assistance to deeper restructuring of roles, especially in sectors where work is already highly standardized and documented.

What High Earners Stand to Lose, and Gain

The concentration of AI exposure among high earners points to two divergent outcomes. On one side, professionals whose value rests on routine knowledge work, such as document review in law firms, standard financial modeling, or lecture preparation, face genuine pressure. If an LLM can draft a competent first version of a legal brief or an earnings analysis in seconds, the billable hours attached to those tasks erode. Firms that adopt AI tools aggressively could reduce headcount in mid-level professional roles while maintaining output.

Junior and support roles may be particularly vulnerable. Associates, paralegals, research assistants, and entry-level analysts often spend much of their time on precisely the kinds of repeatable, text-heavy tasks that LLMs now handle well. If those tasks shrink, the traditional apprenticeship ladders that move workers from low-autonomy to high-autonomy roles could narrow, changing how careers in law, finance, and academia are built.

On the other side, professionals who learn to direct AI tools effectively may see their productivity and earning power rise. A lawyer who uses an LLM to handle initial research can take on more cases. A professor who automates grading can spend more time on original scholarship or student mentoring. A financial analyst who relies on AI for first-draft models can devote more energy to judgment calls, client communication, and strategy. The exposure scores do not predict which outcome wins. They simply confirm that the stakes are highest where the paychecks are largest.

Policy choices and organizational strategies will shape the balance. Employers can use AI primarily to cut costs and headcount, or they can redeploy time savings into new services and higher-quality work. Professional associations and universities can respond by updating training, emphasizing skills that complement rather than compete with generative models, such as ethical reasoning, client management, and cross-disciplinary problem-solving.

For individual workers, the message is less about fleeing exposed occupations than about reshaping roles within them. The same research that highlights vulnerability also points to opportunity: the capabilities of generative AI align most closely with high-status, language-centric work. Those who learn to treat LLMs as collaborators rather than competitors may find that the technology amplifies their expertise instead of replacing it.

This article was researched with the help of AI, with human editors creating the final content.