OpenAI CEO Sam Altman has repeatedly warned that artificial intelligence is redirecting economic power away from workers and toward the owners of capital, a shift he says lacks an obvious policy remedy. His framing, that AI tilts bargaining dynamics in favor of those who own the machines rather than those who operate them, has drawn both agreement and sharp pushback from economists and policy analysts. With labor’s share of national income already on a decades-long slide before generative AI arrived, the question now is whether the technology will deepen that trend or whether targeted interventions can blunt it.
Altman’s Warning on Bargaining Power
In a 2021 interview on a New York Times podcast, Altman laid out a theory of AI-driven abundance that came with a political catch. He argued that AI could generate enormous wealth but that the distribution of that wealth would depend on who controlled the technology. The conversation covered inequality, political power, and the difficulty of designing policy responses to a fast-moving technological shift. Altman’s central concern was that workers would lose their primary source of economic influence, the ability to withhold labor, as AI systems grew capable enough to replace or reduce the need for human effort in more and more tasks.
That argument has only gained traction since 2021. Altman’s phrasing, a “shift of leverage from labor to capital,” has become a reference point in the broader debate about AI economics. A commentary from the Cato Institute flagged that exact language while arguing that AI does not fundamentally change the underlying economics of labor and capital. The disagreement is instructive: even skeptics of Altman’s framing treat the claim seriously enough to rebut it with data, suggesting that the stakes of getting the diagnosis right are widely recognized.
What Federal Data Shows About Labor’s Declining Share
The debate over whether AI accelerates a power shift is not happening in a vacuum. U.S. government data already documents a long decline in the portion of national income going to workers. The Bureau of Economic Analysis tracks national income aggregates, including employee compensation and gross operating surplus, the two main buckets that split income between labor and capital. Those series show that labor’s share peaked in the 1970s and has trended lower since, settling at roughly three-fifths of national income in the 2000s while the capital share has risen.
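The split described above reduces to a simple ratio: labor’s share is employee compensation divided by the sum of compensation and gross operating surplus. A minimal sketch, using hypothetical round numbers rather than actual BEA figures, shows how the roughly three-fifths split falls out of that arithmetic:

```python
def labor_share(employee_compensation: float, gross_operating_surplus: float) -> float:
    """Labor's share of income: compensation / (compensation + operating surplus)."""
    return employee_compensation / (employee_compensation + gross_operating_surplus)

# Hypothetical figures in trillions of dollars, chosen only to illustrate
# the rough three-fifths split the BEA series show for the 2000s.
share = labor_share(employee_compensation=12.0, gross_operating_surplus=8.0)
print(f"labor share: {share:.0%}")  # 12 / 20 = 60%
```

Real BEA national income tables include additional components (such as taxes on production and proprietors’ income) whose allocation between labor and capital is itself a methodological choice, which is why published labor-share estimates vary slightly across sources.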
Supporting indicators from the Federal Reserve’s FRED database tell a similar story through a different lens. FRED aggregates official series from the Bureau of Labor Statistics, BEA, and other agencies, tracking real wages, productivity, unemployment, and job openings. The persistent gap between productivity growth and real wage growth over recent decades is one of the clearest signs that workers have not captured the full value of their output, even during periods of low unemployment and elevated job vacancies. That divergence is precisely what many economists mean when they say labor has been “losing bargaining power” relative to capital.
This is the backdrop against which Altman’s claim lands. If labor was already losing ground before large language models and AI coding assistants arrived, the concern is that AI tools capable of automating white-collar tasks, from drafting legal briefs to writing software, could accelerate the trend by reducing demand for human workers in roles that previously offered strong bargaining positions. In industries where a small number of firms control access to powerful AI systems, the imbalance between those who own the tools and those who rely on selling their time could grow more pronounced.
Universal Basic Income Falls Short as a Fix
Altman has not only diagnosed the problem; he has also funded research into potential solutions. One of the most prominent is universal basic income, the idea of providing all adults with a regular, unconditional cash payment. But the evidence from his own investment in the idea has been mixed at best. A study published in early 2025 and supported by Altman’s funding found that UBI is not a comprehensive solution to the economic dislocations AI may cause. The research characterized AI-justified UBI proposals as more symbolic than effective, warning that they can be more self-serving for technology firms than beneficial for workers.
That finding matters because UBI has been the default answer in many Silicon Valley circles whenever the question of AI displacement comes up. A simple, universal cash transfer is easy to explain and easy to pitch as a humane response to automation. Yet if one of the most closely watched experiments in the space concludes that cash transfers alone cannot offset the structural loss of bargaining power, then the policy conversation needs to move to harder terrain: tax reform, new ownership models, sector-specific retraining mandates, or direct limits on how and where AI can substitute for human labor.
None of those alternatives have the clean simplicity that makes UBI attractive to tech executives. Designing wage insurance, portable benefits, or worker equity stakes in AI-intensive firms requires granular rules and enforcement capacity. Experimenting with co-determination or worker representation on corporate boards in AI-heavy sectors would be politically contentious. And tying AI deployment to commitments around job quality or retraining would force regulators to make difficult judgments about which uses of automation are socially acceptable. The complexity of these options helps explain why progress has been slow, even as the rhetoric about AI disruption has intensified.
The Counterargument and Its Limits
Not everyone accepts the premise that AI fundamentally rewrites the economics of labor and capital. The Cato Institute’s analysis argues that historical data does not support the idea that automation technologies permanently shift income distribution. The institute’s case rests on the observation that past waves of technological change, from mechanized agriculture to factory robotics, eventually created new categories of work that absorbed displaced workers and sustained overall employment. By this logic, AI is just another chapter in a familiar story in which productivity gains ultimately benefit society through lower prices, higher output, and new industries.
The counterargument has merit as a historical observation, but it may underestimate the speed and breadth of AI’s reach. Previous automation waves primarily targeted physical, repetitive tasks. Generative AI targets cognitive work, including analysis, writing, design, and programming, that was long considered resistant to automation and that often anchors middle-class careers. The question is not whether new jobs will eventually emerge but whether the transition period will be long enough and painful enough to permanently weaken labor’s position. If capital owners can deploy AI tools faster than workers can retrain or move into new roles, the gap between productivity and wages could widen further before any new equilibrium takes hold.
There is also a political dimension. Even if the long-run employment effects of AI resemble past technologies, the distribution of income and power during the transition might look very different. Concentrated ownership of data, compute infrastructure, and proprietary models could give a small number of firms outsized influence over both markets and policy. In that scenario, the issue is less about aggregate job counts and more about who captures the surplus generated by AI, and whether workers have any institutional leverage to claim a share.
Why Sector-Specific Policy May Matter More Than Broad Redistribution
The gap between Altman’s diagnosis and the available policy toolkit is the real story. He has identified a structural problem (the erosion of worker bargaining power through technological displacement), but neither he nor the broader policy establishment has offered a fully developed fix. UBI research funded by Altman’s own resources suggests that broad cash transfers are insufficient. Libertarian critics argue no fix is needed because markets will self-correct, pointing to historical resilience in labor markets. Between those poles lies a more incremental approach that treats AI as a sector-by-sector challenge rather than a single, sweeping shock.
Sector-specific policy could focus on how AI is adopted in particular industries and what conditions attach to that adoption. In health care, for example, regulators could encourage AI tools that augment clinicians rather than replace them, while tying reimbursement rules to demonstrated improvements in patient outcomes and job quality. In logistics and warehousing, where automation can rapidly displace large numbers of workers, policymakers might require advance notice, transition plans, or funding for retraining when firms implement AI-driven systems at scale.
Another lever is ownership. If AI systems become core infrastructure for many sectors, there is a case for experimenting with models that give workers or the public a direct stake in the returns. That could mean employee stock ownership plans in AI-intensive firms, data trusts that share licensing revenue with contributors, or public options in foundational AI services that reduce the pricing power of dominant private platforms. None of these ideas are simple, and each raises its own design challenges, but they speak directly to the bargaining-power problem Altman has highlighted.
Ultimately, the question is whether policymakers treat AI as just another productivity shock or as a moment that demands new institutional arrangements. Altman’s warning about a shift of leverage from labor to capital resonates because it connects visible trends in the data with plausible scenarios about how AI will be deployed. The evidence so far suggests that familiar tools like UBI will not, on their own, preserve workers’ bargaining power in the face of rapid automation. If governments want to avoid a future in which AI-driven gains accrue primarily to a narrow class of capital owners, they will need to move beyond symbolic fixes and into the unglamorous work of rewriting rules, sector by sector, that determine who benefits when machines learn to do what humans once did.
*This article was researched with the help of AI, with human editors creating the final content.