Morning Overview

Poll: Many Americans think AI will widen wealth inequality

Nearly half of Americans believe artificial intelligence will hurt their daily lives more than help them, and a growing share worry the technology will widen the gap between rich and poor. That anxiety, captured in multiple recent polls and reinforced by academic research, reflects a public that sees AI’s benefits flowing disproportionately to those who already hold wealth and power. The question is whether the evidence supports that fear or whether the picture is more complicated than the polls suggest.

Public Opinion Tilts Toward Skepticism

A Quinnipiac University poll released in April 2025 found that 44 percent of Americans think AI will do more harm than good to their day-to-day lives, while just 38 percent think it will do more good; the rest offered no opinion. That gap between pessimists and optimists is notable because it captures a population already living alongside AI tools in their workplaces, phones, and online interactions, yet still largely unconvinced that the trade-offs favor them.

The inequality dimension sharpens that skepticism. A survey cited by Brookings reports that about half of Americans think the increased use of AI will lead to greater income inequality in the next five years. That figure tracks closely with the Quinnipiac finding: the share of people who expect harm and the share who expect inequality are nearly identical, suggesting these concerns are linked in the public mind.

Public unease is reinforced by the way AI often appears in news coverage and political debate. Layoffs attributed to automation, splashy announcements of AI-powered productivity tools, and high-profile warnings from tech leaders all contribute to a sense that AI is a force acting on people rather than something they can shape. In that context, it is not surprising that many Americans default to viewing AI primarily as a risk to their jobs, wages, and privacy.

Why the Public Connects AI to Inequality

The fear is not abstract. When automation replaces routine tasks, the workers performing those tasks lose bargaining power while the firms deploying the technology capture the savings. A Brookings analysis of AI and inequality synthesizes academic literature showing two main channels through which AI can deepen existing gaps: productivity gains that skew toward higher-income workers and an automation dynamic that shifts returns from labor to capital. In plain terms, if a company replaces customer-service agents with chatbots, the savings go to shareholders and executives, not to the displaced workers.

AI is also quietly embedded in everyday routines in ways many people do not fully register, from auto-generated email replies to algorithmic content feeds and automated hiring screens. That invisibility may feed distrust. People sense their economic position shifting but cannot always point to a specific AI system responsible, which makes the threat feel both pervasive and hard to resist.

At the same time, the most visible winners from the AI boom tend to be large technology companies and highly compensated professionals. Stock-market gains, rising valuations for AI start-ups, and multimillion-dollar pay packages for executives reinforce the impression that AI is primarily a tool for those already near the top. For workers in lower-wage roles, the technology often shows up as new monitoring software, automated scheduling, or performance dashboards, which can feel more like surveillance than assistance.

What Peer-Reviewed Research Actually Shows

Academic studies paint a more layered picture than the polls alone convey. A peer-reviewed paper in the journal Science, often summarized as “GPTs are GPTs,” estimated which occupational tasks could be affected by large language models like GPT. The researchers measured task exposure and productivity potential across a wide range of jobs and mapped those measures to wage distributions. The implication is that exposure to AI tools, and the potential productivity gains they bring, are not randomly distributed. Higher-paid knowledge workers whose jobs involve writing, analysis, and coordination may see substantial boosts from AI assistance, while lower-paid manual workers whose tasks are harder to automate may see fewer direct benefits.

Yet a separate economics working paper from the National Bureau of Economic Research complicates that narrative. Researchers studied a real customer-support workplace where agents used a generative-AI tool that suggested responses during live chats. They found that productivity gains were largest for less-experienced workers, who benefited most from AI-assisted suggestions that effectively embedded the know-how of top performers. In that specific setting, the technology compressed skill gaps rather than widening them. Novice agents improved faster, and performance differences between top and bottom performers shrank.

These two findings are not contradictory so much as they describe different scales. Within a single workplace, AI can act as a leveler, helping the least skilled catch up by giving them access to expert guidance in real time. Across the broader economy, however, the gains from deploying AI systems tend to accrue to capital owners and highly paid professionals who design, manage, and profit from those systems. The customer-support experiment shows what is possible under controlled conditions where management chooses to share AI’s benefits with frontline workers. The macro-level research shows what tends to happen when market forces, and existing bargaining power, determine who captures the value.

Another nuance is that “exposure” to AI does not always translate into job loss. Many occupations that are highly exposed in the Science study involve tasks that can be augmented rather than replaced. A lawyer or doctor might use AI to draft documents or summarize records, increasing throughput while still retaining responsibility and pay. By contrast, for workers whose tasks are more routine, even modest automation can weaken their negotiating position, because employers can credibly threaten to shift more work to machines if wages rise.

A Gap Between Experts and the Public

The disconnect between public anxiety and expert optimism adds another dimension. While many Americans worry about job loss and inequality, AI researchers and industry leaders often emphasize gains in productivity, medical breakthroughs, and new forms of creative expression. That gap matters because policy decisions about AI regulation, workforce retraining, and corporate accountability will be shaped by whichever perspective carries more political weight.

Trust in government oversight is itself a point of division. Surveys show that roughly half of Americans have at least some confidence in the country's ability to regulate AI well, while the rest express little or no trust. When about half the population doubts that regulators can keep up with rapidly evolving systems, calls for protective rules face a credibility problem before they even reach a legislative hearing. Skeptical citizens may see new frameworks as either too weak to matter or too captured by industry to protect workers and consumers.

The composition of the AI workforce reinforces these doubts. Stanford’s AI Index has documented that progress on diversity has been minimal, with women and many racial and ethnic groups still significantly underrepresented in technical and leadership roles. When the people building and deploying AI tools do not reflect the broader public, it becomes harder to convince skeptics that the technology will serve everyone’s interests rather than those of a narrow elite.

Policy Choices Will Shape the Outcome

Whether AI ultimately widens or narrows inequality is not preordained. The research record suggests that AI can empower lower-skilled workers when designed and implemented with that goal in mind, but that absent deliberate intervention, the default trajectory is one in which capital and highly skilled labor capture most of the gains. That puts a premium on policy choices that influence how AI is adopted and who shares in the benefits.

One set of tools involves the labor market directly: strengthening collective bargaining rights, updating wage and hour laws to account for algorithmic management, and funding large-scale retraining programs that help workers move into roles that are complemented by AI rather than displaced by it. Another involves corporate governance, from requiring transparency about how AI systems affect hiring, pay, and promotion to encouraging profit-sharing or employee-ownership models in AI-intensive firms.

Education policy will also be crucial. If higher-paid knowledge workers are best positioned to benefit from AI, expanding access to quality education and digital skills becomes a key lever for spreading those gains. That means not only advanced degrees in computer science, but also vocational programs that teach workers how to use AI tools in fields like healthcare, logistics, and construction.

Finally, public engagement will matter. The polling data and inequality research make clear that many Americans are entering the AI era with deep skepticism. Addressing that skepticism will require more than optimistic speeches from tech executives. It will demand visible examples of AI being used to improve conditions for ordinary workers, robust oversight that earns trust, and a commitment to including diverse voices in decisions about where and how AI is deployed. If those pieces fall into place, the technology’s benefits need not be reserved for those already at the top.

*This article was researched with the help of AI, with human editors creating the final content.