When Amazon developed an experimental AI-powered resume screening tool in the mid-2010s, the company scrapped it after discovering that the system penalized resumes that mentioned women's colleges. The engineers behind the tool saw a fixable technical glitch; job seekers and advocacy groups saw proof that algorithms could not be trusted with high-stakes decisions. That kind of split, between the people who create AI and the people affected by it, now runs through nearly every major policy debate about the technology, and a sweeping report released in April 2026 puts hard numbers on just how wide it has become.
According to the 2026 Stanford AI Index Report, 64% of U.S. adults said they expect AI to produce fewer jobs over the next 20 years. AI researchers surveyed for the same analysis were far less likely to share that expectation. The gap is not just academic. It shapes which regulations gain public support, which AI tools people are willing to adopt, and how fast companies can deploy systems that touch millions of lives.
The numbers behind the divide
The employment finding is the report’s starkest data point. The public opinion chapter draws on a Pew Research Center study that surveyed two distinct groups: a nationally representative sample of 11,494 U.S. adults from Pew’s American Trends Panel, surveyed between November 2024 and February 2025, and an unweighted expert sample of 2,778 authors and presenters from 21 major AI conferences held in 2023 and 2024.
Among the public, 64% said they expect AI to lead to fewer jobs over the next two decades. Among the AI experts, only 22% shared that view — a 42-percentage-point gap that the report’s editors flagged as one of the year’s most significant findings.
The split extends well beyond employment. On questions about AI’s effect on the broader economy and on medical care, experts and ordinary Americans landed on opposite ends of the confidence scale. Experts tended to see AI as a net positive; the public leaned toward anticipating disruption and risk.
Trust in government oversight scored low across the board. Only 31% of U.S. adults said they were confident the government can manage AI responsibly. That figure reflects a broader erosion of institutional trust that predates the current AI cycle but appears to have been deepened by it. When people doubt that regulators understand or can control AI systems, they are more likely to resist deployment in sensitive areas like policing, credit scoring, and health insurance — even when those systems show technical promise on paper.
Who was surveyed, and why it matters
The distinction between the two survey pools is critical. Conference presenters at top-tier AI venues are disproportionately affiliated with large technology companies and well-funded research universities. Their optimism about AI’s economic effects may partly reflect proximity to the financial upside of the technology, not a detached reading of labor-market data. The unweighted design of the expert sample means responses were not adjusted for demographic or institutional balance, which limits direct comparisons with the nationally weighted public panel.
“The people building these systems and the people subject to them are essentially living in different informational worlds,” said Meredith Whittaker, president of the Signal Foundation and a longtime AI accountability researcher, in a May 2026 interview with MIT Technology Review. That framing echoes what the Stanford report documents statistically: proximity to AI development correlates with optimism, while distance from it correlates with anxiety.
Stanford’s Institute for Human-Centered Artificial Intelligence, known as HAI, elevated the expert-public divergence to one of the top takeaways of the entire 2026 report. The AI Index is an annual benchmarking effort that tracks technical performance, investment flows, policy activity, and public sentiment across dozens of countries. By giving the perception gap headline billing, the report’s editors signaled that the social dimension of AI adoption now ranks alongside raw capability gains as a defining challenge for the field.
What the data does not settle
The 31% trust figure is a snapshot, not a trend. The report does not establish whether confidence in government oversight is falling, holding steady, or recovering from a recent low. Without year-over-year comparisons using identical question wording, it is hard to say whether the number marks a new floor or a temporary dip tied to specific controversies, high-profile system failures, or media coverage of AI-related harms.
On jobs, the 64% figure captures what people expect but not the texture of their concern. A respondent who believes AI will eliminate some roles while creating others could still answer that AI will lead to “fewer jobs” in net terms. Someone else might picture a dramatic collapse in employment. The survey framing leaves room for very different interpretations of the same answer, and the report does not break down whether respondents who expect fewer jobs also expect lower wages, worse working conditions, or losses concentrated in particular regions or industries.
Competing frameworks exist outside the Stanford analysis. MIT economist David Autor, whose research on automation and labor markets is among the most cited in the field, has argued that past waves of automation anxiety — from ATMs to industrial robots — outpaced actual net job losses because new industries and roles eventually emerged. Others, including researchers at the Brookings Institution, counter that generative AI is different because it targets cognitive and creative work rather than routine manual tasks, making historical analogies less reliable. The Stanford report surfaces the perception gap but does not attempt to adjudicate between these economic theories.
There is also an open question about how durable expert optimism will prove. Many of the researchers surveyed work on model development and evaluation, not on implementation in hospitals, courtrooms, or classrooms. As real-world evidence accumulates about bias, robustness failures, or unintended social effects, expert views could shift toward greater caution. Conversely, if AI systems deliver consistent, measurable benefits in areas like medical diagnostics or climate modeling, public skepticism could soften on its own.
Where the expert-public gap is already visible in practice
The divergence documented in the Stanford report is not hypothetical. In health care, AI diagnostic tools for detecting diabetic retinopathy have shown strong accuracy in clinical trials, yet a 2025 Pew survey found that most Americans said they would be uncomfortable if their own provider relied on AI for diagnosis. Hospitals rolling out such tools in early 2026 reported that patient opt-out rates ran higher than administrators expected, even when clinicians endorsed the technology.
In hiring, the pattern is similar. Automated video interview platforms that use AI to score candidates have drawn legal challenges in Illinois and New York City, where Illinois’s Artificial Intelligence Video Interview Act and New York City’s Local Law 144 now require employers to disclose when AI is used in hiring decisions. Employers and vendors have argued the tools reduce human bias; applicants and labor advocates have pushed back, citing opaque scoring criteria and the absence of meaningful appeal processes.
For policymakers, the divergence creates a practical bind. If legislators and regulators lean heavily on expert input, they risk building frameworks that lack public legitimacy and invite backlash. If they respond primarily to public anxiety, they risk slowing adoption in domains where early evidence of benefit is relatively strong, such as diagnostic imaging or accessibility tools for people with disabilities.
Both the 64% jobs figure and the 31% trust figure come from survey instruments with defined question wording and response scales, not from sentiment estimates scraped from social media. That makes them more reliable than informal polling or anecdotal reporting, but they still carry the standard limits of opinion research: they capture stated beliefs at a single moment, they are sensitive to question framing, and they may not predict behavior. A person who tells a pollster they distrust AI might still use AI-enabled products every day because they are convenient or because an employer requires them.
As of May 2026, the safest reading of the Stanford data is straightforward: Americans and the researchers building AI are looking at the same technology and seeing different futures. Several states are already drafting AI disclosure and accountability bills in response to exactly this kind of public unease, while industry groups lobby for lighter frameworks grounded in expert assessments. How that tug-of-war resolves will depend less on what the models can do and more on whether the people affected by them feel they had any say in the outcome.
This article was researched with the help of AI, with human editors creating the final content.