When the Pew Research Center asked a nationally representative sample of American teenagers how they feel about artificial intelligence, the answers were not what Silicon Valley might have hoped for. Teens are adopting generative AI tools at a rapid clip, but a growing share say they are uneasy about what those tools are doing to their ability to concentrate. The findings, published in Pew’s February 2026 survey on teen AI use, arrive as schools scramble to write AI policies, parents struggle to set boundaries around chatbots, and state legislatures debate whether minors need legal protections from a technology that barely existed three years ago.
The survey is the most detailed national snapshot yet of how U.S. adolescents interact with generative AI and how they judge its consequences. It found that teen use of AI tools has climbed sharply, with chatbots now woven into homework routines, group chats, and entertainment. But alongside that adoption, teens reported mounting concern that AI is fragmenting their focus, a worry that stands out because it comes from the users themselves rather than from anxious adults projecting fears onto a younger generation.
What the Pew data actually shows
Pew’s methodology is the gold standard for measuring opinion among U.S. teens: a large, nationally representative sample with results that can be generalized beyond any single school district or demographic slice. The survey asked structured questions with predefined response options, which makes it strong at measuring how widespread a belief is but less equipped to capture the texture of how distraction plays out in a teenager’s day: whether a chatbot pulls them away from a textbook at 11 p.m., or whether an AI-generated summary replaces the deeper reading that builds sustained focus.
That distinction matters. The Pew findings are best understood as a measure of teen perception, not a clinical diagnosis. They tell us that a significant and growing number of adolescents believe AI is eroding their attention. That belief is itself a meaningful data point: when the people closest to a technology raise alarms about it, researchers and policymakers have reason to investigate further, even before controlled studies confirm or refute the concern.
What the survey does not provide is a before-and-after comparison. No longitudinal study has yet tracked the same group of teenagers over months or years to isolate AI’s independent effect on attention from the overlapping influences of smartphones, social media, sleep deprivation, and academic pressure. Until that research exists, the causal question of whether AI tools are actually degrading adolescent cognition, or whether teens are channeling a broader cultural anxiety, remains open.
A federal warning that predates the AI surge
Teen unease about AI did not emerge in a vacuum. In May 2023, the Office of the U.S. Surgeon General published a formal advisory on social media and youth mental health, warning that algorithmically driven platforms can erode attention and contribute to anxiety among young users. The advisory synthesized existing research and carried the weight of a federal health authority declaring that digital engagement patterns pose a serious enough risk to warrant public action.
The advisory focused on social media, not on AI chatbots or generative tools. That distinction is important because the two technologies engage attention differently. A social media feed pushes an endless scroll of algorithmically curated content designed to maximize time on screen. A chatbot, by contrast, responds to user prompts, which means the interaction is more conversational and, at least in theory, more bounded. Whether chatbots replicate the same attention-draining dynamics as social media feeds is a question researchers have not yet answered rigorously.
Still, the overlap is hard to ignore. Many of the AI tools teens use most, including Snapchat’s My AI and character-based chatbots on platforms like Character.ai, are embedded inside social apps that already use engagement-maximizing design. The line between “social media” and “AI tool” is blurring in the products teens actually touch, even if researchers and regulators still treat them as separate categories.
California’s veto and the policy vacuum
The gap between teen concern and government action was on full display in September 2024, when California Governor Gavin Newsom vetoed SB 1381, a bill that would have restricted children’s access to AI chatbots. Associated Press reporting on the veto noted that Newsom acknowledged potential harms to young users but sided with concerns about limiting speech and stifling innovation.
The veto left California, home to most of the companies building consumer AI products, without a dedicated statute governing minors’ interactions with chatbots. Supporters of the bill argued that minors needed guardrails before harm became entrenched. Opponents countered that broad restrictions would cut off educational benefits and set a chilling precedent for regulating emerging technology. Neither side presented peer-reviewed evidence tying chatbot use to measurable attention deficits in adolescents, because that evidence does not yet exist.
Since the veto, several other states have introduced narrower proposals. Some focus on transparency requirements for AI systems used in K-12 education. Others target default settings, such as limiting late-night chatbot access for users under 16. As of spring 2026, none has become law in a form that directly addresses the attention concerns teens raised in the Pew survey. The legislative landscape remains a patchwork of proposals shaped more by political instinct than by settled science.
Where the research needs to go
Researchers face a familiar problem accelerated to an unfamiliar speed: the technology is reaching teens faster than studies can be designed, funded, and published. The most urgent gap is longitudinal work that follows the same adolescents over time and distinguishes among different types of AI use. A student who spends 20 minutes with an AI tutor to untangle a calculus problem is having a fundamentally different experience from one who spends three hours chatting with an AI persona that mimics a fictional character. Lumping both under “AI use” obscures more than it reveals.
Equally important is research that looks beyond deficits. Some educators report that structured AI tutoring can reduce frustration for struggling students, potentially improving focus by removing barriers to understanding. If that effect is real and measurable, it complicates any blanket narrative that AI is bad for teen attention and argues for policies that distinguish between productive and passive use.
The Pew survey itself points toward a methodological next step. Because it relied on structured questions with predefined answers, it captured prevalence but not lived experience. Diary studies, in which teens log their AI interactions in real time, or screen-time tracking paired with attention assessments, would add the qualitative depth that survey data alone cannot provide.
What parents and educators can do now
For families and schools making decisions today, the most defensible reading of the available evidence is straightforward: teens are telling credible researchers that they are worried about losing focus. A federal health authority has identified digital engagement patterns as a risk factor for youth mental health. And the policy system has not yet produced binding protections. That combination argues for a prudent, interim approach rather than either panic or complacency.
In practical terms, that might mean encouraging teens to use chatbots for specific, time-limited tasks, such as clarifying a concept or outlining a paper, rather than as an open-ended companion filling every idle moment. It could mean schools and families setting clearer norms around device-free periods for reading, sleep, and face-to-face conversation, on the logic that any always-on digital tool competes with sustained attention regardless of whether it is powered by AI.
The clearest signal in a noisy debate is the one coming from teenagers themselves. They are among the earliest and most enthusiastic adopters of generative AI, and they are also among the first to voice discomfort with how it may be reshaping their concentration. Taking that discomfort seriously, without dismissing it as panic or inflating it into proof, is the starting point for adults who want to help rather than simply react.
This article was researched with the help of AI, with human editors creating the final content.