Morning Overview

Over 50% of teens now use AI for homework and say there’s ‘no need to research’

More than half of American teenagers now turn to AI chatbots when they need help with schoolwork, a finding that marks a rapid acceleration in how young people approach learning and research. A nationally representative Pew Research Center survey of 1,458 teen-parent pairs found that 54% of U.S. teens have used chatbots for school-related tasks, up from the 13% who said in 2023 that they had used ChatGPT for that purpose. The speed of adoption raises hard questions about whether traditional research skills are being replaced rather than supplemented, and whether parents and schools are keeping pace.

From 13% to 54% in Two Years

The trajectory is steep. In 2023, roughly one in eight teens said they had used ChatGPT for schoolwork. By late 2024, that figure had doubled to 26%. The latest Pew data, drawn from fieldwork conducted September 25 to October 9, 2025, shows the shift has gone well beyond a single platform. When the question broadened to include any AI chatbot, not just ChatGPT, 54% of teens reported using one for homework or school-related help.

That jump matters because it signals a behavioral shift, not just a product trend. Teens are not simply experimenting with a novelty; they are folding chatbots into their daily academic routines at a rate that outpaces most school policy responses. The survey, which carries a margin of error of plus or minus 3.3 percentage points and relies on the Ipsos KnowledgePanel probability sample, has a methodological foundation that separates it from informal polls or platform-sponsored studies. As a result, the numbers offer one of the clearest pictures yet of how quickly AI tools have become embedded in teenage academic life.
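For context on the 3.3-point figure, it is worth noting that a textbook simple-random-sample calculation for 1,458 respondents yields a smaller margin of error; probability panels like KnowledgePanel report larger margins because survey weighting inflates variance (the design effect). A minimal sketch of the baseline calculation, assuming a 95% confidence level and the worst-case proportion p = 0.5:

```python
import math

# Baseline margin of error for a simple random sample (SRS) at 95% confidence.
# This is a textbook approximation, not Pew's actual variance estimate, which
# accounts for panel weighting and design effects.
n = 1458        # Pew's sample of teen-parent pairs
p = 0.5         # worst-case proportion (maximizes the margin of error)
z = 1.96        # z-score for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"SRS margin of error: +/- {moe * 100:.1f} percentage points")
```

The gap between this roughly 2.6-point baseline and Pew's reported 3.3 points is the effective cost of weighting a panel sample to match the national population.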

Teens See Efficiency; Peers See Cheating

The data reveals a split in how teens themselves view what they are doing. Many describe chatbot use as a practical tool for brainstorming, getting explanations of difficult concepts, or pulling together background information quickly. But a significant share also believe their classmates cross the line. Reporting on the Pew findings notes that most teens believe their peers use AI to cheat in school, even as they frame their own use as legitimate. That gap between personal justification and collective suspicion suggests teens understand the ethical tension but resolve it differently depending on whether they are judging themselves or others.

This perception split creates a practical problem for educators. If students broadly assume cheating is widespread, the incentive to use AI as a shortcut grows stronger, not weaker. A student who believes half the class is getting answers from a chatbot faces pressure to do the same just to keep up. Detailed breakdowns in Pew’s tabulated results show that teens are more likely to say AI is acceptable for help understanding material than for generating entire assignments, yet the suspicion that others are using it for the latter may quietly normalize more aggressive use. The result is a cycle where norms erode quickly, and the phrase “no need to research” stops being an outlier attitude and starts reflecting a common calculation about effort and reward.

Parents Consistently Underestimate Usage

One of the more striking findings in the Pew report is the gap between what teens say they do and what their parents think is happening. According to Pew’s companion analysis on parents, caregivers significantly underestimate the extent to which their children rely on chatbots for academic work. The survey design, which paired teen responses with those of a parent or guardian in the same household, makes this disconnect especially visible. Parents are not just unaware of a general trend; they are misjudging the behavior of the specific teenager living under their roof.

This awareness gap has consequences. Parents who do not know their child regularly uses AI for homework are unlikely to set boundaries around it, discuss its limitations, or help their teen develop the judgment to use it well. Families may assume that as long as grades remain stable, study habits are unchanged, when in fact the underlying process has shifted from independent research to AI-mediated shortcuts. Schools, meanwhile, cannot rely on home reinforcement of academic integrity policies if families do not realize the policies are being tested. The mismatch between teen self-reports and parental perceptions points to a supervision vacuum that neither institution has filled, leaving many teenagers to improvise their own rules about what counts as acceptable assistance.

Schools Confront a Skills Erosion They Cannot Yet Measure

The concern is not simply that teens are using a new tool. It is that the tool may be replacing the cognitive work that homework was designed to build. When a student asks a chatbot to summarize a historical event or solve a math problem step by step, the student receives an answer but can easily skip the process of locating sources, evaluating their reliability, and constructing an argument from evidence. That process is what educators often mean by “research,” and its erosion is difficult to detect through grades alone. A student who submits polished, AI-assisted work may score well on individual assignments while developing weaker analytical habits over time.

Educators are only beginning to grapple with how to respond. At a recent gathering hosted by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), participants described AI as challenging core assumptions about what assignments are supposed to measure and how students learn. In discussions summarized by Stanford's education-focused initiative, teachers and researchers debated whether to double down on in-class, handwritten assessments, redesign tasks to emphasize personal reflection and process, or explicitly teach students how to use AI as a starting point rather than an endpoint. Yet even as these debates unfold, most districts lack reliable ways to track how much AI support went into any given homework submission, making it hard to know whether foundational skills are quietly weakening beneath apparently solid performance.

Teaching Teens to Question the Answers

What emerges from the Pew data and early institutional responses is a picture of teens racing ahead of the adults responsible for guiding them. More than half of teenagers now treat chatbots as part of their academic toolkit, but many do so without structured instruction on how these systems work, where they fail, or how to cross-check their claims. AI models can invent citations, misstate facts, or present majority viewpoints as settled truth, yet a teen under deadline pressure may accept a fluent answer at face value. When that pattern repeats across subjects and years, students risk internalizing not only errors but also a habit of deference to whatever appears most confident on screen.

Closing that gap will require schools and families to treat AI literacy as a basic component of research skills rather than an optional add-on. That means teaching students to ask for sources, compare chatbot responses with textbooks or primary documents, and recognize that speed is not the same as understanding. It also means being explicit about when AI use is allowed, when it is not, and why those boundaries exist. The Pew findings suggest that teens are already negotiating these questions on their own, often in private and under peer pressure. Bringing the conversation into the open, grounded in data about how widely these tools are used, may be the only way to ensure that “no need to research” does not become the default mindset of a generation that still needs to learn how to think critically for itself.


*This article was researched with the help of AI, with human editors creating the final content.