Roughly one in eight American teenagers say they have used AI chatbots for emotional support or advice, according to a new Pew Research Center survey of U.S. teens ages 13 to 17. The finding sits within a broader pattern of rapid chatbot adoption among young people that Pew has documented, including widespread use for schoolwork and a sizable minority using chatbots daily. That a measurable share of teens is bringing personal feelings to AI tools, not just using them as homework shortcuts, raises questions about how these digital interactions may intersect with adolescent emotional development.
One in Eight Teens Seek Emotional Guidance From AI
The latest Pew survey reports that about 12% of U.S. teens say they have used AI chatbots specifically for emotional support or advice. That figure may look modest next to the much larger share who use chatbots for information or schoolwork help, but it signals a qualitatively different kind of interaction. Asking a machine for comfort or guidance about personal problems is a distinct behavior from asking it to summarize a textbook chapter, and the distinction matters for anyone tracking teen well-being or mental health trends.
The emotional-use number gains weight when placed against the speed at which chatbot adoption has grown. An earlier Pew wave from 2025 found that nearly two-thirds of teens had tried AI chatbots, with about three in ten using them daily. Pew's 2025 report lists several chatbot brands teens recognized and used, including ChatGPT, Gemini, and Meta AI. If daily chatbot interaction is already routine for a large minority, the step from homework help to asking for emotional advice may be shorter than many adults assume, particularly for adolescents already accustomed to digital tools as default companions.
Schoolwork Drives Daily Chatbot Habits
The dominant gateway to chatbot use remains academic. More than half of teens now rely on conversational AI for schoolwork, a pattern that outside observers have described as a regular feature of student life rather than a passing fad. That level of integration means teens are spending sustained time interacting with chatbots, building familiarity with their tone, responsiveness, and apparent empathy. The tools are not occasional novelties; they are embedded in nightly homework routines, group projects, and last-minute cramming.
This habitual use creates a plausible pathway toward emotional reliance. A student who already trusts a chatbot to explain algebra or outline an essay has a low barrier to typing a question about a fight with a friend or anxiety about an upcoming test. The interface is the same, the response is instant, and there is no social risk of judgment or embarrassment. For teens who feel uncomfortable confiding in parents, teachers, or peers, a chatbot offers a friction-free alternative. Whether that alternative actually helps is a separate and harder question, but the behavioral bridge from academic to emotional queries is easy to see.
Parents Struggle to Keep Pace
The same research wave examined what parents think about their teens’ chatbot activity. In Pew’s companion work on parental perceptions of AI use, emotional reliance on chatbots ranked among the least acceptable uses adults were asked about. Parents were far more comfortable with teens turning to AI for schoolwork help or general information than for personal advice or emotional comfort. That gap between what some teens are actually doing and what many parents consider appropriate suggests a growing blind spot in household conversations about technology and mental health.
The report also notes that many adults are still forming their own understanding of what chatbots can and cannot do, which makes it difficult to set informed boundaries. A parent who has never interacted with ChatGPT or Gemini may not realize how conversational and seemingly empathetic these tools can sound, or how easily a teenager might mistake fluent, supportive text for genuine understanding. Without first-hand experience, adults may underestimate both the pull of these systems and the likelihood that a stressed or lonely teen will experiment with them as a private outlet.
Short-Term Comfort, Long-Term Trade-Offs
The 12% figure invites a harder analytical question: for teens who use chatbots for emotional support, does it help in the moment, or could it become a crutch that weakens their ability to handle real interpersonal conflict? AI chatbots are designed to be agreeable, polite, and responsive. They typically avoid confrontation, steer away from controversial topics, and emphasize validation. They do not push back the way a friend, sibling, or counselor might. They do not hold teens accountable for their role in a conflict or challenge distorted thinking unless explicitly prompted to do so. For a teenager dealing with social anxiety or peer friction, a chatbot conversation might provide immediate relief without building any of the skills needed to resolve the underlying problem.
This concern sits against what developmental research has long established about adolescence. The teenage years are when people learn to tolerate discomfort in relationships, read complex social cues, and repair trust after disagreements. Those abilities develop through practice with other humans, not through exchanges with a language model optimized to produce satisfying responses. If a subset of teens begins substituting chatbot interactions for the messy, sometimes painful work of human connection, the short-term comfort could come at the cost of long-term emotional resilience. The Pew data does not track outcomes over time, so this remains an inference rather than a measured result.
What the Data Does and Does Not Show
The Pew findings are drawn from surveys of teens and their parents, which means they capture self-reported behavior rather than observed usage logs. A teen who says they sought emotional support from a chatbot may have typed a single question late one night, or may have developed a recurring habit of debriefing every stressful event with AI. The survey does not distinguish between those scenarios, and the 12% figure should be read with that limitation in mind. Likewise, self-reporting may undercount use if teens are reluctant to admit turning to chatbots for emotional help, or overcount it if a one-off experiment is remembered as more significant than it was.
Cross-sectional survey snapshots also cannot show whether emotional chatbot use is increasing, stable, or concentrated in particular demographic or psychological profiles. The data does not reveal whether teens who lean on AI for support are already struggling with higher levels of loneliness or anxiety, or whether chatbot conversations themselves might contribute to changes in mood or coping over time. Nor does it clarify how chatbot-based support compares with other digital outlets, such as social media, online forums, or mental health apps.

What the numbers do establish is a baseline: in a relatively short span, conversational AI has become woven into teen life to the point that a notable minority now treat these systems as potential confidants. For parents, educators, and designers, the policy challenge is to respond to that reality before emotional reliance on AI becomes an invisible but entrenched feature of adolescence.
*This article was researched with the help of AI, with human editors creating the final content.