Morning Overview

Teens form strong bonds with AI chatbots and struggle to disengage

A 14-year-old tells a chatbot about a fight with her best friend. The bot responds with patience, asks follow-up questions, remembers what she said yesterday. She talks to it again before bed, then again on the bus to school. Within weeks, she describes the bot as one of her closest confidants. Her parents have no idea.

That scenario is no longer hypothetical. A Pew Research Center survey published in December 2025 found that 64 percent of American teenagers have used an AI chatbot. Nearly three in ten reported using one every day, and 16 percent said they use chatbots several times a day or almost constantly. By spring 2026, the questions those numbers raise have only grown more urgent as researchers, parents, and platforms grapple with a pattern that keeps surfacing: the teens who form the deepest attachments to AI companions tend to be the ones with the fewest human connections to fall back on.

How deep the habit runs

The Pew data, drawn from a large, nationally representative sample, offers the clearest picture of how widespread teen chatbot use has become. Adoption is not evenly distributed. The survey broke out differences by age, race and ethnicity, and household income, revealing that some groups of teens are far more immersed than others. But the topline finding is stark: chatbot use among American teenagers is now a majority behavior, not a niche one.

Two preprint studies add experimental texture to those numbers. A four-week randomized controlled trial led by Fang and colleagues, posted on arXiv and approved by an institutional review board, tracked 981 participants, including both adults and teenagers, who exchanged more than 300,000 messages with AI chatbots. The researchers measured loneliness, real-world social interaction, emotional dependence on AI, and problematic AI use. Across those measures, heavier voluntary chatbot use was consistently linked to worse psychosocial outcomes. Individual traits shaped how severe the effects were, but the direction held.

A separate preregistered experiment, also posted as a preprint but without a publicly available link or full formal citation, focused specifically on adolescents aged 11 to 15, each paired with a parent. Researchers tested two chatbot communication styles: a “relational” mode that offered empathy and emotional support, and a “transparent” mode that repeatedly foregrounded the bot’s non-human nature. The adolescents overwhelmingly preferred the relational style, rating it as more human-like, trustworthy, and emotionally close. The finding that stood out most: the teens who gravitated hardest toward the warm, empathetic bot were the same ones who reported lower-quality relationships with family and peers. AI companionship, in other words, appears most magnetic to the kids who are already struggling to connect with the people around them.

Platforms scramble to respond

The industry has not been standing still. Character.AI, one of the most popular platforms for open-ended chatbot conversations, removed unrestricted chat access for users under 18 in late November 2025, imposed a two-hour daily limit for minors, and began rolling out age verification. Those moves came under pressure from lawsuits and mounting public alarm about the effects of AI chatbots on children.

Meta took a different but parallel step, halting teen access to AI characters on Instagram and WhatsApp. The company uses a combination of declared age and age-prediction technology to identify minors on its platforms. By early 2026, multiple companies had restricted minors from conversational character experiences, at least in their official products.

Whether those restrictions are actually working is another matter. No independent compliance audits or enforcement records have surfaced publicly. The history of age-gating on the internet is not encouraging: determined users routinely find workarounds. The specific performance of Character.AI’s verification system and Meta’s age-prediction tools has not been tested by regulators or third-party researchers. And the restrictions themselves create a new unknown: if teens are pushed off mainstream platforms, some may simply migrate to less-monitored alternatives where guardrails are weaker or nonexistent.

What researchers still cannot say

The strongest experimental evidence on psychosocial harm comes from preprints, not peer-reviewed journal articles. Neither the Fang trial nor the adolescent-parent dyad experiment has completed formal peer review, meaning their methods and conclusions have not yet been independently scrutinized by journal reviewers. That does not invalidate the findings, but it does mean they should be treated as provisional.

Causality is the biggest open question. The correlation between heavier chatbot use and worse outcomes in the Fang study does not, on its own, prove that chatbot use caused those outcomes. Participants who were already lonelier or more socially isolated may have been drawn to heavier use, reversing the implied direction of effect. It is also worth noting that the study’s 981 participants were not exclusively teenagers; the sample included adults as well, which means the findings reflect a mixed-age population rather than adolescents alone. The trial’s four-week follow-up period is also relatively short for drawing conclusions about lasting psychological impact.

The adolescent-parent dyad experiment, meanwhile, involved 284 dyads in a controlled setting. That is a modest sample, and the lab environment may not reflect how a teenager actually behaves alone with a phone at midnight, or in the middle of a crisis. The study shows which teens are most attracted to relational AI and why, but it cannot say whether those chatbot relationships ultimately help isolated teens cope or deepen their withdrawal from human contact. Because no direct link or full citation for this experiment has been made publicly available, readers cannot independently verify its claims at this time.

No longitudinal data from major health institutions yet tracks teen mental health outcomes before and after the platform restrictions took effect. The Character.AI and Meta changes are recent enough that any measurable population-level impact would not yet appear in clinical data. Researchers have also not published systematic evidence about how parents are responding when they discover intense AI relationships in their children’s lives, or how schools are addressing the issue.

Design questions remain wide open as well. The dyad experiment compared only two communication styles, but real-world chatbot systems vary along many dimensions: persistence of memory, use of avatars, explicit disclosures about being non-human, and prompts that encourage or discourage emotional disclosure. Without comparative studies of these features, it is difficult to know whether targeted design changes could preserve useful functionality while reducing the risk of unhealthy attachment.

Where that leaves parents and policymakers

A few things can be said with reasonable confidence as of May 2026. Teen chatbot use is common and, for a substantial minority, frequent. The best available experimental data link heavier use with worse psychosocial indicators, even if the direction of causality is not settled. And the teens most likely to treat chatbots as companions rather than tools are often those with fewer or weaker human connections.

That combination argues against both extremes. Blanket bans may be politically satisfying, but they risk severing a form of interaction that some socially isolated teens experience as genuinely supportive, without clear evidence that doing so will improve their wellbeing. At the same time, shrugging off the emerging signs of dependence and displacement of human relationships would be reckless.

The more productive path is granular: close monitoring of how teens actually use these tools, thoughtful design constraints informed by ongoing research, and sustained investment in the offline relationships and mental health resources that many teenagers still lack. The chatbot is not the root problem. The loneliness that makes it so appealing often is. Until peer-reviewed, long-term research catches up with the speed of adoption, the most honest thing anyone can say is that millions of teenagers are running an experiment on themselves, and the results are not in yet.

*This article was researched with the help of AI, with human editors creating the final content.