
Generative AI swept into classrooms promising personalized tutoring, instant feedback, and relief for overworked teachers. A new wave of research now argues that, at least for the moment, the damage to children’s minds, privacy, and equality is outpacing those gains. Instead of quietly boosting learning in the background, AI is reshaping how students think, feel, and are monitored, often in ways they barely understand.
I see a pattern emerging across these findings: schools have raced ahead with chatbots, grading tools, and behavior trackers faster than they have built guardrails. The result is an education system where children are encouraged to lean on opaque systems that can blunt critical thinking, erode trust, and harden existing divides.
The Brookings warning: shortcuts that stunt thinking
At the center of the alarm is a detailed analysis from the Brookings Institution that concludes the risks of classroom AI currently outweigh its benefits. One of the report’s authors, senior fellow Rebecca Winthrop, argues that when kids use generative systems that simply tell them the answer, they can complete assignments without ever wrestling with the underlying ideas. In her view, that pattern turns AI into a shortcut that undermines the very skills schools are supposed to build: students risk getting through homework “without learning to think critically.”
The same research, produced by the Brookings Institution’s Center for Universal Education, goes further and describes a kind of “cognitive offloading” in which students hand over more and more mental work to machines. The report likens this pattern to a kind of cognitive decline more commonly associated with aging brains, suggesting that heavy reliance on generative tools could weaken memory and problem solving over time. When I look at how quickly students have adopted chatbots for everything from essays to math proofs, that risk no longer feels theoretical.
Cognitive and emotional fallout in the classroom
The Brookings findings are reinforced by a broader body of research that zeroes in on how AI is reshaping children’s minds and emotional lives. A large survey of students and educators cited in that work lists as a top “Con” that AI poses a grave threat to cognitive development, especially when it replaces struggle with instant solutions. As one student told the researchers, if a chatbot can always generate the response, there is little incentive to learn how to reason through the problem.
The same survey work also flags serious risks to social and emotional development, not just academics. Researchers report deep concern that heavy use of AI, particularly conversational agents that mimic empathy, could interfere with healthy relationships and mental health. In a section titled “Con: AI poses serious threats to social and emotional development,” the survey notes that students who lean on chatbots for comfort or advice may withdraw from peers and adults, blurring the line between human connection and scripted responses. As schools experiment with AI “wellness” tools, that warning should be front and center.
Safety, surveillance, and the erosion of trust
Beyond cognition, the Brookings Institution report highlights a cluster of risks around student safety, privacy, and trust that I find just as troubling. The authors warn that AI systems deployed in schools can collect and analyze vast amounts of data about children, from their writing and browsing habits to their emotional tone, often without clear limits or transparency. The report links this growing dependence on opaque systems to weakened trust in schools, especially when families discover that AI is quietly shaping discipline, grading, or counseling decisions.
Separate research on school technology use connects the embrace of AI directly to increased risks for students. A detailed analysis titled “Hand in Hand: Schools’ Embrace of AI Connected to Increased Risks to Students” traces how tools that monitor behavior, flag “concerning” messages, or predict academic failure can expose children to new harms if they are biased or poorly secured. The authors, including Elizabeth Laird and Maddy Dwyer, argue that as schools’ embrace of AI deepens, so do the chances of data breaches, mislabeling of students as threats, and chilling effects on what young people feel safe to say online. When I talk to teachers who now rely on automated alerts to track student “risk,” many admit they do not fully understand how those systems work or what they might miss.
Inequality and the global picture
The harms are not distributed evenly. A major report from UNESCO warns that, without strong safeguards, AI in education is likely to deepen inequality rather than close gaps. The organization cautions that students in under-resourced schools are more likely to be subjected to experimental or low quality systems, while wealthier districts can afford better tools and human oversight. UNESCO explicitly calls for a global framework so that AI is not deployed in classrooms without “deliberately robust safeguards,” warning that otherwise AI in education risks threatening access to quality learning.
Teachers across Europe are voicing similar concerns. A summary of the Brookings findings circulated among education unions notes that the risks of AI in education currently outweigh the benefits, echoing worries raised by educators who see technology widening divides between students who can navigate it and those who cannot. The document, shared with unions affiliated with the European Trade Union Committee for Education, stresses that the new Brookings report aligns with classroom experience: students with strong support at home can use AI as a supplement, while those without that safety net are more likely to let it replace real learning. In my view, that dynamic turns AI from a potential equalizer into yet another sorting mechanism.
Inside the classroom: shortcuts, stress, and what schools can do
On the ground, teachers are already seeing how AI reshapes daily learning, often in ways that match these warnings. In one detailed account of classroom practice, English teacher Casey Cuny describes how students in his classroom quietly rely on generative tools to draft essays and answer questions. A report on the rising use of AI in schools notes that one of the negative consequences is a measurable increase in academic dishonesty, with students admitting to presenting AI-generated work as their own. When I speak with educators, many say they now spend more time trying to detect AI-written assignments than giving feedback on authentic student work.
The Brookings research and related coverage also point to a growing sense of anxiety among students who feel they must keep up with AI-enhanced peers. A recent technology brief highlights that the new Brookings Institution report concludes generative AI might do more harm than good in schools right now, especially when it is introduced without clear rules. Another summary of the same findings underscores that the report’s authors, including Rebecca Winthrop, are not calling for a ban but for a reset in how schools approach AI. I read that as a call to slow down, not to turn back the clock.
Some of the most practical guidance comes from educators and child development experts who are already trying to blunt the harms. A public radio segment shared by NPR and Michigan Public outlines concrete steps: teach digital literacy and source checking so students learn to verify an AI answer, schedule “deep focus” times when devices are put away, and help children recognize when they are outsourcing too much thinking. The Brookings team itself suggests similar strategies, arguing that schools should treat AI as a tool for guided practice, not a replacement for effort. If there is a path to tipping the balance back toward benefit, it runs through those kinds of deliberate choices rather than blind adoption.