Harvard professor warns AI users their brains are getting weaker

Harvard faculty members are raising alarms that routine reliance on generative AI tools like ChatGPT may be quietly eroding users’ cognitive abilities, from basic memory to higher-order critical thinking. The warning draws support from a growing body of experimental research, including a brain-scan study and a longitudinal pilot, that documents measurable drops in mental engagement when people offload thinking to AI. As these tools become standard in classrooms and workplaces, the question is no longer whether AI changes how people think, but how much damage passive use can inflict before users notice.

What Harvard Faculty Found About AI and Mental Sharpness

In a series of interviews published by the Harvard Gazette, university researchers laid out a blunt assessment of AI’s cognitive toll. One faculty member argued that routine dependence on AI tools can diminish lower-order capacities such as memory and factual recall. The same expert warned that higher-order skills like critical thinking and creativity are also vulnerable if people habitually reach for AI instead of wrestling with problems themselves. The concern is not that AI is inherently harmful, but that default, unreflective use trains the brain to skip the very processes that build intellectual capacity over time.

An underappreciated dimension of the problem is homogeneity. When users pose the same question to different AI platforms, the answers tend to converge because the underlying training data overlaps heavily. Harvard researchers note that if AI is effectively curating information, people may be exposed to a narrower range of perspectives than they would encounter through independent reading, discussion, or debate. Over time, this narrowing of intellectual inputs can subtly shape users’ reasoning, compounding the skill loss that passive AI use already encourages and making it harder to notice when one’s own thinking has become less flexible or original.

Brain Scans Show Measurable Cognitive Decline

The most direct neurological evidence so far comes from an MIT Media Lab experiment led by researchers Nataliya Kosmyna and Pattie Maes. Their preprint used EEG-based measures of cognitive engagement to test 54 participants under three writing conditions: one group drafted with the help of a large language model, another used a conventional search engine, and a third wrote without any digital assistance. The team combined brain connectivity data with linguistic analysis and teacher scoring, building a layered picture of how each tool shaped both the writing process and the writer’s mental state. By comparing across conditions, they could see not just differences in essay quality but also differences in how hard the brain was working.
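
The paper’s actual pipeline layers EEG connectivity metrics on top of text analysis, but the basic logic of the between-condition comparison can be shown with a toy calculation. The Python sketch below invents per-participant engagement scores for the three groups (18 each, matching the 54-person sample) and runs a simple one-way ANOVA; the numbers and the choice of test are illustrative assumptions, not the study’s data or methods.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic per-participant "engagement" scores for the three writing
    # conditions (18 per group, 54 total). The values are invented for
    # illustration only; they are not the study's data.
    llm_group    = rng.normal(0.40, 0.08, 18)  # drafted with LLM assistance
    search_group = rng.normal(0.52, 0.08, 18)  # used a search engine
    brain_only   = rng.normal(0.60, 0.08, 18)  # wrote without digital help

    # A between-groups comparison of the kind this design supports:
    # do mean engagement scores differ across the three conditions?
    f_stat, p_value = stats.f_oneway(llm_group, search_group, brain_only)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")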

Participants who leaned on the language model showed weaker brain connectivity than those who used search or wrote unaided, suggesting reduced mental effort during AI-assisted writing. The authors describe this pattern as an accumulation of “cognitive debt,” meant to capture how repeated use of AI for demanding tasks might gradually lower the brain’s baseline engagement. A subset of participants returned for a later session, and the follow-up measurements reinforced the initial signal that language-model assistance was associated with diminished activation. The researchers stress that the work is preliminary, with a modest sample and preprint status, but the EEG evidence is among the first to show that AI tools may change not only what people produce but also how their brains function while they are producing it.

The Verification Bottleneck on Hard Problems

Concerns about cognitive offloading extend beyond writing to complex reasoning and problem-solving. A separate longitudinal pilot tracking an academic cohort examined how people integrate AI into their workflows over time. This three-wave study of human problem-solving identified what the authors call a “verification bottleneck”: participants relied most heavily on AI for the hardest tasks, precisely where verifying AI output requires the deepest expertise. While AI support initially seemed to boost speed and confidence, participants’ ability to independently check the correctness of answers did not keep pace, especially on the most challenging items.

This dynamic creates a troubling feedback loop. The more difficult a problem is, the more tempting it becomes to outsource the heavy lifting to AI, and the more users do so, the less they practice the detailed reasoning needed to evaluate whether the AI’s answer is sound. Over repeated cycles, verification skills can atrophy, making users even more dependent on AI the next time they face a complex question. For students, researchers, and professionals who must tackle progressively harder material, this bottleneck raises the risk that AI assistance will erode deep competence at exactly the moments when careful thinking and error detection matter most.
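
The feedback loop lends itself to a back-of-the-envelope simulation. The toy Python loop below drifts a “verification skill” value toward the amount of unassisted practice a user gets, while reliance on AI creeps up as skill declines; every parameter is invented for illustration, and nothing here comes from the pilot study itself.

    # Toy simulation of the reliance/verification feedback loop described
    # above. All parameters are invented for illustration; this is not a
    # model taken from the study.
    skill = 1.0      # verification skill, arbitrary units
    reliance = 0.3   # fraction of hard problems delegated to AI

    for cycle in range(10):
        practice = 1.0 - reliance                  # delegated work yields no practice
        skill = 0.9 * skill + 0.1 * practice       # skill drifts toward recent practice
        reliance = min(1.0, reliance + 0.05 * (1.0 - skill))  # weaker skill invites more delegation
        print(f"cycle {cycle}: skill = {skill:.2f}, reliance = {reliance:.2f}")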

Forcing the Brain Back Into the Loop

Not all the research points toward inevitable decline, however. Some studies suggest that relatively small design changes can keep humans mentally engaged while still benefiting from AI. A team led by Zana Buçinca tested whether “cognitive forcing” interventions could counteract overreliance. In an experiment with 199 participants making decisions with algorithmic assistance, they found that requiring users to commit to an answer before seeing the AI’s suggestion, or delaying the display of that suggestion, significantly reduced blind acceptance. Participants who had to think first were more likely to catch AI errors and adjust or reject recommendations, even though they reported the experience as more effortful and less satisfying.
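
The commit-first pattern is simple enough to sketch in a few lines. The hypothetical Python flow below asks the user for an answer before revealing a model’s suggestion, with an optional delayed reveal standing in for the study’s second intervention; the function names, prompts, and canned “model” are illustrative assumptions, not the experiment’s materials.

    import time

    def decide_with_forcing(question, ai_suggest, delay_seconds=0.0):
        """Commit-first decision flow: the user must record an answer
        before the AI suggestion is revealed."""
        print(question)
        initial = input("Your answer (before seeing the AI): ").strip()

        if delay_seconds:  # optional delayed reveal, the second intervention
            time.sleep(delay_seconds)

        print(f"AI suggestion: {ai_suggest(question)}")
        final = input("Final answer (press Enter to keep yours): ").strip()
        return final or initial

    # Hypothetical usage, with a canned lambda standing in for a real model call:
    answer = decide_with_forcing(
        "Which of these two meal plans is lower in carbohydrates?",
        ai_suggest=lambda q: "Plan B",
        delay_seconds=2.0,
    )
    print(f"Recorded answer: {answer}")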

A related line of work explored a lighter-touch strategy that does not hide AI output but reframes how users encounter it. Researchers found that adding short “provocations” (brief critiques, alternative framings, or questions attached to AI suggestions) nudged users toward more critical and metacognitive reflection during knowledge work. Instead of blocking the AI or inserting long delays, these prompts work alongside the recommendation, encouraging people to question assumptions, consider counterexamples, or articulate reasons for agreement or disagreement. For designers of AI systems, the distinction is crucial: if the goal is to preserve human cognitive engagement without sacrificing speed, embedding small moments of intellectual challenge into the interface may be more sustainable than heavy-handed gates that users experience as pure friction.
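
By contrast, a provocation-style interface shows the AI output right away and attaches a reflective prompt to it. The sketch below illustrates the idea with invented prompt wording; none of it is drawn from the researchers’ actual materials.

    import random

    # Illustrative pool of "provocations": short reflective prompts attached
    # to an AI suggestion. The wording is invented, not taken from the study.
    PROVOCATIONS = [
        "What evidence would show this suggestion is wrong?",
        "Name one alternative the AI did not consider.",
        "Which assumption in this answer are you least sure about?",
    ]

    def present_with_provocation(suggestion):
        """Show the AI output immediately, paired with a reflective
        prompt, rather than hiding or delaying it."""
        prompt = random.choice(PROVOCATIONS)
        return f"AI suggestion: {suggestion}\nBefore accepting, consider: {prompt}"

    print(present_with_provocation("Consolidate the three reports into one dashboard."))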

Designing AI Use That Strengthens, Rather Than Weakens, Minds

Taken together, the Harvard interviews, the MIT brain-scan study, and the longitudinal and behavioral experiments sketch a consistent picture of risk and opportunity. Passive, convenience-driven use of generative AI seems to encourage mental shortcuts, narrowing the diversity of information people see and reducing the intensity of their engagement with hard problems. Over time, this pattern can generate cognitive debt and a verification bottleneck, leaving users less able to detect errors, reason independently, or sustain focus without algorithmic scaffolding. Yet the same body of work also shows that relatively modest interventions, from forcing users to think before seeing a suggestion to seeding interfaces with targeted provocations, can keep the human brain in the loop.

For educators, employers, and policymakers, the emerging research suggests that the central question is not whether to allow AI, but how to structure its use so that it functions more like a coach than an autopilot. That might mean asking students to draft before consulting a model, requiring professionals to document their reasoning when they accept AI recommendations, or encouraging developers to build tools that surface dissenting perspectives rather than a single, polished answer. The evidence from Harvard and MIT does not support panic, but it does undercut the notion that generative AI is cognitively neutral. Unless people and institutions actively design for engagement, the default trajectory of AI use may be toward quieter minds (ones that feel efficient in the moment, but over time lose some of the very capacities that education and expertise are meant to cultivate).

*This article was researched with the help of AI, with human editors creating the final content.