Morning Overview

Study suggests ChatGPT use can weaken memory by acting as a cognitive crutch

Researchers at the MIT Media Lab have published a new study measuring what happens inside the brain when people use ChatGPT to write essays, and the results point to a worrying pattern: the AI tool appears to reduce neural engagement in ways that resemble cognitive disuse. Combined with a separate randomized controlled trial showing that ChatGPT users retain significantly less knowledge over time, the findings add hard data to a growing concern that AI assistants may be weakening the very mental muscles they are meant to support.

Brain Scans Show Reduced Connectivity During AI-Assisted Writing

The MIT Media Lab study, titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” used EEG monitoring to track brain activity while participants wrote essays under three different conditions: with a large language model, with a search engine, or with no tools at all. The researchers describe how participants cycled through these conditions across sessions so that the same person would sometimes write independently and sometimes rely on an AI assistant, allowing direct comparisons within individuals. The full methodology and data are available in the preprint manuscript, which details how the team quantified changes in neural connectivity.

The results were striking. Participants who used the LLM showed weaker brain connectivity, a pattern the researchers describe as indicating under-engagement. In plain terms, the brain regions that normally coordinate during effortful thinking were less active when ChatGPT handled the heavy lifting. The study frames this as “cognitive debt,” a term suggesting that each session of AI-assisted work leaves a small deficit in the brain’s practiced capacity for independent thought. That debt, the researchers argue, may accumulate over time as people repeatedly offload the most demanding parts of their writing and reasoning to the model instead of exercising those skills themselves.

Notably, this pattern did not appear to the same degree when participants used a search engine. Searching still required them to skim, evaluate, and synthesize information, keeping more of the cognitive labor in human hands. By contrast, the LLM could generate entire paragraphs of argument and structure, allowing users to accept fluent text with minimal internal processing. The EEG data suggest that this difference in how work is distributed between human and machine shows up directly in the coordination of brain networks associated with attention and executive control.

A 45-Day Retention Test Reveals the Knowledge Gap

The brain-scan findings gain additional weight from a separate randomized controlled trial published in Social Sciences & Humanities Open. That study enrolled 120 undergraduates who were learning about artificial intelligence and split them into two groups: one that studied using ChatGPT and another that used traditional methods such as textbooks and notes. Both groups performed comparably on an immediate knowledge test given right after the learning phase, suggesting that in the short term, AI-assisted study can keep pace with conventional approaches.

The real difference emerged 45 days later, on an unannounced retention test. As detailed in the peer-reviewed trial, students who had relied on ChatGPT scored 57.5% on the delayed test, compared with 68.5% for those who studied the traditional way. That 11-percentage-point gap is notable because it appeared despite both groups starting from roughly the same baseline and posting similar scores immediately after learning. The implication is that the AI-assisted learners encoded the material less deeply, treating ChatGPT as an external memory bank rather than engaging in the kind of effortful processing that builds durable understanding.

The study’s authors explicitly used the phrase “cognitive crutch” to describe this dynamic. When the chatbot was available, students tended to ask it for explanations and summaries instead of generating their own. That approach may feel efficient in the moment, but it deprives the brain of the struggle that cements knowledge. By the time the delayed test arrived, the students who had leaned on ChatGPT seemed to have offloaded not just the work of studying, but also the responsibility for remembering.

Why Offloading Thinking to Technology Erodes Memory

Neither of these findings exists in a vacuum. The idea that outsourcing mental work to external tools can degrade internal memory has been studied for over a decade. A foundational 2011 paper in Science by Betsy Sparrow, Jenny Liu, and Daniel Wegner demonstrated that when people expect information to remain externally available, they remember the information itself less well and instead remember where to find it. In that experiment, participants who believed that facts would be saved on a computer later showed weaker recall of the facts but stronger recall of the storage location, a pattern documented in the original experimental report.

Cognitive scientists have formalized this behavior as “cognitive offloading,” a process in which people deliberately shift mental tasks to external devices. A review published in Consciousness and Cognition examined how people decide when to offload and what consequences follow for internal memory and metacognition. Drawing on dozens of experiments, the authors concluded that offloading changes not only what we remember, but how we judge our own abilities, as summarized in their theoretical overview of the phenomenon.

The key insight is that offloading is not just a convenience; it reshapes how the brain evaluates its own capabilities. Over time, people who offload frequently begin to underestimate their own memory, which makes them offload even more, creating a self-reinforcing loop. Once that loop is established, tools like ChatGPT do not simply sit in the background as neutral helpers. They become default destinations for any task that feels even slightly demanding, from drafting emails to interpreting research papers.

ChatGPT intensifies this loop because it does not just retrieve information. It synthesizes, organizes, and even generates arguments, performing cognitive work that a search engine still leaves to the user. A reflection indexed in PubMed Central argued that the potential for AI chatbots to shape human cognition extends beyond information retrieval and could affect the nature of human thought itself. The authors coined the term AICICA (AI chatbot-induced cognitive atrophy) to describe the risk that people might gradually replace active reasoning with passive consumption of chatbot outputs, blurring the line between assistance and substitution.

What the Coverage Gets Wrong About “Brain Rot”

Popular summaries of the MIT study have gravitated toward alarming shorthand, with some outlets claiming that ChatGPT “rots your brain.” That framing overstates the evidence. The EEG study measured connectivity during specific writing tasks in a controlled setting; it did not track participants over months or years to confirm lasting structural changes. The 45-day retention trial, while well designed, studied undergraduates learning a single subject, not long-term professional use across diverse domains.

A more accurate reading is that these studies identify a risk pattern, not a settled verdict. The brain-connectivity findings show that ChatGPT use correlates with reduced neural engagement during a task, but correlation during a lab session is not the same as permanent cognitive decline. Similarly, the retention gap may partly reflect how the students used ChatGPT rather than the tool itself. A student who passively copies AI-generated text will retain less than one who uses ChatGPT to challenge and test their own ideas. The studies do not yet distinguish between these modes of use or explore how deliberate strategies, like summarizing in one’s own words or quizzing oneself, might mitigate the downside.

This matters because the dominant narrative risks pushing people toward an all-or-nothing stance: either embrace AI without limits or avoid it entirely. The actual evidence suggests a more nuanced approach. AI assistants appear most dangerous when they replace effortful thinking, especially in learning and writing, and less problematic when they augment work that users are already actively engaging with. The challenge is not whether to use ChatGPT, but how to structure that use so that the human remains cognitively in the loop.

Using AI Without Paying the Cognitive Price

For students and professionals, the practical takeaway is to treat AI assistants as prompts, not prosthetics. One strategy is to draft first and consult ChatGPT second, using it to critique, reorganize, or expand ideas that already exist in the writer’s own words. Another is to require a manual step between AI output and final work, such as rewriting every paragraph in a personal voice or explaining each key point without looking at the screen, to ensure that the underlying concepts are actually understood.

Educators can respond by designing assignments that reward process over product. That might mean grading outlines, notes, and revision histories, or asking students to defend their work orally. In these settings, ChatGPT can still play a role as a brainstorming partner or practice examiner, but it cannot stand in for the student’s own reasoning. The goal is to harness the speed and fluency of AI while preserving the desirable difficulty that builds long-term memory.

Who Supports the Science Behind These Warnings?

Both the MIT EEG study and the retention trial were disseminated through arXiv and journal platforms that depend on a broader research ecosystem. The preprint server, operated by Cornell University, is sustained by a network of institutional partners listed among arXiv's supporting members, and its operations are partly funded through community contributions described on its donation page. For readers who want to dig deeper into the primary literature on AI and cognition, arXiv also provides guidance on how to search, filter, and interpret preprints in its general user help resources.

As debates over AI’s cognitive impact intensify, these infrastructures matter. They make it possible to move beyond anecdotes and headlines toward careful, replicable studies of how tools like ChatGPT change the way we think, learn, and remember. The emerging message is not that AI is inherently corrosive, but that its effects depend heavily on whether we use it to shortcut thought, or to scaffold it.


*This article was researched with the help of AI, with human editors creating the final content.