Morning Overview

Why AI chatbots can cause “brain fry” at work, according to experts

A preprint study from the MIT Media Lab is fueling a sharp debate about what happens inside workers’ brains when they lean on AI chatbots like ChatGPT to get things done. Researchers measured electrical brain activity while participants wrote essays with and without AI assistance, and the results point to lower cognitive engagement among those who used the tool. The findings, combined with occupational health research on mental fatigue, suggest that heavy AI use at work may carry hidden costs that employers and employees have barely begun to reckon with.

What the MIT Brain Scans Actually Show

The experiment, posted on arXiv as a preprint, randomly assigned participants to write essays either independently or with ChatGPT support. Throughout each session, researchers recorded neurocognitive signals using electroencephalography (EEG), capturing real-time snapshots of how hard the brain was working. Participants who relied on the chatbot displayed measurably lower brain engagement than those writing on their own or under the other experimental conditions.

Beyond the EEG readings, the study also tracked behavioral outcomes. Participants in the AI-assisted group showed weaker memory and recall when tested afterward, a pattern the researchers describe as “cognitive debt,” a term they coined to capture the accumulating mental cost of outsourcing thinking to a machine. The concept has since become a recurring reference point in discussions about whether AI tools are making people mentally lazier, or simply shifting where and when effort is expended.

The study has drawn both attention and criticism. A news analysis in a leading science journal noted that experts interpret the results with caution, pointing out that the research is still a preprint and has not yet cleared peer review. Some scientists questioned whether the controlled essay-writing task translates cleanly to the messy reality of a busy workday. Others argued that lower engagement on EEG does not automatically mean worse outcomes, since efficiency sometimes looks like less effort on a brain scan. Still, the direction of the evidence has been hard to dismiss. Something measurable shifts in the brain when a chatbot handles the heavy lifting.

That caution has extended to how the findings are being shared. Readers who try to reach the full article through the journal’s website may hit cookie-consent and login walls before they can see the coverage itself. That access friction is a reminder that, for now, much of the public debate rests on early data, limited samples, and technical methods that can be hard for non-specialists to parse.

How “Brain Fry” Maps to Occupational Fatigue

The colloquial term “brain fry” may sound informal, but it maps closely onto a well-studied occupational hazard. The CDC’s National Institute for Occupational Safety and Health defines work-related fatigue as a response to sleep loss or prolonged physical or mental exertion, a definition broad enough to cover the kind of sustained cognitive strain that AI interactions can produce. A NIOSH bulletin summarizes the cognitive consequences of that fatigue: impaired decision-making, slower reaction times, and increased error rates, all of which can spill beyond the workplace into personal life.

What makes AI-driven fatigue distinct is the source of the mental load. A peer-reviewed paper in the International Journal of Information Management formally defines generative-AI fatigue as cognitive and emotional exhaustion from sustained interaction with generative AI tools. The researchers distinguish between two separate fatigue drivers: prompt uncertainty, the mental effort of figuring out what to ask the chatbot, and response uncertainty, the effort of evaluating whether the chatbot’s answer is accurate or useful. Both tax the brain in ways that traditional software tools typically do not, because the user must constantly judge unpredictable outputs rather than follow a predictable interface.

That dual uncertainty creates a feedback loop. Workers spend energy crafting prompts, then spend more energy verifying answers, and the cumulative drain can leave them less capable for the next task. The MIT preprint’s concept of cognitive debt fits neatly here. Each AI-assisted task may feel easier in the moment, but the mental account balance tips further into deficit with each interaction. Over days or weeks, that deficit can resemble the chronic fatigue patterns occupational health agencies already associate with higher accident and error rates.

Why Current Workplace Rules Fall Short

Existing federal guidance on fatigue was written for shift workers, truck drivers, and medical residents, not for knowledge workers toggling between ChatGPT and a spreadsheet. OSHA recommendations urge employers to address fatigue hazards through workload limits, scheduling adjustments, training programs, and risk management systems. Those levers are sensible, but they assume fatigue stems from long hours or irregular shifts, not from the cognitive friction of interacting with an AI tool during a standard eight-hour day.

No OSHA or NIOSH records currently address AI-specific fatigue incidents in regulated industries. The guidance that exists for general fatigue has not been tailored to generative AI, leaving a gap between the emerging science and the rules that govern workplaces. Employers adopting ChatGPT, Copilot, or similar tools at scale have little official framework for monitoring whether those tools are quietly draining their teams’ mental reserves, or for distinguishing healthy automation from overreliance that erodes skills and awareness.

That gap matters because the costs of unchecked fatigue are well documented. The CDC’s broader public health materials link sustained mental exhaustion to higher rates of workplace errors, absenteeism, and long-term health problems. If AI chatbots are adding a new layer of cognitive load that current policies do not account for, the resulting risks could grow quietly until they surface as mistakes in high-stakes settings such as healthcare, finance, aviation, or critical infrastructure.

The Efficiency Trap Teams Should Watch

Most coverage of AI-related mental fatigue focuses on individual users, but the effects may compound in team settings. When multiple members of a project group rely on chatbots for drafting, research, or brainstorming, the collective pool of deep thinking shrinks. The MIT preprint’s finding that AI-assisted writers showed weaker recall suggests that team members who offload cognitive work to a chatbot may retain less context about shared projects, making them less effective collaborators in meetings, reviews, or crisis moments.

This dynamic creates what might be called a collective cognitive debt. A team that uses AI to speed up routine deliverables may find that no one in the group has fully processed the underlying material. The short-term productivity gain is real, but it comes at the expense of the shared understanding that allows organizations to adapt when conditions change or when something goes wrong. Over time, that shallow comprehension can hollow out institutional memory, leaving teams dependent on tools that were originally meant to be optional aids.

The efficiency trap is subtle because it often looks like success. Managers see faster turnaround times and smoother workflows; workers feel less overwhelmed in the moment. Yet the same patterns that reduce immediate strain can also reduce learning, judgment, and resilience. If AI-generated drafts become the default starting point for complex work, people may stop practicing the very skills (critical reading, synthesis, careful writing) that protect against errors and bias in the first place.

Designing Healthier AI Habits at Work

None of this means organizations should abandon generative AI. Instead, the emerging research suggests they should treat these tools as a new kind of cognitive environment that requires its own safeguards. That could mean setting explicit norms around when AI use is encouraged, when it is optional, and when it is prohibited so that workers still engage deeply with core tasks. It could also mean building in “friction by design,” such as requiring human summaries of AI outputs or periodic AI-free sprints for high-stakes analysis.

Individual workers can also experiment with protective habits: batching AI interactions instead of keeping a chatbot open all day, using the tools for narrow tasks like formatting or translation rather than first-pass thinking, and taking short breaks after intensive prompting sessions. Teams might rotate roles so that at least one person on each project is responsible for doing a full, AI-free read-through of key documents, preserving a baseline of human comprehension.

As more data arrive, from lab experiments, field studies, and real-world incident reports, the picture of AI-related fatigue will sharpen. For now, the converging signals from brain scans, occupational health research, and early management studies point in the same direction: generative AI changes how hard our brains work, and not always in ways that align with long-term performance or well-being. The organizations that benefit most from these tools are likely to be those that treat mental energy as a finite resource, not a free byproduct of automation, and that design their AI strategies with the human nervous system firmly in mind.

This article was researched with the help of AI, with human editors creating the final content.