Morning Overview

The little pauses and “ums” in your speech may reveal far more about your brain health than anyone realized

You probably do it dozens of times a day without thinking: you pause mid-sentence, say “um,” then pick back up. Most listeners barely register it. But a growing body of research suggests those tiny interruptions in speech may carry surprisingly specific signals about what is happening inside the brain, particularly when it comes to early cognitive decline.

A preprint posted on medRxiv by researchers at the University of Miami, led by speech-language scientist Kimberly Mueller, analyzed what people say immediately after pausing mid-sentence. The team compared adults with mild cognitive impairment (MCI) to healthy controls and found measurable differences not just in how often people paused, but in the words they chose right afterward and in how those patterns shifted depending on the type of speaking task. Describing a picture, for instance, produced different pause signatures than speaking freely about a personal memory.

The preprint has not yet undergone peer review, which means its specific findings about post-pause word choices remain preliminary. But it does not exist in a vacuum. It builds on years of peer-reviewed work that, taken together, paints an increasingly detailed picture of how speech breaks down in the earliest stages of dementia.

Why pauses matter more than people think

Mild cognitive impairment affects roughly 15 to 20 percent of adults over 65, according to the Alzheimer’s Association. It sits in a gray zone: noticeable enough to show up on cognitive tests, but not severe enough to interfere with daily life the way full-blown dementia does. Some people with MCI stay stable for years. Others progress to Alzheimer’s disease. The challenge is identifying who is at risk early, when interventions have the best chance of helping.

Current screening typically relies on in-office cognitive tests, which require a clinic visit, a trained administrator, and a patient who is willing to be tested. That creates bottlenecks, especially in rural areas and communities with limited access to neurologists. A tool that could flag early warning signs from ordinary conversation, recorded during a telehealth visit or even a phone call, would be a meaningful step forward.

That is the promise driving pause research. And the evidence supporting it, while still incomplete, is more substantial than many people realize.

The peer-reviewed foundation

A 2020 study published in the American Journal of Alzheimer’s Disease and Other Dementias by Pistono and colleagues demonstrated that silent pauses increase as dementia progresses. The team used an automated detection tool called Calpy to measure pause duration and frequency, removing the subjectivity of having human raters mark pauses by ear. Importantly, the study also raised a methodological red flag: filled pauses (“um,” “uh”) and unfilled pauses (pure silence) behave differently and should not be lumped together in analysis.
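
For readers who want a sense of what “automated detection” actually involves, the sketch below shows one minimal way to flag silent pauses in a recording: measure the energy of short audio frames, treat frames below a floor as silence, and keep only runs longer than a minimum duration. This is an illustration of the general idea, not Calpy’s actual interface; the function name, the energy floor, and the 150-millisecond minimum are assumptions, and the approach says nothing about filled pauses, which need transcription or acoustic modeling to identify.

```python
import numpy as np

def find_silent_pauses(samples, sample_rate, energy_floor=0.01,
                       min_pause_ms=150, frame_ms=10):
    """Flag stretches of near-silence longer than a minimum duration.

    Expects `samples` as a float array scaled to roughly [-1, 1].
    Illustrative only: production tools use more robust voice-activity
    detection, and filled pauses ("um", "uh") are not handled here.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))  # energy per short frame
    silent = rms < energy_floor

    pauses, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i                          # a quiet stretch begins
        elif not is_silent and start is not None:
            dur_ms = (i - start) * frame_ms
            if dur_ms >= min_pause_ms:         # long enough to count as a pause
                pauses.append((start * frame_ms, dur_ms))
            start = None
    if start is not None:                      # recording ended during silence
        dur_ms = (n_frames - start) * frame_ms
        if dur_ms >= min_pause_ms:
            pauses.append((start * frame_ms, dur_ms))
    return pauses  # list of (onset_ms, duration_ms)
```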

A pilot study from Shanghai, published in the Journal of Alzheimer’s Disease, applied computer-assisted speech analysis to a classic clinical task called the Cookie Theft picture description. Comparing healthy controls, patients with amnestic MCI, and people diagnosed with Alzheimer’s disease, the researchers found statistically significant group differences in pause-derived measures, including the percentage of total speaking time spent in silence and the frequency of long pauses.
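
Once pauses have been located, the measures the Shanghai team reported reduce to simple arithmetic over the detected intervals. The sketch below assumes a list of (onset, duration) pairs in milliseconds plus the total recording length; the 2,000-millisecond cutoff for a “long” pause is an invented placeholder, not the study’s actual definition.

```python
def pause_summary(pauses, total_ms, long_pause_ms=2000):
    """Summarize pauses given (onset_ms, duration_ms) pairs.

    The 2000 ms cutoff for a "long" pause is an illustrative
    assumption, not the threshold used in the Shanghai study.
    """
    total_pause = sum(dur for _, dur in pauses)
    minutes = total_ms / 60000
    return {
        "silence_fraction": total_pause / total_ms,
        "long_pauses_per_min": sum(dur >= long_pause_ms for _, dur in pauses) / minutes,
    }

# Invented example: a one-minute sample with four detected pauses.
print(pause_summary([(1200, 450), (9800, 2600), (31000, 700), (52000, 3100)], 60000))
```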

A clinical study published in Neuropsychologia went a step further by correlating pause patterns with brain imaging data in patients with early Alzheimer’s disease. That team found that pause frequency and duration varied depending on the type of narrative being produced and proposed that many pauses reflect compensatory effort: the brain working harder to retrieve a word or plan a sentence, rather than simply failing to produce language. This distinction matters because it suggests pauses are not just a symptom of breakdown but a window into how the brain adapts under strain.

On the technology side, researchers participating in the ADReSS Challenge, a benchmark competition designed to test automated Alzheimer’s detection from speech, showed that encoding both filled and unfilled pauses improved classification accuracy. The challenge framework, published on arXiv, was built with age and gender balancing and standardized audio preprocessing to reduce biases that had plagued earlier dementia speech datasets.
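
The challenge entries themselves are not reproduced here, but the general recipe, turning pause statistics into a feature vector per speaker and training a standard classifier, can be sketched in a few lines. Everything below is illustrative: the feature names and synthetic numbers are invented, and a cross-validated logistic regression is a deliberately simple baseline, not the approach that topped the ADReSS leaderboard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic pause features, one row per speaker:
# [unfilled pauses/min, filled pauses/min, mean pause (ms), silent fraction]
# Values are invented for illustration, not drawn from any real dataset.
controls = rng.normal([4.0, 1.0, 300.0, 0.15], [1.0, 0.5, 60.0, 0.04], (40, 4))
patients = rng.normal([8.0, 2.5, 550.0, 0.30], [2.0, 1.0, 120.0, 0.07], (40, 4))
X = np.vstack([controls, patients])
y = np.array([0] * 40 + [1] * 40)  # 0 = control, 1 = probable dementia

# Standardize features, fit logistic regression, score with 5-fold CV.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```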

Where the science still has gaps

The most significant limitation is the absence of longitudinal data. Every study in the current evidence base is cross-sectional, meaning it compares groups at a single point in time rather than tracking the same individuals over months or years. That design can reveal associations between pauses and cognitive status, but it cannot answer the question patients and families care about most: do changes in speech predict who will develop dementia, and how far in advance?

Methodological inconsistency is another problem. A systematic review and meta-analysis published in the Journal of Prevention of Alzheimer’s Disease confirmed that acoustic features like pause duration and speech rate do differ between Alzheimer’s patients and controls across multiple studies. But the same review flagged substantial variation in how researchers define a “pause.” Some studies set the threshold at 100 milliseconds of silence. Others use 150 milliseconds or longer. That may sound like a trivial difference, but it changes what gets counted and makes it difficult to compare results across labs or set a single diagnostic cutoff.
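
A toy example makes the sensitivity concrete. The silence durations below are invented, but the pattern is general: raising the minimum duration shrinks both the number of pauses counted and the total pause time attributed to the very same recording.

```python
# Invented silence-gap durations (ms) from one hypothetical recording.
gaps_ms = [80, 120, 140, 160, 210, 480, 950]

for threshold_ms in (100, 150, 250):
    counted = [g for g in gaps_ms if g >= threshold_ms]
    print(f"threshold {threshold_ms} ms: {len(counted)} pauses, "
          f"{sum(counted)} ms of total pause time")
```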

Linguistic and cultural diversity remains underexplored. Most of the existing research draws on English-language datasets from North America and Europe. The Shanghai study tested Mandarin speakers, but it remains the exception rather than the rule. Whether pause-based markers transfer across languages, dialects, and cultural speech norms is an open question with real clinical stakes: a screening tool that works only for English speakers would leave out a large portion of the people who need it.

And then there is the gap between the lab and the real world. The classification results from benchmark challenges come from controlled audio recordings made in quiet rooms with research-grade microphones. Noisy phone calls, budget telehealth setups, and the unpredictable acoustics of a living room are a different story. Automated tools like Calpy have demonstrated feasibility in research settings, but as of June 2026, no independent validation has confirmed their performance in everyday clinical environments.

What this means for patients and families

For clinicians and health systems, the current evidence supports viewing speech pause analysis as a promising screening adjunct, not a standalone diagnostic. In practice, that might look like an automated tool that flags patients whose speech patterns warrant a more comprehensive cognitive workup, rather than a system that renders diagnoses from recorded conversations. Any eventual screening tool would also need to account for variables that influence how people pause: age, education level, native language, hearing ability, and even whether someone is tired or anxious on a given day.

For patients and families, the research validates something many caregivers notice intuitively: subtle shifts in how a loved one speaks can be meaningful. But noticing more “ums” or longer silences in your own speech is not grounds for alarm. Stress, fatigue, medication side effects, multitasking, and speaking in a second language all affect fluency in ways that have nothing to do with neurodegeneration. If concerns about memory or communication arise, the right step is still a formal evaluation by a qualified professional who can interpret speech patterns alongside broader cognitive testing and medical history.

What comes next in pause research

The Mueller team’s preprint at the University of Miami, along with the DementiaBank project’s effort to standardize speech sample collection and transcription across labs, points to a field moving from small proof-of-concept studies toward the kind of large, diverse datasets needed to build clinical tools. The most informative next steps will likely involve longitudinal follow-up with MCI patients, harmonized pause definitions across research groups, and real-world pilot testing in clinics and telehealth platforms.

Some researchers are also exploring whether combining speech measures with other digital biomarkers, such as typing cadence or gait analysis, could improve predictive accuracy beyond what any single signal offers alone. That kind of multimodal approach is still in early stages, but it reflects a broader shift in dementia research toward passive, continuous monitoring rather than one-off clinic visits.

None of this is ready for routine use. But the underlying insight is hard to dismiss: the way people hesitate, fill silence, and recover mid-sentence is not random. It is shaped by the brain’s real-time capacity to find words, hold plans in working memory, and coordinate the muscles of speech. When that capacity starts to erode, the pauses may be among the first things to change. The science is not yet sharp enough to act on that insight clinically, but it is getting closer, one carefully measured silence at a time.

*This article was researched with the help of AI, with human editors creating the final content.