
Artificial intelligence is starting to do something human clinicians have struggled with for decades: quietly flag patients with Alzheimer’s disease who have slipped through the cracks, and do it in a way that narrows long‑standing racial gaps in diagnosis. Instead of relying only on memory tests or specialist visits, new models are combing through routine medical data and brain scans to surface hidden cases earlier and more fairly.

As these tools move from research labs into health systems, they are beginning to challenge the assumption that better technology inevitably worsens bias. By design, several of the latest Alzheimer’s models are built to perform at least as well for Black, Latino, Asian and white patients, and in some cases to outperform traditional methods across every group.

How SSPUL finds the patients medicine has been missing

The clearest sign that the diagnostic status quo is broken is the number of people who show unmistakable signs of dementia in their records but never receive a formal label. A model known as SSPUL was created to attack that blind spot directly, scanning electronic health records for patterns that suggest Alzheimer’s disease even when no diagnosis code appears. In findings reported by clinicians, SSPUL outperformed supervised baseline models across all racial and ethnic groups in identifying patients with missed diagnoses, and it did so while also recognizing related conditions, such as decubitus ulcers, that often accompany advanced cognitive decline.

What makes SSPUL notable is not just its accuracy but its training strategy. Instead of depending only on clearly labeled Alzheimer’s cases, it learns from large pools of unlabeled data, which better reflect the messy reality of primary care. That approach appears to help the system avoid overfitting to the kinds of patients who already get diagnosed, and instead pick up on subtler signals in underdiagnosed groups. The December report describing SSPUL’s performance emphasizes that this semi‑supervised method allowed the model to generalize across diverse populations, a crucial step if health systems want to use AI to close gaps rather than hard‑code them.
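SSPUL’s implementation has not been published in this coverage, but the underlying idea, learning from confirmed diagnoses plus a large pool of unlabeled records, resembles classic two‑step positive‑unlabeled (PU) learning. The sketch below is a minimal, hypothetical illustration of that general technique: the feature vectors, the distance threshold, and the centroid classifier are all invented for clarity, not taken from the reported model.

```python
# Hypothetical sketch of two-step positive-unlabeled (PU) learning, the
# general idea behind training on labeled diagnoses plus unlabeled records.
# Features, threshold, and classifier are illustrative, not SSPUL's actual design.

def centroid(points):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pu_flag(positives, unlabeled, margin=1.0):
    """Step 1: treat unlabeled records far from the positive centroid as
    reliable negatives. Step 2: score the remaining unlabeled records against
    both centroids and flag those closer to the positive class, i.e. records
    that look like diagnosed cases but carry no diagnosis code."""
    pos_c = centroid(positives)
    reliable_neg = [u for u in unlabeled if dist(u, pos_c) > margin]
    if not reliable_neg:
        return []
    neg_c = centroid(reliable_neg)
    candidates = [u for u in unlabeled if dist(u, pos_c) <= margin]
    return [u for u in candidates if dist(u, pos_c) < dist(u, neg_c)]

# Toy features, e.g. (memory-complaint count, medication-adherence gaps):
positives = [(3.0, 2.0), (2.5, 2.5)]
unlabeled = [(2.8, 2.2), (0.2, 0.1), (0.3, 0.4)]
print(pu_flag(positives, unlabeled))  # flags the record resembling positives
```

The point of the two-step structure is exactly what the report describes: because the model never treats all unlabeled patients as healthy, it is less likely to inherit the blind spots of historical diagnosis patterns.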

Inside the UCLA model that targets undiagnosed Alzheimer’s

While SSPUL shows what is possible with semi‑supervised learning, a team of researchers at UCLA has pushed the idea further by building an AI system explicitly tuned to find undiagnosed Alzheimer’s cases in routine care. Their model mines electronic health records for combinations of symptoms, prescriptions and visit patterns that often precede a formal dementia label, then flags patients who match those trajectories but have never been coded with the disease. According to the UCLA researchers, the tool was designed to improve Alzheimer’s detection, particularly among patients aged 65 and older who often cycle through multiple providers before anyone connects the dots.

The same group has stressed that the model is not meant to replace clinical judgment but to act as a high‑sensitivity radar in the background of everyday care. When the system flags a patient, it prompts clinicians to consider a cognitive workup or referral that might otherwise be delayed for years. In a separate December account of the project, the team explained that they built the algorithm to slot into existing workflows rather than demand new hardware or long appointments, a design choice that makes it more likely to reach the very communities that have historically had the least access to specialist memory clinics.

Fairness by design: building equity into Alzheimer’s AI

One of the most striking aspects of the UCLA effort is how early the developers put fairness at the center of the work. Instead of training a single global model and hoping it would behave equitably, they incorporated population‑specific criteria and explicit fairness measures throughout development. That meant checking performance separately for Black, Latino, Asian and white patients, and adjusting the system until it produced equitable predictions across groups. As described in a December analysis, the researchers incorporated fairness measures specifically to address disparities in Alzheimer’s diagnosis rather than treating them as an afterthought.

That focus on equity is not just a moral stance; it is a technical one. If a model is trained only on patients who already have a diagnosis, it will inevitably learn the biases of the clinicians and systems that produced those labels. By using unlabeled data and fairness constraints, the UCLA team and the SSPUL developers are trying to teach their systems to recognize disease patterns that are consistent across populations, not just the patterns that have historically been noticed in white or wealthier patients. A December report on the AI model that uncovers hidden Alzheimer’s cases from EHRs notes that by ensuring equitable predictions across demographic groups, the system learned consistent and generalizable patterns of disease, a finding that aligns with the broader push to embed fairness into the core of medical AI rather than bolting it on later.
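The coverage does not specify which fairness metrics the teams used, but the audit they describe, checking performance separately for each demographic group and narrowing the gaps, can be sketched in a few lines. Everything below is a simplified, hypothetical illustration: the group names, records, and the choice of sensitivity (recall) as the per-group metric are assumptions for the example.

```python
# Illustrative per-group fairness audit: compute the model's sensitivity
# (fraction of true cases it flagged) separately for each demographic group,
# then measure the largest gap between groups. Groups and records are invented.

def sensitivity(records):
    """Each record is (true_case: bool, flagged: bool). Returns the share of
    true cases the model flagged, or None if the group has no true cases."""
    cases = [r for r in records if r[0]]
    if not cases:
        return None
    return sum(1 for true_case, flagged in cases if flagged) / len(cases)

def fairness_gap(by_group):
    """Per-group sensitivities plus the max-minus-min gap across groups."""
    rates = {g: sensitivity(recs) for g, recs in by_group.items()}
    rates = {g: r for g, r in rates.items() if r is not None}
    return rates, max(rates.values()) - min(rates.values())

by_group = {
    "group_a": [(True, True), (True, True), (False, False), (True, False)],
    "group_b": [(True, True), (True, False), (False, True)],
}
rates, gap = fairness_gap(by_group)
print(rates, round(gap, 3))
```

In the development loop the article describes, a gap like this would be driven down by retraining or reweighting until the model performs comparably for every group, rather than optimizing a single aggregate score that can hide failures in smaller populations.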

Zero‑cost digital detection and the promise of passive markers

Not every innovation in this space depends on complex imaging or new clinic visits. Another line of work focuses on what researchers call passive digital markers, signals that can be extracted from existing electronic health records without adding any burden to clinicians. In one project, a team combined multiple AI tools to scan routine data and identify Alzheimer’s and related dementias, increasing the rate of new diagnoses by 31 percent compared with usual care. The group behind this work emphasized that combining these tools required no extra clinician time, a point underscored in a November report describing how the approach could be implemented by the right personnel using existing infrastructure for Alzheimer’s detection.

A companion account framed the same strategy as “zero‑cost, AI‑driven digital detection,” highlighting that it identifies Alzheimer’s and related dementias without additional clinician time in primary care settings where few providers have specialized dementia training. By treating routine EHR activity as a passive digital marker, the system can flag high‑risk patients in the background and surface them for follow‑up when clinicians log in. The description of this work notes that the zero‑cost approach is particularly attractive for health systems that are already stretched thin, because it layers intelligence onto existing workflows rather than asking overburdened staff to do more.
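The reports do not detail how the multiple tools were combined, but a common pattern for this kind of background screening is a simple ensemble: several independent detectors each scan existing EHR fields, and a patient is surfaced only when enough of them agree. The sketch below is a hypothetical illustration of that voting pattern; the individual markers, field names, and medication list are all invented for the example.

```python
# Illustrative "passive digital marker" ensemble: several simple detectors
# each scan fields already in the EHR, and a patient is surfaced for follow-up
# when enough detectors fire. Markers, fields, and thresholds are hypothetical.

def med_marker(record):
    """Fires on dementia-related prescriptions (made-up watch list)."""
    return any(m in record.get("meds", []) for m in ("donepezil", "memantine"))

def visit_marker(record):
    """Fires on unusually frequent visits, a pattern described in the article."""
    return record.get("visits_last_year", 0) >= 8

def note_marker(record):
    """Fires when free-text notes mention memory complaints."""
    return "memory" in record.get("notes", "").lower()

def surface(record, detectors, min_votes=2):
    """Flag the patient when at least min_votes passive markers fire,
    without any new data entry or clinician time."""
    return sum(1 for d in detectors if d(record)) >= min_votes

patient = {"meds": ["memantine"], "visits_last_year": 9, "notes": "doing ok"}
print(surface(patient, [med_marker, visit_marker, note_marker]))  # True
```

Because each marker reads data the clinic already collects, this style of screening adds no appointments or forms, which is what makes the “zero‑cost” framing plausible for stretched primary care settings.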

Mayo Clinic’s scan‑based AI and the nine‑dementia frontier

While EHR‑based models quietly sift through text and billing codes, imaging specialists are pursuing a complementary path that starts with the brain itself. At Mayo Clinic in Rochester, Minnesota, researchers have developed an AI tool that can identify nine types of dementia, including Alzheimer’s disease, from a single brain scan. The system analyzes patterns of brain activity and structure that are often too subtle for the human eye, then classifies patients into specific dementia subtypes or as people without cognitive impairment. In one report, Dr. David Jones is described reviewing brain scans on a computer as he evaluates how Mayo Clinic’s AI tool distinguishes among these nine conditions.

A separate account of the same project explains that the new AI tool helps clinicians identify brain activity patterns linked to nine types of dementia, and that Mayo Clinic researchers designed it so it can be deployed even in clinics that lack neurology expertise. By packaging the model into a user‑friendly interface, they aim to give community hospitals access to the same level of diagnostic nuance that patients might receive at a major academic center. The report on this new tool underscores how imaging‑based models can complement EHR systems by clarifying which dementia subtype a patient has once they are flagged as high risk.

Breaking bias in diverse communities

For all the technical sophistication of these models, their real test is whether they improve care for people who have historically been overlooked. That is where the UCLA work on equity stands out. A detailed account of the project describes how UCLA researchers developed an AI tool to improve Alzheimer’s detection among underserved patients, with a particular focus on diverse communities that have long faced barriers in disease recognition and treatment. The report notes that the researchers improved detection among underserved groups by training on data that reflect the full spectrum of patients seen at UCLA, not just those who make it to specialty clinics.

Another account aimed at a broader audience explains that UCLA researchers developed an AI model that analyzes electronic health records to identify undiagnosed Alzheimer’s disease and reduce racial gaps in diagnosis. That report emphasizes that the model was built at UCLA and that its performance was evaluated across racial and ethnic groups to ensure it did not simply replicate existing disparities. By tying the algorithm’s success to its ability to narrow those gaps, the December coverage of this UCLA Alzheimer’s tool makes clear that equity is not a side benefit but a core design goal.

From hidden cases to health‑system change

Finding undiagnosed patients is only the first step. For AI to meaningfully change outcomes, health systems have to act on those alerts in ways that are sustainable and fair. The December report on the AI model that uncovers hidden Alzheimer’s cases from EHRs while reducing diagnostic bias describes how the developers validated the system across multiple sites and ensured that its predictions were consistent and generalizable. By doing so, they aimed to give health systems confidence that when the model flags a patient, it is seeing a real pattern rather than noise. The same account notes that by ensuring equitable predictions across demographic groups, the model avoided the common pitfall of performing well overall while failing specific communities, a key requirement if it is to be trusted as part of routine Alzheimer’s care.

Other teams are thinking just as hard about implementation. The zero‑cost, AI‑driven digital detection work is explicitly framed as something that can be rolled out in primary care clinics where few clinicians have dementia expertise, while the Mayo Clinic imaging tool is being designed for use in centers that lack neurology specialists. Together, these efforts suggest a future in which AI quietly runs in the background of health systems, surfacing patients who need attention and guiding non‑specialists toward more accurate diagnoses. The challenge now is less about proving that the models work in principle and more about integrating them into workflows, reimbursement structures and community outreach so that the people they are meant to help actually feel the difference.

What this wave of Alzheimer’s AI means for patients and families

For patients and families, the most immediate impact of these tools is likely to be earlier and more consistent recognition of cognitive decline. When SSPUL and the UCLA models scan EHRs and flag someone who has been quietly accumulating risk factors, they create an opportunity for clinicians to start conversations about memory, safety and planning that might otherwise be delayed until a crisis. The imaging work at Mayo Clinic adds another layer, helping to distinguish among nine dementia types so that treatment and support can be tailored more precisely to what is actually happening in the brain. Together, these systems promise to replace some of the uncertainty that has long surrounded dementia diagnosis with clearer, data‑driven guidance.

There are, of course, open questions about consent, data use and how to communicate AI‑generated risk to patients without causing unnecessary alarm. But the direction of travel is clear. Instead of reinforcing existing inequities, the most ambitious Alzheimer’s AI projects are trying to reverse them by baking fairness into their design and targeting the very gaps that have left so many people undiagnosed. If health systems can match that technical progress with thoughtful implementation, the next few years could see a quiet but profound shift: fewer missed cases, narrower racial disparities and a diagnostic process that finally reflects the diversity of the people it is meant to serve.
