Researchers have built a machine-learning model that can distinguish between Alzheimer’s disease, dementia with Lewy bodies, frontotemporal dementia, and mild cognitive impairment using proteins extracted from a single blood draw. The approach pairs high-sensitivity plasma proteomics with statistical and AI-driven classification, offering a potential path toward faster, less invasive differential diagnosis at a time when ten million new dementia cases are identified worldwide each year.
One Blood Draw, Multiple Diagnoses
The core study, published in the journal Alzheimer's & Dementia, analyzed plasma samples with a technique called NULISA proteomics and fed the resulting protein measurements into statistical and machine-learning models. The system was designed to differentiate among dementia diagnoses including Alzheimer's disease (AD), dementia with Lewy bodies (DLB), frontotemporal dementia (FTD), mild cognitive impairment (MCI), and other dementias within the cohort. Among the biomarkers the model relied on, p-tau217 figured prominently, with the study reporting area-under-the-curve (AUC) performance metrics for that protein across its classification tasks.
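The study's exact modeling pipeline is not public in code form, but the general pattern it describes, training a classifier on a panel of protein measurements and scoring it with one-vs-rest AUC, can be sketched as follows. Everything here is illustrative: the data are synthetic stand-ins, not the study's cohort, and the model choice is an assumption.

```python
# Illustrative sketch only: a multi-class classifier over a plasma protein
# panel, scored with one-vs-rest AUC. Data are synthetic; real protein
# measurements, cohort labels, and the study's actual model will differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_proteins = 100, 20
classes = ["AD", "DLB", "FTD", "MCI"]

# Simulate protein levels with a class-specific shift so the classifier
# has signal to learn (a stand-in for markers like p-tau217).
X = np.vstack([rng.normal(loc=i * 0.5, size=(n_per_class, n_proteins))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# One-vs-rest AUC: how well each diagnosis is separated from the rest.
probs = model.predict_proba(X_te)
auc = roc_auc_score(y_te, probs, multi_class="ovr", labels=model.classes_)
print(f"macro one-vs-rest AUC: {auc:.2f}")
```

An AUC of 1.0 would mean perfect separation of a diagnosis from all others; 0.5 is chance. Reporting per-marker AUCs, as the study did for p-tau217, follows the same scoring logic applied to a single protein at a time.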
What makes this work notable is not just the breadth of conditions it targets but the simplicity of the input. A single tube of blood, processed through one proteomics platform, replaces what has traditionally required combinations of cerebrospinal fluid taps, PET brain scans, and lengthy clinical observation. For patients and clinicians alike, collapsing that diagnostic pipeline into a routine blood draw could shorten the months-long wait many families endure before receiving a specific diagnosis.
How NULISA Detects Trace Proteins
The measurement technology behind the study is NULISA, a proteomic liquid biopsy platform first described in Nature Communications. NULISA combines specialized assay chemistry with a sequencing-based readout to achieve attomolar sensitivity, meaning it can detect proteins at concentrations of roughly one quintillionth of a mole per liter. It also supports high multiplexing, allowing researchers to measure many protein targets simultaneously from a small plasma volume.
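To make "attomolar" concrete, a quick back-of-the-envelope calculation (not from the paper; the 25-microliter aliquot is an assumed sample volume) shows how few molecules that concentration implies:

```python
# Back-of-the-envelope: how many protein molecules does a 1-attomolar
# (1e-18 mol/L) concentration put in a hypothetical 25-microliter aliquot?
AVOGADRO = 6.022e23      # molecules per mole
conc_molar = 1e-18       # 1 attomolar, in mol/L
volume_liters = 25e-6    # 25 microliters expressed in liters

molecules = conc_molar * volume_liters * AVOGADRO
print(f"~{molecules:.0f} molecules in the sample")  # → ~15 molecules
```

Detecting a signal from roughly a dozen molecules in a tube of plasma is what separates this class of assay from conventional immunoassays.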
That sensitivity matters because brain-derived proteins cross the blood-brain barrier only in trace amounts and are further diluted in the bloodstream. Older immunoassay platforms often could not reliably quantify those faint signals, which is one reason cerebrospinal fluid remained the gold standard for years. NULISA's ability to pick up trace-level proteins from plasma is what enabled the dementia-differentiation study to work with a standard blood sample rather than requiring a spinal tap. A separate large-cohort study using the same NULISA-based plasma proteomics approach confirmed that disease-specific protein signatures can be identified across AD, DLB, FTD, Parkinson's disease, and cognitively unimpaired individuals, reinforcing the platform's utility for differential diagnosis at scale.
Behind these individual reports is a broader shift in neurology toward high-throughput molecular profiling. Large databases such as the National Center for Biotechnology Information now host thousands of proteomic and genomic datasets, enabling cross-cohort validation and meta-analyses that were not feasible when biomarker work relied on small, single-site studies. Platforms like NULISA are emerging into this ecosystem, where their readouts can be compared and combined with existing molecular and clinical data.
Where Blood Biomarkers Stand Today
The new proteomics-plus-AI approach arrives at a moment when blood-based biomarkers for neurodegeneration are gaining clinical traction but still face real limits. A review in Nature surveyed the state of biofluid biomarkers in Alzheimer’s disease and other neurodegenerative dementias, drawing a clear line between markers that have been analytically validated and those that remain experimental. The review highlighted that while certain blood and cerebrospinal fluid biomarkers can reliably flag Alzheimer’s pathology, their ability to distinguish between non-Alzheimer’s dementias is far less established, and significant standardization work is still needed before routine clinical deployment.
That gap is exactly what the new study tries to close. Most commercially available blood tests focus on a single question: does this patient have Alzheimer's-related amyloid or tau pathology? The NIH's 2025 progress report on Alzheimer's disease and related dementias noted that a commercially available p-tau217 blood test showed promise in research supported in part by the NIH. But a p-tau217 test alone cannot tell a clinician whether a patient's cognitive decline stems from Lewy body disease, frontotemporal degeneration, or a mixed pathology. The new multi-protein, multi-model framework attempts to answer several of those questions at once, using a panel of markers to infer which disease process is most likely driving symptoms.
Still, the field is far from consensus. The same Nature review emphasized the need for harmonized assay protocols, agreed-upon reference ranges, and prospective validation in diverse populations. Blood biomarkers behave differently across age groups, ethnic backgrounds, and comorbid conditions such as kidney disease or chronic inflammation. Any algorithm trained on a single health system or narrow demographic slice risks misclassification when deployed more broadly.
AI Diagnosis Beyond Blood Alone
The blood-based approach is not the only AI system tackling differential dementia diagnosis. A separate study published in Nature Medicine demonstrated AI-based differential diagnosis across ten dementia etiologies using multimodal clinical data, including imaging, cognitive testing, and medical history, drawn from tens of thousands of cases. That system achieved strong accuracy but required far more data inputs per patient than a single blood sample.
The contrast between these two approaches reveals a practical tension. Multimodal AI models can capture richer clinical context and may ultimately prove more accurate for complex or overlapping presentations. But they depend on data that many primary care settings simply do not collect. A blood-only pipeline, by comparison, could be deployed in community clinics, rural hospitals, and low-resource health systems where PET scanners and specialized neuropsychological testing are unavailable. The trade-off between depth and accessibility will likely shape which approach gains wider adoption first.
Regulators and guideline bodies will also have to decide how much evidence is enough before such AI systems can influence treatment decisions. Some researchers have argued that diagnostic algorithms should be evaluated much like drugs or devices, with phase-like trials and post-market surveillance to monitor for bias or drift over time.
Clinical Caution and the Road to Routine Use
Even the most enthusiastic proponents of blood-based biomarkers acknowledge that clinical readiness lags behind laboratory performance. The NIH has cautioned that blood tests for markers like p-tau217 are not yet a replacement for comprehensive evaluation, especially in borderline or atypical cases. Instead, early adopters are likely to use these assays as triage tools: helping decide who needs more definitive testing, who might be eligible for disease-modifying therapies, and who can be monitored conservatively.
For the NULISA-based model, several hurdles remain before routine use. The assay itself must be standardized across laboratories, with clear quality-control metrics and external proficiency testing. The machine-learning models need prospective validation in independent cohorts, including patients recruited from primary care rather than specialty memory clinics. Clinicians will also need training on how to interpret probabilistic outputs (such as a 70% likelihood of Alzheimer’s versus a 20% likelihood of DLB) without overconfidence or undue alarm.
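What a probabilistic output means in practice can be illustrated with a toy example. The diagnosis labels mirror those in the study, but the probabilities and the triage threshold below are invented for illustration, not taken from any clinical guideline:

```python
# Toy illustration of reading a classifier's probabilistic output as a
# triage aid. The probabilities and the 0.6 threshold are invented for
# illustration; they are not from the study or any clinical guideline.
probs = {"AD": 0.70, "DLB": 0.20, "FTD": 0.07, "MCI": 0.03}

top_dx, top_p = max(probs.items(), key=lambda kv: kv[1])
if top_p >= 0.6:
    decision = f"likely {top_dx}: consider confirmatory testing"
else:
    decision = "ambiguous: refer for comprehensive workup"
print(decision)  # → likely AD: consider confirmatory testing
```

The point of the sketch is the framing: a 70% likelihood is a prompt for targeted follow-up, not a diagnosis, which is why clinician training on interpreting such outputs matters.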
Health systems will have to weigh costs as well. High-sensitivity proteomics and AI infrastructure are not trivial investments, even if they are cheaper than serial PET scans. Payers may demand evidence that earlier and more precise diagnosis leads to better outcomes, whether through timelier use of symptomatic therapies, enrollment in clinical trials, or more appropriate support services for patients and caregivers.
Despite these challenges, the trajectory is clear: dementia diagnosis is moving from a largely clinical art toward a data-rich science. In this shift, molecular signatures and computational models complement bedside judgment. A single blood draw that can distinguish among multiple neurodegenerative diseases will not, on its own, solve the global dementia burden. But it could give patients and families answers sooner, guide them toward the right specialists and treatments, and provide researchers with sharper tools to dissect the biology of these devastating conditions.
If the field can align on standards, validate models across diverse populations, and integrate new tools thoughtfully into care pathways, blood-based AI diagnostics may soon become a routine part of how clinicians approach memory complaints, marking a shift from uncertainty and watchful waiting to earlier, more tailored decision-making.
*This article was researched with the help of AI, with human editors creating the final content.*