Morning Overview

65% of U.S. doctors are quietly using the same AI tool across 27 million patient encounters every month

On March 10, 2026, a clinical AI platform called OpenEvidence logged one million consultations with verified physicians in a single day. Not medical students. Not curious patients Googling symptoms. Licensed, credentialed doctors, each authenticated through the National Provider Identifier system, turning to the same AI for help with diagnosis, treatment options, and clinical evidence before walking back into the exam room.

Most patients have never heard of OpenEvidence. Most regulators have not publicly weighed in on it. Yet the company says a majority of practicing U.S. physicians now use the platform daily, a claim that, if accurate, means the tool is already shaping care decisions in exam rooms and hospital wards across the country. The speed of that adoption, and the near-total absence of independent oversight, raises questions that go well beyond one company’s growth metrics.

The numbers behind the claim

OpenEvidence’s growth curve is unusually steep, even by Silicon Valley standards. In July 2024, the platform recorded roughly 358,000 logged-in consultations by verified U.S. physicians for the entire month. By mid-2025, that figure had climbed past 8.5 million monthly clinical consultations, a roughly 24-fold increase. The company disclosed those numbers alongside a $210 million funding round that valued it at $3.5 billion.

The March 2026 milestone pushed the trajectory further. One million consultations in 24 hours implies monthly volumes well beyond the mid-2025 baseline. The company has characterized monthly encounters as reaching 27 million, though that figure has not appeared in any publicly filed document or independently audited disclosure. In its mid-2025 funding announcement, distributed via PR Newswire, OpenEvidence also said it operates across more than 10,000 hospitals and medical centers nationwide, a self-reported figure that has not been independently confirmed.

The “65 percent” framing comes from the company’s own description of “majority-daily use by U.S. physicians.” That is a significant claim. The United States has roughly one million active physicians, according to the Association of American Medical Colleges. But OpenEvidence has not published the denominator it uses to calculate that share: whether it counts all licensed physicians, only those in active clinical practice, or only those in certain specialties. Without that denominator, the percentage cannot be independently confirmed.

What no one has verified

Every adoption metric in this story originates from OpenEvidence itself, disclosed through company announcements distributed via PR Newswire. No independent audit from a medical society, federal agency, or academic research group has publicly confirmed the consultation volume, the adoption rate, or the accuracy of the platform’s clinical outputs.

The American Medical Association, the FDA, and the Office of the National Coordinator for Health IT have not issued public assessments of OpenEvidence’s claims or its effect on patient care. That silence is notable given the scale the company describes. A tool that says it reaches the majority of American doctors would, in any other context, attract immediate regulatory attention.

Also undefined is what “clinical consultation” actually means in practice. A physician querying the system about a rare drug interaction during a complex case is a fundamentally different event from a quick dosing lookup. OpenEvidence has not disclosed the average length, depth, or clinical weight of these interactions. That distinction matters because it determines whether the tool functions primarily as a reference library or as something closer to a co-pilot actively shaping diagnostic and treatment decisions.

No outcomes data exists yet

Perhaps the most consequential gap: no peer-reviewed study has measured whether OpenEvidence consultations lead to fewer diagnostic errors, shorter hospital stays, or better survival rates. The platform may deliver those benefits. It may also introduce subtle biases or reinforce outdated treatment patterns in ways that only surface over time. The evidence base to confirm or deny either possibility does not yet exist in published form.

Physicians and patients are, in effect, participating in an uncontrolled natural experiment at enormous scale, with the underlying protocol largely defined by a single private company.

For context, established clinical reference tools like UpToDate and DynaMed have been used by physicians for decades and have accumulated a body of research on their impact on clinical decision-making. OpenEvidence differs from those platforms in a key way: rather than presenting curated, editorially reviewed summaries, it uses AI to synthesize answers from medical literature in real time. That approach can surface newer evidence faster, but it also introduces the risk that the AI weights studies incorrectly or misses nuance that a human editor would catch.

What this means for patients right now

If a doctor treats 20 patients a day and consults OpenEvidence for several of those encounters, the AI’s outputs are already shaping care. Patients have no way to know when their physician’s recommendation was informed by the platform. No regulatory framework currently requires that disclosure. This is not a hypothetical scenario about the future of medicine. According to the company’s own reporting, it is the present state of practice in thousands of facilities.

For physicians, widespread adoption of a single AI system creates a different kind of pressure. When the majority of peers across thousands of hospitals rely on the same tool, its recommendations can become a de facto standard of care. Deviating from that standard, even when clinically justified, could carry legal and professional risk. There is also the problem of cognitive lock-in: when an answer appears authoritative and arrives instantly, it becomes harder to question, especially under the time pressure that defines modern clinical practice.

Concentration risk in clinical decision-making

Clinical practice has always been shaped by guidelines, textbooks, and expert opinion, but those sources are diverse and frequently contested. An AI platform that aggregates research and presents a single synthesized answer risks compressing that diversity into one dominant narrative. If most U.S. physicians turn to the same system, any systematic bias, outdated assumption, or subtle modeling error could propagate rapidly across the healthcare system before anyone detects it.

Resilience is a related concern. If OpenEvidence were to suffer a prolonged outage, change its pricing model, or alter its algorithms in ways that clinicians find less reliable, the disruption would ripple through hospitals and clinics that have quietly built the tool into daily workflows. Unlike an electronic health record system, which is visible and heavily regulated, an AI reference layer can be adopted informally and deeply before policymakers recognize its systemic importance.

A lack of transparency compounds the problem. OpenEvidence markets itself as evidence-based, but its public documentation does not detail how it weighs conflicting studies, how often its models are updated, or how it handles edge cases where the literature is thin or contradictory. Physicians must treat the system as a sophisticated black box, trusting that its outputs reflect current best practice. Patients, in turn, are trusting both their doctor and an invisible AI assistant they were never told about.

What oversight could look like

Several concrete steps could bring more clarity without freezing innovation. Professional societies and academic medical centers could conduct independent evaluations of AI tools like OpenEvidence, measuring not just accuracy on test questions but real-world effects on prescribing patterns, diagnostic timelines, and patient outcomes. Those studies would not require access to proprietary code; they could rely on observed physician behavior and de-identified clinical data.

Regulators could develop disclosure standards tailored to AI-assisted care. That might include requiring platforms to publish high-level information about their evidence sources, update cycles, and known limitations, along with clear labeling inside clinical interfaces when recommendations rest on low-quality or conflicting data.

Hospitals and health systems could treat AI reference platforms as critical infrastructure, subject to the same governance that applies to electronic health records and formulary decisions: formal review committees, documented risk assessments, and ongoing monitoring for unexpected effects, rather than leaving adoption entirely to individual clinicians and word of mouth.

The oversight gap is already the story

OpenEvidence’s trajectory illustrates how quickly a new layer of decision support can become embedded in everyday medicine. In under two years, a tool that most patients have never encountered has, by its own account, become a constant presence in American clinical practice. The verified numbers describe a remarkable feat of engineering and distribution. The unanswered questions describe a widening gap between the pace of technological adoption and the slower machinery of oversight, independent research, and public accountability.

Whether OpenEvidence ultimately improves outcomes or introduces new risks will depend on details that remain largely invisible: how its models are trained, how its recommendations are interpreted by physicians under pressure, and how the broader system responds when problems surface. What is already clear, as of June 2026, is that more than a million doctor-AI consultations can occur in a single day, and almost no one outside the medical profession is watching closely enough.

*This article was researched with the help of AI, with human editors creating the final content.