Morning Overview

65% of U.S. doctors are using the same AI tool across 27 million patient encounters every month — most patients don’t know

On March 10, 2026, a physician somewhere in the United States typed a clinical question into OpenEvidence and submitted the platform’s one-millionth consultation of the day. The patient on the other side of that encounter almost certainly had no idea. Neither, most likely, did the thousands of other patients whose doctors queried the same AI system during those 24 hours.

OpenEvidence, a platform that delivers AI-generated, citation-backed medical answers to verified physicians, announced the single-day milestone in a March 2026 press release. The company says its platform is now used by 65 percent of U.S. doctors and handles roughly 27 million patient encounters per month. If those figures hold up to independent scrutiny, OpenEvidence may already be the most widely used clinical AI tool in American medicine. As of June 2026, no outside body has confirmed either number.

What the verified evidence shows

The single-day figure is the firmest data point available. Every physician who used OpenEvidence on March 10 was authenticated through the National Provider Identifier (NPI) registry, the federal database that assigns a unique ID to every licensed clinician in the country. The company describes its outputs as evidence-grounded answers paired with citations from peer-reviewed medical literature, positioning the tool as a real-time research assistant rather than a diagnostic engine.

That usage did not stay confined to a standalone app for long. Late in March 2026, Mount Sinai Health System announced a collaboration that embeds OpenEvidence directly inside the Epic electronic health record used across its hospitals and clinics. The integration extends access beyond physicians to nurses and pharmacists, meaning a broader pool of clinicians can now query the AI without leaving the charting software they already use every day. Mount Sinai leadership provided attributable quotes supporting the partnership, a signal of institutional confidence that carries weight in academic medicine.

OpenEvidence also released what it calls the AI-Integrated Doctor Dialer, a unified telemedicine platform combining phone calls, messaging, voicemail, and fax with live clinical-decision AI. When a physician selects “Create Visit,” the system transcribes patient calls into structured clinical notes automatically. That feature moves the AI from a behind-the-scenes reference tool into the medical record itself, raising the stakes for accuracy.

An independent evaluation adds a layer of outside scrutiny. A medRxiv preprint (DOI: 10.64898/2026.04.23.26351526v1) tested OpenEvidence outputs in triage-recommendation scenarios. The study includes full methodology details and provides a reproducibility trail with prompts, responses, and code through a linked GitHub repository, with data deposition to Zenodo planned upon journal acceptance. The key finding: OpenEvidence errs on the conservative side, tending to recommend higher levels of care rather than risk under-triaging a patient.

Where the numbers get shaky

The 65 percent adoption claim originates from OpenEvidence itself. No independent medical board survey, professional association audit, or federal dataset has confirmed it. Translating one million consultations on a single day into a share of the roughly 1.1 million active physicians in the U.S. requires knowing how many unique doctors participated and how many queries each one submitted. Those breakdowns have not been disclosed.
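The gap between a daily consultation total and an adoption percentage can be made concrete with back-of-envelope arithmetic. The sketch below is illustrative only, using the two figures cited above (one million peak-day consultations, roughly 1.1 million active U.S. physicians); the queries-per-doctor values are hypothetical, since OpenEvidence has not disclosed that breakdown.

```python
# Illustrative sketch: the implied share of U.S. physicians depends
# entirely on the undisclosed number of queries each doctor submitted.
ACTIVE_US_PHYSICIANS = 1_100_000       # rough figure cited above
PEAK_DAY_CONSULTATIONS = 1_000_000     # company-reported March 10 total

# Hypothetical queries-per-doctor scenarios (not disclosed by the company)
for queries_per_doctor in (1, 3, 5, 10):
    unique_doctors = PEAK_DAY_CONSULTATIONS / queries_per_doctor
    share = unique_doctors / ACTIVE_US_PHYSICIANS
    print(f"{queries_per_doctor:>2} queries/doctor -> "
          f"{unique_doctors:,.0f} unique doctors, "
          f"{share:.0%} of active physicians")
```

Depending on the assumed query rate, the same daily total is consistent with anywhere from under 10 percent to over 90 percent of the physician workforce, which is why the 65 percent figure cannot be derived from the peak-day number alone.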

The 27-million-monthly figure follows the same logic gap. It appears to be an extrapolation from the peak day rather than a separately measured monthly average. Healthcare activity swings widely by season, day of the week, and even conference schedules. Without a full month of disclosed daily data, the number is best treated as an estimate, not a confirmed baseline.
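The arithmetic behind that inference is simple to check. The sketch below is illustrative, assuming only the two company-reported figures; it shows why the monthly number reads as a scaled-down projection of the peak day rather than an independently measured total.

```python
# Illustrative arithmetic: comparing the claimed monthly figure
# against a naive 30-day projection of the peak day.
PEAK_DAY_CONSULTATIONS = 1_000_000   # company-reported March 10 total
CLAIMED_MONTHLY = 27_000_000         # company-reported monthly encounters

naive_30_day = PEAK_DAY_CONSULTATIONS * 30
implied_avg_day = CLAIMED_MONTHLY / 30   # 900,000, i.e. 90% of the peak

print(f"Peak day x 30:       {naive_30_day:,}")
print(f"Implied average day: {implied_avg_day:,.0f}")
```

An implied average of 90 percent of the single best day would be an unusually flat usage curve for a sector with heavy weekday/weekend and seasonal swings, which is the reason a full month of daily data would be needed to treat 27 million as a measured baseline.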

Patient awareness is the hardest claim to pin down. No published study or regulatory filing quantifies how many patients know their doctor consulted an AI tool during a visit. The inference is reasonable: OpenEvidence operates inside physician workflows, not patient-facing screens. But “reasonable inference” is not the same as evidence. The FDA has not issued specific guidance requiring disclosure when a physician uses an AI-powered clinical reference, and while several states have introduced AI transparency bills (California’s SB 1120 and Colorado’s AI Act among them), none has yet created a mandate specific to clinical-reference tools of this type.

The medRxiv triage study, while valuable, is a preprint that has not completed peer review. Its scope is limited to a controlled testing environment. Whether conservative triage behavior helps or hurts in a high-volume emergency department is an open question: a tool that consistently recommends escalation could protect patients from missed emergencies, or it could contribute to overcrowding in settings already stretched thin.

How OpenEvidence fits the broader AI-in-medicine landscape

OpenEvidence is not operating in a vacuum. Microsoft’s Nuance DAX Copilot already generates clinical notes from physician-patient conversations inside Epic and other EHR systems. Google has developed Med-PaLM, a large language model tuned for medical question-answering. Epic itself has rolled out AI features natively within its software. What distinguishes OpenEvidence, at least by its own account, is the combination of NPI-verified access, citation-backed outputs, and the sheer volume of reported daily use.

That competitive context matters for patients. When multiple AI tools are embedded in overlapping parts of the clinical workflow, the question shifts from “Is my doctor using AI?” to “How many AI systems touched my care today, and who is responsible for each one?” Neither the American Medical Association nor any federal agency has published comprehensive guidance answering that question as of June 2026, though the AMA’s 2023 principles on augmented intelligence continue to call for transparency, physician oversight, and bias mitigation.

What this means for patients and clinicians

For physicians, the appeal is straightforward. A doctor who once spent 20 minutes searching PubMed or UpToDate can now surface relevant studies, guidelines, and differential diagnoses in seconds. Embedding that capability inside the EHR and telemedicine platforms removes the friction of switching between applications. In fast-moving fields like oncology and infectious disease, where treatment protocols can shift within weeks, real-time access to curated evidence has obvious value.

For patients, the picture is more complicated. An AI system that consistently surfaces current evidence could improve care quality and reduce the variability that comes from one doctor being more up-to-date than another. But opacity creates risk. If an AI-generated note mischaracterizes a symptom or suggests an inappropriate follow-up, a patient may struggle to challenge the record or even know that an automated system contributed to it.

There is also a resource gap to consider. A system like Mount Sinai can stand up governance committees, run internal audits, and assign clinical informaticists to monitor AI outputs. A two-physician family practice in rural Georgia likely cannot. Without standardized evaluation frameworks, the same tool could be deployed with rigorous oversight in one setting and virtually none in another, with patients in both places equally unaware.

What would change the picture

Several developments could sharpen what is still a blurry snapshot. Independent audits of usage patterns, conducted by neutral academic or regulatory bodies, would verify how many clinicians actually rely on OpenEvidence and in what clinical contexts. Peer-reviewed studies examining real-world outcomes (diagnostic accuracy, treatment appropriateness, resource utilization) would move the conversation beyond simulated triage. Transparent reporting of adverse events or near-misses linked to AI-assisted decisions would build the kind of safety record that patients and regulators need.

Regulators may eventually set disclosure and documentation standards. Even if tools like OpenEvidence are classified as reference systems rather than medical devices, agencies could still require that their use be recorded in the chart and communicated to patients. Professional societies could develop best-practice guidelines covering when clinicians should rely on AI-generated summaries, how to cross-check them, and how to explain their role during shared decision-making.

For now, the core reality is clear even if the precise numbers are not: AI systems are already woven into clinical practice at a scale that would have seemed speculative just two years ago. Verified doctors are using OpenEvidence in large numbers, at least one major health system has built it into everyday workflows, and early independent testing suggests the tool leans toward caution in triage. What has not kept pace is the public infrastructure around that reality: independent measurement, transparent oversight, and meaningful patient awareness of the invisible systems now helping to shape their care.

*This article was researched with the help of AI, with human editors creating the final content.*