Morning Overview

65% of U.S. doctors use an AI tool called OpenEvidence, and it logged nearly 27 million clinical consultations in a single month

Dr. Mira Patel, a hospitalist at a mid-sized community hospital in Ohio, used to keep a browser tab open to UpToDate during every shift. Now she reaches for OpenEvidence. “I asked it about a tricky drug interaction at 2 a.m. last Tuesday and had a sourced answer in seconds,” she told a colleague in a conversation later shared on a physician forum. Her experience, while anecdotal, mirrors a pattern playing out across the country: on March 10, 2026, the clinical AI platform logged one million consultations with verified physicians in a single 24-hour period. Over the course of that same month, the company recorded nearly 27 million clinical consultations total. And according to OpenEvidence, 65 percent of U.S. doctors now use its platform, a figure that, if accurate, would mean roughly 715,000 of the country’s approximately 1.1 million active physicians have turned to an AI assistant for help with patient care.

Those numbers landed weeks before another milestone: on March 31, 2026, Mount Sinai Health System announced a collaboration to embed OpenEvidence directly inside its electronic health records. Mount Sinai consistently ranks among the top hospital systems in the United States, and the decision to wire an AI tool into the same interface where clinicians review labs, imaging, and patient histories signals that institutional buyers, not just individual early adopters, are now investing in AI-assisted clinical support at the infrastructure level.

What the numbers actually show

The strongest confirmed data points come from OpenEvidence’s own press release, distributed through PR Newswire. The company reported that its one-million-consultation day involved interactions with physicians verified through the National Provider Identifier system. NPI verification ties each query to a credentialed doctor rather than an anonymous user, giving the usage figures a degree of professional accountability that most consumer-facing AI tools lack. It is not the same as an independent third-party audit, but it does confirm that the queries come from identifiable, credentialed providers.
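OpenEvidence has not published its verification pipeline, but NPI numbers can be checked against the CMS National Plan and Provider Enumeration System (NPPES), which exposes a free public lookup API. The sketch below shows what such a check might look like; the endpoint and fields come from the public NPPES API, while the sample NPI and overall flow are illustrative assumptions, not the company’s actual implementation.

```python
# Minimal sketch of an NPI lookup against the public NPPES registry.
# Illustrates the kind of check NPI verification enables; it is NOT
# OpenEvidence's actual verification pipeline, which is not public.
import requests

NPPES_URL = "https://npiregistry.cms.hhs.gov/api/"

def lookup_npi(npi: str) -> dict | None:
    """Return basic registry data for an NPI, or None if it is not found."""
    resp = requests.get(
        NPPES_URL, params={"number": npi, "version": "2.1"}, timeout=10
    )
    resp.raise_for_status()
    data = resp.json()
    if data.get("result_count", 0) != 1:
        return None  # unknown or ambiguous NPI
    record = data["results"][0]
    basic = record["basic"]
    return {
        "npi": record["number"],
        "type": record["enumeration_type"],  # "NPI-1" = individual provider
        "name": basic.get("last_name") or basic.get("organization_name"),
    }

if __name__ == "__main__":
    print(lookup_npi("1234567893"))  # placeholder NPI, for illustration only
```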

The monthly total of nearly 27 million consultations, combined with the single-day peak, suggests sustained daily usage in the hundreds of thousands with periodic spikes. These figures describe individual AI-physician interactions, not unique patients or unique doctors. A single physician asking five questions in a shift counts as five consultations. Still, even a conservative reading implies that tens of thousands of clinicians are engaging with the system regularly enough to generate that volume.
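A quick back-of-envelope calculation makes the scale concrete. Only the monthly total, the single-day peak, and the approximate physician count below come from the reporting above; taking the 65 percent adoption claim at face value is an assumption made purely for illustration.

```python
# Back-of-envelope arithmetic on the company-reported figures.
monthly_consultations = 27_000_000  # "nearly 27 million" (reported)
days_in_month = 31                  # March 2026
peak_day = 1_000_000                # single-day record (reported)

avg_per_day = monthly_consultations / days_in_month
print(f"Average daily consultations: {avg_per_day:,.0f}")  # ~871,000

# Assumption for illustration: take the 65% adoption claim at face value.
active_us_physicians = 1_100_000    # approximate U.S. active-physician count
adopters = 0.65 * active_us_physicians
print(f"Implied physician users: {adopters:,.0f}")          # ~715,000

# Under that assumption, each adopter averages roughly one query per day.
print(f"Consultations per adopter per day: {avg_per_day / adopters:.2f}")  # ~1.22
```

Even this crude division is revealing: the record one-million-consultation day sits only about 15 percent above the implied daily average, which is consistent with steady routine use rather than a one-off surge.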

Mount Sinai’s announcement adds a different kind of evidence. Academic health systems typically subject new clinical tools to internal reviews covering privacy, security, and clinical appropriateness before embedding them in EHR workflows. The fact that Mount Sinai proceeded suggests its leadership found enough value and acceptable risk to move forward. The integration means physicians can pull up evidence-based answers without leaving a patient’s chart, eliminating the friction of switching to a separate browser tab or mobile app. That workflow change alone could increase both the frequency and depth of each consultation.
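The sources do not describe the technical mechanism behind the integration. In practice, EHR-embedded apps commonly use the SMART on FHIR standard to launch inside the chart and read patient context, and the sketch below assumes that pattern purely for illustration. The endpoint, token, and patient ID are hypothetical placeholders.

```python
# Illustrative only: the Mount Sinai integration's actual mechanism is not
# public. This assumes a SMART on FHIR app that has already completed the
# OAuth2 launch and holds an access token plus the active patient's ID.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint
ACCESS_TOKEN = "..."                        # from the SMART launch (placeholder)
PATIENT_ID = "..."                          # from the launch context (placeholder)

def active_medications(patient_id: str) -> list[str]:
    """Fetch active medication orders to give an AI assistant chart context."""
    resp = requests.get(
        f"{FHIR_BASE}/MedicationRequest",
        params={"patient": patient_id, "status": "active"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle of MedicationRequest resources
    return [
        entry["resource"]
        .get("medicationCodeableConcept", {})
        .get("text", "unknown medication")
        for entry in bundle.get("entry", [])
    ]
```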

Where the evidence gets thinner

The 65 percent adoption claim has appeared in company communications and subsequent news coverage, but the primary sources reviewed here do not include the underlying methodology, sample size, or an independent audit. OpenEvidence’s own press release emphasizes consultation volume and NPI verification rather than a specific adoption percentage. Without access to the original data or third-party confirmation from a body like the American Medical Association, the figure should be treated as a company-reported estimate, not an independently verified statistic.

Equally unclear is what “clinical consultation” means in practice. A single consultation could be a quick drug-interaction lookup or a detailed differential-diagnosis session lasting several minutes. OpenEvidence has not published a breakdown of consultation types, average session length, or the clinical specialties driving the bulk of queries. That granularity matters: checking antibiotic dosing is a fundamentally different use case from generating a differential diagnosis for a complex presentation, and the stakes of an error differ accordingly.

No published outcome data, such as error rates, diagnostic concordance studies, or adverse-event tracking, are available in the public record as of June 2026. Hospitals considering adoption have limited external evidence on whether the tool improves patient outcomes, reduces diagnostic errors, or simply speeds up information retrieval without measurable clinical benefit. Until such data appear in peer-reviewed journals, most performance claims will rest on internal evaluations that are rarely shared in full.

The regulatory picture

No independent regulatory body, including the FDA, has publicly commented on OpenEvidence’s safety profile or accuracy in the sources available. That is not necessarily unusual. Under the FDA’s 2022 final guidance on Clinical Decision Support software, tools that meet certain criteria, such as displaying the basis of their recommendations and allowing clinicians to independently review the underlying information, may fall outside the scope of device regulation requiring 510(k) clearance. Whether OpenEvidence qualifies under those criteria has not been publicly confirmed.
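For readers who want the test itself, the criteria come from section 520(o)(1)(E) of the Federal Food, Drug, and Cosmetic Act, which the 2022 guidance interprets. A CDS function must satisfy all four prongs to fall outside the device definition; summarizing them as a checklist (paraphrased, not legal advice) makes the open question about OpenEvidence concrete.

```python
# Paraphrase of the four statutory criteria (FD&C Act sec. 520(o)(1)(E))
# applied by FDA's 2022 CDS guidance. ALL four must be met for a software
# function to fall outside the device definition. Whether OpenEvidence
# meets them has not been publicly confirmed.
CDS_NON_DEVICE_CRITERIA = (
    "Does not acquire, process, or analyze a medical image or a signal "
    "from an in vitro diagnostic or signal acquisition system",
    "Displays, analyzes, or prints medical information about a patient "
    "or other medical information such as clinical guidelines",
    "Supports or provides recommendations to a health care professional "
    "about prevention, diagnosis, or treatment of a disease or condition",
    "Enables the professional to independently review the basis for the "
    "recommendations, rather than relying primarily on them",
)

def likely_non_device(criteria_met: tuple[bool, bool, bool, bool]) -> bool:
    """Failing any single prong suggests device regulation may apply."""
    return all(criteria_met)
```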

The Mount Sinai collaboration describes the integration as providing “evidence-based knowledge,” but the available sources do not specify whether the system pulls from peer-reviewed literature in real time, relies on a periodically updated knowledge base, or combines both approaches. That distinction affects how quickly the tool can incorporate new research and how vulnerable it might be to outdated or incomplete data. It also raises questions about how the system handles conflicting studies, low-quality trials, or emerging safety signals that have not yet been widely disseminated.
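The difference is easy to make concrete. Query-time retrieval would mean the system searches the literature the moment a clinician asks, as in the sketch below, which uses NCBI’s public PubMed E-utilities API. Nothing in the available sources indicates that OpenEvidence works this way; the example only illustrates what “real time” would entail.

```python
# Illustrates query-time literature retrieval via NCBI's public E-utilities
# API. This is NOT a description of OpenEvidence's architecture, which the
# available sources do not specify.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def recent_pmids(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs for the newest articles matching a clinical query."""
    resp = requests.get(
        ESEARCH,
        params={
            "db": "pubmed",
            "term": query,
            "retmax": max_results,
            "sort": "pub_date",  # newest first
            "retmode": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

print(recent_pmids("apixaban drug interaction"))
```

A periodically updated knowledge base, by contrast, answers from a snapshot, so a study published yesterday might not surface until the next refresh.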

Transparency is another open question. The available materials do not detail whether clinicians can see citations for every recommendation, whether the system discloses levels of evidence, or how it flags areas where medical consensus is weak. For a tool positioned as evidence-based, those design choices sit at the center of clinical trust. Without a clear view into how recommendations are generated, physicians may struggle to know when to accept, question, or override the AI’s output.

How OpenEvidence fits the broader landscape

OpenEvidence is not operating in a vacuum. Physicians have long relied on clinical decision-support tools like UpToDate, a subscription-based reference platform owned by Wolters Kluwer that has been a staple in hospitals for decades. Newer entrants, including Glass Health and Elion, are also building AI-powered diagnostic and clinical reasoning tools aimed at physicians. What distinguishes OpenEvidence’s pitch is the combination of NPI-verified usage at scale and direct EHR integration through a major academic health system. If the Mount Sinai model proves successful, it could set a template for how AI tools get embedded in hospital infrastructure rather than used as standalone apps.

The speed of adoption is real. Nearly 27 million consultations in a month, even if concentrated among a subset of enthusiastic early users, indicates that AI is already woven into the daily routines of a significant number of physicians. But adoption and impact are not the same thing. The critical next chapter will be written by researchers, not press releases: controlled studies comparing diagnostic accuracy, treatment decisions, and downstream outcomes like readmissions or complications in settings where the tool is used versus where it is not.

What physicians should weigh before adopting AI at the point of care

For clinicians evaluating whether to use OpenEvidence, the practical first step is straightforward: confirm that any AI tool used in clinical settings is compatible with existing EHR infrastructure, and check whether the institution’s compliance and risk teams have reviewed its data-handling practices. The Mount Sinai integration suggests that at least one major system has cleared those hurdles, but each hospital’s regulatory environment and patient population differ. Physicians should ask their administrators whether an internal review has been completed before relying on the tool for patient care decisions, and should clarify whether local policies treat AI recommendations as advisory or as part of the formal medical record.

The consultation numbers are striking, and the institutional backing from Mount Sinai lends credibility that a press release alone cannot. But the gap between rapid adoption and proven clinical benefit remains wide. Until independent data fill in the blanks around safety, accuracy, and patient outcomes, OpenEvidence is best understood as a fast-scaling experiment at the intersection of medicine and machine intelligence, one that thousands of doctors are already running in real time, with real patients, every day.

*This article was researched with the help of AI, with human editors creating the final content.