Morning Overview

An AI tool called OpenEvidence is reportedly used by 65% of U.S. doctors, and most patients have no idea it's helping shape their diagnoses

Somewhere between the moment a doctor listens to your symptoms and the moment they recommend a treatment, a step you cannot see may be taking place. A growing number of physicians are querying an artificial intelligence platform called OpenEvidence, typing in clinical questions and receiving answers drawn from peer-reviewed medical literature, before deciding what to do next. On March 10, 2026, the company behind the tool announced that one million clinical consultations between verified physicians and its AI system occurred in a single day. As of June 2026, no federal regulation requires your doctor to mention that an algorithm played any role in your care.

How OpenEvidence actually works

OpenEvidence is not a consumer chatbot. Access is restricted to clinicians whose credentials are confirmed through National Provider Identifier checks, the same unique ID numbers that appear on every insurance claim a doctor files. Once verified, a physician can pose a clinical question, such as which antibiotic is best supported by evidence for a specific infection in a patient with kidney disease, and receive a synthesized answer grounded in published research.

That design separates it from general-purpose AI tools like ChatGPT or Google’s Gemini, which anyone can use and which draw on broad internet data rather than curated medical literature. OpenEvidence positions itself as an evidence engine, not a diagnostic oracle. The distinction matters because it means the tool is designed to support a physician’s reasoning, not replace it.

An independent preprint study posted on medRxiv tested the platform using 960 constructed clinical scenarios. In 65 of those cases, roughly 7 percent, OpenEvidence declined to assign a triage level entirely, instead requesting additional clinical data before making a recommendation. The system also performed measurably better when objective information such as lab values or imaging results accompanied the patient scenario. Both behaviors suggest the tool is calibrated to be cautious when information is thin, a trait that could reduce reckless guesses but could also slow decisions when time is short.

That preprint has not yet undergone formal peer review, so its findings carry less weight than a published journal article. Still, it offers the most detailed independent look at how the system behaves under structured testing conditions.

The 65 percent claim and what the evidence actually supports

The statistic in the headline, that 65 percent of U.S. doctors use OpenEvidence, has circulated widely in news coverage. But tracing it to a primary source reveals a problem: no published survey, government dataset, or audited company filing available as of June 2026 confirms that specific number. OpenEvidence’s own press release focuses on the single-day consultation volume, not a share-of-physicians figure. Secondary outlets have repeated the percentage without linking to original methodology.

That does not mean the number is fabricated. It may originate from internal company data, investor materials, or an unpublished survey. But until an independent, representative physician survey validates it, the figure should be understood as widely cited rather than independently confirmed. What is confirmed is the scale of activity: one million consultations in 24 hours is an enormous volume, regardless of how many individual physicians generated those queries.

Why patients are left in the dark

Consider a routine scenario. A patient visits a primary care physician for persistent fatigue. The doctor suspects a thyroid issue, but before ordering labs, opens OpenEvidence on a second screen and types in the clinical details. The AI returns a ranked list of differential diagnoses with supporting citations. The doctor adjusts the workup accordingly. The patient leaves with a lab order and no knowledge that an algorithm shaped which tests were chosen.

This is not a hypothetical edge case. It is the ordinary workflow the platform is designed to support. And right now, nothing in federal law compels the physician to disclose that consultation.

The FDA’s framework for clinical decision support software, established under Section 3060 of the 21st Century Cures Act, exempts certain tools from device regulation if they are intended to support (not replace) a clinician’s independent judgment and if the clinician can independently review the basis for the recommendation. OpenEvidence appears to fit within that exemption, which means it operates largely outside the FDA’s premarket review process.

The American Medical Association’s Policy H-480.939 on augmented intelligence, adopted in 2023, calls for transparency, physician oversight, and safeguards against bias in clinical AI. But AMA policies are guidance documents, not enforceable rules. Individual health systems can adopt them, ignore them, or create their own standards.

The Office of the National Coordinator for Health IT (ONC) finalized its HTI-1 rule in 2024, which introduced transparency requirements for AI and predictive algorithms embedded in certified electronic health record systems. But a standalone tool like OpenEvidence, accessed separately from the EHR, may fall outside that rule’s scope. The result is a patchwork: some AI tools face disclosure requirements, others do not, and the boundaries depend on technical architecture more than patient impact.

The documentation gap

Even when a physician uses OpenEvidence responsibly, a second problem emerges: documentation. No public standard requires that AI-informed reasoning be recorded in a patient’s medical chart. If a doctor queries the tool, receives a suggestion, and adjusts a treatment plan, the chart may reflect only the final decision, not the AI input that shaped it.

That gap has consequences in multiple directions. If an AI-influenced recommendation later contributes to a missed diagnosis, malpractice attorneys and insurers will want to reconstruct what the clinician knew and when. Without a log of the AI consultation, that reconstruction becomes guesswork. Conversely, if the tool helps a physician catch a rare condition that might otherwise have been overlooked, the absence of documentation means the save goes unrecorded, invisible to quality improvement programs and institutional learning.

Researchers face the same wall. Studying whether AI-supported decisions improve patient outcomes requires linking consultation data to clinical records. If the AI step is never captured in the chart, the research question becomes nearly unanswerable at scale.

What patients and physicians should be asking

For patients, the most practical question is direct: “Are you using any AI tools to help with my care?” Physicians are not required to volunteer the information, but most are likely to answer honestly if asked. Patients who want to understand the basis for a recommendation can also request the clinical reasoning behind it, a right that exists regardless of whether AI was involved.

For physicians, the questions are harder. How much weight should an AI-generated suggestion carry relative to clinical experience? Should the consultation be documented, even if no rule demands it? What happens when the tool’s recommendation conflicts with a physician’s instinct? These are not abstract ethics-seminar questions. They arise every time a clinician opens the platform and types in a query.

For policymakers, the urgency depends on scale. If AI consultation tools are used by a small cohort of early adopters, voluntary guidelines may be sufficient. If adoption is as widespread as the circulating figures suggest, then the absence of disclosure mandates, documentation standards, and outcome tracking represents a regulatory gap that grows wider with every consultation.

Where the evidence needs to catch up

Several developments could clarify the picture in the months ahead. Independent physician surveys, conducted by professional societies like the AMA or academic research groups, could establish reliable adoption numbers across specialties and practice settings. Peer-reviewed outcome studies linking de-identified EHR data to AI consultation logs could begin to measure whether these tools improve diagnostic accuracy or introduce new categories of error.

Regulators may also sharpen their expectations. The Centers for Medicare and Medicaid Services (CMS), the ONC, and state medical boards are all positioned to issue guidance on when clinicians should disclose AI use, how AI-influenced reasoning should be charted, and what evidentiary standards tools must meet before integration into routine care. None of those frameworks exist in comprehensive form yet, but the volume of AI-assisted clinical activity makes their absence harder to justify with each passing month.

What is clear right now is this: a powerful class of AI tools is being adopted rapidly by physicians, with early signs that at least one major platform behaves cautiously when clinical information is incomplete. What is not clear is how often these tools change outcomes, whether patients benefit from knowing about them, and what happens when something goes wrong. The confirmed facts are striking. The unmeasured consequences are where the real stakes lie.

*This article was researched with the help of AI, with human editors creating the final content.