When a sepsis-prediction algorithm flags a patient in a busy emergency department, the physician staring at the alert has seconds to decide: trust the machine, override it, or try to figure out how it reached its conclusion. Across the country, that scenario is playing out thousands of times a day as hospitals accelerate their adoption of artificial intelligence tools for everything from radiology reads to medication dosing. But the doctors on the receiving end of these recommendations say the guardrails have not kept pace with the rollout.
By spring 2026, federal regulators have laid down new transparency requirements, congressional committees have grilled experts on the risks, and academic researchers have proposed standardized safety labels for clinical AI. Yet no national database tracks how many hospitals use these tools, no court has ruled on who is liable when an algorithm contributes to patient harm, and frontline clinicians still lack a simple, consistent way to evaluate the systems they are asked to rely on.
Federal rules push for transparency
The U.S. Department of Health and Human Services finalized its HTI-1 rule, which imposes new algorithm-transparency obligations on certified health IT systems. Under the rule, developers of certified health IT that offer predictive decision-support tools must meet disclosure standards designed to give clinicians, and the hospitals that deploy those tools, clearer insight into how predictive models generate their outputs. The rule also ties into broader federal interoperability goals, linking data-sharing standards to the machine-learning systems that increasingly sit on top of electronic health records.
The Food and Drug Administration, meanwhile, issued draft guidance on AI-enabled device software functions that spells out what manufacturers must document when seeking marketing authorization. The guidance covers training-data descriptions, performance benchmarks, and plans for ongoing monitoring and updates, treating AI tools as products with a lifecycle rather than static code. Manufacturers would need to submit these details through FDA marketing submissions, creating an auditable paper trail. As of early 2026, the agency has not published post-implementation audit results or incident data that would show whether the new requirements are changing manufacturer behavior in practice.
The National Institute of Standards and Technology released its AI Risk Management Framework 1.0 in January 2023. The framework is voluntary, but it has become a widely cited benchmark for risk identification, measurement, and monitoring. Hospitals and vendors reference it in their own governance policies, and it has shaped the vocabulary that regulators, developers, and health-system leaders use when discussing AI safety.
Expert warnings on automation bias and governance gaps
In February 2024, the Senate Committee on Finance convened a hearing titled “Artificial Intelligence and Health Care: Promise and Pitfalls,” assembling clinicians, policy researchers, and industry representatives to testify on the state of clinical AI oversight.
Michelle M. Mello, JD, PhD, a Stanford professor of law and health policy, submitted written testimony focused on responsible governance of healthcare AI. She zeroed in on automation bias, the well-documented tendency of clinicians to defer uncritically to algorithmic recommendations, calling it a workflow-level danger that no amount of model accuracy can eliminate on its own. “The algorithm alone isn’t the full safety story,” Mello told the committee, arguing that hospital governance structures must account for how humans interact with AI outputs, not just whether those outputs are technically correct.
Mark Sendak, MD, MPP, a physician and AI implementer with the Duke Health AI Partnership, described real-world deployment challenges at the same hearing. Sendak called for guardrails paired with federal infrastructure investments, drawing a parallel to the HITECH Act programs that helped hospitals adopt electronic health records a decade earlier. He pointed to peer-reviewed evidence showing that AI tools can change performance characteristics after initial deployment through model updates that are invisible to the clinicians using them, a problem he argued current oversight mechanisms are not designed to catch.
That hearing produced testimony but no legislation or binding commitments. More than two years later, no publicly documented bill or regulatory roadmap has emerged directly from the committee’s work, though the testimony continues to be cited in policy discussions.
A prescription label for algorithms
One concrete proposal that has gained traction in academic and clinical circles is the concept of “Model Facts” labels. Published in npj Digital Medicine, the idea borrows from the familiar drug-facts format: a standardized, one-page summary that tells a clinician what a machine-learning model does, how it performs, its known limitations, and when it is appropriate to use. Key details, such as the intended patient population, the setting where the model was validated, and major sources of uncertainty, would be presented in plain language accessible at the bedside.
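To make the idea concrete, here is a minimal sketch of how such a label might be represented in software. It is illustrative only: the field names, class name, and example values are assumptions modeled on the elements described above (intended use, intended population, validation setting, performance, limitations), not the published proposal's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only. Field names and example values are assumptions
# based on the elements described in the Model Facts proposal; they are not
# the published label's actual schema.
@dataclass
class ModelFactsLabel:
    model_name: str
    version: str
    intended_use: str           # what the model does and when to use it
    intended_population: str    # patients the model was designed for
    validation_setting: str     # where and how the model was validated
    performance_summary: str    # headline performance in plain language
    limitations: List[str] = field(default_factory=list)
    uncertainty_sources: List[str] = field(default_factory=list)

    def bedside_summary(self) -> str:
        """Render a short plain-language summary a clinician could scan at the bedside."""
        return "\n".join([
            f"{self.model_name} (version {self.version})",
            f"Use for: {self.intended_use}",
            f"Intended patients: {self.intended_population}",
            f"Validated in: {self.validation_setting}",
            f"Performance: {self.performance_summary}",
            "Limitations: " + "; ".join(self.limitations),
            "Sources of uncertainty: " + "; ".join(self.uncertainty_sources),
        ])

# A hypothetical entry for a sepsis early-warning tool.
label = ModelFactsLabel(
    model_name="Example Sepsis Early Warning Model",
    version="2.1",
    intended_use="Flag adult emergency department patients at elevated risk of sepsis",
    intended_population="Adults presenting to the emergency department",
    validation_setting="Single academic medical center, retrospective cohort",
    performance_summary="AUROC 0.82 in retrospective validation; not prospectively tested",
    limitations=["Not validated in pediatric patients", "Performance may drift after EHR changes"],
    uncertainty_sources=["Retrospective validation only", "Local case mix may differ from training data"],
)
print(label.bedside_summary())
```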
The concept is elegant, but it remains untested at scale. No published primary research documents whether Model Facts labels change physician behavior, reduce diagnostic errors, or improve patient outcomes. Hospitals considering adoption are left to infer potential benefits from theory and analogy to pharmaceutical labeling rather than from randomized trials or large observational studies. Six years after the proposal’s publication, the gap between concept and clinical validation persists.
Who pays when an algorithm gets it wrong?
Liability may be the thorniest unresolved question. A New England Journal of Medicine analysis by Mello and Guha laid out a framework for distributing responsibility when AI contributes to patient harm. Their model assigns potential liability among clinicians, hospitals, and software vendors depending on the circumstances. A physician who ignores a clear algorithmic warning, for instance, faces a different legal exposure than one pressured by hospital policy to follow an algorithm’s recommendation without question.
The framework applies existing malpractice and product-liability doctrines to AI-assisted care rather than assuming entirely new statutes will be needed, which is part of its appeal in legal and policy circles. But it remains theoretical. No public record of court decisions or settlements specifically tied to AI-driven clinical errors has surfaced. Judges may ultimately treat algorithms as just another diagnostic tool within the standard of care, or they may emphasize the role of vendors and software updates in ways that shift responsibility away from individual doctors. Until cases are litigated, hospitals and developers are operating under significant legal uncertainty.
What hospitals still do not know
Perhaps the most striking gap is the absence of basic national data. No publicly available federal survey or official dataset quantifies how many U.S. hospitals have deployed clinical AI tools or how many adverse events those tools have contributed to. Individual case reports and vendor marketing materials suggest rapid uptake, but they do not add up to a comprehensive picture. The American Hospital Association and organizations like ECRI have published surveys on technology adoption, yet none provides the granular, tool-level deployment data that policymakers and researchers need to assess systemwide risk.
The HTI-1 rule and FDA guidance set expectations for transparency and documentation, but neither agency has published compliance data or enforcement results. That leaves a gap between regulatory design and demonstrated impact. Congressional testimony from Mello and Sendak represents informed expert opinion, not consensus positions of medical associations or hospital systems. And peer-reviewed proposals like Model Facts labels and the NEJM liability framework offer detailed blueprints, not proven safeguards.
Hospitals as their own proving ground
For clinicians and hospital leaders navigating this landscape in 2026, the practical takeaway is that federal rules are pushing toward greater transparency and lifecycle oversight but stop short of prescribing detailed clinical workflows or liability allocations. Expert testimony and academic frameworks offer useful starting points, but they do not substitute for local governance: multidisciplinary oversight committees, clear documentation of every AI tool in use, training programs that address automation bias head-on, and mechanisms for reporting and reviewing AI-related incidents.
In the absence of comprehensive national data, hospitals that adopt AI tools are, in effect, generating the next wave of evidence about how these systems perform in real clinical settings. Whether today’s patchwork of federal rules, voluntary frameworks, and expert proposals will prove sufficient to protect patients is a question that will be answered not in policy papers but in emergency departments, operating rooms, and, eventually, courtrooms.
*This article was researched with the help of AI, with human editors creating the final content.