When a hospital in the Midwest upgrades its electronic health records system this spring, its physicians will encounter something new: a disclosure panel explaining exactly how the AI tool flagging potential sepsis cases was trained, what data it relies on, and where its blind spots lie. That transparency requirement, mandated by a federal rule with compliance dates phasing in through 2025, is just one piece of a broader regulatory shift now reshaping how artificial intelligence enters American clinical practice.
“We went from having no idea how the sepsis algorithm worked to being able to pull up a summary of its training data and known failure modes,” said Dr. Karen Lyles, a hospitalist at a 400-bed teaching hospital in Ohio, describing her system’s early rollout of the new transparency features. “It doesn’t answer every question, but it changes the conversation at the bedside.”
Across Washington, three agencies have moved almost simultaneously. The Food and Drug Administration, the Office of the National Coordinator for Health Information Technology (ONC), and the Centers for Medicare and Medicaid Services (CMS) have each taken major regulatory actions that together represent a sweeping update to health data governance. Meanwhile, a peer-reviewed trial published in Nature Medicine has shown that large language model assistance can sharpen physician reasoning, even as the rapid spread of AI-driven workflows raises hard questions about cybersecurity, fairness, and whether the evidence can keep pace with the technology.
A new FDA framework for AI that evolves
In its draft guidance on AI-enabled device software functions, the FDA laid out a lifecycle approach to regulating algorithms that learn and change after they reach the market. Rather than treating approval as a one-time gate, the agency now expects manufacturers to document how they will manage updates, monitor real-world performance, and address safety signals that surface months or years after deployment.
The agency’s public list of authorized AI and machine-learning devices already catalogs over 900 products authorized through pathways such as 510(k) clearance and De Novo review. The list has grown sharply in recent years, with new entries spanning radiology, cardiology, pathology, and ophthalmology. For hospital technology committees weighing purchases, it offers a centralized, if incomplete, record of which tools have actually passed federal review.
What the list does not yet provide is post-market performance data. The draft guidance calls for ongoing monitoring, but the FDA has not published systematic reports on whether previously cleared AI tools have experienced accuracy drift or required significant updates after authorization. Hospitals still rely heavily on manufacturer disclosures and their own quality-improvement tracking to catch problems.
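The in-house quality-improvement tracking that hospitals fall back on can be quite simple in principle. The sketch below, with hypothetical numbers and a made-up `flag_drift` helper, shows one minimal approach: compare each month's observed alert precision against the precision measured at validation and flag months that fall below a tolerance band. Real drift monitoring would track more metrics (sensitivity, calibration, subgroup performance), but the shape is the same.

```python
from dataclasses import dataclass

@dataclass
class MonthlyBatch:
    """Confirmed outcomes for one month of sepsis alerts (illustrative)."""
    label: str            # e.g. "2025-01"
    true_positives: int   # alerts that matched a confirmed sepsis case
    false_positives: int  # alerts with no confirmed sepsis
    false_negatives: int  # confirmed cases the model missed

def precision(batch: MonthlyBatch) -> float:
    """Fraction of alerts that were correct (positive predictive value)."""
    denom = batch.true_positives + batch.false_positives
    return batch.true_positives / denom if denom else 0.0

def flag_drift(batches: list[MonthlyBatch],
               baseline_precision: float,
               tolerance: float = 0.10) -> list[str]:
    """Return labels of months whose precision fell more than
    `tolerance` below the precision observed at validation."""
    return [b.label for b in batches
            if precision(b) < baseline_precision - tolerance]

# Example: model validated at 0.60 precision; February slips to 0.40.
history = [
    MonthlyBatch("2025-01", true_positives=30, false_positives=20, false_negatives=5),
    MonthlyBatch("2025-02", true_positives=20, false_positives=30, false_negatives=5),
]
print(flag_drift(history, baseline_precision=0.60))  # → ['2025-02']
```

A threshold like this is deliberately blunt; the point is that even without FDA-published post-market reports, a hospital can detect gross accuracy drift from data it already collects.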
Transparency rules hit EHR vendors
ONC’s HTI-1 Final Rule, finalized in early 2024 with compliance dates phasing in through 2025, requires certified health IT systems to disclose baseline information about any predictive algorithms embedded in their platforms. That means EHR vendors such as Epic, Oracle Health, and MEDITECH must surface details including a model’s intended purpose, training data characteristics, and known limitations.
The rule marks a philosophical shift: opacity around clinical algorithms is no longer acceptable in federally certified systems. But compliance is uneven. Large vendors with dedicated regulatory teams have begun rolling out transparency modules, while smaller EHR companies and the independent practices that use them face steeper costs to redesign interfaces and documentation. No federal compliance audit results have been published as of spring 2026, leaving the real adoption picture unclear.
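The kind of disclosure the rule requires lends itself to a structured record that an EHR can render at the point of care. A minimal sketch follows; the field names here are hypothetical illustrations of the categories described above, not the rule's exact regulatory attribute list.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmDisclosure:
    """Illustrative transparency record for a predictive algorithm.

    Field names are hypothetical; HTI-1 defines its own list of
    required source attributes.
    """
    name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Format the record as the plain-text panel a clinician might see."""
        lines = [
            f"Model: {self.name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data}",
            "Known limitations:",
        ]
        lines += [f"  - {limit}" for limit in self.known_limitations]
        return "\n".join(lines)

panel = AlgorithmDisclosure(
    name="Sepsis Early Warning (hypothetical)",
    intended_use="Flag adult inpatients at elevated risk of sepsis",
    training_data="2018-2022 inpatient vitals and labs, single health system",
    known_limitations=["Not validated in pediatric populations"],
)
print(panel.render())
```

Even a bare-bones record like this supports the bedside conversation Dr. Lyles describes: the clinician can see at a glance whether the patient in front of them resembles the population the model was trained on.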
CMS rewires the data plumbing
Separately, CMS finalized its Interoperability and Prior Authorization rule (CMS-0057-F), which requires certain payers to support standardized APIs and electronic prior-authorization workflows. Compliance deadlines are staggered, with key provisions taking effect in 2026 and 2027.
For clinicians, the practical promise is fewer fax-based delays and faster coverage decisions. For AI developers, the rule creates something arguably more consequential: standardized data pipelines that make it feasible to deploy automated decision support at scale. Tools that predict prior-authorization outcomes, surface relevant clinical history during chart review, or flag documentation gaps all depend on the kind of structured, interoperable data exchange CMS is now mandating.
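The standardized exchange CMS is mandating is built on HL7 FHIR. As a rough illustration of what "electronic prior authorization" means at the data level, the sketch below assembles a minimal FHIR R4 `Claim` resource with `use` set to `preauthorization`. This is a simplification: a real exchange under the Da Vinci Prior Authorization Support pattern bundles in coverage, provider, and supporting clinical documentation, and the function name here is my own.

```python
def build_prior_auth_claim(patient_id: str,
                           payer_id: str,
                           procedure_code: str) -> dict:
    """Assemble a minimal FHIR R4 Claim resource requesting prior
    authorization for a single procedure. Illustrative only: real
    requests carry coverage, provider, and supporting-info elements."""
    return {
        "resourceType": "Claim",
        "status": "active",
        "use": "preauthorization",  # vs. "claim" or "predetermination"
        "patient": {"reference": f"Patient/{patient_id}"},
        "insurer": {"reference": f"Organization/{payer_id}"},
        "item": [{
            "sequence": 1,
            "productOrService": {
                "coding": [{
                    # Standard FHIR system URI for CPT procedure codes
                    "system": "http://www.ama-assn.org/go/cpt",
                    "code": procedure_code,
                }]
            },
        }],
    }

# Example: request authorization for an MRI of the brain (CPT 70553).
request = build_prior_auth_claim("pat-123", "payer-001", "70553")
```

The significance for AI developers is less the payload itself than its predictability: once every in-scope payer accepts the same structured request, tools that pre-fill, validate, or predict the outcome of these requests can be built once rather than per-payer.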
Early clinical evidence: promising but narrow
A randomized controlled trial by Goh et al., published in Nature Medicine, tested whether GPT-4 assistance improved physician performance on simulated clinical reasoning tasks. Ninety-two physicians were randomized to work with or without the model, and the AI-supported group showed statistically significant gains in diagnostic reasoning quality. The study also documented a tradeoff: AI assistance added steps to the decision process, sometimes lengthening task completion time.
The trial is methodologically strong, with clearly defined endpoints and a reported effect size with confidence intervals. But its scope is deliberately narrow. Physicians worked through structured vignettes, not live patient encounters with incomplete records, comorbidities, and time pressure. No large-scale longitudinal study has yet measured whether AI-assisted reasoning translates into better patient outcomes, fewer diagnostic errors, or lower costs in routine care. That gap between controlled proof of concept and bedside reality remains the central unanswered question in clinical AI research.
Cybersecurity as a patient safety issue
The Department of Health and Human Services is treating the AI expansion as inseparable from cybersecurity risk. The HHS Office for Civil Rights published a Notice of Proposed Rulemaking to update the HIPAA Security Rule, explicitly framing cyberattacks against healthcare organizations as a patient-safety crisis. The proposal, issued in January 2025 and still awaiting finalization as of spring 2026, would tighten requirements for risk analysis, incident response, encryption, and technical safeguards for electronic protected health information.
The timing is not coincidental. Every new AI tool embedded in an EHR, delivered through a web portal, or accessed via a mobile app widens the attack surface for threat actors. The proposed rule would force health systems to account for that expanded risk, but until it is finalized and enforcement patterns emerge, organizations are left to interpret how aggressively they should harden their infrastructure.
HHS has also published an AI Strategic Plan outlining how the department intends to govern AI adoption across its agencies, with stated goals around trustworthy AI, alignment with public health priorities, and equity. Specific metrics on federal AI pilot programs, including which clinical or administrative domains they target and what results they have produced, have not yet been made public.
What physicians and health systems should watch through mid-2026
The strongest signals in this regulatory wave come directly from primary federal documents: the FDA’s draft guidance, the HTI-1 rule, CMS-0057-F, and the HIPAA Security Rule proposal. These set binding or proposed requirements, establish deadlines, and define the legal terms under which AI tools will operate in American healthcare.
But several critical questions remain open. Will algorithm transparency rules actually change how clinicians evaluate and trust AI recommendations, or will disclosure panels become the health IT equivalent of unread terms-of-service agreements? Will post-market monitoring catch performance drift before patients are harmed? And will AI tools narrow health disparities, as proponents hope, or widen them by performing best in well-resourced systems with cleaner data?
The answers will not come from any single rule or trial. They will emerge from a combination of updated federal enforcement data, independent evaluations, and multi-site studies tracking both clinical and operational outcomes over time. For now, the regulatory architecture is being built. Whether it holds up under the weight of the technology it is meant to govern is the question that will define the next phase of American healthcare AI.
*This article was researched with the help of AI, with human editors creating the final content.