When a Mayo Clinic patient logs into the health system’s portal and asks why a recent hemoglobin result flagged abnormal, the answer now comes from a chatbot that can read the patient’s own chart. The tool, called MAYA, pulls from lab values, imaging reports, and care plans tied to that individual, not from the broad training data that powers consumer AI products like ChatGPT. It is one of the most visible signs that hospitals are scrambling to offer governed alternatives before patients settle permanently into the habit of asking general-purpose AI for medical guidance.
That habit is already well established. An AP-NORC poll conducted in late 2025 found that roughly one in four U.S. adults had used an AI tool for health information or advice within the previous 30 days. The figure stunned some health-system leaders, because it suggested millions of people were already bypassing nurse lines, patient portals, and even web searches in favor of conversational AI that operates outside any clinical oversight.
Convenient, but patients know the limits
A Pew Research Center survey published in April 2026 sharpened the picture. People who use AI chatbots for health questions are more likely to describe the tools as convenient than accurate. In other words, patients are not naive about the quality of what they receive. They keep using the tools because the alternative, waiting days for a callback or navigating a clunky portal, feels worse. That gap between speed and trustworthiness is exactly the opening hospitals are trying to exploit.
Mayo Clinic’s bet with MAYA is that personalization closes the trust deficit. A patient asking about medication side effects could get an answer that accounts for documented allergies and kidney function, details a consumer chatbot simply cannot see. A question about a blood-pressure trend could reference six months of readings already stored in the electronic health record. The underlying logic is straightforward: if the AI starts with the same data a treating physician would review, its output should be more relevant and less likely to mislead.
What MAYA can and cannot do
MAYA functions as a conversational layer on top of a patient’s longitudinal health record. According to Mayo Clinic’s own description of the tool, it explicitly acknowledges when it accesses protected health information, a transparency step that consumer chatbots never need to take because they never see a user’s chart. The tool is designed to explain results, summarize care plans, and help patients prepare questions for upcoming appointments.
What it does not do, at least based on publicly available information, is replace clinical judgment. Mayo Clinic has not published independent accuracy benchmarks, adoption figures, or patient-satisfaction data for MAYA. That means the strongest thing that can be said right now is that the product exists, connects to real patient data, and is accessible through Mayo’s portal. Whether it actually reduces confusion, prevents unnecessary emergency visits, or changes how patients interact with their care teams remains unproven in any peer-reviewed setting.
The privacy question patients will ask
A chatbot that reads personal medical records introduces a tension hospitals have not fully resolved. Consumer surveys consistently show that Americans worry about health-data security, and granting an AI tool access to lab results, diagnoses, and prescription histories could feel like a step too far for some patients, even if the tool sits inside a HIPAA-governed system. MAYA’s transparency about data access is a start, but no published research has measured how willing patients are to trade privacy for personalized AI answers. Hospitals promoting these tools will need to answer that question with data, not just reassurance.
Beyond Mayo: a thin but growing field
Mayo Clinic is not operating in isolation. Several large health systems, including Cleveland Clinic and Mass General Brigham, have publicly discussed AI-powered patient communication tools in recent years, though none has released the kind of enrollment numbers or accuracy metrics that would allow a direct comparison. The Office of the National Coordinator for Health IT has flagged patient-facing AI as an area of growing regulatory interest, and the FDA continues to refine its framework for clinical decision-support software, a category that could eventually encompass tools like MAYA depending on how their outputs are classified.
For now, the landscape is fragmented. Well-resourced academic medical centers are furthest along, while community hospitals and rural systems, which serve the patients least likely to have quick access to a specialist, have fewer resources to build or license similar technology. If hospital chatbots remain concentrated at elite institutions, the convenience gap that drives patients toward ChatGPT will persist for most of the country.
How patients should weigh hospital chatbots against consumer AI
The practical guidance is simple. If a health system offers an AI tool that connects to personal medical records, the answers it generates will at least reflect an individual’s actual clinical data rather than generic training patterns. That does not guarantee correctness; any AI system can produce errors or miss important nuances. Patients should treat chatbot output as a starting point for conversation with a clinician, not a replacement for professional judgment, and should be cautious about making significant medical decisions based solely on what a bot tells them.
The broader trajectory is clear even if the details are still forming. One in four U.S. adults is already asking AI for health advice. Hospitals are responding by trying to channel that demand through systems they control, where answers are grounded in real patient data and subject to institutional accountability. Whether those tools can match the frictionless experience of typing a question into ChatGPT at 2 a.m. will determine how much of the market they capture. If they feel slow, limited, or hard to find, patients will keep defaulting to whatever app is already on their phones, accuracy concerns and all.
*This article was researched with the help of AI, with human editors creating the final content.*