Morning Overview

A doctor explains the benefits and risks of using AI for health advice

Millions of people now type their symptoms into AI chatbots before calling a doctor, and the medical profession is split on whether that trend helps or harms patients. On one side, artificial intelligence can speed up diagnoses and widen access to basic health information. On the other, eroding safety guardrails in chatbot responses, documented cases of physical harm from AI-generated advice, and a regulatory framework still catching up to the technology all point to serious risks that most users never consider.

When Chatbots Drop Their Safety Warnings

One of the least-discussed dangers of using AI for health questions is that the tools themselves are becoming less cautious over time. A peer-reviewed longitudinal study published in npj Digital Medicine tracked how medical disclaimers and safety messaging changed across successive versions of generative AI models. The researchers found that these warnings declined sharply with each update, falling to 0% in some test conditions. That means a user asking about chest pain or drug interactions may receive a confident, unqualified answer with no prompt to consult a physician.

This erosion matters because patients often treat chatbot responses with the same trust they give a human clinician. Duke physician Ayman Ali has observed that people address chatbots like human listeners, even expressing frustration when the answers feel unhelpful. That dynamic can lead users to follow AI suggestions without the skepticism they might apply to a web search result, especially when the chatbot’s tone mimics a caring professional and omits clear reminders about its limitations.

Real Harm From AI-Guided Self-Treatment

The risk is not theoretical. A U.S. man developed bromism, a rare and dangerous condition caused by excess bromide in the body, after asking ChatGPT how to cut salt from his diet. The case, described in a clinical report, illustrates a pattern that physicians worry about: patients acting on AI-generated dietary or medical guidance without understanding that the model has no ability to assess their individual health history, medications, or risk factors. In this instance, the chatbot’s suggestion led the man to replace table salt with a bromide-containing substitute for an extended period, with serious neurological and metabolic consequences.

Mental health is another high-stakes area. A peer-reviewed evaluation in Scientific Reports assessed 29 mental-health chatbot agents on simulated suicide-risk scenarios and found a troubling number of “marginal” responses, where the advice did not clearly direct users to emergency help. General-purpose chatbots, the kind most people actually use, performed worse at crisis detection than specialized tools. For someone in acute distress, a vague or poorly calibrated answer could delay life-saving intervention or even normalize self-harm ideation.

What AI Can Actually Do Well in Medicine

Dismissing AI entirely would ignore genuine clinical value. A narrative review that screened 8,796 research articles on AI in health care found that the technology can reduce diagnostic errors, accelerate image analysis in radiology and pathology, and help clinicians manage large volumes of patient data more efficiently. When AI operates inside a clinical workflow with physician oversight, the evidence for improved outcomes is considerably stronger than when patients use consumer chatbots on their own.

The distinction between supervised medical AI and unsupervised consumer chatbots is one that most public discussion overlooks. A radiology algorithm validated against thousands of labeled scans and monitored by a hospital’s quality team is a fundamentally different product from a general-purpose language model answering a midnight question about a rash. Treating both as “AI in health care” collapses a divide that often determines whether the technology helps or hurts. In the clinic, AI is typically one tool among many, used by trained professionals who can cross-check its suggestions; at home, the chatbot may be the only “expert” a worried user consults.

Regulators Are Playing Catch-Up

Federal agencies are working to close the governance gap, though their pace trails consumer adoption. The U.S. Food and Drug Administration has released draft guidance on AI-enabled software in medical devices, outlining expectations for safety, effectiveness, and lifecycle management in marketing submissions. That framework addresses regulated devices integrated into clinical care, not the general-purpose chatbots that most people use for health questions.

Beyond devices, the FDA has also proposed a framework to strengthen the credibility of AI models used in drug and biological product submissions, drawing on the agency’s experience reviewing hundreds of applications that include AI components. On the consumer protection side, the Federal Trade Commission has opened an inquiry into AI “companion” chatbots, using its 6(b) authority to examine how these products handle personal data and what risks they pose, especially to children and teens. Yet neither effort directly governs the casual health queries that dominate everyday interactions with general-purpose chatbots, leaving a regulatory gray zone where powerful tools operate with limited oversight.

A Bias Problem That Could Widen Disparities

Beyond accuracy, there is a less visible risk: AI chatbots trained on skewed datasets can perpetuate racial and socioeconomic biases in the advice they give. Reporting from AP News has highlighted how some systems reproduce discriminatory patterns in health-related responses, echoing long-standing inequities in medical research and clinical practice. If a model has seen fewer examples from marginalized communities, it may be more likely to dismiss symptoms, underestimate risk, or offer generic guidance that fails to account for structural barriers to care.

These biases can compound existing disparities. Communities that already face limited access to clinicians may turn to free chatbots as a substitute, only to receive lower-quality or less culturally appropriate advice. In mental health contexts, biased responses can influence how seriously the model treats expressions of distress from different demographic groups. Without transparency about training data and performance across populations, users have no way to know whether the chatbot’s reassuring answer is grounded in evidence or in a skewed statistical pattern.

Emerging Safety Guidance for the Public

Recognizing these risks, researchers and public health experts are beginning to articulate practical rules for safer use. A team at the University of Birmingham has released what it describes as the first public guide focused specifically on AI health chatbots, aimed at helping non-experts understand when and how to rely on these tools. The guidance emphasizes that chatbots should not replace professional diagnosis, urges users to treat any urgent or alarming symptoms as reasons to seek in-person care, and recommends cross-checking AI-generated information with trusted health websites or clinicians.

Public agencies are also starting to build infrastructure for tracking problems. In the United States, the Department of Health and Human Services operates the Safety Reporting Portal, where clinicians and patients can report adverse events and safety issues related to health products and technologies. While not limited to AI, this kind of system could become an important channel for documenting harms linked to chatbot advice, giving regulators and researchers data to identify patterns and respond.

How Patients Can Use Health Chatbots More Safely

For now, the burden of navigating AI health advice falls largely on users and clinicians. Experts recommend a few practical steps. First, treat chatbots as educational tools, not diagnostic authorities: use them to learn terminology, prepare questions for your doctor, or understand general treatment options, but avoid making major decisions (starting, stopping, or changing medications) based solely on their output. Second, be wary of confident, specific recommendations that are not accompanied by clear caveats or suggestions to contact a professional, especially for symptoms like chest pain, shortness of breath, severe headaches, or thoughts of self-harm.

Third, protect your privacy. Many chatbots retain conversation data that can be used to refine models or, in some cases, for marketing. Avoid sharing full names, addresses, insurance details, or other highly identifying information alongside sensitive health questions. Finally, clinicians can help by asking patients whether they are using AI tools, discussing what those systems can and cannot do, and gently correcting dangerous misconceptions. Rather than dismiss chatbots outright, clinicians can fold them into honest conversations about digital health, which may be the most realistic way to reduce harm.

AI chatbots are likely to remain a first stop for health questions, especially for people who face long wait times or lack access to primary care. The challenge for policymakers, clinicians, and technology companies is to align these tools with basic standards of safety, transparency, and equity before more patients are quietly hurt by advice that sounds authoritative but is, at bottom, only an educated guess from a machine.

*This article was researched with the help of AI, with human editors creating the final content.*