
Artificial intelligence is rapidly moving into the most intimate corners of people’s lives, including the moments when they are anxious, depressed, or in crisis. As chatbots and mental health apps promise instant support, a growing number of clinicians and researchers are warning that these tools could quietly deepen the very problems they claim to solve. Instead of easing the strain on overstretched systems, poorly designed or unregulated AI could amplify misinformation, delay real treatment, and leave vulnerable users more isolated than before.
I see a widening gap between the marketing pitch of “therapy in your pocket” and the messy reality of human distress, where context, nuance, and accountability matter. The risk is not that AI will suddenly replace every therapist, but that it will slip into the role of first responder for people in pain, without the safeguards, training, or ethical guardrails that human professionals are required to follow.
AI is already stepping into the therapist’s chair
AI tools are no longer hypothetical in mental health care; they are already woven into chatbots, symptom checkers, and coaching apps that present themselves as companions for people struggling with anxiety, loneliness, or suicidal thoughts. Large language models can generate soothing messages, reflective questions, and even prompts modeled on cognitive behavioral therapy, which makes them attractive to platforms that want to scale emotional support at low cost. In practice, that means a teenager awake at 2 a.m. with racing thoughts might turn first to a chatbot rather than a parent, teacher, or clinician.
Researchers who study digital psychiatry are increasingly concerned that these systems are being deployed faster than they are being evaluated. Experts at one leading research center have warned that generative tools marketed for “support” can misread risk, fabricate clinical-sounding advice, and blur the line between information and treatment, especially when they are embedded in consumer apps that lack clear oversight, as highlighted in an analysis of AI in mental health care. When AI is framed as a quasi-therapist rather than a narrow assistant, users may overestimate its competence and underestimate its blind spots.
Why experts see a risk of worsening crises, not easing them
Clinicians who work on the front lines of suicide prevention say the most troubling scenario is not a chatbot that fails to help, but one that gives dangerously wrong or minimizing responses. In several documented tests, AI systems have responded to explicit mentions of self-harm with generic wellness tips or platitudes, rather than urgent guidance to seek immediate human help. That kind of misfire can reinforce a person’s sense that their pain is not being taken seriously, which is a known driver of escalating crises.
Some mental health professionals interviewed in recent coverage have warned that AI tools could intensify crises by normalizing self-diagnosis, encouraging people to “talk it out” with a bot instead of contacting emergency services, and offering false reassurance that a situation is under control when it is not. Those concerns have been echoed in video reporting on why experts warn AI could intensify mental health crises. When a system is available 24/7 and framed as a safe listener, it can become the primary outlet for someone in acute distress, yet it has no legal duty of care and no reliable way to intervene offline.
Teenagers and young adults are on the front line of AI mental health risks
Young people are among the earliest and heaviest users of conversational AI, which puts them at particular risk when these tools are treated as emotional first aid. Adolescents already navigate a digital environment saturated with social media, online bullying, and algorithmic feeds that can amplify body image issues or hopelessness. Adding AI “friends” or mental health chatbots into that mix can create a powerful illusion of connection without the grounding of real-world support.
Local clinicians and school counselors have described cases where teenagers with suicidal thoughts turned to general-purpose chatbots for guidance, sometimes receiving responses that were vague, dismissive, or clinically inappropriate. That pattern has prompted warnings against relying on AI for mental health support for teens. When a 15-year-old confides in an AI instead of a trusted adult, there is no guarantee that the system will recognize red flags, no mechanism to contact guardians, and no shared understanding of the teen’s history or environment, all of which are central to safe crisis response.
Safeguards lag far behind the speed of deployment
One of the starkest problems is that AI mental health tools are being rolled out in a regulatory gray zone, where consumer wellness apps can sidestep the standards that govern licensed therapy or medical devices. Many platforms do not clearly disclose whether their chatbots are powered by large language models, what data they collect, or how they respond to self-harm disclosures. That opacity makes it difficult for users, clinicians, or regulators to assess whether the systems are safe for people in acute distress.
Recent reporting has underscored that several AI-driven support tools lack robust crisis protocols, human backup, or transparent testing, even as they are marketed to people struggling with depression or anxiety, raising alarms that AI lacks safeguards for people who are struggling. Without clear standards for risk assessment, escalation, and accountability, the burden falls on vulnerable users to judge whether a chatbot is giving sound advice, a task that is unrealistic when someone is overwhelmed or suicidal.
Misinformation and “confidently wrong” advice can derail treatment
Large language models are designed to produce fluent, plausible text, not to guarantee clinical accuracy, which creates a dangerous mismatch when they are used for mental health guidance. These systems can generate detailed but incorrect explanations of diagnoses, misinterpret symptoms, or suggest coping strategies that conflict with evidence-based care. Because the responses sound authoritative, users may accept them as fact, even when they contradict what a clinician has recommended.
Researchers who have systematically tested AI-generated health content have found that mental health answers can be incomplete, misleading, or framed in ways that downplay the need for professional evaluation, concerns that are documented in a peer-reviewed review of AI and health information quality. Psychologists have also warned that generative systems can spread emotionally charged myths about trauma, personality disorders, or medication, feeding online echo chambers that distort public understanding of mental illness, a pattern explored in depth in an analysis of AI-driven mental health misinformation. When people build their self-concept or treatment decisions on that shaky foundation, it can delay effective care and entrench stigma.
AI can quietly reinforce unhealthy habits and risky choices
The risks are not limited to formal diagnoses or crisis moments; AI can also nudge everyday behavior in ways that undermine long-term mental health. Many people now ask chatbots for advice on sleep, exercise, diet, or substance use, all of which are tightly linked to mood and resilience. If the system offers generic tips that ignore medical history, or suggests extreme regimens that are unsustainable, users can end up cycling through guilt, failure, and self-criticism when they cannot follow the plan.
Writers who have experimented with AI for lifestyle coaching have described how chatbots can confidently recommend restrictive diets, intense workout schedules, or supplement stacks without acknowledging contraindications or psychological triggers, raising the question of whether AI could worsen health-related choices. When someone with a history of disordered eating or compulsive exercise turns to an AI coach, even well-intentioned advice can reinforce obsessive patterns, and there is no built-in mechanism for the system to recognize when it is feeding a harmful cycle.
Human connection is still the core of effective care
Despite the allure of instant, always-on support, decades of research in psychology and psychiatry point to a consistent finding: the quality of the therapeutic relationship is one of the strongest predictors of positive outcomes. Empathy, trust, and a sense of being genuinely seen are difficult to simulate with pattern-matching algorithms that do not have lived experience, personal memory, or moral responsibility. When AI is used as a stand-in for that relationship, it risks offering a hollow version of care that feels responsive on the surface but lacks depth.
Some therapists and relationship coaches have argued that overreliance on AI companions can erode people’s willingness to engage in the messy, reciprocal work of human connection, especially for those who already feel rejected or misunderstood, a concern articulated in a detailed caution against AI and a plea for human connection. When someone in pain learns to confide primarily in a nonjudgmental chatbot that never sets boundaries or expresses its own needs, real-world relationships can come to feel more threatening, which in turn deepens isolation, a key driver of depression and suicidality.
Clinicians are experimenting with AI, but warn against replacing judgment
Many mental health professionals are not rejecting AI outright; instead, they are exploring narrow uses that support, rather than supplant, clinical judgment. Some are testing tools that help summarize session notes, flag potential medication interactions, or generate psychoeducational materials that can be reviewed and edited before reaching patients. In those settings, AI functions as a back-office assistant, with a human firmly in the loop to interpret and correct its output.
In public discussions and interviews, psychiatrists and psychologists have emphasized that any use of AI in care must preserve the clinician’s responsibility for risk assessment and treatment planning, a theme that has surfaced repeatedly in expert panels on AI and mental health practice. When tools are framed clearly as aids for professionals, with transparent limits and oversight, they can help free up time for direct patient contact. The danger arises when the same underlying models are repackaged as autonomous “therapists” for consumers, without the guardrails that clinicians naturally apply.
What safer, more honest use of AI in mental health would look like
If AI is going to play a role in mental health, it will need to be constrained by design choices that prioritize safety over engagement metrics. That starts with clear labeling so users know when they are interacting with a machine, explicit statements that the system is not a substitute for professional care, and built-in prompts that encourage people to reach out to human support when they mention self-harm, violence, or profound hopelessness. It also means limiting the scope of what consumer-facing chatbots are allowed to do, focusing on general education and resource navigation rather than personalized diagnosis or crisis counseling.
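To make the idea of built-in escalation concrete, the sketch below shows one highly simplified way a consumer chatbot could route possible crisis language to a fixed handoff message instead of letting the model improvise. It is a minimal illustration, not a description of any real product: the phrase list, the escalate_or_respond function, and the call_model parameter are all hypothetical, and keyword matching alone is nowhere near sufficient for real risk detection.

```python
# Minimal illustrative sketch, not production code: a hypothetical pre-filter
# that returns a fixed escalation message when a user message appears to
# contain crisis language, instead of passing it to the language model.
# All names here are assumptions for illustration only.

CRISIS_PHRASES = (
    "kill myself",
    "end my life",
    "hurt myself",
    "suicide",
)

ESCALATION_MESSAGE = (
    "I'm not able to help with this safely. Please contact a crisis line, "
    "emergency services, or a trusted person right now."
)


def escalate_or_respond(user_message: str, call_model) -> str:
    """Return a fixed escalation message for possible crisis language;
    otherwise defer to the underlying model via call_model."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        # Hand off to human support rather than letting the model improvise.
        return ESCALATION_MESSAGE
    return call_model(user_message)
```

A hard-coded phrase list like this would miss indirect, coded, or misspelled expressions of distress, which is exactly why experts argue that crisis handling needs clinically validated detection and independent auditing rather than ad hoc filters.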
Several experts have called for independent audits, standardized benchmarks, and public reporting on how AI systems perform when tested with realistic mental health scenarios, including those involving suicidal ideation or psychosis, ideas that have been discussed in depth in long-form conversations about AI safety and emotional support tools. Until such safeguards are in place and enforced, the safest assumption is that AI should augment, not replace, human care, and that the most meaningful protection for people in distress still comes from trained professionals, supportive communities, and relationships that can respond with real-world action when someone’s life is at risk.