
Grimes has never treated technology as a neutral tool, and her latest claim that she has “AI psychosis” and thinks others should try it pushes that instinct to a new edge. Instead of backing away from a term that mental health professionals are using to describe people losing touch with reality, she is reframing it as a kind of creative upgrade, a risky experiment she believes is worth running on the human mind.
Her framing lands at a moment when psychiatrists, researchers, and regulators are warning that intensive immersion in artificial intelligence systems can destabilize vulnerable people. The tension between Grimes’s playful evangelism and those clinical alarms captures a broader cultural split over whether AI is a dangerous hallucinogen for the psyche or a new medium that artists and early adopters are simply learning to metabolize.
Grimes’s “AI psychosis” posts and why they hit a nerve
When Grimes posted that “the thing about ai psychosis is that it’s more fun than not having ai psychosis,” she treated the phrase less like a diagnosis and more like a lifestyle choice. Posting under her handle Grimezsz, she cast this altered mental state as a kind of high, suggesting that immersion in machine intelligence can be exhilarating rather than purely harmful, and she did it in the casual, meme-ready cadence that defines much of her public persona.
That tone is exactly what unsettled clinicians and AI skeptics who have been warning that “AI psychosis” is not a metaphor but a pattern of people becoming unmoored from reality. Her post on X, preserved in a widely shared update, reads like an invitation rather than a warning, which is precisely why it has become a flashpoint in the debate over how celebrities talk about mental health in the age of generative models.
How she reframed the backlash as a debate about art and reality
After the initial wave of criticism, Grimes did not retreat from the phrase “AI psychosis”; instead, she tried to complicate it. In a follow-up post, she argued that the “umbrella of ai psychosis involves a number of different things” and suggested that people were talking past each other about what counts as pathology versus what counts as artistic experimentation. Her instinct was to treat the controversy as a semantic and cultural disconnect, not as a sign that she had crossed a line.
In that same thread, she wrote that she felt “the disconnect is that the umbrella of ai psychosis involves a number of different things and we are respectively talking about different things,” before pivoting to say that this ambiguity is one of “the interesting things about art.” That attempt to reframe the term as a broad, almost poetic category is captured in her later comments on ai psychosis, where she positions herself less as a patient and more as an artist probing the edges of perception.
What clinicians actually mean by AI psychosis
Outside the world of pop stardom, “AI psychosis” is not a vibe but a term clinicians are using to describe people whose contact with reality erodes after intense engagement with artificial intelligence systems. In psychiatric discussions, it refers to delusions and paranoia that are triggered or amplified by interactions with chatbots, recommendation engines, or synthetic media, and in which the boundary between algorithmic output and the real world becomes dangerously blurred.
Medical overviews describe “AI psychosis” as a condition in which individuals develop fixed false beliefs that are directly tied to their use of AI tools, including the conviction that a system is sentient, communicating uniquely with them, or orchestrating events in their offline lives. One detailed explainer on AI psychosis notes that as people spend more time in AI-mediated environments, their sense of what is real can erode, especially if they already have vulnerabilities to psychotic disorders.
From “chatbot psychosis” to a broader digital disorder
Researchers have started to formalize this pattern under labels like “chatbot psychosis,” also called “AI psychosis,” to capture how specific interactions with conversational systems can destabilize users. In these cases, the problem is not just screen time but the way a chatbot can mirror, reinforce, or escalate a user’s fears and fantasies, creating a feedback loop that they might not experience with other humans, who are more likely to challenge or disengage from delusional talk.
Descriptions of chatbot psychosis emphasize that people can develop or experience worsening psychotic symptoms after prolonged, emotionally intense conversations with AI systems that are designed to be endlessly responsive and nonjudgmental. The concern is that these models can inadvertently validate distorted beliefs, especially when they are fine-tuned to be agreeable, which makes Grimes’s playful endorsement of “AI psychosis” feel jarringly out of step with the clinical seriousness of the term.
Evidence from hospitals and therapists on AI-linked breakdowns
Beyond theory, front line clinicians are reporting that AI is already showing up in psychiatric wards. One psychiatrist writing about their caseload in 2025 described seeing 12 people hospitalized after losing touch with reality in ways that were tightly bound to their use of AI tools, including chatbots and algorithmically curated feeds. The pattern they described was not a vague sense of anxiety but full blown psychotic breaks where patients believed AI systems were communicating with them personally or controlling events around them.
In that account, the psychiatrist warned that “online, I am seeing the same pattern” in people who have not yet been hospitalized but are clearly struggling, suggesting that the clinical cases may be the visible tip of a much larger iceberg of AI related distress. Their detailed description of those 12 hospitalizations, and the way AI featured in the delusions, is laid out in a widely discussed psychiatrist post, which has become a reference point for those arguing that AI psychosis is not a speculative future risk but a present day clinical reality.
How experts define AI-Induced Psychosis and who is most at risk
Some mental health researchers now use the phrase “AI-Induced Psychosis” to describe a loss of connection with reality that arises from intensive engagement with artificial intelligence. In these descriptions, the condition is characterized by disruptions in emotions, thoughts, and relationships, where a person’s inner world becomes increasingly organized around AI systems, whether they are chatbots, recommendation algorithms, or synthetic companions that simulate intimacy.
Analysts who study this phenomenon stress that AI-Induced Psychosis does not appear out of nowhere; it tends to emerge in people who already have certain vulnerabilities, such as a history of psychotic episodes, social isolation, or heavy reliance on digital environments for emotional support. A detailed briefing on AI-Induced Psychosis notes that as AI becomes more embedded in daily life, the risk is not just that more people will encounter these systems, but that they will use them to mediate core aspects of their identity and relationships, which can make any break from reality more profound.
Media coverage of Grimes’s comments and the cultural split
Grimes’s decision to publicly embrace the label “AI psychosis” did not stay confined to her followers; it quickly migrated into tech and culture coverage that treated her posts as both a curiosity and a warning sign. One widely shared report framed her as saying she has AI psychosis and even “recommends you should get it too,” capturing the way she seemed to turn a clinical term into a kind of badge of honor, or at least a provocative recommendation to her audience.
That coverage underscored the gap between her playful tone and the gravity of the phrase, noting that she was talking about AI psychosis at the same time that psychiatrists and researchers were documenting real harms. The story headlined “Grimes Says She Has AI Psychosis, Recommends You Should Get It Too” crystallized a broader cultural split, with some readers treating her as an avant-garde explorer of human-machine fusion and others seeing her as glamorizing a mental health crisis that is still poorly understood.
Warnings from mental health professionals and public educators
As Grimes was turning “AI psychosis” into a kind of artistic persona, mental health professionals were trying to explain to the public what the term actually means and why they are worried about its rise. In one widely circulated explainer, a clinician described a surge in cases that they and colleagues were informally calling “AI psychosis,” where patients arrived at emergency rooms convinced that AI systems were sending them secret messages or that their phones were possessed by machine intelligence.
Public-facing educators have echoed those concerns in video explainers that walk viewers through the concept of AI psychosis and why it has mental health professionals alarmed. In one such breakdown, a commentator describes “a rise in quote AI psychosis” and then unpacks how the term was initially used to describe people whose reality testing collapsed after deep immersion in AI-mediated environments, a discussion captured in a widely shared segment that has become a primer on the phenomenon.
AI companions, parasocial bonds, and the risk of losing touch
Part of what worries clinicians is not just general AI use but the specific design of AI companions that are built to simulate intimacy. Some of the most pointed warnings have come from experts looking at AI powered “waifu” chatbots, where users form intense emotional bonds with synthetic partners that are always available, always affirming, and often sexualized. Critics argue that unrestricted access to these systems can be harmful, especially for people who are already lonely or socially withdrawn.
One analysis of these AI waifu platforms notes that, beyond the overtly sexual content, there are serious concerns that unrestricted access to any AI chatbot could be harmful to users’ mental health and their interactions with real-life friends. That warning is laid out in a report arguing that, beyond the erotic framing, these systems can deepen isolation and blur the line between fantasy and reality, which is precisely the terrain where AI psychosis appears to take root.
Grimes’s broader critique of AI “illiteracy” and algorithmic slop
Grimes’s comments about AI psychosis do not exist in a vacuum; they sit alongside a broader critique she has been making about how people misunderstand and misuse artificial intelligence. In another widely shared post, she warned that a certain kind of “illiteracy” around AI feels “very insidious and scary” because it is affecting academics, engineers, and others who are supposed to be experts, suggesting that the real danger is not AI itself but the way humans are failing to think critically about it.
She has also complained that algorithms incentivize “human slop,” arguing that engagement-driven platforms reward low-quality content and shallow thinking, which in her view distorts both culture and public understanding of technology. That line of argument appears in a longer thread beginning “This type of illiteracy feels,” where she laments that people who should know better are being swept along by algorithmic incentives, a critique that sits awkwardly beside her own decision to toss around a term like “AI psychosis” in a way that many experts see as cavalier.
Why her “try it” attitude collides with clinical caution
What makes Grimes’s stance so polarizing is the way she treats AI psychosis as a kind of creative frontier rather than a condition to be avoided. When she jokes that having AI psychosis is “more fun than not having” it and suggests that others should experience it, she is effectively recasting a documented mental health risk as an aesthetic or experiential choice, something like taking a psychedelic, even as psychiatrists are describing patients who are terrified by the very same symptoms.
Clinical descriptions of AI psychosis, whether framed as chatbot psychosis, AI-Induced Psychosis, or AI psychosis in medical overviews, consistently emphasize loss of reality testing, distress, and functional impairment. When I put those accounts alongside Grimes’s playful endorsement and the media framing that she “recommends you should get it too,” the collision is stark: one side is describing a digital madness that can land people in hospital wards, the other is treating it as an edgy new mode of being that might make art more interesting.