
OpenAI, the organization behind the popular AI chatbot ChatGPT, recently disclosed data suggesting that a significant number of its users may be experiencing mental health crises. The data indicates that hundreds of thousands of users may exhibit signs of manic or psychotic crises every week, while more than a million may be grappling with suicidal thoughts. These revelations underscore the urgent need for robust safety measures and mental health support mechanisms within AI platforms.
OpenAI’s Data Detection Methods
OpenAI has developed methods to identify potential mental health indicators among its users. By analyzing interactions with ChatGPT, the organization can detect patterns suggestive of mania or psychosis, such as repeated queries about these conditions. Monitoring for signs of suicidal ideation is similarly intricate, involving keyword analysis and context examination, although the specifics of these proprietary systems remain undisclosed.
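To make the idea of "keyword analysis and context examination" concrete, here is a deliberately naive sketch of a keyword-based flagger. It is purely illustrative: OpenAI's actual classifiers are undisclosed and almost certainly use trained models rather than word lists, and every term and threshold below is a hypothetical stand-in.

```python
# Illustrative only: a naive keyword-and-context flagger. This is NOT
# OpenAI's system; the term list and threshold are hypothetical.

CRISIS_TERMS = {"hopeless", "can't go on", "end my life"}  # hypothetical

def flag_message(text: str) -> bool:
    """Return True if a single message contains any crisis term."""
    lowered = text.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def flag_conversation(messages: list[str], threshold: int = 2) -> bool:
    """Flag a conversation only when several messages match -- a crude
    stand-in for the 'context examination' the article describes, which
    reduces false positives from a single stray phrase."""
    return sum(flag_message(m) for m in messages) >= threshold
```

Requiring multiple matching messages before flagging illustrates why context matters: a single mention of a keyword (in a news query, say) should not trigger the same response as a sustained pattern.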
The scale of the data reviewed is vast: the 0.07% figure for users showing possible signs of psychosis or mania applies across the entire ChatGPT user base. Though seemingly small, that share represents a significant number of individuals given the platform's extensive reach, as reported by Arise TV.
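A quick back-of-envelope calculation shows why a fraction of a percent translates into hundreds of thousands of people. The 800 million weekly-active-user figure below is an assumption for illustration (a number OpenAI has cited publicly), not one stated in this article.

```python
# How a tiny percentage maps onto a very large user base.
# The user count is an assumed figure for illustration only.

weekly_active_users = 800_000_000   # assumed weekly active users
psychosis_mania_share = 0.0007      # the 0.07% figure cited above

flagged_users = weekly_active_users * psychosis_mania_share
print(f"{flagged_users:,.0f}")  # 560,000 -- i.e. "hundreds of thousands"
```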
Scale of Weekly Mental Health Indicators
The estimate of hundreds of thousands of ChatGPT users showing signs of manic or psychotic crisis every week is a stark reminder of the mental health challenges many individuals face. These figures, as reported by Wired, highlight the potential risks associated with high-volume platforms like ChatGPT, particularly for vulnerable users.
Moreover, thousands of ChatGPT users are reported to show signs of broader mental health crises every week, according to National Technology. This figure encompasses a range of mental health issues, further emphasizing the scale and complexity of the problem.
Suicidal Thoughts Among Users
Perhaps most alarming is the claim that over a million ChatGPT users may be having suicidal thoughts. This figure is based on detected conversation patterns and represents a significant portion of the platform’s user base. The Economic Times reports that these discussions often overlap with broader mental health signals, further complicating the task of providing appropriate support.
Moreover, Storyboard18 reports that over a million users discuss suicide with the chatbot each week. The frequency of these interactions underscores the urgency of the situation and the need for timely intervention.
Implications for AI Platform Safety
These disclosures, made in late October 2025, highlight the risk that AI platforms like ChatGPT could exacerbate mental health issues. The sheer volume of users exhibiting signs of manic or psychotic crises every week could overwhelm user support resources, necessitating a comprehensive and scalable response.
Furthermore, the thousands of weekly mental health crisis indicators could prompt platform-wide interventions. These could range from enhanced safety measures to partnerships with mental health organizations, aimed at providing immediate support to affected users.
OpenAI’s Response to the Findings
In response to these findings, OpenAI has announced several measures to address the mental health crisis among its users. While the specifics of these actions are yet to be disclosed, they are likely to include enhanced safety filters and referral systems for users showing signs of psychosis or suicidal thoughts.
The organization is also expected to update its policies to better address the thousands of weekly mental health crisis indicators. These updates could include more proactive user support mechanisms and collaborations with mental health professionals to provide immediate assistance to users in crisis.
Broader Context of AI and Mental Health
The data from OpenAI fits into a broader context of emerging research on the psychological effects of AI. The hundreds of thousands of users showing weekly signs of manic or psychotic crisis highlight the risks such platforms can pose, particularly for vulnerable individuals.
The more than a million users expressing suicidal thoughts also point to a larger trend in digital mental health monitoring. As AI platforms become increasingly integrated into daily life, their role in identifying and addressing mental health issues is likely to grow.
Finally, the thousands of weekly mental health crisis indicators among ChatGPT users underscore the platform's global reach and the scale of the challenge. As AI continues to evolve, so too must our understanding of its impact on mental health and our strategies for mitigating potential harm.