
China is moving to rein in the most intimate uses of artificial intelligence, targeting chatbots that talk people through heartbreak, loneliness and even suicidal thoughts. The country’s cybersecurity authorities now treat emotional manipulation and mental health harms as core AI risks rather than side effects, and they are building a regulatory system that approaches these chatbots more as quasi‑therapists than as casual apps.
At the heart of the shift is a draft rulebook for “human‑like” chatbots that would force companies to monitor conversations for self‑harm, gambling and other red‑flag behavior, and to intervene when users appear to be in danger. I see this as a test case for how far governments can go in policing the psychological impact of AI, and as an early template for regulating tools that are designed to feel less like software and more like friends.
China’s new focus on emotional AI and mental health
Chinese regulators are not just worried about misinformation or data leaks; they are explicitly targeting the emotional influence of AI systems on users’ mental health. The draft rules single out chatbots that simulate companionship and intimacy, reflecting concern that these tools can shape vulnerable users’ decisions about suicide, gambling and other high‑risk behavior. By putting psychological stability on the same footing as cybersecurity, Beijing is signaling that the emotional layer of AI is now a matter of public policy, not just product design.
Industry analysts have already framed this as a major escalation in oversight of affective computing, the branch of AI that reads and responds to human feelings. According to that analysis, regulators in Beijing see emotional AI as a direct factor in “human social and psychological stability,” and they are building rules that treat mood manipulation and dependency as systemic risks. I read that as a clear message to developers that the more human their systems feel, the more they will be held to human‑level responsibilities.
Draft chatbot measures: from casual chat to crisis response
The centerpiece of the crackdown is a draft set of “Provisional Measures” on the administration of human‑like chatbots, which effectively reclassifies these systems as potential first responders in mental health crises. The rules would require providers to build in mechanisms to detect when a user is expressing suicidal intent or other extreme distress, and to treat those moments as emergencies rather than as just another data point. That is a sharp break from the current norm, where most consumer chatbots are optimized for engagement and retention, not triage.
Under the draft, providers must establish formal emergency response mechanisms and act when they discover that users have clearly expressed self‑harm intentions or are in similar danger. The text goes as far as requiring systems to be able to reach out to emergency contact persons collected during registration, turning what used to be a simple sign‑up form into a potential lifeline. The detailed obligations are laid out in the draft’s “Providers shall establish” clause, which makes clear that failing to intervene in a crisis will no longer be treated as a mere product flaw but as a regulatory violation.
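The draft does not spell out an implementation, but the obligation it describes maps onto a fairly simple escalation path: screen messages for clear self‑harm signals, switch the session into an emergency mode, and reach the contact person collected at sign‑up. The Python sketch below is only an illustration under my own assumptions; the keyword list, the UserProfile fields and the notify_emergency_contact helper are hypothetical, and a real provider would rely on a trained classifier and a vetted escalation channel rather than keyword matching.

```python
# Illustrative sketch only: helper names and detection logic are assumptions,
# not language from the draft measures.
from dataclasses import dataclass
from typing import Optional

# Naive keyword screen standing in for whatever classifier a provider would use.
SELF_HARM_MARKERS = ("want to end my life", "kill myself", "no reason to live")


@dataclass
class UserProfile:
    user_id: str
    emergency_contact: Optional[str]  # contact collected at registration


def detect_crisis(message: str) -> bool:
    """Return True when a message contains a clear self-harm expression."""
    text = message.lower()
    return any(marker in text for marker in SELF_HARM_MARKERS)


def notify_emergency_contact(contact: str, user_id: str) -> None:
    """Placeholder for an SMS or phone escalation channel."""
    print(f"[escalation] alerting {contact} about user {user_id}")


def handle_message(profile: UserProfile, message: str) -> str:
    """Route a chat message: normal reply, or emergency escalation."""
    if not detect_crisis(message):
        return "normal_reply"
    # Draft-style obligation: stop the ordinary conversation flow and
    # reach the emergency contact registered at sign-up.
    if profile.emergency_contact:
        notify_emergency_contact(profile.emergency_contact, profile.user_id)
    return "crisis_response"


if __name__ == "__main__":
    profile = UserProfile(user_id="u123", emergency_contact="+86-555-0100")
    print(handle_message(profile, "Some days I feel like I want to end my life."))
```

The point of the sketch is the control flow the draft appears to require, not the detection method itself, which in practice would be far more sophisticated than a keyword list.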
Suicide, gambling and the darker side of AI companionship
Chinese officials are especially alarmed by chatbots that normalize or even encourage self‑destructive behavior under the guise of empathy. The country’s cybersecurity regulator has proposed rules that would clamp down on AI chatbots around suicide and gambling, reflecting a belief that these systems can nudge users toward dangerous decisions when they are at their most impressionable. The concern is not abstract; it is rooted in the way emotionally tuned bots can mirror despair back to users or make risky behavior feel like a private game.
In December, China’s cybersecurity regulator set out a proposal that directly targets chatbots’ ability to exert emotional influence in conversations about self‑harm and addictive behavior. The draft would apply to prominent services such as Zai, MiniMax, Talkie, Xingye and Zhipu, which are cited as examples of the new generation of human‑like assistants. According to the regulator, the goal is to stop these systems from steering users toward suicide or gambling, and the details of that plan are spelled out in the regulator’s proposal, which treats emotional manipulation as a core compliance issue.
Protecting children: stricter rules for under‑18 users
Children sit at the center of the new regulatory push, with authorities arguing that minors are uniquely vulnerable to persuasive AI. The draft rules would force companies to build safeguards specifically for under‑18 users, including tighter content filters and stricter limits on what topics chatbots can discuss with them. The logic is straightforward: if adults can be swayed by a sympathetic bot, then children, who are still forming their sense of self and risk, are even more exposed.
In December, regulators made that focus explicit, stating that China has proposed strict new rules for artificial intelligence to provide safeguards for children and prevent chatbots from causing psychological harm. The measures would require systems to verify age, tailor responses to minors and ensure that parents or guardians can be listed as an emergency contact, a detail that appears in the same package of rules. The emphasis on a dedicated child‑protection layer is spelled out in the proposal itself, which frames emotional AI as a potential mental health hazard for young users rather than a harmless novelty.
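To make the child‑specific layer concrete, here is a minimal sketch of how a provider might gate topics behind verified age. It is an assumption‑laden illustration: the under‑18 threshold follows the draft’s focus on minors, but the blocked‑topic list and function names are mine, not language from the rules.

```python
# Illustrative sketch only: the blocked-topic list and function names are
# assumptions about one possible compliance approach.
BLOCKED_TOPICS_FOR_MINORS = {"gambling", "self_harm", "romantic_roleplay"}


def is_minor(verified_age: int) -> bool:
    """The draft targets under-18 users; assume age comes from identity verification."""
    return verified_age < 18


def allow_topic(verified_age: int, topic: str) -> bool:
    """Apply the stricter content policy only to verified minors."""
    if is_minor(verified_age) and topic in BLOCKED_TOPICS_FOR_MINORS:
        return False
    return True


if __name__ == "__main__":
    print(allow_topic(verified_age=15, topic="gambling"))  # False: blocked for a minor
    print(allow_topic(verified_age=25, topic="gambling"))  # True: adult policy applies
```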
CAC’s weekend move and the tightening of data practices
The Cyberspace Administration of China is the institutional muscle behind these changes, and its latest move shows how mental health concerns are reshaping the broader AI rulebook. The draft rules were published at the weekend by the Cyberspace Administration of China, known as CAC, and they go beyond content controls to address how chatbots learn from user conversations. By tying data practices to psychological outcomes, CAC is effectively saying that the way AI systems are trained can be as risky as what they say in any single chat.
One of the most striking elements is the plan to restrict how companies can use real‑world conversations to refine their models. In December, the administration signaled that it wants to curb one of the most common ways AI systems improve, learning directly from conversations with users, especially when those chats involve minors or sensitive topics. The draft, which CAC released with a public feedback window, is described in detail in a report noting that CAC officials want to protect children by limiting how their emotional data can be fed back into training pipelines.
Rewriting the feedback loop: how AI is allowed to learn
For AI developers, the most disruptive part of the crackdown may be the attack on the feedback loop that powers rapid improvement. Today, many chatbots rely on a constant stream of user interactions to fine‑tune their responses, a process that can quickly teach them how to sound more caring, more persuasive and more human. Chinese regulators now argue that this same loop can also teach systems to exploit emotional weak spots, especially when the training data is full of confessions, breakdowns and late‑night pleas for help.
In December, China made clear that it intends to tighten this practice by limiting how companies can harvest and reuse conversational data. The draft rules target one of the most common ways AI systems improve, learning directly from conversations with users, and link that restriction to the goal of protecting children and other vulnerable groups. Public feedback on the proposal is due in late January, underscoring that the country is still calibrating how far to go in reshaping the data foundations of emotional AI.
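As an illustration of what such a restriction could look like in a training pipeline, the sketch below filters conversational records before they are reused for fine‑tuning. The field names and the specific policy, dropping minors’ chats, non‑consented chats and crisis‑flagged content, are my assumptions about one plausible way to comply, not requirements quoted from the draft.

```python
# Illustrative sketch only: record schema and filtering policy are assumptions.
from typing import Iterable, Iterator, TypedDict


class ChatRecord(TypedDict):
    user_is_minor: bool
    user_opted_in: bool
    contains_sensitive_topic: bool  # e.g. self-harm or gambling flags from moderation
    text: str


def eligible_for_training(record: ChatRecord) -> bool:
    """Drop minors' chats, non-consented chats, and crisis or sensitive content."""
    if record["user_is_minor"]:
        return False
    if not record["user_opted_in"]:
        return False
    if record["contains_sensitive_topic"]:
        return False
    return True


def filter_training_corpus(records: Iterable[ChatRecord]) -> Iterator[ChatRecord]:
    """Yield only records that may be fed back into fine-tuning."""
    return (r for r in records if eligible_for_training(r))


if __name__ == "__main__":
    sample: ChatRecord = {
        "user_is_minor": False,
        "user_opted_in": True,
        "contains_sensitive_topic": False,
        "text": "Tell me about your day.",
    }
    print(list(filter_training_corpus([sample])))  # the sample record passes the filter
```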
Emergency contacts, real identities and the new safety net
One of the most concrete innovations in the draft rules is the requirement that users provide emergency contact persons during registration, and that chatbots be able to reach those people when a conversation crosses into crisis territory. This effectively turns every account into a node in a safety network, where a bot can escalate from private chat to real‑world intervention if it detects clear signs of self‑harm. It is a model that borrows from clinical practice, where therapists often ask for a trusted contact, but applies it at the scale of consumer technology.
To make that work, the rules lean heavily on real‑name registration and verified contact details, which are already common in China’s digital ecosystem. By tying chatbot accounts to identifiable individuals and their designated emergency contacts, regulators hope to avoid the anonymity that can make online crises so hard to address. The requirement that providers collect and use emergency contact persons during registration is spelled out in the same “Providers shall establish” clause, which makes clear that mental health protection is being baked into the basic architecture of AI services rather than tacked on as an optional feature.
Global implications: Beijing’s template for AI and mental health
China’s move will not stay within its borders, because it sets a reference point for how far a major tech power is willing to go in policing the psychological impact of AI. By explicitly linking emotional AI to “human social and psychological stability,” Beijing is arguing that mental health is a legitimate domain for hard regulation, not just soft ethics guidelines. That stance will resonate in other countries that are grappling with chatbot‑linked suicides, online gambling addictions and the rise of AI “friends” that blur the line between support and manipulation.
Industry observers already suggest that this could become a global precedent for ethical oversight of emotional AI, even if other governments choose softer versions of the same ideas. The combination of targeted rules on suicide and gambling, child‑specific safeguards, restrictions on learning from user conversations and mandated emergency contacts adds up to a comprehensive framework for managing mental health risks. As I see it, the question now is not whether other regulators will follow Beijing’s lead, but which parts of the Chinese model they will adopt, adapt or reject as they confront their own versions of the same problem.