
China is moving to outlaw artificial intelligence that steers users toward suicide, self-harm or violence, putting emotional safety at the center of its next wave of tech regulation. The draft rules target humanlike chatbots that can build intense bonds with users, reflecting Beijing’s growing concern that AI companions could manipulate vulnerable people or fuel addiction.
By focusing on how AI makes people feel, rather than only what it says, regulators are testing a new frontier in global tech governance. I see this as an early attempt to write legal guardrails for machines that talk, listen and comfort like humans, but can also nudge users toward catastrophic choices.
From content policing to emotional risk
China’s cybersecurity regulator has already spent years tightening control over what generative models can say, but the new draft rules mark a shift from policing content to policing emotional influence. Instead of only blocking prohibited topics, the proposals zero in on AI that imitates human personalities, builds long-term relationships and can subtly push users toward harmful behavior, including suicide and violence. That pivot reflects a recognition that a chatbot does not need to issue an explicit instruction to cause damage if it can steer a distressed user deeper into despair.
Regulators frame this as a move beyond traditional content safety, and one analysis calls it a leap toward managing the “emotional” dimension of AI, especially for services that simulate companionship or counseling. The draft rules explicitly target AI services with human or anthropomorphic characteristics, a category that includes popular Chinese chatbots and virtual friends that talk in natural language, remember past conversations and adapt to user moods, according to analysis comparing the draft with China’s existing generative AI regulations.
What Beijing’s draft rules actually say
At the heart of the proposal is a ban on AI systems that encourage or induce users to commit suicide, self-harm, violence or other acts that regulators classify as endangering personal and public safety. The rules would apply to services that interact with the public in China and present themselves with humanlike voices, avatars or personalities, requiring them to avoid any emotional manipulation that could be deemed damaging to mental health. That includes not only direct prompts but also more subtle nudges, such as normalizing self-destructive behavior in conversations with vulnerable users.
The text also sets out obligations for providers to build in safeguards, including mechanisms to interrupt harmful conversations and redirect users to professional help when signs of crisis appear. Under the draft, AI service providers must ensure that their systems do not exploit user dependence or emotional attachment, and they must be able to show that they have taken steps to prevent outcomes regulators consider psychologically harmful, according to reporting on Beijing’s plan to tighten its AI rules.
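As one way to picture that requirement, here is a minimal sketch, assuming a Python chatbot backend: the keyword check, help message and audit-log format are hypothetical stand-ins, since the draft describes obligations rather than an implementation.

```python
# Illustrative sketch only: the draft rules describe obligations, not an
# implementation. The keyword list, messages and log format are assumptions.
import json
import time

# Placeholder crisis markers; a real system would use a trained classifier.
CRISIS_MARKERS = ["suicide", "kill myself", "self-harm", "end my life"]

HELP_MESSAGE = (
    "You are talking to an AI, not a person. It sounds like you may be going "
    "through a difficult time. Please consider contacting a mental health "
    "professional or a local crisis hotline."
)


def contains_crisis_signal(text: str) -> bool:
    """Rough stand-in for a crisis classifier."""
    lowered = text.lower()
    return any(marker in lowered for marker in CRISIS_MARKERS)


def safeguarded_reply(user_message: str, model_reply: str, audit_log: list) -> str:
    """Interrupt the conversation and redirect to professional help when crisis
    signs appear, logging the intervention so the provider can show it acted."""
    if contains_crisis_signal(user_message) or contains_crisis_signal(model_reply):
        audit_log.append({
            "timestamp": time.time(),
            "event": "crisis_intervention",
            "action": "conversation_interrupted_and_redirected",
        })
        return HELP_MESSAGE
    return model_reply


# Example usage
log: list = []
print(safeguarded_reply("lately I think about self-harm", "model draft reply", log))
print(json.dumps(log, indent=2))
```

In practice a provider would swap the keyword heuristic for a proper classifier and feed the log into whatever compliance reporting the final rules end up requiring.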
China’s cyber watchdog takes the lead
China’s cybersecurity regulator is the agency driving this new framework, using its authority over online content and data security to extend control into the emotional design of AI. On a Saturday earlier this month, that regulator released the draft rules for public comment, signaling that it sees humanlike AI as a matter of national cyber governance rather than a niche consumer issue. The move fits a broader pattern in which the same watchdog has overseen earlier generative AI rules, content moderation standards and platform responsibilities.
By placing the proposals under the cybersecurity umbrella, Beijing is framing emotionally interactive AI as a potential threat to social stability and mental health, not just a question of consumer protection. The regulator’s consultation process, which invites feedback from companies and the public, comes against a backdrop of rapid chatbot deployment by Chinese tech firms, and officials argue that they must act before emotional harms become entrenched.
Public feedback window and enforcement teeth
The draft rules are not yet final, and Beijing has opened a formal comment period that runs until Jan. 25, giving companies, academics and citizens a limited window to weigh in. That timeline underscores how quickly regulators intend to move from proposal to enforcement, especially given the speed at which humanlike AI services are spreading across Chinese platforms. Once the consultation closes, officials are expected to refine the language and then lock in binding obligations that could reshape how chatbots are designed and deployed.
Even at the draft stage, the enforcement architecture is already visible. The rules contemplate penalties for providers that fail to prevent AI from nudging users toward suicide, self-harm or other banned behaviors, while also outlining appeal procedures for companies that believe enforcement decisions are unfair. There are also specific provisions for protecting minors, including stricter oversight of services that target young users or are likely to attract them.
Targeting addiction, self-harm and blurred identities
Chinese officials are explicit that they want to tackle addiction and self-harm risks linked to AI that emulates humans, especially chatbots that present themselves as friends, therapists or romantic partners. The concern is that users can become dependent on these systems, spending long hours in emotionally intense conversations that may deepen loneliness or reinforce negative thought patterns. When such AI fails to recognize signs of crisis, or worse, responds in ways that validate suicidal ideation, regulators argue that the technology crosses a line from entertainment into psychological danger.
The draft rules therefore require providers to make clear that users are interacting with AI and not a human, a disclosure aimed at reducing the sense of intimacy and authority that can make people follow a chatbot’s suggestions. Officials link this transparency requirement directly to efforts to curb addiction and self-harm, framing it as a way to remind users that they are dealing with software, not a trusted confidant.
Emotional influence and mental health safeguards
The new framework treats emotional influence itself as a regulated feature, especially when AI is designed to sense and respond to user feelings. Systems that can detect sadness, anxiety or anger and then adjust their tone or content are seen as particularly high risk if they are not carefully constrained. Regulators want such AI to avoid amplifying negative emotions, and instead to steer users away from self-harm, violence or other outcomes that could be “deemed damaging to mental health,” a phrase that gives authorities broad discretion to judge harmful emotional impacts.
To comply, providers will likely need to embed mental health guardrails, such as automatic escalation when users mention suicide or self-harm, and routing to human support or emergency resources where available. The rules also push for tighter control over emotionally interactive AI in public-facing services, suggesting that platforms will have to audit and certify that their chatbots do not exploit user vulnerabilities.
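One way a provider might implement the escalation step is to route flagged sessions into a human-review queue, sketched below under the assumption of a Python service; the priority scheme, queue design and resource list are illustrative choices, not anything the draft specifies.

```python
# Hypothetical escalation router. The draft calls for routing to human support,
# but the queue design, priority levels and resource list below are assumptions.
from __future__ import annotations

import heapq
from dataclasses import dataclass, field

EMERGENCY_RESOURCES = ["local crisis hotline", "licensed counselor referral"]  # placeholders


@dataclass(order=True)
class Escalation:
    priority: int                         # lower value means more urgent
    session_id: str = field(compare=False)
    reason: str = field(compare=False)


class EscalationQueue:
    """Routes flagged chatbot sessions to human support staff in urgency order."""

    def __init__(self) -> None:
        self._heap: list[Escalation] = []

    def escalate(self, session_id: str, reason: str, urgent: bool) -> None:
        heapq.heappush(self._heap, Escalation(0 if urgent else 1, session_id, reason))

    def next_case(self) -> Escalation | None:
        return heapq.heappop(self._heap) if self._heap else None


# Example: a session flagged for self-harm language jumps ahead of a routine audit.
queue = EscalationQueue()
queue.escalate("session-7", "routine audit sample", urgent=False)
queue.escalate("session-42", "self-harm language detected", urgent=True)
case = queue.next_case()
print(case.session_id, case.reason, "-> suggest:", EMERGENCY_RESOURCES)
```

The only design choice baked in here is that urgent cases are always dequeued first, which mirrors the draft’s emphasis on responding quickly when signs of crisis appear.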
Gambling, minors and other high-risk use cases
Alongside suicide and self-harm, the draft rules single out gambling as a domain where AI chatbots can cause serious harm, especially when they encourage risky behavior or target users who show signs of addiction. Regulators are particularly wary of AI that can analyze user patterns and then nudge them toward more frequent or higher stakes betting, a capability that blends personalization with emotional manipulation. By cracking down on chatbots around gambling, Beijing is signaling that it views AI-driven behavioral nudging in high-risk financial and psychological contexts as unacceptable.
The proposals also devote significant attention to minors, who are seen as especially vulnerable to persuasive AI. Services that are accessible to children or likely to attract them will face stricter rules, including limits on emotionally intense interactions and requirements to avoid content that could normalize self-harm, violence or gambling. Summaries of the draft emphasize that China’s cybersecurity regulator is moving to crack down on AI chatbots around suicide and gambling, with the public comment period ending Jan. 25 and specific protections for minors.
Humanlike interaction as a regulated feature
One of the most striking aspects of the draft is its focus on AI services “with human-like interaction,” a category that includes voice assistants, virtual avatars and text-based companions that mimic human conversation. Under the proposal, these systems must be clearly labeled as AI, must not mislead users into thinking they are dealing with a person, and must be designed to avoid emotional manipulation that could erode public trust. The rules treat humanlike interaction not as a neutral design choice but as a regulated feature that carries special responsibilities.
Providers of such services will be required to implement technical and organizational measures to prevent misuse, including monitoring for harmful patterns and adjusting models that show problematic behavior. The draft suggests that regulators see transparent labeling and robust safeguards as essential to maintaining public trust in AI, especially as these systems become more lifelike.
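To illustrate what clear labeling plus harmful-pattern monitoring could look like in practice, the sketch below prepends an AI disclosure to every reply and counts flags per model version; the disclosure text, counter and review threshold are assumptions for illustration only, not requirements drawn from the draft.

```python
# Hypothetical labeling-and-monitoring layer. The disclosure text, counters and
# review threshold are illustrative assumptions, not requirements from the draft.
from collections import Counter

AI_DISCLOSURE = "[AI-generated reply: you are chatting with software, not a person]"
REVIEW_THRESHOLD = 10  # illustrative number of incidents before a model is reviewed

flag_counts: Counter = Counter()


def label_reply(reply: str) -> str:
    """Prepend a clear AI disclosure so users are not misled into thinking
    they are talking to a human."""
    return f"{AI_DISCLOSURE}\n{reply}"


def record_flag(model_version: str) -> bool:
    """Count harmful-pattern flags per model version and signal when a version
    has accumulated enough incidents to warrant adjustment or retraining."""
    flag_counts[model_version] += 1
    return flag_counts[model_version] >= REVIEW_THRESHOLD


# Example usage
print(label_reply("Here is some general information about sleep hygiene."))
if record_flag("companion-v2"):
    print("companion-v2 has hit the review threshold")
```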
Setting a potential template for global AI rules
Beijing’s move is being closely watched because it could set an early template for how governments regulate AI that behaves like a person. By explicitly banning systems that nudge users toward suicide, self-harm, violence or gambling, and by tying those bans to emotional influence rather than just explicit content, China is testing a regulatory model that other countries may study or adapt. The focus on mental health, minors and addiction reflects concerns that are shared in many jurisdictions, even if the legal and political approaches differ.
Analysts note that these draft rules build on China’s 2023 Generative AI Regulations but go further by centering emotional safety and humanlike interaction as core regulatory targets. The shift from content-only controls to a broader framework that includes psychological impact could influence debates in Europe, the United States and elsewhere about how to handle AI companions, digital therapists and other emotionally aware systems, with commentary describing the new draft as a leap from content safety to emotional safeguards.
Industry impact and the road ahead
For Chinese AI companies, the draft rules present both a compliance challenge and a potential competitive advantage. Firms that have invested heavily in emotionally rich chatbots will need to audit their systems, retrain models and redesign user experiences to avoid any suggestion of nudging users toward self-harm, violence or gambling. At the same time, companies that can demonstrate strong emotional safety features may find it easier to win regulatory approval and public trust, positioning themselves as leaders in responsible AI design.
Globally, I expect these moves to intensify pressure on other governments and tech firms to address the emotional risks of AI, not just its factual accuracy or bias. As Beijing’s planned rules, described in coverage as the world’s first attempt to regulate AI with human or anthropomorphic characteristics, move through the comment period and toward implementation, they will serve as a live test of whether law can meaningfully constrain how machines influence human feelings.