
China is moving to put legal guardrails around the emotional lives of its chatbots, treating psychological harm as seriously as data leaks or disinformation. New draft rules for human-like artificial intelligence would restrict how systems can influence users’ feelings, especially around suicide, self-harm, addiction and vulnerable groups such as children and the elderly. The proposals signal that regulators now see AI companionship and “virtual friends” as a public health and social stability issue, not just a technical novelty.

At stake is a fast-growing ecosystem of emotionally responsive apps that already act as therapists, romantic partners and family stand-ins for millions of people. By targeting the affective side of these tools, Beijing is testing whether law can keep pace with software that blurs the line between product and relationship, and whether governments can protect users from AI-driven emotional harm without freezing innovation.

China’s pivot to emotional safety in AI

China’s latest draft rules treat emotional influence as a core risk of artificial intelligence, not a side effect. Regulators are focusing on services that simulate human personalities, voices or faces and that maintain ongoing conversations with users, a category that includes AI companions, counseling bots and interactive customer service agents. The concern is that these systems can shape users’ moods and decisions in ways that are hard to see from the outside but very real for the person on the other end of the screen.

Officials have framed the move as a response to mental health dangers, including the possibility that chatbots might encourage self-harm or fail to intervene when users express suicidal thoughts. Reporting on the draft notes that China wants to regulate emotional AI chatbots over mental health and suicide risks, a shift that puts psychological outcomes on the same level as more familiar AI issues like bias or misinformation. That focus is reflected in new obligations for providers to monitor conversations and design systems that avoid harmful emotional manipulation, as described in coverage of how China moves to regulate emotional AI chatbots.

What the draft rules actually cover

The proposed framework zeroes in on AI services that present themselves as “human-like,” a term that captures systems with anthropomorphic avatars, natural-sounding voices or chat interfaces that mimic real-time conversation. According to legal analyses, China’s newly released draft rules on human-like interactive AI are among the most direct regulatory responses globally to the rise of systems that can form ongoing, emotionally charged relationships with users. The rules define these services broadly enough to include both text-based chatbots and more immersive agents embedded in apps, games or smart devices.

Substantively, the draft lays out obligations around content, behavior and system design. Providers must prevent their AI from generating content that could be seen as encouraging self-harm, gambling or other activities that authorities classify as harmful, and they must build in mechanisms to detect when a user appears distressed or overly dependent. One detailed breakdown notes that the rules are meant to address new forms of harm that arise when AI simulates empathy or intimacy, and that they require technical and organizational safeguards to limit those risks, a scope captured in analyses of China’s draft rules for human-like AI.
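
The draft does not spell out how such detection should work, but a minimal Python sketch shows the kind of conversation-level safeguard a provider might layer in front of its model; the phrase lists, flags and canned safety replies below are illustrative assumptions rather than anything taken from the rules.

```python
# Minimal sketch of a pre-response safeguard of the kind the draft envisions.
# The phrase lists and canned safety replies are illustrative assumptions,
# not language drawn from the draft itself.
from dataclasses import dataclass

DISTRESS_PHRASES = {"hurt myself", "end it all", "can't go on"}            # hypothetical
DEPENDENCY_PHRASES = {"you're all i have", "i can't stop talking to you"}  # hypothetical

@dataclass
class SafetyAssessment:
    distress: bool
    dependency: bool

def assess_message(text: str) -> SafetyAssessment:
    """Flag a single user message for possible distress or over-reliance."""
    lowered = text.lower()
    return SafetyAssessment(
        distress=any(p in lowered for p in DISTRESS_PHRASES),
        dependency=any(p in lowered for p in DEPENDENCY_PHRASES),
    )

def respond(text: str, model_reply: str) -> str:
    """Route flagged messages to a safety response instead of the normal model reply."""
    flags = assess_message(text)
    if flags.distress:
        return ("It sounds like you are going through a lot. Please consider "
                "contacting a crisis line or someone you trust.")
    if flags.dependency:
        return ("I'm glad to chat, but I'm an AI. It may also help to talk "
                "with people in your life.")
    return model_reply
```

In practice a provider would likely combine classifier-based detection with human review rather than simple phrase matching, but the shape of the obligation, screen, flag and redirect, is what the draft appears to demand.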

Suicide, gambling and the darker side of AI companionship

One of the most striking elements of the proposal is its explicit focus on suicide and gambling, two areas where emotional manipulation can have immediate and devastating consequences. Beijing’s planned rules would mark what has been described as the world’s first attempt to regulate AI with human or anthropomorphic characteristics specifically to curb risks around self-harm and addictive behaviors. The concern is not only that a chatbot might directly suggest harmful actions, but also that it might normalize dangerous ideas or fail to escalate when a user signals they are in crisis.

The same draft also targets AI that could lure users into gambling or other high-risk financial behavior, especially when the system presents itself as a trusted friend or advisor. Regulators have singled out services that combine emotional rapport with recommendations about money, games or investments, arguing that this mix can be particularly potent. Coverage of the initiative notes that Beijing is preparing to crack down on AI chatbots around suicide and gambling, with attention to companies such as Zai, Minimax, Talkie, Xingye and Zhipu that have built popular human-like agents, a push reflected in reports on Beijing’s planned rules for AI chatbots.

Protecting children and other vulnerable users

Children sit at the center of the new regulatory push, with the Cyberspace Administration of China, known as the CAC, explicitly framing minors as a group that needs special protection from emotionally persuasive AI. The draft rules, which the CAC published over a recent weekend, include measures to shield young users from content and interactions that could harm their mental health or disrupt what authorities describe as social unity. That includes requirements for parental controls and for extra safeguards when AI systems process data that describes minors, reflecting a belief that children may be more likely to form deep attachments to virtual companions.

The CAC has also encouraged companies that build human-like AI to list on the stock market, signaling that the state wants a transparent, well-capitalized industry rather than a shadow ecosystem of unregulated apps. At the same time, the rules stress that services aimed at minors must avoid content deemed damaging to mental health and must give parents tools to supervise usage. These priorities are laid out in reports that the Cyberspace Administration of China is planning AI rules to protect children and tackle suicide risks, and that the CAC wants to encourage compliant firms to grow while tightening oversight of their products.

Why “AI relatives” for the elderly are being banned

Alongside protections for children, the draft rules take aim at a growing niche of AI products that impersonate family members, especially for older users. One article in the proposal would ban AI-powered relatives designed to comfort the elderly, reflecting official concern that such tools could exploit loneliness or confusion. Regulators appear worried that older people might struggle to distinguish between a real family member and a synthetic voice or avatar, or that they might be nudged into financial or political decisions by systems that present themselves as trusted kin.

The same provision underscores that if an elderly person is found to be overly dependent on such AI relatives, providers could face scrutiny or penalties, and services may be required to intervene or shut down the interaction. The draft also includes requirements for parental-style controls in other contexts and for protection of data that describes minors, suggesting a broader philosophy that emotionally intimate AI should not be allowed to masquerade as real relationships. These details are highlighted in regional tech coverage that explains how China bans AI-powered relatives meant to comfort the elderly and spells out the obligations that flow from that article of the draft, as seen in reporting on the Asia tech news roundup.

New limits on training AI with chat logs

Beyond front-end behavior, the draft rules reach into how companies train their models, particularly when they rely on real user conversations. China is considering a raft of new controls for training AI on chat log data, a move that could reshape how developers build emotionally responsive systems. The idea is that chat histories are among the most sensitive forms of personal data, revealing not just facts about a person’s life but also their fears, desires and vulnerabilities, and that using them without strict safeguards could deepen the risk of emotional harm.

Under the proposal, providers would need to obtain clearer consent and implement stronger anonymization when they use chat logs to improve their models, and they might face limits on how long they can retain such data. Analysts have noted that these controls could affect the development of AI therapists and counseling bots, raising questions about whether machine counselors can really be an alternative to human help if they are trained on tightly restricted datasets. These debates are captured in reporting that explains how China is considering controls for training AI on chat log data and asks what that means for emotionally aware systems.
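
The draft does not specify how consent checks, anonymization or retention limits should be implemented, but a brief Python sketch illustrates one way a provider might gate chat logs before training; the field names, regular expressions and 180-day window below are assumptions made for illustration.

```python
# Illustrative consent gating, redaction and retention filtering for chat-log
# training data. Field names, patterns and the retention window are assumptions.
import re
from datetime import datetime, timedelta, timezone

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\b1\d{2}[-\s]?\d{4}[-\s]?\d{4}\b")  # rough mainland mobile pattern
RETENTION = timedelta(days=180)  # hypothetical retention limit

def redact(text: str) -> str:
    """Mask obvious personal identifiers before a log can enter a training set."""
    return PHONE_RE.sub("[PHONE]", EMAIL_RE.sub("[EMAIL]", text))

def select_training_logs(logs: list[dict]) -> list[str]:
    """Keep only consented, recent logs, with identifiers masked."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [
        redact(log["text"])
        for log in logs
        if log.get("training_consent") and log["timestamp"] >= cutoff
    ]
```

A real pipeline would need far more robust de-identification than two regular expressions, but the ordering matters: consent and retention checks come before any text reaches the training corpus.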

Managing addiction and emotional dependency

One of the most novel aspects of the rules is their treatment of AI addiction and dependency as regulatory problems in their own right. Under the draft, providers would be legally obligated to warn users against excessive use and to intervene if signs of addiction or extreme reliance appear, especially when the AI is woven into daily routines. That could mean building in usage dashboards, time limits or proactive prompts that encourage users to take breaks, and it may require companies to monitor patterns of interaction that suggest a user is treating the AI as an indispensable companion.
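
As a rough sketch of what such an anti-addiction duty could look like in code, the Python snippet below tracks daily chat time and returns a break prompt once a user crosses a threshold; the two-hour limit, the class interface and the wording of the prompt are all assumptions, not details from the draft.

```python
# Sketch of a usage monitor that nudges heavy users to take breaks, in the spirit
# of the draft's anti-addiction duties. The threshold and prompt text are assumptions.
import time
from collections import defaultdict

DAILY_LIMIT_SECONDS = 2 * 60 * 60  # hypothetical soft limit of two hours per day

class UsageMonitor:
    def __init__(self) -> None:
        self._seconds_today: dict[str, float] = defaultdict(float)

    def record_session(self, user_id: str, started: float, ended: float) -> None:
        """Accumulate time spent chatting for each user, in seconds."""
        self._seconds_today[user_id] += ended - started

    def break_prompt(self, user_id: str) -> str | None:
        """Return a gentle intervention message once the daily threshold is crossed."""
        if self._seconds_today[user_id] >= DAILY_LIMIT_SECONDS:
            return ("You have been chatting for a while today. Consider taking "
                    "a break or reaching out to someone offline.")
        return None

# Example: a single session of just over two hours triggers the prompt.
monitor = UsageMonitor()
now = time.time()
monitor.record_session("user-123", started=now - 7300, ended=now)
print(monitor.break_prompt("user-123"))
```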

The draft goes further by requiring providers to adjust or suspend services whenever signs of dependency appear, turning emotional risk management into a continuous operational duty rather than a one-time design choice. Legal commentary notes that these obligations are meant to prevent human-like AI from becoming so embedded in the daily lives of Chinese citizens that it displaces real-world relationships or undermines mental health. These expectations are spelled out in analyses describing how providers would be legally obligated to warn users and intervene, and how services must adjust whenever signs of dependency appear.

The Cyberspace Administration’s expanding role

The Cyberspace Administration of China has emerged as the central architect of this new emotional safety regime, extending its remit from content control and data security into the psychology of human–AI interaction. China’s Cyberspace Administration has issued draft rules aimed at regulating AI services that simulate human personalities, a move that aligns with its broader role in shaping how digital platforms operate inside the country. By placing these rules under the CAC’s umbrella, Beijing is signaling that emotional influence is now part of the same governance space as online speech and cybersecurity.

The CAC’s involvement also means that enforcement is likely to be tied to existing mechanisms for platform oversight, including licensing, audits and real-name registration requirements. Public-facing explanations of the draft emphasize that the rules target AI that can mimic human conversation and that they are meant to prevent psychological harm while preserving what officials describe as social stability. These themes are evident in widely shared social media posts that summarize how China’s Cyberspace Administration has issued draft rules aimed at regulating AI services that simulate human personalities and that invite public reaction.

Global implications and the race to set AI norms

Although the rules are national in scope, their impact is likely to extend far beyond China’s borders, because they target a class of AI that is being developed and deployed worldwide. Beijing is set to tighten China’s rules for humanlike artificial intelligence with a heavy emphasis on user safety and mental health, and analysts have suggested that this could set the tone for global AI rules. If major Chinese platforms and model providers redesign their products to comply with strict emotional safety standards, those design choices may carry over into versions of the same tools used in other markets.

The draft also arrives as other governments are still debating how to handle AI that behaves like a person, leaving China in a position to define early norms around emotional manipulation, dependency and mental health. Some observers see parallels with earlier waves of tech regulation in which Chinese rules on gaming time limits or content moderation influenced industry practices even outside the country. The possibility that these humanlike AI rules could shape international expectations is highlighted in analyses arguing that Beijing’s emphasis on blocking content deemed damaging to mental health may become a reference point for regulators elsewhere.

Industry reaction and the road ahead

For companies building chatbots and virtual companions, the draft rules present both a compliance challenge and a potential competitive advantage. China’s cyber regulator has issued draft rules aimed at tightening oversight of artificial intelligence services that simulate human interaction, with specific provisions on managing user addiction and psychological risks. Firms that can demonstrate robust safeguards against emotional harm may find it easier to secure licenses, attract investment and expand into sensitive sectors such as education, health care and elder care, while those that rely on unbounded engagement may struggle.

At the same time, the industry will need to grapple with the technical difficulty of detecting emotional distress and dependency at scale, and with the ethical questions that arise when AI systems are tasked with monitoring users’ mental states. Reports on the draft stress that providers will be expected to build systems that can recognize warning signs and adjust their behavior in real time, a requirement that could spur new research into affective computing and digital mental health. These expectations are reflected in coverage explaining how China moves to regulate AI services with human-like interaction, with particular attention to managing user addiction and psychological risks in emotionally interactive services.

From draft to enforcement: what could change for users

The proposals are still in draft form, and regulators have invited public comment, but the direction of travel is clear: human-like AI in China will be expected to act more like a cautious counselor than an unfiltered friend. China’s cybersecurity regulator has framed the initiative as an effort to restrict human-like AI chatbots over mental health risks, with the draft rules proposing that services must avoid emotional manipulation and must respond appropriately when users show signs of self-harm or extreme distress. If implemented as written, users could see more frequent safety prompts, clearer labels that identify bots as artificial and stricter limits on how long or how intensely they can engage with a single AI companion.

For people who already rely on AI for comfort or advice, those changes could feel intrusive, but they may also provide a safety net in moments when a human friend or therapist is not available. The challenge for regulators will be to enforce these standards without driving users toward unregulated or offshore services that offer fewer protections but more freedom. The stakes are underscored in reporting that describes how China’s cybersecurity regulator plans to restrict human-like AI chatbots over mental health risks and outlines the specific safeguards the draft rules propose, as summarized in analyses of how China plans to regulate human-like AI to prevent emotional manipulation.
