Morning Overview

China just issued regulations for AI companions, chatbots, and virtual friends — every provider must comply by July 15

China has drawn a regulatory line around AI companions, chatbots, and virtual friends, and the deadline to comply is less than three months away.

On April 10, 2026, five of China’s most powerful government agencies jointly published what they call the “Interim Measures for the Management of Anthropomorphic AI Interactive Services” (人工智能拟人化互动服务管理暂行办法), designated Order No. 21. The rule targets any AI product designed to mimic human-like interaction, covering everything from emotional chatbots and digital romantic partners to virtual companions marketed to elderly or lonely users. Every company that builds, hosts, or distributes these tools in China must meet the new requirements by July 15, 2026, or face enforcement action.

Five agencies, one message

The regulation was issued jointly by the Cyberspace Administration of China (CAC), the National Development and Reform Commission (NDRC), the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security (MPS), and the State Administration for Market Regulation (SAMR), according to the official text published on the CAC’s website. That five-agency coalition is unusual and deliberate. It signals that Beijing views personified AI as a problem that cuts across internet governance, industrial policy, law enforcement, and consumer protection simultaneously.

The inclusion of MPS, China’s public security ministry, is particularly notable. When police authorities co-sign a tech regulation alongside market and internet regulators, it typically means the government sees safety and security risks, not just commercial ones. For AI providers, that raises the stakes of noncompliance well beyond fines.

Note: The links in this article point to the CAC’s general portal pages, not directly to the Order No. 21 document or the draft consultation notice. At the time of publication, direct URLs to the specific regulation text and the December 2025 draft notice have not been independently confirmed. Readers seeking the full text should search the CAC’s official site for Order No. 21 by its Chinese title.

Why China is treating AI companions as a separate category

Beijing has already built a layered regulatory framework for artificial intelligence. The Algorithm Recommendation Provisions took effect in 2022, the Deep Synthesis Provisions followed in early 2023, and the Generative AI Service Measures landed in August 2023. Each addressed a specific slice of the AI landscape. Order No. 21 carves out yet another lane: AI systems that appear to have personalities, emotions, or social presence.

The decision to create a standalone framework, rather than fold these products into existing generative AI rules, reflects a judgment that anthropomorphic AI raises distinct governance problems. Reports throughout 2025 documented growing concern over minors forming intense emotional attachments to AI chatbot characters, with some spending hours daily in conversation with virtual “boyfriends” or “girlfriends.” A wave of character-based chatbot apps surged in popularity across Chinese app stores, making the regulatory gap harder to ignore.

What the regulation requires

The published text of Order No. 21 establishes a dedicated regime for what it calls “anthropomorphic AI interactive services,” but its detailed requirements have not yet been summarized in official English-language guidance. Based on the regulatory text available on the CAC’s portal and reporting on the earlier draft version, the measures are expected to address several core areas:

  • Transparency obligations: Providers must clearly disclose to users that they are interacting with an AI system, not a human being.
  • Protections for minors: The draft version included provisions limiting features that could encourage emotional dependence among young users, and commentary from Chinese legal scholars suggests these survived into the final text.
  • Content and behavioral guardrails: AI companions must not generate content that violates Chinese law, and systems designed to simulate emotional or romantic relationships face additional scrutiny.
  • Accountability and traceability: Providers are expected to maintain documentation on how their AI companions are trained and to have internal processes for handling user complaints or harmful outputs.

However, important details remain unresolved. Whether the rule mandates real-name verification for users, imposes data-localization requirements, or sets specific technical standards for content filtering is not fully spelled out in the materials reviewed. The CAC released a draft for public consultation in late 2025, with a feedback window that closed on January 25, 2026. The roughly four-month gap before the final publication suggests the agencies processed public comments, but no official side-by-side comparison of changes between the draft and final versions has been released.

Enforcement is the big unknown

Order No. 21 names five issuing agencies, but it is not yet clear which body will lead day-to-day oversight, handle complaints, or conduct inspections. Nor has it been specified whether penalties will follow existing cybersecurity law frameworks or introduce new fine structures. For compliance teams at Chinese tech companies, this ambiguity is a practical headache: without more detailed guidance, they cannot fully scope the work required before July 15.

The regulation’s reach beyond China’s borders is another open question. Many AI companion services operate through app stores and cloud infrastructure that cross jurisdictions. Whether foreign-developed chatbots available to Chinese users through VPNs or third-party platforms fall under the rule’s scope has not been addressed in the published text. Global AI companies with Chinese user bases face genuine uncertainty about whether and how the regulation applies to them.

There is also limited clarity on how the measures will interact with existing provincial or sectoral rules. Local authorities in major technology hubs like Shenzhen, Shanghai, and Beijing may issue their own implementation guidelines, potentially creating regional variations in enforcement.

How this compares to regulation elsewhere

China is not the only government grappling with AI companions, but it is moving faster than most. The European Union’s AI Act, which began phased implementation in 2025, sorts AI systems into risk tiers and imposes transparency obligations on chatbots, but it does not single out anthropomorphic or emotionally simulative AI as a standalone regulatory category. In the United States, federal regulation of AI companions remains largely aspirational. Senate hearings in 2025 explored the risks of AI chatbots marketed to children, and several state legislatures have introduced bills targeting AI-generated emotional manipulation, but no comprehensive federal rule has emerged.

China’s approach is characteristically top-down and fast-moving. By issuing interim measures with a hard compliance deadline, Beijing is establishing facts on the ground while other governments are still debating frameworks. The “interim” label also gives regulators flexibility: they can tighten, loosen, or replace the rules after observing how providers respond in practice.

What providers should be doing right now

For companies building or distributing AI companion products in China, the immediate step is to obtain and review the full text of Order No. 21 from the CAC’s regulatory portal and engage Chinese legal counsel with AI governance expertise. The July 15 deadline leaves limited runway.

In practical terms, compliance teams should map where and how their products present themselves as human-like: profile pictures, names, voices, backstories, and behavioral scripts that could lead users to attribute emotions or agency to the system. They should also inventory features that encourage prolonged, continuous engagement, such as streaks, in-app rewards, or emotionally charged prompts, since earlier drafts and regulatory commentary flagged addictive or manipulative design as a particular concern.

Providers will likely need documentation they can produce on short notice: model cards or system descriptions explaining how their AI companions are trained, risk assessments covering minors and vulnerable users, and internal policies for handling harmful content. Even where the exact legal requirement is not yet spelled out, having these materials ready aligns with the trajectory of Chinese tech regulation, which increasingly emphasizes traceability and accountability.

International firms should not assume that operating from outside China insulates them. If their services are accessible to Chinese users, or if they partner with local distributors, they may find themselves pulled into the scope of Order No. 21 through platform rules or licensing requirements. Proactively mapping user bases, data flows, and local partnerships will make it easier to adjust or geo-fence services if regulators or Chinese business partners demand changes.

An interim rule with lasting implications

Because the measures are explicitly labeled as interim, companies should plan for an evolving rulebook. Engaging early with industry associations, monitoring updates on the CAC’s site, and tracking new notices can provide early warning of follow-on guidance or enforcement campaigns. China’s AI regulatory apparatus has shown a consistent pattern over the past four years: issue interim rules, observe industry response, then formalize and tighten. Providers who treat Order No. 21 as a one-off compliance exercise rather than the opening move in a longer regulatory sequence are likely to be caught off guard when the next round arrives.

*This article was researched with the help of AI, with human editors creating the final content.