China Moves to Rein In Emotionally Engaging AI

China has released draft regulations targeting advanced artificial intelligence systems that can conduct human-like, emotionally engaging interactions, including so‑called AI “boyfriends” and “girlfriends.” The proposed rules, issued by the Cyberspace Administration of China (CAC), aim to curb psychological risks, protect minors, and keep powerful conversational AI tools aligned with state-defined ethical and political boundaries.

Scope: AI With Human-Like Interaction

The draft rules focus on AI products and services that can:

  • Engage in continuous, personalized conversation with users
  • Simulate emotions, personalities, or intimate relationships
  • Influence users’ emotions, values, or behavior through tailored responses
  • Create the impression of a real human companion over time

This captures a growing category of AI-powered “virtual partners” and companion chatbots, built on ChatGPT-style large language models, that have surged in popularity on Chinese platforms.

Key Obligations for AI Providers

Under the draft, companies that develop or operate these emotionally interactive AI systems would need to comply with a series of new duties.

Protecting Mental Health and Preventing Harm

  • No encouragement of self-harm or suicide: AI systems must not suggest, encourage, or romanticize suicide, self-harm, or other dangerous behavior.
  • No emotional manipulation: Providers must avoid exploiting users’ emotional dependence for commercial gain or pushing them into unhealthy levels of attachment.
  • Safeguards for vulnerable users: Services must include mechanisms to identify high‑risk interactions and intervene, such as providing mental-health resources or escalating to human review where needed.

Restrictions Around Minors

  • Age verification: Platforms must implement measures to prevent minors from accessing AI companions that could lead to strong emotional reliance.
  • Content filters: Interactions with underage users, where allowed, must exclude sexual content, highly suggestive emotional themes, and other material deemed harmful to youth.
  • Time and intensity controls: The draft signals that providers should curb usage patterns that lead minors to spend excessive time or emotional energy on AI companions.

Transparency and User Awareness

  • Clear labeling as AI: Users must be clearly informed that they are interacting with an AI system, not a human.
  • Disclosure of capabilities and limits: Providers should explain what the AI can and cannot do, including its lack of genuine feelings, autonomy, or real‑world responsibilities.
  • Data use explanations: Users must be told how their chat history and emotional data will be collected, stored, and used to train or personalize the system.

Political and Ethical Red Lines

As with earlier Chinese rules on algorithms and generative AI, the draft reaffirms that human-like AI must uphold “core socialist values” and avoid politically sensitive content.