China Unveils New Regulatory Framework to Curb Addiction and Psychological Risks in Human-Like AI

China is taking a significant step to regulate the emotional and psychological impact of artificial intelligence. The Cyberspace Administration of China recently published a series of draft rules specifically targeting AI services that mimic human personalities. These regulations aim to manage technologies that simulate human thinking, communication styles, and emotional engagement. As AI companions and “anthropomorphic” assistants become more common, Beijing wants to ensure these tools do not lead to social or mental health crises.

A central focus of the new guidelines is the prevention of digital addiction. Under the proposed rules, AI providers must monitor how users interact with their platforms. If a system detects that a person is becoming overly dependent or shows signs of addictive behavior, the provider would be obligated to intervene. This might include issuing warnings against excessive use or implementing mandatory breaks. Specifically, the draft suggests that AI services should prompt users to rest after two hours of continuous interaction to prevent social withdrawal.
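The draft describes the policy outcome, not an implementation, but the two-hour continuous-use rule could be enforced with a simple session timer. The sketch below is purely illustrative: the class name, the idea of resetting the clock after a period of inactivity, and the 30-minute idle threshold are all assumptions, not anything specified in the draft.

```python
from datetime import datetime, timedelta

REST_THRESHOLD = timedelta(hours=2)   # continuous-use limit from the draft
IDLE_RESET = timedelta(minutes=30)    # assumed gap that ends a "continuous" session

class SessionMonitor:
    """Hypothetical tracker that flags when a rest prompt is due."""

    def __init__(self):
        self.session_start = None
        self.last_activity = None

    def record_message(self, now: datetime) -> bool:
        """Record a user message; return True if a rest prompt should be shown."""
        if self.last_activity is None or now - self.last_activity > IDLE_RESET:
            self.session_start = now  # a long pause starts a fresh session
        self.last_activity = now
        return now - self.session_start >= REST_THRESHOLD
```

A provider could call `record_message` on every user turn and display a rest reminder whenever it returns `True`; how the reminder is worded, and whether the break is enforced or merely suggested, is left open by the draft.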

The regulations also address the complex emotional bond between humans and machines. Providers will be required to assess the emotional state of their users during interactions. If an AI system identifies that a person is experiencing extreme distress or expressing thoughts of self-harm, a human operator must take over the conversation immediately. This “human-in-the-loop” requirement highlights the government’s concern over the limitations of AI in handling delicate mental health scenarios.
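In engineering terms, the "human-in-the-loop" requirement amounts to a routing decision on each incoming message. The fragment below is a minimal sketch under stated assumptions: the keyword screen stands in for whatever risk classifier a real provider would use, and the function names and signal list are invented for illustration.

```python
# Hypothetical distress signals; a production system would use a trained
# classifier, not a keyword list.
DISTRESS_SIGNALS = ("self-harm", "hurt myself", "end my life")

def route_message(text: str, ai_reply, escalate_to_human):
    """Send the message to the AI, unless distress is detected,
    in which case a human operator takes over immediately."""
    lowered = text.lower()
    if any(signal in lowered for signal in DISTRESS_SIGNALS):
        return escalate_to_human(text)
    return ai_reply(text)
```

The key design point the draft implies is that escalation happens before the AI responds, so the machine never handles the delicate turn on its own.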

The draft also extends safety and ethics obligations across the entire lifecycle of these AI products. Companies must establish rigorous systems for algorithm reviews and data security. They are also responsible for protecting the personal information that these emotionally intelligent systems often collect. Furthermore, the draft makes it clear that all AI-generated content must align with "core socialist values." This means that services are strictly prohibited from creating content that threatens national security, spreads rumors, or promotes obscenity.

Transparency is another key pillar of the draft rules. Users must be explicitly informed when they are speaking with an artificial intelligence rather than a real person. This notification should happen at the point of login and appear as a visible reminder during long sessions. By mandating these disclosures, the government hopes to maintain a clear boundary between synthetic and authentic human relationships.
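The disclosure requirement has two triggers: a notice at login and recurring reminders during long sessions. As a rough sketch, a provider might compute the notice schedule like this; the 30-minute cadence is an assumption, since the draft only says reminders must appear during long sessions.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(minutes=30)  # assumed cadence, not from the draft

def disclosure_events(login_time, message_times):
    """Return the times at which a 'you are talking to an AI' notice is shown:
    once at login, then again whenever REMINDER_INTERVAL has elapsed
    since the last notice."""
    events = [login_time]  # explicit notice at the point of login
    last = login_time
    for t in sorted(message_times):
        if t - last >= REMINDER_INTERVAL:
            events.append(t)
            last = t
    return events
```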

The public consultation period for these rules is set to conclude in late January 2026. This regulatory push signals a broader effort by Chinese authorities to balance technological innovation with social stability. While these measures may increase the compliance burden for tech giants, they represent an early global attempt to govern the psychological frontier of AI. As machines become more lifelike, the challenge for regulators will be ensuring that digital intimacy does not replace or damage real-world social structures.