Key Points
- China’s draft rules target AI that simulates human traits and interacts emotionally with users.
- Providers must monitor user behaviour and intervene to prevent addiction or excessive use.
- Content and safety safeguards aim to prevent harm, misinformation, and security threats.
China has introduced draft regulations aimed at tightening oversight of artificial intelligence systems that mimic human behaviours and interact emotionally with users. The Cyberspace Administration of China recently issued the proposals for public consultation, reflecting Beijing’s growing focus on regulating advanced consumer-facing AI amid rapid technological adoption.
These draft rules target AI products and services marketed in China that simulate human personality traits, thinking patterns, and communication styles, whether they interact through text, images, audio, or video. The rules are designed to ensure such technologies operate responsibly and ethically within Chinese society.
Under the draft framework, AI service providers would assume heightened safety responsibilities throughout the full lifecycle of their products. This includes conducting algorithm reviews, strengthening data protection, and implementing robust safeguards against misuse of personal information. Officials hope these measures will mitigate the risks that come with sustained engagement with human-like AI.
A central focus of the proposal is psychological safety. Companies would be required to monitor user behaviour and issue warnings against excessive use. If systems detect signs of high emotional dependence or addictive patterns, providers must intervene appropriately to protect users.
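The draft leaves the detection mechanics to providers. Purely as an illustrative sketch, a compliance layer might begin with time-based heuristics like the following; the thresholds, names, and intervention tiers here are hypothetical, not taken from the rules:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative thresholds: the draft rules do not specify numbers, so
# these values are placeholders a provider would set under its own policy.
DAILY_LIMIT = timedelta(hours=3)
SESSION_WARNING = timedelta(minutes=45)

@dataclass
class Session:
    start: datetime
    end: datetime

def assess_usage(sessions: list[Session], now: datetime) -> str:
    """Return 'ok', 'warn', or 'intervene' based on recent usage."""
    cutoff = now - timedelta(hours=24)
    # Total time the user has spent with the service in the past day,
    # clipping each session to the 24-hour window.
    daily_total = sum(
        (min(s.end, now) - max(s.start, cutoff) for s in sessions if s.end > cutoff),
        timedelta(),
    )
    if daily_total >= DAILY_LIMIT:
        return "intervene"  # e.g. pause the companion and suggest a break
    if sessions and now - sessions[-1].start >= SESSION_WARNING:
        return "warn"       # e.g. show an excessive-use reminder in the chat
    return "ok"
```

In practice, crude timers like these would only be a first signal, feeding into whatever richer assessment a provider adopts.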
The draft also outlines firm content boundaries. AI services must not generate material that threatens national security, promotes violence, spreads rumours, or includes obscenity. These content limits are part of broader efforts to align AI output with social and political norms emphasized by Chinese regulators.
The proposed regulations complement other recent measures aimed at guiding the responsible deployment of generative technologies across China. These efforts reflect concerns about the societal impact of AI, including misinformation, emotional manipulation, and the rapid evolution of interactive digital agents.
One notable aspect of the draft is its emphasis on emotional monitoring. Providers would need mechanisms to assess users’ emotional states and levels of dependence on AI systems. The intent is to prevent harm that could arise when users form intense emotional attachments to AI companions.
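Again, the draft names the goal but not the method. One hypothetical approach, sketched below with invented feature names and weights, is to fold a few behavioural signals into a single dependence score that triggers a cool-down or human-review step:

```python
from dataclasses import dataclass

@dataclass
class ConversationStats:
    messages_per_day: float   # how often the user initiates contact
    late_night_share: float   # fraction of messages sent between 00:00 and 05:00
    affective_share: float    # fraction of messages classified as highly emotional

def dependence_score(stats: ConversationStats) -> float:
    """Blend simple behavioural signals into a rough 0-1 dependence estimate."""
    frequency = min(stats.messages_per_day / 200.0, 1.0)  # saturate at 200 messages/day
    return 0.5 * frequency + 0.25 * stats.late_night_share + 0.25 * stats.affective_share

def needs_intervention(stats: ConversationStats, threshold: float = 0.7) -> bool:
    # Above the threshold, a provider might nudge the user toward a break
    # or route the conversation to a human reviewer.
    return dependence_score(stats) >= threshold
```

How such scores should be calibrated, and what counts as an adequate intervention, is exactly the kind of detail the consultation period is meant to surface.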
China’s approach contrasts in some respects with other global AI regulatory trends. While many jurisdictions focus primarily on transparency and data privacy, the Chinese rules extend into psychological and behavioural territory, requiring more active intervention from service providers. This reflects a distinctive emphasis on societal stability and state interests.
The comment period for the draft rules extends into January 2026, giving industry stakeholders and the public a chance to provide feedback. If enacted, these regulations could set a new benchmark for how governments oversee human-like AI systems and influence how companies design interactive technologies for emotional engagement.
Observers note that rapid growth in AI tools capable of simulating human likeness has triggered concerns worldwide. China’s draft rules aim to confront these issues head-on by requiring proactive safety measures and ethical guardrails within the technology’s design and operation.
Critics, however, warn that overly stringent requirements could stifle innovation or complicate compliance for developers. Balancing safety, ethics, and technological progress remains a central challenge in AI governance discussions, both within China and internationally.
The draft rules also align with China’s broader strategy to expand regulatory oversight over digital platforms and emerging tech sectors. They build on existing AI guidelines and underscore the government’s intent to steer the industry according to state priorities and public welfare considerations.
As global debate intensifies around AI safety, China’s proposed rules for human-like interactive systems could influence regulatory thinking elsewhere. Governments and industry leaders worldwide are watching closely to see how these policies evolve and what impact they have on the future of emotionally engaging AI services.