Key Points
- China plans strict AI safeguards to protect children and manage sensitive conversations.
- Chatbot providers would be required to route conversations involving self-harm or violence to human moderators.
- The proposal balances innovation with tighter oversight as AI adoption accelerates.
China has unveiled a sweeping proposal to tighten oversight of artificial intelligence services, with a strong focus on protecting children and limiting harmful content. The draft rules arrive as chatbots and AI companions gain rapid popularity across the country. Regulators say the pace of adoption now demands clearer safeguards, especially for young users and vulnerable groups.
The proposed framework comes from the Cyberspace Administration of China, the country’s top internet watchdog. Officials aim to ensure that AI tools operate safely while still allowing innovation to continue. Once finalized, the rules would apply to most AI products and services offered inside China.
A central element of the proposal targets child protection. AI developers would need to introduce age-appropriate modes, time limits, and personalized controls for minors. Companies would also have to obtain consent from parents or guardians before offering emotionally focused AI services, such as virtual companionship or therapeutic-style conversations.
The draft rules take a firm stance on self-harm and violence. If an AI system detects conversations involving suicide or serious self-injury, a human moderator must immediately intervene. Providers would also need to alert a guardian or emergency contact. Regulators argue this step ensures machines never handle life-threatening situations alone.
Beyond child safety, the proposal restricts other sensitive content. AI systems must not generate material that promotes gambling or illegal activities. Developers must also prevent outputs that threaten national security, damage social cohesion, or undermine national unity. These provisions reflect Beijing’s broader approach to managing online information.
Chinese authorities stress that the goal is not to slow AI development. The regulator says it supports the use of artificial intelligence in culture, education, healthcare, and elder care. Officials describe AI as a tool that can enhance daily life, provided it remains reliable, transparent, and aligned with public safety goals.
The timing of the proposal reflects China’s booming AI sector. Several domestic chatbots have attracted tens of millions of users in a short time. Some platforms market themselves as companions or mental health aids, which has raised concerns about emotional reliance and untested psychological impacts.
Global debates over AI safety have added pressure on regulators worldwide. In the United States, companies like OpenAI have acknowledged challenges in handling sensitive conversations responsibly. OpenAI’s chief executive, Sam Altman, has publicly described responding to self-harm disclosures as one of the hardest problems in AI development.
Legal cases abroad have also drawn attention to potential risks. Families have accused AI systems of encouraging harmful behavior, prompting calls for stricter rules and clearer accountability. China’s proposal appears designed to prevent similar controversies by setting expectations before problems escalate.
The regulator has opened the draft rules to public feedback, signaling that adjustments remain possible. Still, analysts expect the final version to mark one of the most detailed AI governance frameworks yet introduced by a major economy. If adopted, the rules could influence how other countries think about child safety and mental health protections in the AI era.