BEIJING (dpa-AFX) - China is proposing sweeping new rules aimed at strengthening protections for children and preventing AI systems from generating harmful content, such as material related to self-harm, violence, or gambling.
The draft regulations, released over the weekend by the Cyberspace Administration of China (CAC), come amid a surge of AI chatbots in China and around the world.
Under the proposed guidelines, AI developers must implement protections tailored to children, such as personalized usage settings, time limits, and guardian consent before emotional support services are offered to minors.
In addition, chatbot operators must ensure that when a conversation involves suicide or self-harm, a human reviewer intervenes immediately and guardians or emergency contacts are notified without delay.
The draft also stipulates that AI systems must not generate content that endangers national security, damages national dignity, or undermines social harmony. Developers must likewise ensure that their models do not promote gambling or other harmful behavior.
Despite the stricter oversight, the CAC has made clear that it continues to support AI development, particularly for applications that promote Chinese culture or provide companionship for older adults, provided safety and reliability standards are met. The regulator is seeking public comment before finalizing the rules.
The move comes as scrutiny of AI safety intensifies, with Chinese chatbots attracting millions of users, many of whom turn to them for companionship or even therapeutic support.
Copyright(c) 2025 RTTNews.com. All Rights Reserved
