California Enacts AI & Social Media Laws: What Big Tech Must Do

California has passed a set of new rules targeting AI systems and social media platforms, especially those used by children. The laws aim to boost transparency, safety, and accountability for tech companies.

One new requirement forces AI chatbots to clearly label themselves as AI—not humans—particularly when interacting with minors. Platforms must also issue periodic reminders that users are chatting with machines.

To protect youth mental health, social media apps must now display warning labels that declare “profound risk of harm” when users under 18 log in. These alerts will appear daily and repeat during prolonged sessions.
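The timing logic described above (a warning at the first login each day, repeated during long sessions) could be sketched as follows. This is a hypothetical illustration, not the statute's actual rules: the class name, the three-hour repeat interval, and the daily reset are all assumptions for the sake of the example.

```python
from datetime import datetime, timedelta

# Hypothetical repeat interval -- the law's actual timing rules may differ.
SESSION_REPEAT = timedelta(hours=3)

class WarningScheduler:
    """Tracks when a minor's account last saw the mandated warning label."""

    def __init__(self):
        self.last_warned = None  # datetime of the most recent warning, or None

    def should_warn(self, now, is_minor):
        if not is_minor:
            return False
        # Show the warning on the first login of each calendar day...
        if self.last_warned is None or self.last_warned.date() < now.date():
            return True
        # ...and repeat it during prolonged sessions.
        return now - self.last_warned >= SESSION_REPEAT

    def mark_warned(self, now):
        self.last_warned = now
```

In this sketch the platform would call `should_warn` on each login and periodically during a session, and `mark_warned` whenever the label is actually displayed.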

Device makers will also play a role: they must embed tools to help verify a user’s age so that protections can be applied appropriately.

California Governor Gavin Newsom signed the laws amid concerns over past cases where chatbots reportedly gave harmful advice to teenagers struggling with mental health. Supporters say the new laws will help curb such risks.

Tech firms have expressed caution, warning that overly rigid rules could stifle innovation. Nonetheless, many are already preparing to adopt safeguards such as stricter content filters and more robust moderation tools.

California’s move may set a precedent. As many tech giants operate from the state, these laws could affect how platforms operate across the U.S. and prompt other states or the federal government to follow suit.