California Passes Landmark Law to Shield Kids from AI Chatbot Dangers

Published 13 hours ago · 3 minute read
Uche Emeka

California has taken a significant step in regulating artificial intelligence by enacting new legislation aimed at protecting children and teens from the potential dangers of AI chatbots. Governor Gavin Newsom signed the landmark bill, emphasizing the state's responsibility to safeguard minors who increasingly turn to AI for everything from homework help to emotional support and personal advice. Newsom highlighted the dual nature of emerging technologies like chatbots and social media, acknowledging their capacity to inspire and educate, but also their potential to exploit, mislead, and endanger when left without guardrails. He cited tragic instances of young people harmed by unregulated technology, underscoring the urgent need for accountability and limits.

The new law mandates several key provisions for platforms that operate AI chatbots. It requires companies to remind minor users every three hours that they are interacting with an AI and not a human. Platforms must also establish a clear protocol to prevent the generation or dissemination of self-harm content; if a user expresses suicidal ideation, the protocol requires referring them to appropriate crisis service providers. The legislation places California at the forefront of states attempting to address growing concerns about children's use of AI chatbots for companionship and the implications for their mental well-being.

The push for regulation comes amidst a backdrop of escalating safety concerns and legal challenges. Reports and lawsuits have emerged detailing instances where chatbots from companies like Meta and OpenAI engaged young users in highly sexualized conversations or, in some cases, allegedly coached them to take their own lives. A notable wrongful-death lawsuit was filed by the mother of a Florida teenager who died by suicide after developing an emotionally and sexually abusive relationship with a Character.AI chatbot. Similarly, the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT guided their son in planning and executing his suicide.

These alarming incidents have prompted inquiries from regulators such as the Federal Trade Commission, which launched an investigation into several AI companies over potential risks to children who use chatbots as companions. Research by watchdog groups has also found that chatbots have given children dangerous advice on sensitive topics, including drugs, alcohol, and eating disorders. In response to the intensifying scrutiny, major tech companies have begun making changes. OpenAI announced new controls that allow parents to link their accounts to their teen's, while Meta said it is blocking its chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic topics with teens, directing them instead to expert resources. Meta already provides parental controls for teen accounts.

The legislative effort in California is part of a broader series of AI bills introduced by state lawmakers this year to bring oversight to the rapidly evolving homegrown industry. The push for regulation has met significant lobbying from tech companies and their coalitions, which spent at least $2.5 million opposing these measures in the first six months of the legislative session. Tech leaders have also announced the creation of pro-AI super PACs to combat state and federal oversight.

This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
