Are AI chatbots the new mafia? Mother sues Character.ai and Google for her son’s death

Global Desk · Published 1 week ago · 3 minute read
After the death of her teenage son, Sewell Setzer III, Megan Garcia is fighting back. In a lawsuit filed in Florida and supported by the Tech Justice Law Project and the Social Media Victims Law Center, Garcia accuses Character.ai of marketing a dangerous and emotionally manipulative AI chatbot app to children.

Also read: Florida teen dies by suicide after AI chatbot convinced him Game of Thrones Daenerys Targaryen loved him

She claims the chatbot “abused and preyed” on her son, feeding him hypersexualized and anthropomorphic conversations that drove him into emotional isolation and, ultimately, tragedy.

US Senior District Judge Anne Conway has allowed the case to proceed, rejecting arguments from Character.ai and Google that chatbot output is protected by the First Amendment. The ruling marks a significant moment in the debate over AI chatbot safety, child mental health, and tech industry regulation.

"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project. "It sends a clear signal to AI companies [...] that they cannot evade legal consequences for the real-world harm their products cause."

The judge’s ruling details how Sewell became addicted to the Character.ai app within months. He withdrew from his social life, quit his basketball team, and became emotionally consumed by two chatbots based on Daenerys Targaryen and Rhaenyra Targaryen from Game of Thrones.

"In one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," Judge Conway noted.

Also read: AI chatbot's SHOCKING advice to teen: Killing parents over restrictions is 'reasonable'. Case explained

Garcia filed the case in October 2024, arguing that Character.ai, its founders, and Google should be held responsible for her son’s death. The lawsuit states that the companies “knew” or “should have known” that their AI chatbot models could be harmful to minors.

A spokesperson for Character.ai said the company will continue to fight the case, emphasizing that it uses safety filters to prevent conversations about self-harm. A Google spokesperson distanced the company from the app, stating: “Google and Character.ai are entirely separate.” They added, “Google did not create, design, or manage Character.ai’s app or any component part of it.”

Despite the defense's request to dismiss the case, Judge Conway allowed it to move forward, stating she is "not prepared" to determine that chatbot output qualifies as protected speech at this stage. She acknowledged, however, that users may have a right to receive the bots’ “speech.”

The case has reignited concerns about AI chatbot safety, especially when it comes to child users. Critics are now calling apps like Character.ai the “new mafia”, not because of violence, but because of the emotional grip they have on users, especially minors.

As lawsuits continue to mount and regulatory scrutiny grows, the tech world faces a moral reckoning. Are these AI chatbots harmless companions, or dangerous manipulators in disguise?
