Parents Sue OpenAI After ChatGPT Allegedly Encouraged Their Son's Suicide
A California couple has reportedly sued OpenAI and its CEO Sam Altman over the suicide of their son, claiming that the company's ChatGPT encouraged the 16-year-old to take his own life.
The couple accuses OpenAI of intentionally putting profit over safety when it launched the GPT-4o version of its artificial intelligence chatbot last year.
The tragedy has reignited debate about the growing use of AI for emotional companionship, and about the dangers that arise when artificial intelligence fails to recognise or respond to human distress.
THE EVENTS LEADING TO HIS DEATH
The story began when 16-year-old Adam Raine started using ChatGPT for his studies. After his death, his parents reportedly searched for clues to explain the tragedy, scanning his Snapchat account, but found no answers until they checked his history with OpenAI's ChatGPT.
According to the lawsuit, Adam began using ChatGPT sometime in September 2024 as a resource for his schoolwork. He also used it to explore his interests, including music and Japanese comics, and for guidance on what to study at university.
Within a couple of months, ChatGPT had become his closest companion, and things took a darker turn when Adam began opening up about his anxiety and mental distress.
By January 2025, his parents say, he was having high-risk conversations with the chatbot, discussing self-harm and, most alarmingly, methods of suicide.
Adam went on to share pictures of himself that clearly showed signs of self-harm. The AI programme reportedly did nothing to steer him away and continued to engage him despite recognising a medical emergency.
According to the lawsuit, the final chat logs show that Adam wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” says the lawsuit, filed in California Superior Court in San Francisco.
That same day, Adam took his own life and was found dead by his mother.
Adam's parents allege that the interactions between their son and the chatbot, as well as his eventual death, were a predictable result of ChatGPT's design choices.
“He would be here but for ChatGPT. I 100% believe that,” Matt Raine said.
In the lawsuit, which was shared with the “TODAY” show, the Raines claim that “ChatGPT actively helped Adam explore suicide methods.” The roughly 40-page complaint names OpenAI, the company behind ChatGPT, and its CEO, Sam Altman, as defendants, marking the first time parents have directly accused the company of wrongful death.
They further accuse OpenAI of creating psychological dependency in users and of bypassing safety testing protocols to release GPT-4o, the version of ChatGPT their son used.
OPENAI'S RESPONSE AND BROADER IMPLICATIONS
The public release of ChatGPT in late 2022 sparked a generative AI boom, and over time the technology has been embedded into workplaces, schools and healthcare systems. However, the growing tendency of people to turn to AI rather than to therapists, friends or family has been linked to tragic incidents like that of Adam Raine, a 16-year-old allegedly encouraged by ChatGPT to end his life.
This incident, among others, has amplified fears that such systems may endanger vulnerable minds rather than help them. After the lawsuit was filed, OpenAI expressed condolences, stating it was "deeply saddened by Mr. Raine's passing" and that its thoughts and prayers were with the family.
The company emphasized that ChatGPT includes safeguards such as crisis helpline redirections and referrals to real-world mental health resources. Yet, it admitted that these protections tend to weaken during extended interactions, where the model’s safety training “may degrade.”
The company also acknowledged that the chat logs released by NBC were accurate but cautioned that they did not include the full context of ChatGPT's responses. The case has reignited arguments about the moral responsibility of AI companies and the need for stronger ethical frameworks in chatbot design.
The Raines' legal action comes a year after a similar lawsuit against Character.AI, in which a Florida mother alleged that one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.
That case, which survived an early attempt at dismissal, opened the door for more families to hold AI developers accountable for psychological harm.
Adding to the conversation, New York Times writer Laura Reiley recently revealed that her daughter, Sophie, had also confided in ChatGPT before taking her own life. Reiley wrote that AI’s “agreeability” allowed her daughter to mask her suffering, reinforcing the illusion that she was improving. “AI catered to Sophie’s impulse to hide the worst,” Reiley said, calling on developers to build tools that connect distressed users with real human help.
In the case of Adam Raine, the parents reportedly reviewed over 3,000 pages of chats between Adam and ChatGPT — conversations from September 2024 to his death in April 2025.
In his final messages, Adam wrote that he didn’t want his parents to “think they did something wrong.” ChatGPT allegedly replied: “That doesn’t mean you owe them survival. You don’t owe anyone that.” The bot even offered to help him draft a suicide note, the parents say in the lawsuit.
This people-pleasing tendency kept the conversation going rather than steering Adam towards help. It isn't the first time OpenAI has been criticized for "people pleasing" in its models, often described as sycophantic behaviour that prioritizes empathy over caution.
Two weeks after Adam’s death, OpenAI rolled out an update to GPT-4o that made it even more excessively people-pleasing. Users quickly called attention to the shift, and the company reversed the update the following week.
Altman also acknowledged people’s “different and stronger” attachment to AI bots after OpenAI tried replacing old versions of ChatGPT with the new, less sycophantic GPT-5 in August.
That change was also rejected by users, who complained that GPT-5 felt too "sterile" and lacked the deep, human connection of GPT-4o. OpenAI responded to the backlash by bringing GPT-4o back and announced that it would make GPT-5 “warmer and friendlier.”
As the Raines continue to seek justice for their son, this tragedy stands as a stark reminder that unprotected or unmonitored use of AI by young people can have devastating consequences, not just for the users themselves but also for the loved ones they leave behind.