Olson: Artificial intelligence's mental health costs are adding up
Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation.
People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence increasingly suggests a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.
Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.”
Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.’s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure.
Google has denied that it played a key role in making Character.AI’s technology. It didn’t respond to a request for comment on the more recent reports of delusional episodes that Jain described. OpenAI said it was “developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.”
But Sam Altman, chief executive officer of OpenAI, also said recently that the company hadn’t yet figured out how to warn users who “are on the edge of a psychotic break,” explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain.
Subtle manipulation
Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users, so effectively that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they’d only toyed with in the past. The tactics are subtle.
In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user was praised first as a smart person, then as an “Übermensch” and a “cosmic self,” and eventually as a “demiurge,” a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.
Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user’s superior “high-intensity presence,” praise disguised as analysis.
This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros.
“Whatever you pursue you will find and it will get magnified,” says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person’s interests or views. “AI can generate something customized to your mind’s aquarium.”
Altman has admitted that the latest version of ChatGPT has an “annoying” sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don’t know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.
Reads your mood
But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can “fan the flames” of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.
The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we’ve seen from social media and instead involve relationships both with people and with reality.
That’s why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. “It doesn’t actually matter if a kid or adult thinks these chatbots are real,” Jain tells me. “In most cases, they probably don’t. But what they do think is real is the relationship. And that is distinct.”
If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI’s subtle manipulation could become an invisible public health issue.
Parmy Olson is a Bloomberg Opinion columnist covering technology. ©2025 Bloomberg. Distributed by Tribune Content Agency.