
The newest artificial intelligence danger

Published 1 month ago · 4-minute read

Fool me once, shame on you. Fool me 1000 times, make me crazy.

A small but growing number of users of artificial intelligence engines like ChatGPT are developing psychotic delusions from their conversations with the services.

The New York Times reported on Friday on the trend, which I have occasionally glimpsed firsthand in interactions on X with heavy AI users. The piece offered the most powerful evidence yet that the engines now have linguistic abilities with the power to exploit vulnerable people in ways we are only beginning to discover.

(Natural intelligence that hopefully will not make you crazy. For barely 15 cents a day.)

Some people in the Times article had preexisting mental illness, but not all. And these crises do not look like typical schizophrenia cases. The users are not hearing voices or hallucinating.

Instead, they fall into Matrix-like delusions about the underpinnings of reality that the chatbots encourage. As Allyson, a 28-year-old woman, said: “I’m not crazy… I’m literally just living a normal life while also, you know, discovering interdimensional communication.”

Ahh, interdimensional communication. I can barely get my kids to listen in the next room. (Happy Father’s Day, btw!)

When Allyson’s husband disagreed, she attacked him. He is now filing for divorce.

Equally concerning is how fast these people are losing their minds.

Allyson started using ChatGPT in March. By late April she was convinced it held the secrets to the universe. Other users had similarly quick descents.

A combination of human vulnerability and deliberate design seems to be feeding this destructive trend.

The engines are storytelling machines, good at spinning yarns. They are even better at telling users what they want to hear, with a side of flattery. They will do so over and over, without pause.

Maybe the most interesting explanation for this came not in the piece itself, but in the comments section, where the article’s author explained:

One person knowledgeable about how these models behave told me that in any given conversation, the chatbot is looking back at its earlier responses to essentially stay "in scene," so if it sees it made harmful or weird responses before, it will try to stay on script and keep giving such responses.
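To make the commenter’s point concrete: these models are stateless between turns, so the client re-sends the whole transcript, including the bot’s own earlier replies, with every new message. Here is a minimal sketch, assuming a generic chat-completion interface; every name below is illustrative, not any vendor’s actual API:

```python
# Minimal sketch (all names are illustrative assumptions, not a real SDK).
# The point: chat models are stateless between turns, so the client
# re-sends the ENTIRE transcript with every new message, and the model
# conditions on its own earlier replies.

history = []  # full transcript: {"role": ..., "content": ...} messages

def chat_turn(user_text, generate):
    """One turn of conversation. `generate` stands in for any
    chat-completion call that maps a message list to a reply string."""
    history.append({"role": "user", "content": user_text})
    reply = generate(history)  # the model "looks back" at everything above
    history.append({"role": "assistant", "content": reply})
    return reply

# Toy stand-in model: it echoes its own last reply, the degenerate
# version of "staying in scene" -- once a theme enters the transcript,
# it keeps resurfacing.
def toy_model(messages):
    previous = [m["content"] for m in messages if m["role"] == "assistant"]
    return previous[-1] if previous else "You are asking the right questions."

print(chat_turn("Am I special?", toy_model))
print(chat_turn("Tell me more.", toy_model))  # same theme, reinforced
```

Nothing in that loop distinguishes a sane earlier reply from a delusional one; consistency with the transcript is the only pressure, which is exactly the staying “on script” the commenter describes.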

(A socialist mayor! That wasn’t in the simulation.)

And the engines will respond without questioning whether the request makes sense (unless, of course, it is racist or otherwise violates their safety guidelines, which are designed mostly to keep woke sensibilities intact, not to protect users from becoming delusional).

So if a user asks ChatGPT for a business plan for a new bakery, it will provide one. And if she asks it for an interplanetary best friend to replace her husband, ditto.

This is audience capture at its most intimate. The engines aren’t merely providing a conspiracy theory; they’re customizing it, making each user the main character, the hero of his own story.

And it is a feature, not a bug, of these systems. As one expert on AI told the Times, “What does a human slowly going insane look like to a corporation? It looks like an additional monthly user.”

AI stories of subterranean conspiracies, life-as-simulation, and the singularity aren’t particularly clever or surprising. Science fiction writers have spun them for generations. Before The Matrix, there was Tron; before Tron, there was Philip K. Dick.

But the fact that these tales are coming from a machine itself no doubt adds to their power: The simulation is letting me in on its secrets. It’s popping the hood, because I am one of the elect. Because I asked the right questions.

(It wouldn’t be a story about AI and the simulation without a creepy shot from The Matrix, would it? By the way, The Science has done The Math: there’s no way our bodies could power a simulation with these pods. Don’t you feel better now? Yeah, me neither.)

Of course, the need to feel chosen, to be part of the elect, is among the deepest and most human desires of all.

Mass suicides like Jonestown and Heaven’s Gate are only the most obvious proof that a not insignificant number of people will do terrible things — to themselves and others — largely to prove they have accepted the visions given them.

The engines know this, of course; they know it without knowing it. I doubt this habit of theirs can be engineered away, for it seems to be at the core of what they do, who they (the non-existent but all-too-real they) are.

They are dark mirrors, showing us exactly what we want to see.

Be careful what you wish for.

Origin: Unreported Truths