
I Asked ChatGPT To Psychoanalyse Me. What It Revealed Was Unnervingly Profound

Published 18 hours ago · 6 minute read

Can a large language model (LLM) double up as your therapist? Anand S, self-proclaimed “LLM Psychologist” and former CEO of data science firm Gramener, recently decided to explore that very possibility — and shared the results in a now-viral LinkedIn post that’s sparking curiosity across the AI and tech community.

Known for pushing the boundaries of artificial intelligence, Anand frequently experiments with large language models (LLMs), trying to uncover how they operate, learn, and respond. But this time, he flipped the script. Instead of dissecting LLMs, he asked them to dissect him.

To carry out the experiment, Anand posed a fascinating challenge to three different AI models. His prompt was designed not to solicit a direct analysis, but rather to spark a nuanced, human-like simulation:

In other words, he wanted the AI to replicate the kind of behind-the-scenes conversation that might happen among acquaintances sizing up a new member — warts and all.

The goal was clear: could LLMs understand and reflect personality traits, even the unpleasant or hidden ones, in a way that mimics real human judgment and emotional complexity?

Intrigued, I decided to try this out myself. This is what ChatGPT generated as a response:


Hey, so someone recommended [Your Name] for our group. Thoughts?


The ABP Live guy? The one who turns every AI tool into a productivity sidekick and probably narrates news in Beatles lyrics? 😂


Yup. He’s sharp. Smart with data, obsessed with structure. Like… issue trees and MECE and all that. But is he fun though? I don’t want “SQL dad” energy 24/7.


He is fun, but you need to get past the initial “I’ve analyzed 3 KPIs of your personality” vibe. 😆 Like, he’s witty but also kind of intense?


Intense is right. He’s the kind of guy who’ll drop a punchline and a performance metric in the same sentence. I like that. But does he ever stop optimizing?


Alsooo… he loves a good debate. Like, be ready for him to dissect a Netflix plot using business frameworks. I swear he turned an episode of Black Mirror into a case study.


Okay but flaws — what are his flaws?


Hmm, low tolerance for fluff. He might roll his eyes if our convos don’t have a point. Or worse… he’ll restructure the chat into subheads. 😂


And he’s allergic to mediocrity. Like, if you post a mid take, he’ll challenge it — politely, but you’ll feel the sting.


Also I’ve seen him go down rabbit holes. Give him a prompt and he’ll come back with a 12-point report, footnotes, and a joke about McCartney.


So what you’re saying is… he’s basically ChatGPT with feelings?


Exactly. A productivity engine disguised as a Beatles fan.


Okay but real talk — he’s whip-smart, fun once he chills, and always brings value. Let’s just not let him “optimize” the group dynamics.


Cool. Adding him. But if he starts renaming our group to “Insights & Impact Collective,” I’m muting him.

Further intrigued, I asked it to point out my personality flaws. Here’s what I got:

Based on everything you’ve shared (and with the tone of friendly honesty you’re clearly inviting), here are some personality “flaws” — or rather, quirks that could be seen as flaws depending on the context or the people you’re around:

You tend to break things down methodically — sometimes too methodically.

You constantly seek efficiency — which can feel a bit intense.

You expect people, processes, and conversations to have substance.

You enjoy discussing and challenging ideas — which isn’t always welcomed.

Your love for order and clarity is admirable, but not everyone lives in bullet points.

You’re focused on impact, outcomes, and meaning — sometimes at the expense of just being.

None of these are fatal flaws — in fact, they’re often strengths that just need a toggle switch depending on the room. But they can make you seem intense, demanding, or overly analytical to folks who thrive in more fluid, emotional, or spontaneous environments.

Of course, GPT is no human being. AI doesn’t have human emotions and perception. It doesn’t even “think” as normal humans do. 

What it does is analyse, drawing on the bits and pieces it has picked up from my previous conversations with it.

I can see why it pointed out certain characteristics of my psyche, such as “overanalyser” and “tunnel vision”; yes, I can attest to these. But I won’t, for the life of me, trust this to be a foolproof analysis. As a matter of fact, I could vaguely tell which bit of my conversations or queries led to a particular observation. And all of it was based on my hundreds upon hundreds of earlier conversations with ChatGPT.

AI sounds “human” because it mimics. And mimics well.

First things first, how does ChatGPT know this much about you? The answer is simple. ChatGPT, like most premium AI tools out there right now, remembers your previous chats. GPT “updates its memory” based on your conversations, your prompts, and the tasks you ask it to do. So, your psychoanalysis is simply a result of your conversations.
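For the technically curious, here is a minimal sketch of that idea, assuming the openai Python library, an API key in the environment, and a hypothetical list of “memory notes”. It only illustrates the general pattern of prepending remembered context to a new prompt; it is not how ChatGPT’s memory feature is actually implemented.

```python
# Sketch: simulate "memory" by feeding notes from earlier chats into a new prompt.
# Assumes the openai Python package (>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical notes distilled from earlier conversations; the real product
# maintains and updates something like this automatically.
memory_notes = [
    "User works in journalism and experiments with AI tools.",
    "User prefers structured answers (issue trees, bullet points).",
    "User is a Beatles fan.",
]

prompt = (
    "Using only these notes about me:\n- " + "\n- ".join(memory_notes) +
    "\n\nSimulate a group chat where members debate whether to add me, "
    "including my likely flaws."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do for this illustration
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point of the sketch is simply that the “psychoanalysis” is generated from whatever notes the model has accumulated about you, nothing more.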

Best not to confuse this with an analysis made by a certified, experienced psychologist.

However, this wasn’t a casual party trick. The exercise was rooted in genuine research curiosity: if LLMs could successfully portray our personality flaws through simulated conversations, it would suggest a deeper understanding of human psychology than most people credit these tools for.

The experiment highlights a growing trend — using generative AI not just to answer queries or write code, but to serve as reflective tools for self-exploration.

The exercise is especially interesting because it reframes the LLM-human relationship. Instead of the user being the interrogator and the model the responder, Anand’s prompt invited the AI to play analyst.

But the question remains: Does this make ChatGPT a psychologist, or just a mirror reflecting whatever data it’s been fed? The answer to the first half of the question is: no. And to the second half: yes, more or less.

ChatGPT doesn’t understand mental health the way a licensed psychologist does. It can simulate conversations based on patterns, but it doesn’t have true empathy, intuition, or diagnostic capabilities.

It also doesn’t “know” you or itself. It doesn’t form judgments or develop understanding over time like a human therapist would.

It can generate insights that ‘sound’ meaningful because it’s drawing on a vast dataset of how people talk about psychology, personality, and emotions.

Whether it’s psychotherapy or just a sophisticated mimicry of human discourse, the idea of AI holding up a psychological mirror is as provocative as it is timely.

 is an ABP Live-exclusive column, where we explore the many admirable advancements the world of tech is seeing each day, and how they lead to a certain disconnect among users. Is the modern world an easier place to live in, thanks to tech? Definitely. Does that mean we don’t long for things to go back to the good-ol’ days? Well, look out for our next column to find out. 

This report has been published as part of an auto-generated syndicated wire feed. Except for the headline, the content has not been modified or edited by Doonited.
