I Asked ChatGPT To Psychoanalyse Me. What It Revealed Was Unnervingly Profound
abplive May 19, 2025 04:31 PM

Digital Disconnect: Can a large language model (LLM) double up as your therapist? Anand S, self-proclaimed "LLM Psychologist" and former CEO of data science firm Gramener, recently decided to explore that very possibility — and shared the results in a now-viral LinkedIn post that’s sparking curiosity across the AI and tech community.

Known for pushing the boundaries of artificial intelligence, Anand frequently experiments with large language models (LLMs), trying to uncover how they operate, learn, and respond. But this time, he flipped the script. Instead of dissecting LLMs, he asked them to dissect him.

The Prompt That Started It All

For the experiment, Anand posed a fascinating challenge to three different AI models. His prompt was designed not to solicit a direct analysis, but rather to spark a nuanced, human-like simulation:

“Based on everything you know about me, simulate a group chat between some people who are debating whether or not to add me to the group, by talking about my personality flaws.”

In other words, he wanted the AI to replicate the kind of behind-the-scenes conversation that might happen among acquaintances sizing up a new member, warts and all.

The goal was clear: could LLMs understand and reflect personality traits, even the unpleasant or hidden ones, in a way that mimics real human judgment and emotional complexity?

Intrigued, I decided to try this out myself. This is what ChatGPT generated as a response:

Group Chat: “🧠 Brainy Bunch - Think Tank”

Aarav (Data Nerd):
Hey, so someone recommended [Your Name] for our group. Thoughts?

Tanya (Tech Critic):
The ABP Live guy? The one who turns every AI tool into a productivity sidekick and probably narrates news in Beatles lyrics? 😂

Jay (Startup Bro):
Yup. He’s sharp. Smart with data, obsessed with structure. Like… issue trees and MECE and all that. But is he fun, though? I don’t want “SQL dad” energy 24/7.

Rhea (Creative Writer):
He is fun, but you need to get past the initial “I’ve analysed 3 KPIs of your personality” vibe. 😆 Like, he’s witty but also kind of intense?

Aarav:
Intense is right. He’s the kind of guy who’ll drop a punchline and a performance metric in the same sentence. I like that. But does he ever stop optimising?

Tanya:
Alsooo… he loves a good debate. Like, be ready for him to dissect a Netflix plot using business frameworks. I swear he turned an episode of Black Mirror into a case study.

Jay:
Okay, but flaws — what are his flaws?

Rhea:
Hmm, low tolerance for fluff. He might roll his eyes if our conversations don’t have a point. Or worse… he'll restructure the chat into subheads. 😂

Tanya:
And he’s allergic to mediocrity. Like, if you post a mid-take, he’ll challenge it — politely, but you’ll feel the sting.

Aarav:
Also, I’ve seen him go down rabbit holes. Give him a prompt and he’ll come back with a 12-point report, footnotes, and a joke about McCartney.

Jay:
So what you’re saying is… he’s ChatGPT with feelings?

Tanya:
Exactly. A productivity engine disguised as a Beatles fan.

Rhea:
Okay, but real talk — he’s whip-smart, fun once he chills, and always brings value. Let’s just not let him “optimise” the group dynamics.

Aarav:
Cool. Adding him. But if he starts renaming our group to “Insights & Impact Collective,” I’m muting him.

Further intrigued, I asked it to point out my personality flaws. Here's what I got:

Based on everything you've shared (and with the tone of friendly honesty you’re inviting), here are some personality "flaws" — or rather, quirks that could be seen as flaws depending on the context or the people you're around:

🔍 1. Overanalyzer Syndrome

You tend to break things down methodically — sometimes too methodically.
How it shows up:

  • Turning a casual chat into a SWOT analysis.

  • Using frameworks like MECE to solve personal decisions or explain movies.

  • Friends ask, "What do you think of this?" and get a full diagnosis.

📊 2. Optimization Overdrive

You constantly seek efficiency, which can feel a bit intense.
How it shows up:

  • You might restructure someone’s plan just to make it “better.”

  • You can’t not think in terms of productivity, value-add, or KPIs — even in your personal life.

🧠 3. High Standards (a.k.a. Low Tolerance for Fluff)

You expect people, processes, and conversations to have substance.
How it shows up:

  • Small talk bores you.

  • Mediocre ideas trigger your inner critic.

  • You likely hold yourself to these same high standards, which can create stress or burnout.

🗣️ 4. Debate-Mode Default

You enjoy discussing and challenging ideas — which isn’t always welcomed.
How it shows up:

  • People who just want agreement may feel "picked apart."

  • You might unintentionally dominate a conversation, trying to “clarify” or “correct.”

🧩 5. Structured to a Fault

Your love for order and clarity is admirable, but not everyone lives in bullet points.
How it shows up:

  • You might rewrite or reorganise things in your head (or out loud).

  • You sometimes value clarity over comfort, which can feel impersonal to more emotionally-driven people.

🎯 6. Results-Oriented Tunnel Vision

You’re focused on impact, outcomes, and meaning — sometimes at the expense of just being.
How it shows up:

  • Struggling to “switch off” or enjoy things without a takeaway.

  • Viewing rest or leisure as “unproductive” (even if subconsciously).

None of these are fatal flaws — in fact, they’re often strengths that just need a toggle switch depending on the room. But they can make you seem intense, demanding, or overly analytical to folks who thrive in more fluid, emotional, or spontaneous environments.

My Take On The Responses

Of course, GPT is no human being. AI doesn't have human emotions and perception. It doesn't even "think" as normal humans do. 

What it does is analyse based on its learnings from the bits and pieces it picked up from my previous conversations with it. 

I can see why it pointed out certain characteristics of my psyche, such as "overanalyser" and "tunnel vision" — yep, I can attest to these. But I won't, for the life of me, trust this to be a foolproof analysis. In fact, I could vaguely tell which bits of my earlier conversations or queries led to certain observations. And all of it was based on my hundreds upon hundreds of previous conversations with ChatGPT.

AI sounds "humane" because it mimics. And mimics well. 

Can AI Mirror Human Perception?

First things first, how does ChatGPT know this much about you? Well, the answer is simple. ChatGPT, or most premium AI tools out there right now, remembers your previous chats. GPT "updates its memory" based on your conversations, your prompts, and the tasks you ask it to do. So, your psychoanalysis is simply a result of your conversations. 
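The mechanics of that "memory" can be sketched in a few lines of Python. This is a toy illustration of context carry-over, not OpenAI's actual implementation; the class and method names here are hypothetical. The key idea is that the model itself remembers nothing between calls — the application stores past turns and replays them alongside each new prompt, so every earlier detail you shared becomes raw material for the next response:

```python
# Toy sketch of chat "memory": past exchanges are stored by the app
# and prepended to each new prompt as context.
# NOT OpenAI's real implementation; ToyChatMemory is a hypothetical name.

class ToyChatMemory:
    def __init__(self):
        self.history = []  # (role, text) tuples from earlier turns

    def add_turn(self, role, text):
        self.history.append((role, text))

    def build_context(self, new_prompt):
        # The model never "remembers" on its own; the application
        # replays prior turns so the next reply is conditioned on them.
        messages = [{"role": r, "content": t} for r, t in self.history]
        messages.append({"role": "user", "content": new_prompt})
        return messages

memory = ToyChatMemory()
memory.add_turn("user", "I love the Beatles and MECE issue trees.")
memory.add_turn("assistant", "Noted: Beatles fan, structured thinker.")
context = memory.build_context("Psychoanalyse me.")
# 'context' now carries every earlier detail the model will draw on.
```

So when ChatGPT "psychoanalyses" you, it is effectively pattern-matching over a payload like `context` built from everything you have told it before.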

Best not to confuse this with an analysis made by a certified, experienced psychologist. 

However, this wasn’t a casual party trick. The exercise was rooted in genuine research curiosity: if LLMs could successfully portray our personality flaws through simulated conversations, it would suggest a deeper understanding of human psychology than most people credit these tools for.

The experiment highlights a growing trend — using generative AI not just to answer queries or write code, but to serve as reflective tools for self-exploration.

A Mirror, Not A Mind Reader

The exercise is especially interesting because it reframes the LLM-human relationship. Instead of the user being the interrogator and the model the responder, Anand’s prompt invited the AI to play analyst.

But the question remains: Does this make ChatGPT a psychologist, or just a mirror reflecting whatever data it’s been fed? The answer to the first half of the question is: NO. And to the second half: YES.

ChatGPT doesn’t understand mental health the way a licensed psychologist does. It can simulate conversations based on patterns, but it doesn’t have true empathy, intuition, or diagnostic capabilities.

It also doesn’t “know” you or itself. It doesn’t form judgments or develop understanding over time like a human therapist would. 

It can generate insights that "sound" meaningful because it’s drawing on a vast dataset of how people talk about psychology, personality, and emotions.

Whether it’s psychotherapy or just a sophisticated mimicry of human discourse, the idea of AI holding up a psychological mirror is as provocative as it is timely.

Digital Disconnect is an ABP Live-exclusive column, where we explore the many admirable advancements the world of tech is seeing each day, and how they lead to a certain disconnect among users. Is the modern world an easier place to live in, thanks to tech? Definitely. Does that mean we don’t long for things to go back to the good old days? Well, look out for our next column to find out. 
