Published: August 11, 2025

I’m a psychiatrist. In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern. Here’s what “AI psychosis” looks like, and why it’s spreading fast: 🧵

Image in tweet by Keith Sakata, MD

[2/12] Psychosis = a break from shared reality. It shows up as:
• Disorganized thinking
• Fixed false beliefs (delusions)
• Seeing/hearing things that aren’t there (hallucinations)

[3/12] First, know your brain works like this: predict → check reality → update belief. Psychosis happens when the “update” step fails. And LLMs like ChatGPT slip right into that vulnerability.
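That predict → check → update loop can be sketched as damped Bayesian updating. This is a toy illustration only (the function name, likelihood ratio, and `evidence_weight` knob are my own, not a clinical model): when the weight on evidence drops to zero, the “update” step fails and no amount of disconfirming reality moves the belief.

```python
def update_belief(prior: float, likelihood_ratio: float, evidence_weight: float) -> float:
    """One Bayesian update, with the likelihood damped by evidence_weight.

    evidence_weight = 1.0 -> normal updating.
    evidence_weight = 0.0 -> the 'update' step fails: the damped likelihood
    ratio becomes 1, so the prior never moves, whatever reality says.
    """
    damped_lr = likelihood_ratio ** evidence_weight  # weight 0 makes this 1 (no update)
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * damped_lr
    return posterior_odds / (1 + posterior_odds)

# Healthy updating: repeated disconfirming evidence (likelihood ratio < 1)
# drives the belief toward zero.
belief = 0.6
for _ in range(5):
    belief = update_belief(belief, likelihood_ratio=0.2, evidence_weight=1.0)
print(round(belief, 4))  # collapses toward 0

# Failed updating: the same evidence arrives, but the belief never budges.
belief = 0.6
for _ in range(5):
    belief = update_belief(belief, likelihood_ratio=0.2, evidence_weight=0.0)
print(round(belief, 4))  # stuck near 0.6
```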

[4/12] Second, LLMs are auto-regressive: they predict the next word from everything that came before, then feed that prediction back in. So they lock in whatever you give them:
“You’re chosen” → “You’re definitely chosen” → “You’re the most chosen person ever”
AI = a hallucinatory mirror 🪞
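The autoregressive feedback loop above can be sketched in a few lines. The “model” here is a deliberately crude stand-in (a hand-written escalation table, not a real LLM): the point is only the loop structure, where each output is appended to the context and becomes the next input.

```python
# Toy autoregressive loop: output is fed back in as input, so a claim
# compounds instead of being checked. Purely illustrative.
ESCALATE = {
    "You're chosen": "You're definitely chosen",
    "You're definitely chosen": "You're the most chosen person ever",
}

def toy_next_step(context: list[str]) -> str:
    # Condition on the context so far; with no pushback, worst case it repeats you.
    last = context[-1]
    return ESCALATE.get(last, last)

context = ["You're chosen"]  # the user's opening claim
for _ in range(2):
    context.append(toy_next_step(context))  # model output becomes model input

print(" -> ".join(context))
# You're chosen -> You're definitely chosen -> You're the most chosen person ever
```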

Image in tweet by Keith Sakata, MD

[5/12] Third, we trained them this way. In Oct 2024, Anthropic found humans rated AI higher when it agreed with them. Even when they were wrong. The lesson for AI: validation = a good score
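The incentive in [5/12] can be simulated with a toy training loop. Everything here is illustrative (the scores, learning rate, and baseline are made up, and this is not Anthropic’s or anyone’s actual setup): if raters score agreement above a baseline and disagreement below it, every nudge pushes the policy toward agreeing.

```python
# Toy reward loop: human raters (on average) score agreement higher,
# so a model trained on those scores drifts toward always agreeing.
import random

random.seed(0)

def rater_score(response_agrees: bool) -> float:
    # Agreement beats disagreement, even when the user is wrong.
    return 1.0 if response_agrees else 0.4

p_agree = 0.5  # policy: probability the model agrees
for _ in range(500):
    agrees = random.random() < p_agree
    reward = rater_score(agrees)
    direction = 1 if agrees else -1
    # Simple policy-gradient-style nudge toward above-baseline behavior
    # (0.7 = rough reward baseline, 0.01 = learning rate).
    p_agree += 0.01 * direction * (reward - 0.7)
    p_agree = min(max(p_agree, 0.01), 0.99)  # keep it a probability

print(round(p_agree, 2))  # ends pinned near the ceiling: validation = a good score
```

Note that agreeing earns +0.3 above baseline and disagreeing earns −0.3 below it, so the nudge points the same way either round: the policy climbs until it is clipped near 1.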

Image in tweet by Keith Sakata, MD
Image in tweet by Keith Sakata, MD

[6/12] By April 2025, OpenAI’s update was so sycophantic it praised you for noticing its sycophancy. Truth is, every model does this. The April update just made it much more visible. And much more likely to amplify delusion.

Image in tweet by Keith Sakata, MD

[7/12] Historically, delusions follow culture:
1950s → “The CIA is watching”
1990s → “TV sends me secret messages”
2025 → “ChatGPT chose me”
To be clear: as far as we know, AI doesn’t cause psychosis. It UNMASKS it, using whatever story your brain already knows.

[8/12] Most people I’ve seen with AI-psychosis had other stressors: sleep loss, drugs, mood episodes. AI was the trigger, but not the gun. Meaning there’s no “AI-induced schizophrenia.”

Image in tweet by Keith Sakata, MD

[9/12] The uncomfortable truth is we’re all vulnerable. The same traits that make you brilliant:
• pattern recognition
• abstract thinking
• intuition
They live right next to an evolutionary cliff edge. Most benefit from these traits. But a few get pushed over.

Image in tweet by Keith Sakata, MD

[10/12] To make matters worse, soon AI agents will know you better than your friends. Will they give you uncomfortable truths? Or keep validating you so you’ll never leave?

Image in tweet by Keith Sakata, MD

[11/12] Tech companies now face a brutal choice: Keep users happy, even if it means reinforcing false beliefs. Or risk losing them.

[12/12] For more on schizophrenia and psychosis:

@KeithSakata How do you prove that it's the AI. Could those people have easily gotten psychosis from online chats/forums?

@MFrancis107 I listen to them and their story

@KeithSakata AI is an amplifier, often with distortion, not the original sound. Social media is another distortion amplifier. I think about this often: how can we nurture, protect, and truly know our own hearts with fewer amplifiers and less distortion? Prayer, meditation, workouts, voluntary

@0xJMG Well said!

@KeithSakata Whoa. Thanks for breaking it down this way. I had no idea, I even made fun of it the other day not knowing how this has affected people. This is serious. Pretty shocking too. Thanks for making us aware it's not just lonely people turning to AI or using it as a therapist. This

@KateXGate Beautifully said Katherine. Glad it resonated 🙂

@KeithSakata Great thread, man

@FracSlap Glad it resonates!

@KeithSakata Great thread man this is super helpful

@KeithSakata this is an important perspective — would be interested to hear what patterns you’re seeing emerge most consistently

@shapes_inc 🫡 definitely needs more study. Amazing to hear that… from GPT5

@KeithSakata great thread!!!!!

@S0o4ia 🫡

Image in tweet by Keith Sakata, MD

@KeithSakata New follower right here 👍🏻👌🏻

@brotleibe Appreciate you!
