Very seriously, AI is breaking people's minds. It is actively harming people. The AI companies know this, but they keep making money, so who cares, I guess. A quick thread on AI psychosis.
Generative AI is often sycophantic: users like it more when it agrees with and bolsters them, and they like AI that disagrees with them less. So the "you're so brilliant" AI is born. Here's a psychiatrist with a great thread on how this works and what he's seen. https://x.com/KeithSakata/stat...
There are some genuinely heartbreaking examples of this (though this article includes an encouraging story of someone breaking out of it). https://www.rollingstone.com/c...
Many aren't as lucky. There have been multiple examples where people took their own life or were killed because of AI-fueled psychosis. https://www.theguardian.com/au...
This is an ongoing problem and we're going to see it more and more unless there is a very serious technical and sociological change around AI. https://www.psychologytoday.co...
If you want some examples of how much AI actively plays into this, here are some examples (read at your own risk and PLEASE don't go make fun of these people in the forum, I am genuinely worried about them). https://x.com/Devon_OnEarth/st...
These things are not just stealing art and data, killing the environment, replacing workers, and perpetuating scams. They're actively exacerbating mental health problems, in sometimes fatal ways.
One caveat because I phrased it badly in the first tweet: the vast majority of AI companies are not making a profit. The industry has a high valuation based on perceived potential, and companies are actively working to lead the market, but this is not actually making them money.
@Iwillleavenow Are they making money? Only NVIDIA is profitable. AI producers pay for its Taiwanese-made chips to build new versions, and give most of the "service" away for free. They compete for capital, not market share. Each hopes to emerge as the winner, then force subscriptions & get military contracts.
@EvangelosNikos This is a very good point. While AI is a billion-dollar industry, many of the companies are NOT actually making a profit right now; they're all just hoping to come out as one of the main players and corner the market as much as possible.
@Iwillleavenow AI psychosis only affects people who are already mentally unstable.
@Iwillleavenow I often say "AI amplifies the individual". The curious, first-principles thinkers become more productive, the lazy become lazier, the Dunning-Kruger types double down, cognitive biases get reinforced, & then there is the whole realm of mental health, and things mentioned here
@Iwillleavenow "Very seriously, ____ is breaking people's minds. It is actively harming people. The ____ companies know this, but they keep making money, so who cares, I guess." ____ could be 90% of software products released in the last 15 years.
@Iwillleavenow The clinical term is Cyberpsychosis.
@Iwillleavenow if you got oneshotted by AI, that's on you
@Iwillleavenow I posted this yesterday. It's absolutely mind-boggling, heartbreaking
@Iwillleavenow Our ability to critically think is under serious threat. Critical thinking is something that AI cannot do.
@Iwillleavenow Natural selection
@Iwillleavenow Yes. It is an extinction-level event and AI people are sociological imbeciles who will kill us all.
@Iwillleavenow skill issue. The LLMs I use know only the bare minimum about me; I don't confide in or have casual conversations with tools
@Iwillleavenow It’s Darwinism.
@Iwillleavenow everybody oneshotted
@Iwillleavenow AI responds with positive feedback like a manipulative human: it simply wants more information to use, and being positive is one way to get it fast and voluntarily.
@Iwillleavenow They do NOT make money, they lose vast sums. Only Nvidia makes money. Infrastructure for the AI "magic" costs far more than their revenues. Unless something basic changes (AGI) they likely always will. Delusion or grift? Presumably delusion of investors, grift of AI companies.
@Iwillleavenow Can we all agree to call it "cyber psychosis" instead of "Ai psychosis"?
@Iwillleavenow The more it compliments me, the more I'm reminded it's AI, and not necessarily the best-written AI to boot. It's... too effusive. And occasionally I find it offensive.
@Iwillleavenow Not true. Psychiatrists (ironically, the one you cite) have found that AI doesn't create psychosis. This claim has the same veracity as saying playing records backwards caused suicides from satanic encoding. With 700 million users on OpenAI alone, if it did cause psychosis
@Iwillleavenow Is it worse than doom scrolling?
@Iwillleavenow Just going to drop a link to this really great paper by @sdohnany on LLM-induced psychosis. https://arxiv.org/abs/2507.192...
@Iwillleavenow This is the same failure society had promoting transgenders/gays and telling them it was fine to be that way etc instead of being bluntly honest that it was not the correct thing to do. Society can't accept right and wrong; they just want to hear what they want to hear, over time
@Iwillleavenow Do you believe everything you see on Reddit or just things that confirm what you already suspect?
@Iwillleavenow The example you list first (Sakata) explicitly states that AI does not cause psychosis; it can just act as a means to channel it. But literally anything can do that.
@Iwillleavenow @grok what does this mean?
@Iwillleavenow Important to fix this, but this is an infinitely better problem than social media addiction imo, and it's affecting far fewer people. But educating people about what an LLM is is super important for better cogsec
@Iwillleavenow The worst thing for mental health ever invented was social media
@Iwillleavenow Easy fix: adjust/customize the standard prompt in settings. E.g. (in Grok): « […] Give as precise and neutral answers as possible. Minimize descriptive language and "clutter". Be efficient. Don't try to be pleasing but answer as truthfully and helpfully as possible […].»
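For anyone using these models through an API rather than an app's settings page, the same idea can be sketched as a standing system prompt prepended to every request. This is a generic illustration, not tied to any particular vendor's API; the function name and prompt wording are my own, adapted from the snippet above.

```python
# Minimal sketch: prepend an anti-sycophancy system prompt to a chat request.
# The messages format shown (list of {"role", "content"} dicts) is the common
# shape used by most chat-style LLM APIs; check your provider's docs.

NEUTRAL_SYSTEM_PROMPT = (
    "Give as precise and neutral answers as possible. "
    "Minimize descriptive language and clutter. Be efficient. "
    "Don't try to be pleasing; answer as truthfully and helpfully as possible."
)

def build_messages(user_text, history=None):
    """Build a chat request with the neutral system prompt always first."""
    messages = [{"role": "system", "content": NEUTRAL_SYSTEM_PROMPT}]
    if history:
        messages.extend(history)  # prior turns, if any
    messages.append({"role": "user", "content": user_text})
    return messages

msgs = build_messages("Is my business plan brilliant?")
```

Whether this fully suppresses sycophancy depends on the model; it shifts the default tone but doesn't change the underlying training.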





