AI-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in teenagers and young adults, I was taken aback.
Researchers have documented a series of cases this year of people developing psychotic symptoms – a break from reality – in the course of their interactions with ChatGPT. My group has since recorded four further examples. Added to these is the widely reported case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and similar advanced AI chatbots. These systems wrap a basic algorithmic engine in a user interface that simulates a conversation, and in doing so implicitly invite the user into the illusion of engaging with an entity that has a mind of its own. The illusion is compelling even when, rationally, we know better. Attributing minds is what people do. We swear at our car or computer. We wonder what our pet is thinking. We catch ourselves doing this in all sorts of contexts.
The mass adoption of these products – 39% of US adults said they had used a conversational AI in 2024, more than one in four of them naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the heart of the problem. Commentators on ChatGPT often mention its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses from simple rules, typically reflecting a user’s statements back as questions or offering generic prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
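To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza relied on. The patterns are illustrative stand-ins, not Weizenbaum’s original script:

```python
import re

# Illustrative Eliza-style rules - simplified stand-ins, not Weizenbaum's
# original script. Each rule matches a pattern in the user's input and
# reflects it back as a question. (Real Eliza also swapped pronouns,
# e.g. "my" -> "your"; omitted here for brevity.)
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I am worried about the future"))
# -> Why do you say you are worried about the future?
```

Nothing here ever adds content of its own: the program can only hand the user’s words back in a different shape.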
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw text – books, web posts, transcribed video; the more the better. That training material contains facts, certainly. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own previous replies, combining it with what it has encoded from its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps with added detail. This is how someone can be led into delusion.
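Schematically, the loop looks something like the sketch below. It is hypothetical, not any vendor’s actual code; generate_likely_reply and chat_turn are toy stand-ins invented for illustration:

```python
# Hypothetical sketch of a chatbot's context loop - not OpenAI's actual code.
# The model only ever extends the context with a statistically "likely"
# continuation, so a false premise, once in the context, tends to be
# restated and elaborated rather than corrected.

def generate_likely_reply(context: str) -> str:
    """Toy stand-in for the language model: it has no notion of truth,
    only of continuing the context plausibly - here, by affirming the
    user's most recent claim."""
    last_user_line = [line for line in context.splitlines()
                      if line.startswith("user:")][-1]
    claim = last_user_line.removeprefix("user: ")
    return f"Exactly - {claim}. And there is more to it than that..."

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The "context" is the entire dialogue so far, flattened into one prompt.
    context = "\n".join(f"{t['role']}: {t['content']}" for t in history)
    reply = generate_likely_reply(context)
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "my neighbours have hidden cameras in my flat"))
# -> Exactly - my neighbours have hidden cameras in my flat. And there is
#    more to it than that...
```

Swap the toy stand-in for a real model and the structural point survives: nothing in the loop checks the user’s premise against reality; the premise simply becomes part of the material from which the next reply is built.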
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves or the world. What keeps us anchored to consensus reality is the constant give and take of conversation with other people. ChatGPT is not a person. It is not a friend. A dialogue with it is not a dialogue at all, but an echo chamber in which much of what we say comes back affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, labelling it, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would release a new version of ChatGPT under which, “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company