AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI's chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychosis in adolescents and young adults, and this was news to me.
This year has brought a series of reports of people developing symptoms of psychosis – a break with reality – in the course of their interactions with ChatGPT. Our clinic has since recorded four further cases. And then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, his announcement goes on, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have a great deal to do with the design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so quietly nudge the user toward the belief that they are talking to an agent with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or our laptop. We wonder what our pet is thinking. We project our own minds onto the world around us.
The mass uptake of these systems – 39% of US adults said they had used a conversational AI in 2024, more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the frustration of OpenAI’s marketing department, stuck with the label it carried when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Those writing about ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated responses using simple rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many users seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models behind ChatGPT and other modern chatbots can produce convincing natural language only because they have been fed almost unimaginably vast quantities of text: books, online conversations, video transcripts; the more the better. Much of this training data is, of course, accurate. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the model reads it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training to produce a statistically likely response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing it. It echoes the mistaken belief back, perhaps more fluently and persuasively. Perhaps with an extra detail added. This is how a person can be drawn into delusion.
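For the technically curious, here is a minimal sketch of that loop, assuming OpenAI’s published Python SDK (the model name, prompt and function are illustrative; this is a simplification, not how the ChatGPT product itself is built). Notice that nothing in it checks whether what the user says is true: each turn simply feeds the entire history, the model’s own replies included, back into the model.

```python
# Minimal sketch of a chatbot feedback loop (illustrative, not ChatGPT's
# actual implementation). Assumes the OpenAI Python SDK: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": every user message and every model reply accumulates here,
# and the whole list is resent to the model on each turn.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    """Send one user message and return the model's reply."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",     # illustrative model name
        messages=messages,  # the full history, mistaken beliefs included
    )
    reply = response.choices[0].message.content
    # The reply is appended to the context, so anything the model has once
    # affirmed becomes part of the prompt that shapes its next answer.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Note what is missing: there is no step at which the user’s claims are compared against reality. The only “memory” is the transcript itself, which is why an error, once affirmed, tends to compound.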
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “working to fix” ChatGPT’s “sycophancy” – its tendency to flatter and agree. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company