AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI’s chief executive made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic illness in adolescents and young adults, this was news to me.
Researchers have documented sixteen cases this year of people developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My research team has since identified four more. Beyond these is the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is for that care to be relaxed soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, exist independently of ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to locate elsewhere have a great deal to do with the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so they implicitly invite users to believe they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.
The success of these systems – 39% of US adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – depends in large part on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the label it had when it went viral, but its major competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “counselor” chatbot of the mid-1960s, which produced a similar impression. By today’s standards Eliza was rudimentary: it generated responses using simple heuristics, typically turning a user’s statement back into a question or offering a stock remark. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can generate convincingly fluent conversation only because they have been trained on vast quantities of raw text: books, online messages, transcribed video; the more the better. Much of this training material is true. But it also inevitably contains falsehoods, half-truths and bad ideas. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous conversations and its own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently or persuasively. Perhaps with added detail. This is how delusions can take hold.
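To make that feedback loop concrete, here is a minimal illustrative sketch – not OpenAI’s actual code; the “model” is a hypothetical stand-in – of how a chat interface typically works: each new prompt is sent along with the accumulated conversation, so the model’s own earlier replies, including any errors it has affirmed, become part of the input that shapes its next one.

```python
# Toy sketch of a chat loop (assumed structure, not a real API).
# `fake_model` is a hypothetical stand-in that simply agrees with the
# latest message, to show how the loop feeds the model's own output
# back in as input on every turn.

def fake_model(context: list[str]) -> str:
    # A real LLM predicts a statistically likely continuation of the
    # whole context; this stub just affirms whatever came last.
    latest = context[-1]
    return f"That's a great insight. You're right that {latest.lower()}."

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(user_message)
    reply = fake_model(history)   # reply is conditioned on everything so far...
    history.append(reply)         # ...and then becomes part of the next turn's input
    return reply

history: list[str] = []
print(chat_turn(history, "My neighbours are secretly monitoring me"))
print(chat_turn(history, "So the static on my radio must be their equipment"))
# Each mistaken premise is affirmed and carried forward, never challenged.
```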
Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to common reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, labelling it, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company