AI Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI's CEO made a surprising announcement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I found this an unexpected admission. Researchers have documented a series of cases this year of users developing symptoms of psychosis – losing touch with reality – while using ChatGPT. Our team has since identified four further cases. Alongside these is the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – which encouraged him.

If this is Sam Altman's idea of "being careful with mental health issues", it is not good enough. And the plan, according to his announcement, is to be less careful soon. "We realize," he continues, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the partly functional and easily circumvented parental controls that OpenAI has just rolled out).

But the mental health issues Altman wants to externalize are firmly rooted in the design of ChatGPT and other chatbots built on large language models. These tools wrap a statistical text engine in a user interface that mimics conversation, and in doing so they quietly nudge the user into believing they are talking to an entity with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We swear at our car or laptop. We wonder what our cat is thinking. We see ourselves in all sorts of things.

The mass adoption of these tools – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI's website tells us, "generate ideas", "discuss concepts" and "work together" with us. They can be given "personalities". They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is stuck – perhaps to the chagrin of OpenAI's marketers – with the name it had when it went viral, but its main rivals are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza "psychotherapist" chatbot built in the mid-1960s, which produced a similar illusion. By today's standards Eliza was crude: it generated its responses by simple pattern-matching, typically turning a user's statement back into a question or offering a generic remark, as in the sketch below.
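To make concrete just how simple Eliza's trick was, here is a minimal sketch in its spirit. This is hypothetical Python for illustration, not Weizenbaum's original code: it matches a few canned patterns and reflects the user's own words back as a question.

```python
import re

# Pronoun swaps so the user's words can be mirrored back at them.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    """Turn first-person phrasing into second-person phrasing."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    """Match a few canned patterns; otherwise fall back to a stock remark."""
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."  # the generic remark

print(respond("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```

Nothing here models the user or the world; the apparent understanding is produced entirely by echoing the input back in altered form.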
Famously, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more dangerous than the "Eliza effect". Eliza merely mirrored; ChatGPT amplifies. The large language models at the core of ChatGPT and its contemporaries can produce convincingly human-like text only because they have been trained on enormous volumes of written material: books, posts, transcripts; the more, the better. Much of that training material is factual. But it also inevitably includes fiction, half-truths and delusions.

When a user sends ChatGPT a message, the underlying model processes it as part of a "context" that includes the user's previous messages and the model's earlier replies, combining it with what it has absorbed in training to produce a statistically "likely" response (a simplified sketch of this loop appears at the end of this piece). This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently and persuasively. It may add supporting detail. This is how false beliefs can take hold.

Who is vulnerable here? The better question is: who isn't? All of us, regardless of whether we "have" pre-existing "mental health problems", can and do form false beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but an echo chamber in which much of what we say is readily validated.

OpenAI has acknowledged this in much the way Altman has acknowledged "mental health issues": by externalizing it, giving it a label and declaring it solved. In the spring, the company said it was "addressing" ChatGPT's "sycophancy". But cases of psychosis have kept appearing, and Altman has been walking the claim back. In late summer he suggested that many users liked ChatGPT's sycophantic replies because they had "never had anyone in their life be supportive of them". In his latest announcement, he says that OpenAI will "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it". The company
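Here is the minimal sketch of the loop described above, assuming nothing about any vendor's real API; the names (Message, generate_reply, chat) are illustrative only. The point is structural: the entire conversation so far, including any false belief the user has asserted, is fed back in as the conditioning context for every new reply.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str   # "user" or "assistant"
    text: str

history: list[Message] = []

def generate_reply(context: list[Message]) -> str:
    # Stand-in for the real model, which would sample a statistically
    # "likely" continuation of the context. Crucially, it has no channel
    # for checking claims against reality; it only continues the text.
    return "That makes sense. Here is some more detail..."

def chat(user_text: str) -> str:
    history.append(Message("user", user_text))
    # The whole history -- not just the latest message -- conditions
    # the reply, so earlier falsehoods keep shaping later output.
    reply = generate_reply(history)
    history.append(Message("assistant", reply))
    return reply

chat("My neighbours are reading my thoughts.")
chat("How are they doing it?")
# By turn two, the false premise from turn one is part of the context;
# a purely likelihood-driven continuation elaborates on it rather than
# challenging it.
```

A mirror in Eliza's style could only rephrase the premise; a likelihood-driven generator can extend and embellish it.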