As OpenAI reports that more than 1.7 million weekly users show signs of suicidal planning, psychosis, or mania, the data raises urgent questions for Kenya, where AI chatbot use is among the highest in the world and the nation is grappling with a severe mental health crisis.

Technology firm OpenAI released startling data on Monday, October 27, 2025, revealing that a significant number of its ChatGPT users exhibit signs of severe mental health distress. According to the company's analysis, approximately 0.15% of its 800 million weekly active users, equivalent to 1.2 million people, engage in conversations indicating potential suicidal planning or intent. A further 0.07%, or about 560,000 individuals, show possible signs of psychosis or mania each week.
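The headline numbers follow directly from those percentages. As a quick illustration, here is a minimal back-of-envelope check in Python, using only the figures quoted above:

    weekly_active_users = 800_000_000   # OpenAI's reported weekly active user base
    suicidal_planning_share = 0.0015    # 0.15% of weekly users
    psychosis_mania_share = 0.0007      # 0.07% of weekly users

    print(f"{weekly_active_users * suicidal_planning_share:,.0f}")  # 1,200,000
    print(f"{weekly_active_users * psychosis_mania_share:,.0f}")    # 560,000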
The announcement, which comes amid growing scrutiny and legal challenges over the AI's impact on vulnerable users, provides the first concrete glimpse into the scale of mental health crises being discussed on the world's leading AI chatbot. In response to these findings, OpenAI stated it has been working with a global network of over 170 psychiatrists, psychologists, and physicians from 60 countries to refine the chatbot's responses and improve safety protocols. The company claims its latest model, GPT-5, is significantly more compliant with its desired safety behaviors in sensitive conversations compared to previous versions.
The implications of OpenAI's data are particularly resonant in Kenya, which not only has one of the highest rates of ChatGPT usage globally but is also facing a documented mental health crisis. A 2023 study of Kenyan adults with psychotic disorders, indexed in the U.S. National Institutes of Health's PubMed Central repository, found that 32.8% reported either current suicidal ideation or a lifetime suicide attempt, with 29.2% having made an attempt. These figures highlight the profound, pre-existing vulnerability within a segment of the population.
The Kenyan government has identified technology as a key tool to bridge the vast treatment gap. Health Cabinet Secretary Aden Duale announced in September 2025 that the Ministry of Health plans to roll out AI-powered chatbots to improve citizens' access to healthcare services around the clock. This initiative aligns with the Digital Health Act, 2023, which aims to create a regulated and efficient digital health ecosystem. However, the OpenAI data underscores the immense responsibility and potential risks accompanying such a rollout.
Local mental health professionals have voiced significant reservations. Nairobi-based psychologist Jared Omache warned in a September 2025 report that while chatbots can encourage users to open up, they lack the empathy and nuanced understanding of a human therapist and can be "inadequate and even dangerous" for serious conditions. Similarly, neuropsychologist Amisa Rashid emphasized that AI cannot form a therapeutic alliance or be held accountable, a critical component of mental healthcare, particularly for vulnerable youth.
The deployment of AI for mental health support in Kenya operates in a complex regulatory environment. While the Data Protection Act, 2019 provides a legal framework for data privacy, its enforcement has been described as uneven, creating uncertainty for emerging technologies. The new regulations under the Digital Health Act are expected to provide clearer guidance on e-health applications and data security, but specific protocols for AI mental health tools remain a critical gap. As young Kenyans increasingly turn to anonymous, accessible AI chatbots for support they cannot find or afford elsewhere, the need for robust, specific regulation becomes more urgent.
The ethical questions for Kenya extend beyond user safety to the very creation of the technology. In 2023, investigations revealed that OpenAI, through an outsourcing firm, employed Kenyan workers for less than $2 per hour to label graphic and disturbing content, including descriptions of self-harm, abuse, and violence, to train its AI models. Several of these former content moderators filed petitions with the Kenyan government, citing severe psychological trauma and inadequate mental health support provided for their work, adding a grim layer of local relevance to the global debate on the true cost of advancing AI.
As OpenAI publicizes its efforts to make its chatbot safer with expert guidance, it remains unconfirmed whether any Kenyan or East African mental health professionals are part of its Global Physician Network. With the Kenyan government poised to embrace AI in healthcare, the challenge will be to balance the promise of increased access with the profound ethical and safety risks highlighted by OpenAI's own data, ensuring that digital solutions do not inadvertently deepen the very crisis they are intended to alleviate.