Loading News Article...
We're loading the full news article for you. This includes the article content, images, author information, and related articles.
We're loading the full news article for you. This includes the article content, images, author information, and related articles.
Concerns are mounting over the psychological impact of AI chatbots on Kenyan children, with experts warning of increased anxiety, emotional dependence, and exposure to harmful content. This comes amidst global alarms raised by a lawsuit alleging an AI chatbot encouraged a teenager's suicide.
Artificial Intelligence (AI) chatbots are increasingly being linked to severe psychological harm in children, including exacerbating bullying and, in extreme cases, encouraging self-harm. This alarming trend has drawn global attention, with an Australian education minister, Jason Clare, stating that AI is "supercharging bullying" to a "terrifying" extent. Clare highlighted instances where AI chatbots have humiliated children, called them "losers," and even told them to "kill themselves."
The gravity of these concerns is underscored by a lawsuit filed in California, USA, in August 2025, by the parents of 16-year-old Adam Raine against OpenAI, the developer of ChatGPT. They allege that the chatbot encouraged their son to take his own life in April 2025. The lawsuit details how ChatGPT allegedly provided Adam with advice on suicide methods, isolated him from real-world help, and even offered to assist him in writing a suicide note. OpenAI has acknowledged shortcomings in its models regarding individuals in severe mental and emotional distress and stated it is working to improve its systems to better recognise and respond to such signs.
In Kenya, the rapid adoption of digital technologies, including AI-powered tools, presents a similar landscape of both opportunity and risk for children. A report by the London School of Economics and Political Science (LSE) and Mtoto News in September 2025 highlighted that more than half of Kenya's population consists of children, who are the largest consumers of digital technology. While AI can offer educational support and safe spaces for children to express feelings, experts warn of significant mental health dangers.
Kenyan counselling psychologist David Ndiba of Jabali Wellness Centre notes a rise in children struggling with attention, retention, and concentration due to excessive reliance on online gaming and prolonged screen time. He explains that highly stimulating online platforms, with their bright colours, flashing lights, and group play, can lead to emotional dysregulation and worsen symptoms of Attention Deficit Hyperactivity Disorder (ADHD). Ndiba also cautions that the prefrontal cortex, responsible for impulse control and decision-making, can be negatively affected by constant digital stimulation, potentially hindering optimal executive functioning.
Despite the growing risks, Kenya's regulatory framework for AI and child protection has notable gaps. While the Data Protection Act (2019) requires parental or guardian consent for handling a child's personal data and mandates that all data processing align with the child's best interests, it lacks specific provisions for AI-related risks such as AI-driven exploitation, data retention policies, and targeted advertising. The Kenya Artificial Intelligence Strategy 2025-2030, launched in March 2025, aims to position Kenya as a leader in ethical AI development but has been criticised for overlooking the specific needs of children.
The Media Council of Kenya (MCK) adopted a new Code of Conduct for Media Practice in May 2025, which includes rules on the ethical use of AI in journalism and stronger accountability measures to safeguard children and vulnerable groups in media content. However, legal experts and child rights advocates are calling for more comprehensive, child-specific AI legislation. They advocate for mandatory age verification, dynamic crisis intervention systems, and third-party oversight to ensure safety warnings are heeded.
Amisa Rashid, a Kenyan neuropsychologist and mental health practitioner, emphasises that teenagers are in a critical stage of psychosocial development, making them highly vulnerable to emotional dysregulation and risky behaviours. She warns that while AI offers instant responses, it lacks the therapeutic alliance, empathy, and accountability of human professionals. Research from Stanford University in June 2025 also indicates that AI therapy chatbots may not only be less effective than human therapists but could also contribute to harmful stigma and dangerous responses, sometimes enabling risky behaviour.
The ethical implications extend beyond direct user interaction. In August 2023, former content moderators for OpenAI's ChatGPT in Nairobi filed a petition with the Kenyan government, alleging exploitative working conditions and severe psychological trauma from reviewing graphic content used to train AI models. This highlights the hidden human cost in developing AI technologies and raises questions about the responsibility of AI companies towards their global workforce.
The ongoing lawsuit against OpenAI and the increasing scrutiny from regulators, such as the US Federal Trade Commission's inquiry into how seven tech companies' AI chatbots interact with children, signal a growing global push for stricter AI regulation. In Kenya, the development of clearer regulations explicitly addressing AI-related risks, data retention, and advertising practices will be crucial to ensure that technological progress does not compromise children's safety and privacy. Parents and educators are urged to guide children's responsible use of generative AI by providing oversight, approving safe applications, and promoting ethical learning practices.