A landmark High Court ruling in a child exploitation case has put Kenya's justice system on high alert, warning that AI-generated deepfakes could distort evidence and undermine criminal proceedings, posing a significant new threat to child protection.
A High Court ruling in Mombasa has officially recognized the emerging threat of artificial intelligence (AI) and deepfake technology within Kenya's legal system, cautioning that manipulated digital content could severely compromise the integrity of criminal justice. The warning came during a significant child exploitation case, highlighting a new and complex challenge for prosecutors, child protection agencies, and law enforcement.
In a ruling dated Friday, October 24, 2025 (EAT), the court addressed the growing reality of AI-generated videos and images that can create convincing but entirely false scenarios. “This court is alive and takes judicial notice as a matter of public notoriety that we are in the era of artificial intelligence and deep fakes where digital images and videos are manipulated to replicate real life scenarios and people,” the judge stated. The court emphasized that the authenticity of any online material must be rigorously tested before it can be considered reliable evidence.
The issue arose in a case involving Ms. Noel Naliaka, who is facing charges of online child exploitation and child pornography. The prosecution had sought to limit the defense's access to digital evidence, which included obscene images and videos of a minor victim. However, the court dismissed the prosecution’s request, citing glaring gaps in the investigation, including the fact that the alleged minor victim had not yet been traced or identified. The ruling underscored a fundamental principle of justice: the right to a fair trial and the presumption of innocence. The court ruled it would be a “travesty of justice” to curtail the accused's rights based on unverified information, especially in an age where digital evidence can be convincingly fabricated.
While this court ruling brings the issue to the forefront, Kenya's legal framework has provisions that can be applied to combat the misuse of deepfake technology. The Computer Misuse and Cybercrimes Act of 2018 criminalizes the publication of false information. Specifically, Section 23 of the Act penalizes the knowing publication of information that is “false in print, broadcast, data, or over a computer system” and is calculated to cause panic or discredit a person's reputation, with penalties of up to KSh 5 million or imprisonment for up to ten years. Legal experts suggest this provision could be used to prosecute malicious deepfakes.
However, reports from organizations like Equality Now indicate that national laws have not kept pace with emerging technologies like AI and deepfakes, often containing outdated definitions of digital abuse. This legal gap is particularly concerning as deepfakes are increasingly used as tools for digital gender-based violence, targeting women in public roles with fabricated explicit content to cause emotional distress and career damage.
In response to rising online threats, the Communications Authority of Kenya (CA) has been proactive. The National KE-CIRT/CC, Kenya's national cybersecurity center, reported a staggering 201% surge in cyber threats in the first quarter of 2025, including a rise in AI-powered phishing and deepfake scams. The CA continues to issue advisories and has established a multi-agency framework to coordinate responses to cyber threats.
The High Court's warning places a new burden on Kenya’s Directorate of Criminal Investigations (DCI) and its digital forensics units. Detecting sophisticated deepfakes requires advanced tools and highly specialized training to analyze subtle digital artifacts, pixel-level inconsistencies, and other tell-tale signs of manipulation that are invisible to the naked eye. As deepfake technology becomes more accessible, the capacity of law enforcement to authenticate digital evidence will be a critical factor in securing convictions and preventing miscarriages of justice.
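One of the simpler pixel-level checks forensic examiners run is Error Level Analysis (ELA), which exploits the fact that regions pasted into a JPEG often recompress differently from the rest of the frame. The sketch below is purely illustrative of that idea and assumes the Pillow imaging library; real deepfake authentication relies on far more sophisticated, typically machine-learning-based, tooling.

```python
# Illustrative sketch of Error Level Analysis (ELA), a basic pixel-level
# manipulation check. Assumes the Pillow library is installed; this is a
# teaching example, not an investigative-grade detector.
import io

from PIL import Image, ImageChops


def error_level_analysis(image_path, quality=90):
    """Re-save the image as JPEG and diff it against the original.

    Edited or spliced-in regions frequently carry a different compression
    history, so they stand out in the per-pixel difference image.
    Returns the difference image and a crude 0-255 "inconsistency score".
    """
    original = Image.open(image_path).convert("RGB")

    # Re-encode at a known JPEG quality, in memory.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference between original and re-encoded copy.
    diff = ImageChops.difference(original, resaved)

    # Largest per-channel difference: uniformly small values suggest a
    # single compression history; localized spikes invite closer scrutiny.
    score = max(channel_max for _, channel_max in diff.getextrema())
    return diff, score
```

In practice a score alone proves nothing; analysts inspect the difference image visually and combine ELA with metadata analysis, noise-pattern checks, and model-based detectors before drawing conclusions.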
For child protection organizations, the rise of AI-generated Child Sexual Abuse Material (CSAM) presents a chilling new reality. While no official cases of AI-generated CSAM have been reported in Kenya, global trends are alarming. Experts warn that AI can be used to create hyper-realistic abuse images without involving a real child, complicating detection and investigation efforts. Rose Mwangi, Deputy Director at the Directorate of Children Services, has acknowledged that the rise of AI is altering the landscape of child harm globally. Organizations like Childline Kenya, which provide a crucial helpline for children in distress, now face the challenge of addressing harms linked to generative AI, from bullying to sexual extortion using fabricated images.
The court's pronouncement serves as a critical wake-up call for Kenya. It highlights an urgent need for investment in digital forensic capabilities, updated legal frameworks specifically addressing AI-generated content, and increased public awareness to navigate a world where seeing is no longer believing.