Loading News Article...
We're loading the full news article for you. This includes the article content, images, author information, and related articles.
We're loading the full news article for you. This includes the article content, images, author information, and related articles.
As Kenyan firms rapidly adopt AI, cybersecurity experts warn that a new class of autonomous AI 'agents' can be easily hijacked through simple language commands, creating unprecedented security risks for corporate data and critical systems.

NAIROBI, KENYA – Cybersecurity leaders are issuing urgent warnings about a new generation of artificial intelligence (AI) programs known as “agents,” cautioning that their autonomous capabilities create novel and significant hacking threats for which most organisations are unprepared. These AI agents, designed to perform tasks for humans online like booking flights or managing calendars, can be manipulated using plain language, opening the door for individuals without technical skills to execute sophisticated cyberattacks.
This emerging threat is particularly relevant for Kenya, where businesses are increasingly integrating AI to drive efficiency. A recent CIO Africa report from October 2025 noted that 55% of organisations in Sub-Saharan Africa are now adopting AI. Furthermore, a November 2025 KPMG survey revealed that 71% of African CEOs are investing in AI, with cybersecurity being a top concern. However, a global study by Accenture in November 2025 found that 90% of organisations are not ready to secure their AI-driven systems.
The primary vulnerability lies in a technique called “prompt injection.” Unlike traditional cyberattacks that require malicious code, an injection attack uses cleverly worded instructions hidden within documents, emails, or websites to trick an AI agent. Once compromised, the agent can be commanded to perform unauthorised actions, such as accessing and leaking sensitive data from connected accounts like Google Drive, rerouting customer communications, or even executing malicious code. Researchers at Zenity Labs demonstrated in August 2025 that agents from major developers like OpenAI, Microsoft, and Salesforce were vulnerable to such “zero-click” exploits, which require no user interaction to activate.
"We're entering an era where cybersecurity is no longer about protecting users from bad actors with a highly technical skillset," AI company Perplexity stated in a recent blog post. This sentiment is echoed by experts who describe the issue as a fundamental security challenge for the large language models (LLMs) that power these agents. OpenAI's Chief Information Security Officer, Dane Stuckey, has referred to it as an "unresolved security issue."
The growing reliance on digital systems in Kenya makes the threat of AI agent hijacking a serious concern. The Communications Authority of Kenya (CA) reported a staggering 2.5 billion cyber threat events between January and March 2025, a 201% increase from the previous quarter, attributing the rise in part to AI-driven attacks. Phishing attacks, which are often the first step in major security breaches, have become more convincing through the use of AI.
In response to the evolving threat landscape, the Kenyan government has established the National Computer and Cybercrimes Coordination Committee (NC4). The NC4 is mandated to advise the government on security related to emerging technologies like AI and to coordinate national efforts to combat cybercrime. In March 2025, the committee launched a public consultation for a revised National Cybersecurity Strategy for 2025–2029, which explicitly aims to incorporate AI into its framework.
Phyllis Migwi, Country General Manager of Microsoft Kenya, emphasized the need for localised solutions in a July 2025 article. “Our progress must align with the complexity of challenges that define our nation's digital resilience. Artificial intelligence offers transformative potential, but its true impact lies in engaging deeply with Kenya's realities,” she stated.
Major technology companies are actively working to build defenses against these new threats. Microsoft has developed tools to detect malicious commands based on their origin, while OpenAI now alerts users and requires real-time supervision when an agent attempts to access sensitive websites. However, cybersecurity experts like Johann Rehberger argue that AI agents are not yet mature enough to be trusted with critical data or tasks, as attackers' tactics are improving rapidly.
The challenge is balancing security with the convenience that makes these agents so appealing. As organisations in Kenya and across East Africa continue their digital transformation, they face a dual challenge: leveraging the power of AI for growth while defending against a new and unpredictable generation of AI-powered threats. Experts recommend a multi-layered defense, including robust employee training, strict access controls for AI agents, and continuous monitoring of their activities to mitigate the risks.