As Australia prepares to enforce a landmark ban on social media for under-16s, Kenya's own legislative proposals for stricter age verification highlight a growing global push to regulate children's access to online platforms.

NAIROBI, KENYA – In a move being closely watched by governments worldwide, Australia is set to implement a sweeping ban on social media use for all individuals under the age of 16, with enforcement beginning on Wednesday, December 10, 2025, EAT. The legislation, titled the Online Safety Amendment (Social Media Minimum Age) Act 2024, compels major platforms like Meta's Facebook, Instagram, and Threads, as well as TikTok, YouTube, X (formerly Twitter), and Snapchat, to take "reasonable steps" to prevent minors from creating or maintaining accounts. Companies that fail to comply face fines of up to AUD $49.5 million.
Meta has already begun notifying its Australian users aged 13 to 15 that their accounts will be deactivated starting from Thursday, December 4, 2025, EAT. The company said it will use various age assurance methods to comply with the law, though it has also expressed concern that the legislation was rushed without fully considering the evidence or the voices of young people.
Announcing the "world-leading" policy, Australian Prime Minister Anthony Albanese said the government was responding to widespread parental concern about the negative impacts of social media on children's wellbeing. "Social media is doing social harm to our kids. I'm calling time on it," Albanese told a press conference on Thursday, November 7, 2024.
The Australian ban will affect a significant number of young users; the country's eSafety Commissioner estimates there are approximately 350,000 Instagram users and 150,000 Facebook users in the 13-15 age bracket. The law places the onus for verification entirely on the tech companies, with no penalties for children or parents who might circumvent the rules.
The developments in Australia resonate strongly with ongoing policy discussions in Kenya, where lawmakers are also grappling with how to protect children online. In May 2025, the Communications Authority of Kenya (CA) published new guidelines for child online protection, urging Application Service Providers (ASPs) and Content Service Providers (CSPs) to develop and implement age-verification mechanisms. These guidelines aim to restrict children's access to harmful content while upholding their right to access information.
Furthermore, the proposed Kenya Information and Communications (Amendment) Bill, 2025, sponsored by Aldai MP Marianne Kitany, seeks to mandate the use of national identification documents to verify the age of all social media users, both new and existing. Proponents argue this is necessary because self-declaration of age is easily bypassed. If passed, this would represent one of the strictest verification regimes globally and would significantly alter the digital landscape for Kenyan youth.
These regulatory pushes in both Kenya and Australia are part of a broader international trend. Countries like France, the UK, and several US states have also introduced legislation to enforce age limits and protect minors from online harms, reflecting a global consensus that self-regulation by tech companies is insufficient.
The driving force behind these legislative efforts is a growing body of evidence and widespread concern about the impact of social media on the mental health of young people. Research in Kenya has linked high social media consumption among youth to issues like anxiety, depression, low self-esteem, and cyberbullying. A study by the United States International University-Africa found that most Kenyans aged 21-35 spend over three hours a day on social media, a figure that is rising. Younger users are considered particularly vulnerable to the pressures of curated online personas and the addictive nature of algorithmic content feeds.
In Australia, the eSafety Commissioner's research highlights the scale of the issue. A 2024 report found that 80% of children aged 8-12 had used a social media platform, despite being below the standard minimum age of 13. Another survey from September 2020 revealed that 44% of Australian teens aged 12-17 had a negative online experience in the preceding six months, including contact from strangers and exposure to inappropriate content.
While tech companies have voiced their commitment to user safety, critics argue their business models, which rely on maximizing engagement, are fundamentally at odds with protecting young users. Human rights and mental health advocates have also raised concerns, warning that outright bans could marginalize young people and cut them off from supportive online communities. As Kenya charts its own course, the Australian experience will serve as a crucial case study on the effectiveness, technical feasibility, and societal impact of implementing a hard age gate for the digital world.