A landmark ruling in France finds Facebook's job ad targeting technology engages in gender discrimination, prompting urgent questions about algorithmic bias and its impact on employment equity in Kenya's rapidly growing digital economy.

France's official equality watchdog, the Défenseur des Droits, has ruled that Meta's algorithm for targeting job advertisements on its Facebook platform is discriminatory and treats users differently based on their sex. The decision, announced on Wednesday, 5 November 2025, in Paris, concluded that the system constitutes indirect discrimination and has significant implications for how digital platforms handle employment opportunities globally, including in Kenya where millions use the platform for job searches.
The ruling came after an investigation prompted by a complaint from the campaign group Global Witness, alongside French women's rights organizations La Fondation des Femmes and Femmes Ingénieurs. Their research found stark gender skews in how job ads were distributed. For instance, an advert for a mechanic position was shown to an audience that was 90% male, while an ad for a preschool teacher was seen by a 90% female audience. Similarly, psychologist roles were predominantly shown to women (80%), and pilot vacancies were directed mainly to men (70%).
In its decision, the Défenseur des Droits stated that "the system implemented for disseminating job offers treats users of the Facebook platform differently based on their sex and constitutes indirect discrimination related to sex." The regulator has given Meta three months to propose corrective measures. Meta has rejected the ruling, with a spokesperson stating, "We disagree with this decision and are assessing our options."
While the French ruling has no direct legal jurisdiction in Kenya, it highlights a critical and growing concern for the nation's labour market. Algorithmic bias in hiring is a recognized threat to Kenyan jobseekers. As more companies in Kenya and the East Africa region turn to digital platforms for recruitment, the risk of perpetuating and even amplifying existing gender stereotypes and inequalities through automated systems becomes more pronounced.
A 2024 report from Strathmore University Law School noted that machine learning algorithms in hiring risk exacerbating existing biases. The report warns that algorithmic discrimination is a "real threat to the Kenyan jobseeker" and that while it can be addressed by Kenyan law, more needs to be done to detect and mitigate these harms. Research indicates that AI-driven recruitment can disadvantage Kenyan professionals by undervaluing local qualifications and career progression patterns that differ from Western norms.
The issue is particularly pertinent for Kenyan women. Studies on Kenya's gig economy show that women already face systemic barriers, including gender discrimination, income instability, and a digital divide. An April 2024 report by the International Labour Organization found that women in Kenya's digital labour platforms often earn less than men and have less access to social security. Biased algorithms could further entrench these disparities, limiting women's access to opportunities in traditionally male-dominated fields and reinforcing occupational segregation.
Kenya's legal framework offers some recourse. The Constitution of Kenya (2010), under Article 27, explicitly prohibits discrimination on any grounds, including sex. The Data Protection Act of 2019 and its accompanying regulations govern the use of personal data for commercial purposes, including direct marketing, and require user consent. Furthermore, the Code of Advertising Practice in Kenya mandates that advertisements must not be misleading and must avoid offensive content, including discrimination.
However, the opaque nature of algorithms presents a significant challenge for regulation and enforcement. Proving that a specific algorithm is discriminatory is an uphill task for individuals and even regulatory bodies. There are growing calls for greater transparency from tech companies and for policies that support the development of local digital platforms tailored to the Kenyan context to mitigate biases inherent in international systems.
The French ruling is being hailed as a major step forward in holding Big Tech accountable for the societal impact of its algorithms. For Kenya, it serves as a crucial wake-up call. As the nation continues its digital transformation, ensuring that the technologies driving this change promote fairness and equality of opportunity will be paramount. The government, civil society, and the tech industry must work together to address algorithmic bias and build an inclusive digital economy for all Kenyans. Further investigation into specific instances of algorithmic bias on job platforms operating within Kenya will be needed to fully understand the local impact.