By Raylenne Kambua
The raw paradox at the heart of Kenya’s Artificial Intelligence (AI) moment is that the country is simultaneously sprinting ahead in AI adoption while grappling with a shrinking space for the very digital voices that AI empowers.
According to the Digital Global Update Report, Kenya recorded the world’s highest usage rate of AI tools in 2025, with 42.1% of internet users aged 16 and above reporting active use of AI-powered technologies. This level of usage indicates that AI is increasingly being woven into the daily lives of Kenyans.
However, the Navigating the Implications of AI on Digital Democracy in Kenya report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) highlights that while AI empowers citizens, it also enables unprecedented surveillance and manipulation.
A Nation Leading the Way in AI Adoption
Kenya has made significant investments in digital services, innovation hubs, and connectivity under the National Digital Master Plan 2022–2032.
These developments are also transforming how citizens interact with the government. Tools such as the Office of the Data Protection Commissioner’s Linda Data chatbot and platforms such as Sauti ya Bajeti have expanded access to rights information and budget tracking.
Yet, even as AI has delivered clear benefits, it has also revealed its dual nature, most visibly during the 2024 #RejectFinanceBill protests, when Gen Z protesters mobilised through AI-generated infographics, satire, and short-form videos. At the height of the protests on June 25, a nationwide internet disruption occurred despite assurances from the Communications Authority that access would not be restricted. The disruption was confirmed by network monitors such as Cloudflare and NetBlocks, exposing the fragility of internet freedom in Kenya.
Civil society condemned the internet shutdown as a violation of rights, while the telecom operators Safaricom and Airtel attributed it to undersea cable outages. In the aftermath, reports of abductions and enforced disappearances of digital activists escalated, with the Kenya National Commission on Human Rights documenting at least 82 cases between June and December 2024.
Kenya’s AI Policy Landscape
The launch of the Kenya National AI Strategy 2025–2030 in March 2025 signalled the country’s ambition to position itself as Africa’s leading AI innovation hub. The strategy prioritises governance, ethics, investment, digital infrastructure, data ecosystem development, and support for AI research and innovation.
Kenya has also strengthened its international profile through participation in programmes such as the United Nations High-Level Advisory Board on AI, joining the International Network of AI Safety Institutes, and assuming leadership in the World Summit on the Information Society (WSIS+20).
At the national level, initiatives such as Digital Platforms Kenya (DigiKen) and the Kenya Bureau of Standards’ draft AI Code of Practice reflect growing momentum toward operationalising AI governance and skills building. The government is also developing an AI and Emerging Technologies Policy and a Data Governance Policy, both of which are expected to be in place by July 2026.
However, the gap between ambition and readiness remains wide. Kenya ranks 93rd in the 2025 Government AI Readiness Index, reflecting persistent weaknesses in infrastructure, implementation, and institutional capacity.
Moreover, Kenya’s legal framework for AI remains fragmented and incomplete. Currently, no standalone AI law is in force, though a controversial Artificial Intelligence Bill, 2026, which has raised significant concerns about over-regulation and censorship, is under discussion. In the meantime, regulation rests on broader laws such as the Data Protection Act, 2019 and the Computer Misuse and Cybercrimes Act, 2018, which were not designed to address AI-specific risks such as deepfakes, automated decision-making, algorithmic discrimination, or synthetic disinformation.
As highlighted in the CIPESA report, critical gaps remain in the use of AI. These include the absence of mandatory algorithmic impact assessments, weak safeguards against AI-driven surveillance such as facial recognition, and scant measures to address AI-generated electoral misinformation. Furthermore, regulatory authorities lack sufficient capabilities to audit and monitor sophisticated AI systems, and there are no clear licensing or accountability frameworks for AI creators and deployers.
“Without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities.” (Navigating the Implications of AI on Digital Democracy in Kenya, 2025)
The Way Ahead: AI Governance Focused on Human Rights
The CIPESA report outlines a human rights–centred approach to AI governance that is built on the following key principles:
- Life-Centred and Human-Centred Design and Accountability: AI should support and not replace human judgment, with strong oversight to ensure transparency and accountability.
- Equity and Fairness: Design AI to prevent bias and expand inclusive access, especially for underrepresented groups.
- Transparency and Trust: Ensure AI systems are explainable, well-documented, and open to public scrutiny and challenge.
- Safety, Security and Resilience: Build resilient systems with ongoing risk assessments and strong protections against misuse.
- International Collaboration and Ethical AI Development: Advance ethical AI through international collaboration while upholding constitutional values and human oversight.
- Environmental sustainability: Align AI development with climate resilience and sustainable resource use.
- Inclusive Participation and Cultural Relevance: Reflect local diversity and involve marginalised communities in AI design.
- Robust Governance and Adaptive Regulation: Maintain flexible, responsive regulation that keeps pace with technological change.
The report calls for a coordinated, multi-stakeholder approach to AI governance. It recommends that:
- The government should enact a comprehensive AI law aligned with constitutional and international human rights standards and establish a legally mandated National AI Advisory Council with inclusive representation and strong enforcement powers. It should also introduce clear prohibitions on high-risk practices, such as real-time biometric surveillance without judicial oversight.
- Civil society and the media should strengthen public awareness, promote accountability, and counter AI-driven disinformation.
- Private sector actors should uphold transparency, fairness, and ethical standards across AI systems, including fair labour practices. Labour protections must be guaranteed for gig workers and data annotators within the AI value chain.
- Academia and research institutions should continue generating evidence that can guide context-specific policy and regulation.
- Across all stakeholders, digital literacy must be expanded, especially in underserved and rural communities, so that citizens can understand and challenge AI systems that affect them.
With legislative processes on AI underway, this is a pivotal moment for Kenya: it has both momentum and the world’s attention. But momentum without action will achieve little, and the country cannot afford slow, fragmented debates while the technology advances rapidly. At the same time, Kenya must strike a careful balance between regulation and innovation, as overly restrictive rules could limit access, slow local innovation, and lock the country out of AI’s economic and social benefits. The goal should be a flexible, forward-looking framework that protects rights while still enabling growth and opportunity.
Read the full report, Navigating the Implications of AI on Digital Democracy in Kenya.