Can Kenya Close the Gap Between Innovation and Human Rights?

By Raylenne Kambua |

The raw paradox at the heart of Kenya’s Artificial Intelligence (AI) moment is that the country is simultaneously sprinting ahead in AI adoption while grappling with a shrinking space for the very digital voices that AI empowers.

According to the Digital Global Update Report, Kenya recorded the world’s highest usage rate of AI tools in 2025, with 42.1% of internet users aged 16 and above reporting active use of AI-powered technologies. This level of usage indicates that AI is increasingly being woven into the daily life of Kenyans.

However, the Navigating the Implications of AI on Digital Democracy in Kenya report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) highlights that while AI empowers citizens, it also enables unprecedented surveillance and manipulation.

A Nation Leading the Way in AI Adoption

Kenya has made significant investments in digital services, innovation hubs, and connectivity under the National Digital Master Plan 2022–2032.

These developments are also transforming how citizens interact with the government. Tools such as the Office of the Data Protection Commissioner’s Linda Data chatbot and platforms such as Sauti ya Bajeti have expanded access to rights information and budget tracking.

Yet even as AI delivers clear benefits, it has also revealed its dual nature, most visibly during the 2024 #RejectFinanceBill protests, during which Gen Zs mobilised through AI-generated infographics, satire, and short-form videos. At the height of the protests on June 25, a nationwide internet disruption occurred despite assurances from the Communications Authority that access would not be restricted. The disruption was confirmed by network monitors such as Cloudflare and NetBlocks, exposing the fragility of internet freedom in Kenya.

Civil society condemned the internet shutdown as a violation of rights, while telecoms Safaricom and Airtel attributed it to outages in their undersea cable. In the aftermath, reports of abductions and enforced disappearances of digital activists escalated, with the Kenya National Commission on Human Rights documenting at least 82 cases between June and December 2024.

Kenya’s AI Policy Landscape

The launch of the Kenya National AI Strategy 2025–2030 in March 2025 signalled the country’s ambition to position itself as Africa’s leading AI innovation hub. The strategy prioritises governance, ethics, investment, digital infrastructure, data ecosystem development, and support for AI research and innovation.

Kenya has also strengthened its international profile through participation in programmes such as the United Nations High-Level Advisory Board on AI, joining the International Network of AI Safety Institutes, and assuming leadership in the World Summit on the Information Society (WSIS+20).

At the national level, initiatives such as Digital Platforms Kenya (DigiKen) and the Kenya Bureau of Standards’ draft AI Code of Practice reflect growing momentum toward operationalising AI governance and skills building. The government is also developing an AI and Emerging Technologies Policy and a Data Governance Policy, both of which are expected to be in place by July 2026.

However, the gap between ambition and readiness remains wide. Kenya ranks 93rd in the 2025 Government AI Readiness Index, due to persistent weaknesses in infrastructure, implementation, and institutional capacity.

Moreover, Kenya’s legal framework for AI remains fragmented and incomplete. There is currently no standalone AI law in force, although a controversial Artificial Intelligence Bill, 2026, which has raised significant concerns about over-regulation and censorship, is under discussion. In the meantime, regulation rests on broader laws such as the Data Protection Act 2019 and the Computer Misuse and Cybercrimes Act 2018, which were not designed to address AI-specific risks such as deepfakes, automated decision-making, algorithmic discrimination, or synthetic disinformation.

As highlighted in the CIPESA report, critical gaps remain in the use of AI. These include the absence of mandatory algorithmic impact assessments, weak safeguards against AI-driven surveillance such as facial recognition, and scant measures to address AI-generated electoral misinformation. Furthermore, regulatory authorities lack sufficient capabilities to audit and monitor sophisticated AI systems, and there are no clear licensing or accountability frameworks for AI creators and deployers.

“Without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities.” (Navigating the Implications of AI on Digital Democracy in Kenya, 2025)

The Way Ahead: AI Governance Focused on Human Rights

The CIPESA report outlines a human rights–centred approach to AI governance that is built on the following key principles:

  1. Life-Centred and Human-Centred Design and Accountability: AI should support and not replace human judgment, with strong oversight to ensure transparency and accountability.
  2. Equity and Fairness: Design AI to prevent bias and expand inclusive access, especially for underrepresented groups.
  3. Transparency and Trust: Ensure AI systems are explainable, well-documented, and open to public scrutiny and challenge.
  4. Safety, Security and Resilience: Build resilient systems with ongoing risk assessments and strong protections against misuse.
  5. International Collaboration and Ethical AI Development: Advance ethical AI through international collaboration while upholding constitutional values and human oversight.
  6. Environmental sustainability: Align AI development with climate resilience and sustainable resource use.
  7. Inclusive Participation and Cultural Relevance: Reflect local diversity and involve marginalised communities in AI design.
  8. Robust Governance and Adaptive Regulation: Maintain flexible, responsive regulation that keeps pace with technological change.

The report calls for a coordinated, multi-stakeholder approach to AI governance. It recommends that:

  • The government should enact a comprehensive AI law aligned with constitutional and international human rights standards, establish a legally mandated National AI Advisory Council with inclusive representation and strong enforcement powers, and introduce clear prohibitions on high-risk practices such as real-time biometric surveillance without judicial oversight.
  • Civil society and the media should strengthen public awareness, promote accountability, and counter AI-driven disinformation.
  • Private sector actors should uphold transparency, fairness, and ethical standards across AI systems, including fair labour practices. Labour protections must be guaranteed for gig workers and data annotators within the AI value chain.
  • Academia and research institutions should continue generating evidence that can guide context-specific policy and regulation.
  • Across all stakeholders, digital literacy must be expanded, especially in underserved and rural communities, so that citizens can understand and challenge AI systems that affect them.

With the ongoing legislative processes on AI, this is a pivotal moment for Kenya: it has the momentum and the attention of the world. But momentum without action will not suffice, and the country cannot afford slow, fragmented debates while the technology progresses rapidly. At the same time, Kenya must strike a careful balance between regulation and innovation, as overly restrictive rules could limit access, slow local innovation, and lock the country out of AI’s economic and social benefits. The goal should be a flexible, forward-looking framework that protects rights while still enabling growth and opportunity.

Read the full report, Navigating the Implications of AI on Digital Democracy in Kenya.

CIPESA Urges Rights-Centred Approach to Uganda’s AI Strategy

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has submitted recommendations to Uganda’s Artificial Intelligence (AI) and Emerging Technologies Strategy national taskforce, calling for a human rights-centred approach to the governance of these technologies.

The submission, made in response to the Ministry of ICT and National Guidance’s ongoing process to develop a National AI and Emerging Technologies Strategy, welcomes Uganda’s ambition to harness AI for development. At the same time, CIPESA cautions that innovation must be matched with the legal, institutional, and ethical safeguards needed to protect people from harms.

Discussions on Uganda’s AI policy come at a moment when AI technologies are already being deployed in both public and private sectors. The submission states that AI-enhanced tools are currently employed in customs risk profiling at the Uganda Revenue Authority, customer service functions, digital financial services, research organisations, and environmental monitoring initiatives.

In agriculture, AI-powered tools can support weather forecasting, pest detection, control and prevention, and tailored advice for farmers, whereas in healthcare, they can enhance disease detection, diagnostics, prescription and help address shortages in medical personnel. These applications highlight the transformative potential of AI, yet there are also concerns around surveillance, exclusion, discrimination, and misuse of personal data.

The submission is informed by CIPESA’s broader work on digital rights in Africa, including the Navigating the Implications of AI on Digital Democracy in Uganda report, which emphasises the growing impact of AI-driven technologies on online expression, political communication, surveillance practices, and civic participation.

The recommendations also build on CIPESA’s earlier work on developing an inclusive AI ecosystem for Uganda. According to the policy brief, An Artificial Intelligence Eco-System for Uganda, the country’s existing legal and policy frameworks provide a fragmented foundation for regulating AI and responding to emerging risks such as algorithmic bias, automated discrimination, opaque decision-making, and AI-enabled surveillance.

Accordingly, CIPESA calls for a rights-by-design approach to AI governance. High-risk AI systems used by both public and private actors should be transparent, auditable, and subject to independent oversight. It also calls for mandatory Human Rights Impact Assessments for AI systems used in sensitive sectors such as healthcare, agriculture, education, taxation, law enforcement, and social protection.

The submission further recommends dedicated legal and policy measures that address algorithmic transparency, automated decision-making, public-sector AI procurement, safeguards against discriminatory outcomes, and mechanisms for redress where harm occurs.

CIPESA also raises concerns about the growing use of automated systems in areas such as digital lending and mobile money services, where millions of Ugandans are already subjected to algorithmic profiling and automated credit scoring with limited transparency or accountability. The submission recommends that Uganda’s AI strategy should establish clear safeguards and oversight standards for both existing and future AI systems.

While AI presents significant opportunities for improving public service delivery and supporting development priorities, CIPESA stresses that such systems must be built using representative local datasets, and designed in ways that minimise bias, exclusion, and discriminatory outcomes.

The organisation further stresses that AI governance must be inclusive and participatory. The submission calls for meaningful involvement of civil society organisations, academia, technical experts, and affected communities in shaping Uganda’s AI strategy. It also recommends multilingual and accessible AI-enabled platforms that support citizen participation through channels that are accessible to underserved and low-literacy communities.

Beyond governance safeguards, CIPESA urges the government to invest in local AI research, innovation, and infrastructure development. It recommends support for universities, innovation hubs, and local startups, alongside the establishment of national AI research centres and dedicated funding mechanisms. Earlier recommendations by CIPESA also proposed the creation of a national AI Research Fund and citizen awareness programmes to improve public understanding of AI technologies and their societal implications.

Without deliberate investment in local capacity, Uganda risks becoming merely a supplier of raw data to foreign technology companies while deriving limited economic value from AI technologies. This would also deepen dependence on externally developed systems that may not fully reflect local contexts, needs, or priorities.

CIPESA additionally calls for alignment between Uganda’s strategy and broader regional initiatives, including the African Union Continental AI Strategy and wider African efforts on digital governance, data protection, and platform accountability.

Ultimately, CIPESA argues that Uganda’s AI and Emerging Technologies Strategy should put people first, ensuring that innovation and emerging technologies are matched with clear safeguards and meaningful oversight.

Read the full submission here: CIPESA Submissions on Uganda AI and Emerging Technology Strategy

CIPESA Condemns Zambia’s Cancellation of RightsCon 2026

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) notes with deep concern the Government of Zambia’s decision to postpone RightsCon 2026, which was scheduled to take place in Lusaka next week. The postponement was confirmed by the organisers on April 29, 2026. Civic convenings of this nature thrive precisely because they create a safe space for diverse, sometimes uncomfortable, conversations about rights, technology, and power. Restricting that space undermines the principles of openness, dialogue, and democratic engagement on the continent.

The information provided by the Zambian government suggests that the halting of RightsCon was not a necessary and proportionate measure. It has caused undue financial losses and disrupted the plans of thousands of national and international human rights actors, as well as the local tourism, travel and conferencing sector, while also denting Zambia’s governance credentials and international standing.

CIPESA has joined over 130 organisations from across the world in expressing concern over the government’s decision, which raises questions about transparency, civic space, and commitment to inclusive global digital governance.

The cancellation of RightsCon 2026 escalates an ongoing crisis of democratic regression and the rise of digital authoritarianism on the continent.

In a related development, the World Press Freedom Day (WPFD) Global Conference, originally scheduled to also take place in Lusaka ahead of RightsCon, has also undergone significant changes. UNESCO has announced that the conference will now be held online, while the UNESCO/Guillermo Cano World Press Freedom Prize ceremony will be relocated to the UNESCO Headquarters in Paris, France, at a later date. These developments effectively delist Zambia as the host of this year’s WPFD, although a commemorative event remains scheduled for May 4, 2026.

Strengthening Digital Rights Awareness and Practice in the Business Sector

By Nadhifah Muhammad |

It has been 12 months since the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) championed the Civil Society (CSO) Fund for digital rights in Uganda’s business sector. Across 12 districts, the supported initiatives have implemented action plans to advance advocacy and awareness on digital rights for businesses.

Radio and television programmes, social media campaigns, community forums, stakeholder dialogues, and caravans were part of the awareness campaigns conducted by partners. As a result, more than 17 million citizens, including district leaders, youths, and the general public, have been sensitised on business and digital rights. In parallel, CIPESA has supported Small and Medium Enterprises (SMEs) in tourism and travel, education, digital marketing, IT development, legal services, lifestyle and health to conduct digital security self-assessments and to integrate digital rights within their business models.

Before the interventions were rolled out, CIPESA built the partners’ capacity to implement awareness-raising and advocacy campaigns. At a March 2026 bootcamp, CIPESA re-convened the CSOs and SMEs, alongside innovators, researchers, local government officials and human rights advocates, to reflect on progress and take stock of their interventions. The 30 participants at the bootcamp also explored how to sustain, deepen, and institutionalise this work beyond the Advancing Respect for Human Rights by Businesses (ARBHR) project funded by the European Union and implemented with the support of Enabel.

A participant appreciated CIPESA’s approach of equipping them with skills and knowledge on digital rights and business and human rights ahead of rolling out the initiatives, stating: “It played a fundamental role in helping us appreciate the issues for us [to] trickle them down to the communities.”

During the bootcamp, partners shared lessons and experiences from their action plans across the project focus regions: Busoga (Iganga, Mayuge, Bugiri, and Bugweri), the Albertine region (Hoima, Kikuube, Masindi, Buliisa, and Kiryandongo) and the Kampala Metropolitan area (Kampala, Mukono, and Wakiso).

Key lessons learnt included: framing failure to respect digital rights as a business risk to increase private sector engagement; prioritising multi-stakeholder engagements to foster ownership of initiatives; using local languages and simplified text when interfacing with grassroots communities; partnering with media to amplify awareness campaigns; and engaging consistently with beneficiaries to ensure they grasp the context of the project initiatives.

However, despite the project’s wide reach, surveys conducted by project partners show that the majority of SMEs continue to face numerous digital vulnerabilities. For instance, a study by Girls for Climate Action on digital inclusion among 924 women-led green businesses in the Busoga region revealed an 85.9% digital skills gap. In Kampala, a mapping by Boundless Minds of 119 youth-led SMEs found that 49% were unaware of the existence of the Data Protection and Privacy Act, Cap 97.

Similarly, the Capacity Needs Analysis conducted by the Private Sector Foundation Uganda (PSFU) on 79 Business Member Associations showed that while 72% of the businesses collect customer data, only 38% are aware of the Data Protection and Privacy Act, and only 16% have internal cybersecurity policies. These findings reinforce the need for continued support to help Ugandan businesses to understand and uphold digital rights in their operations.

However, the question of resources to continue the business and digital rights work beyond the ARBHR seed funding remained a concern. With the changing funding landscape, advocates were urged to rethink strategies for sustainability and reduce dependence on grants. As one participant noted, “Budget limitations greatly affected effective execution of some awareness campaigns,” which hampered the reach and depth of engagements.

Dr. Joyce Tamale of Capital Solutions Ltd, who facilitated a session on financial resilience, addressed this challenge by demonstrating how organisations can strengthen institutional sustainability and build long-term financing models to sustain their work. Framing the issue through a “mindset cycle,” she argued that resilience begins with a shift in thinking that shapes attitudes, actions, and ultimately results. She paired this with practical strategies such as building operating reserves, diversifying funding streams, and adopting long-term financial planning. Emphasising social entrepreneurship, she encouraged organisations to monetise their assets without compromising their mission.

To put this into practice, organisations were encouraged to codify financial health within their institutions through operating reserve policies, building emergency funds, and the use of financial instruments such as bonds and unit trusts, while also strengthening their own financial literacy.

To consolidate these gains and ensure the sustainability of efforts under the project, partners were rallied to mainstream digital rights and the Business and Human Rights agenda in their institutional programming.

African Governments are Using “Smart City” Systems to Monitor Dissent and Consolidate State Control

By CIVICUS |

CIVICUS discusses the spread of AI-powered surveillance in Africa with Wairagala Wakabi, executive director of the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) and co-editor of Smart City Surveillance in Africa: Mapping Chinese AI Surveillance Across 11 Countries, the latest report by the African Digital Rights Network (ADRN) and the Institute of Development Studies (IDS).
At least 11 African governments have spent over US$2 billion on Chinese-built surveillance infrastructure that uses AI-powered cameras, biometric data collection and facial recognition to monitor public spaces. Marketed as ‘smart city’ solutions to reduce crime and manage urban growth, these systems have been rolled out with little regulation and no independent evidence of their effectiveness. This technology is instead being used to monitor activists, track protesters and silence dissent, with a chilling effect on freedoms of assembly and expression.

How widespread is AI-powered surveillance in Africa?

Under the guise of reducing crime and fighting terrorism, at least 11 governments have invested over US$2 billion in AI-powered ‘smart city’ surveillance infrastructure: Algeria, Egypt, Kenya, Mauritius, Mozambique, Nigeria, Rwanda, Senegal, Uganda, Zambia and Zimbabwe.

Governments are installing thousands of CCTV cameras linked to central command centres, paired with tools such as automatic number-plate recognition, biometric ID systems and facial recognition to track people and vehicles. The largest known investments are in Nigeria (over US$470 million), Mauritius (US$456 million) and Kenya (US$219 million), though the real total is likely much higher, since surveillance spending is often secret and the report covers only 11 of Africa’s 55 countries.

Despite being presented as tools for crime prevention, counter-terrorism, modernisation and urban management, these are not targeted security measures. They represent a broader shift toward continuous, population-level monitoring of public spaces, rolled out over the past five to ten years almost always without clear legal limits or public debate.

Are these systems achieving their stated purpose?

No. There is no compelling evidence that they have achieved it in any of the countries studied. Instead, the data points to a pattern of use that raises serious human rights concerns.

In Uganda and Zimbabwe, AI-powered surveillance including facial recognition is being used to suppress dissent rather than ensure public safety. Activists, critics of the government, opposition leaders and protesters are identified and monitored through this system, even after protests have ended. In Mozambique, smart CCTV systems have reportedly been installed in areas of strong political opposition, suggesting targeted rather than neutral surveillance.

In Senegal and Zambia, countries with relatively low terrorism threats, governments have still invested heavily, which calls into question the stated security rationale.

Across the countries studied, the scale of surveillance far exceeds any actual or perceived security threat, and the infrastructure is consistently being used to monitor dissent and consolidate state control rather than address genuine public safety needs.

Who’s supplying this technology?

While firms from Israel, South Korea and the USA supply surveillance technologies, Chinese companies are the primary suppliers and financiers. They typically offer end-to-end ‘smart city’ packages that include cameras, software platforms, data analytics systems, training and ongoing technical support. Many projects are backed by loans from Chinese state-linked banks, which makes them financially accessible in the short term but creates long-term dependencies on external vendors for maintenance, system management and upgrades.

This model undermines transparency. Procurement processes are opaque and civil society, the public and oversight institutions including parliaments rarely have information about how these systems operate, how data is stored or who has access to it. That lack of accountability is what makes abuse not just possible, but hard to detect or challenge.

What impact is this having on civic space?

This large-scale surveillance of public spaces is not legal, necessary or proportionate to the legitimate aim of providing security. Recording, analysing and retaining facial images of people in public without their consent interferes with their right to privacy and, over time, their willingness to move, assemble and speak freely.

The most immediate consequence is a chilling effect, particularly where civic space is already restricted. Knowing they can be identified and tracked, activists and journalists are less willing to attend protests for fear of later arrest or reprisals, and end up self-censoring. Civil society organisations also report heightened anxiety about the risks for their members and partners.

What should governments and civil society do?

None of the 11 countries studied have a legal framework capable of balancing the state’s security needs with its commitments to protect fundamental human rights. That must change. Governments must adopt clear regulations on surveillance, including restrictions on facial recognition and other AI tools, require independent human rights impact assessments before introducing new systems, make procurement and deployment processes transparent and establish strong oversight mechanisms, including judicial and parliamentary scrutiny, to prevent abuse.

Civil society should continue documenting abuses, raising public awareness and advocating for accountability, while also supporting affected people and communities through digital security support and legal assistance.

Technology-exporting states and donors must enforce stricter controls and safeguards on the export and financing of these tools, support rights-based approaches to digital governance and help fund independent monitoring and advocacy across Africa.

Without urgent action, these systems will continue to expand, and the rights of people across Africa will continue to shrink.

CIVICUS interviews a wide range of civil society activists, experts and leaders to gather diverse perspectives on civil society action and current issues for publication on its CIVICUS Lens platform. The views expressed in interviews are the interviewees’ and do not necessarily reflect those of CIVICUS. Publication does not imply endorsement of interviewees or the organisations they represent.

This article was first published on the Website of CIVICUS LENS on April 07, 2026