Towards Inclusive AI Policies in Africa’s Digital Transformation

By CIPESA Writer

On November 13, 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) took part in the global PILnet Summit on Artificial Intelligence (AI) and its impact on the work of Civil Society Organisations (CSOs). Over three days, the summit assembled stakeholders from across the world in Rome, Italy, to deliberate on various topics under the theme, “Amplifying Impact: Pro Bono & Public Interest Law in a Shifting World.”

CIPESA contributed to a session titled, “Pro bono for AI: Addressing legal risks and enhancing opportunities for CSOs”. The session focused on AI and its potential impacts on the work and operations of CSOs. CIPESA emphasised the need for a universally acceptable and adaptable framework to guide the increased application of AI in the fast-evolving technological era. Furthermore, CIPESA highlighted its efforts in developing a model policy on AI for CSOs in Africa, which is being undertaken with the support of the Thomson Reuters Foundation through its global pro bono network.

Edrine Wanyama, Programme Manager Legal at CIPESA, centred his discussion on ethical and rights-respecting AI adoption, and emphasised the need for CSOs to enhance their knowledge and accountability measures while navigating the AI ecosystem.

Mara Puacz, the Head of Impact at Tech To The Rescue, Ana de la Cruz Cubeiro, a Legal Officer at PILnet, and Megan King, a Senior Associate at Norton Rose Fulbright, shared similar sentiments on the benefits of AI, which include expanding advocacy work and initiatives of CSOs.

They noted the increased demand for transparency and accountability in AI development and use, and the need to minimise the harms marginalised communities face from AI-enabled analysis of datasets that often perpetuate bias, reflect data gaps, and rely on limited or poorly digitalised language sets.

The session cited various benefits of AI for CSOs, such as enabling human rights monitoring, documentation and reporting on various fronts, including the Universal Periodic Review; aiding democratic participation; and tracking and documenting trends. Others include enhancing environmental protection, for instance through pollution monitoring, and providing real-time support to agri-business and the health sector through pest and disease identification and diagnosis.

However, funding constraints affect not only AI deployment but also the capacity building needed to address limited AI skills and expertise. In Africa, inadequate infrastructure, data sovereignty fears amongst states, and the irresponsible use of AI and related technologies present additional challenges.

Meanwhile, between October 23 and 24, 2025, CIPESA joined KTA Advocates and the Centre for Law, Policy and Innovation Initiative (CeLPII), to co-host the 8th Annual Symposium under the theme of “Digital Trade, AI and the Creative Economy as Drivers for Digital Transformation”.

The symposium explored the role of AI in misinformation and disinformation, as well as its potential to transform Uganda’s creative economy and digital trade. CIPESA emphasised the need to make AI central in discussions among all relevant actors, including governments, innovators, CSOs and the private sector, in order to identify strategies, such as policy formulation and adoption, to check the potential excesses of AI.

Conversations at the PILnet Summit and the KTA Symposium align with CIPESA’s ongoing initiatives across the continent, where countries and regional blocs are developing AI strategies and policies to inform national adoption and application. At the continental level, the African Union (AU) in 2024 adopted the Continental AI Strategy, which provides a unified framework for using AI to drive Africa’s digital transformation and socio-economic development.

Amongst the key recommendations from the discussions is the need for:

  • Wide adoption of policies guiding the use of AI by civil society organisations, firms, the private sector, and innovators.
  • Nationwide and global participation of stakeholders, including governments, CSOs, the private sector, and innovators, in AI processes and decisions about how AI works, to ensure inclusive participation and that no one is left behind.
  • Awareness creation and continuous education of citizens, CSOs, innovators, firms, and the private sector on the application and value of AI in their work.
  • The adoption of policies and laws that specifically address the application of AI at national, regional and international levels and at organisational and institutional levels to mitigate the potential adverse impacts of AI rollout.

CIPESA @African Economic Research Consortium (AERC) Summit 2025

Update

This year, the African Economic Research Consortium (AERC) is holding its first Summit, in the context of its new 10-year Strategic Plan (2025-2035), in Nairobi, Kenya. The three-day Summit, themed ‘A Renewed AERC for Africa’s New Development Priorities’, is designed to hardwire the research-policy bridge.

This event is taking place from November 30 to December 2, 2025. For more information, click here.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhamad

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. That integration would encourage companies to uphold their responsibility to respect human rights through ensuring transparency and internal accountability mechanisms.

According to Nadhifah Muhamad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This deepens inequalities and results in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as workers on OpenAI’s ChatGPT in Kenya, are characterised by low pay and an absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of perspectives for minority and marginalised groups, including women, youth, children, indigenous communities and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses to map their AI use, harness opportunities and mitigate operational risk. These efforts complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite this progress, Onyango added that the initiative still struggles to convince some companies to accept support in mitigating the risks of AI.

To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI; collaborating and sharing resources to support local digital startups in building sustainable solutions; investing in digital infrastructure; and undertaking robust literacy and capacity-building campaigns targeting both duty bearers and rights holders. Other recommendations were developing evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF) and the UN Global Compact. It brought together more than 500 individuals from over 75 countries – 32 of them African. The event built on the achievements of the previous Africa ABHR Forums in Ghana (2022), Ethiopia (2023) and Kenya (2024).

Digital Public Infrastructure in Africa: A Looming Crisis of Equitable Access, Digital Rights, and Sovereign Control

CCTV system in Kampala, Uganda. REUTERS/James Akena (2019)

By Brian Byaruhanga

In June 2025, Uganda suspended its Express Penalty Scheme (EPS) for traffic offences, less than a week after its launch, citing a “lack of clarity” among government agencies. While this seemed like a routine administrative misstep, it exposed a more significant issue: the brittle foundation upon which many digital public infrastructures (DPI) in Africa are being built. DPI refers to the foundational digital systems and platforms, such as digital identity, payments, and data exchange frameworks, which form the backbone of digital societies, similar to how roads or electricity function in the physical world.

The EPS saga highlighted implementation gaps and illuminated a systemic failure to promote equitable access, ensure public accountability, and safeguard fundamental rights in the rollout of DPI.

When the State Forgets the People

The Uganda EPS, established under section 166 of the Traffic and Road Safety Act, Cap 347, serves as a tech-driven improvement to road safety. Its goal is to reduce road accidents and fatalities by encouraging better driver behaviour and compliance with traffic laws. By allowing offenders to pay fines directly without prosecution, the system aims to resolve minor offences quickly and to ease the burden on the judicial system. The move to an automated system aimed to eliminate the challenges of the manual EPS, chief among them corruption, with reports of deleted fines, selective enforcement, and theft of collected penalties.

At the heart of the EPS was an automated surveillance and enforcement system, which used Closed Circuit Television (CCTV) cameras and licence plate recognition to issue real-time traffic fines. Yet the system operated with almost complete opacity. A Russian company, Joint Stock Company Global Security, was reportedly entitled to 80% of fine revenues despite making minimal investment, amid other significant legal and procurement irregularities. There was a notable absence of clear contracts, publicly accessible oversight mechanisms, or effective avenues for appeal. Equally concerning, the collection and storage of extensive amounts of sensitive data lacked transparency regarding who had access to it.

Such an arrangement represented a profound breach of public trust and an infringement upon digital rights, including data privacy and access to information. It illustrated the minimal accountability under which foreign-controlled infrastructure can operate within a nation. This was a data-driven governance mechanism that lacked the corresponding data rights safeguards, subjecting Ugandans to a system they could neither comprehend nor contest.

This is Not an Isolated Incident

The situation in Uganda reflects a widespread trend across the continent. In Kenya, the 2024 Microsoft–G42 data centre agreement – announced as a partnership with the government to build a state-of-the-art green facility aimed at advancing infrastructure, research and development, innovation, and skilling in Artificial Intelligence (AI) – has raised serious concerns about data sovereignty and long-term control over critical digital infrastructure.

In Uganda, the National Digital ID system (Ndaga Muntu) became a case study in how poorly governed DPI deepens structural exclusion and undermines equitable access to public services. A 2021 report by the Centre for Human Rights and Global Justice found that rigid registration requirements, technical failures, and a lack of recourse mechanisms denied millions of citizens access to healthcare, education, and social protection. Those most affected were the elderly, women, and rural communities. Despite this evidence, a 2025 High Court ruling disregarded expert opinions about the ID system’s exclusionary effects and human rights implications.

Studies estimate that most e-government projects in Africa end in partial or total failure, often due to poor project design, lack of infrastructure, weak accountability frameworks, and insufficient citizen engagement. Many of these projects are built on imported technologies and imposed models that do not reflect the realities or governance contexts of African societies.

A clear pattern is emerging across the continent: countries are integrating complex, often foreign-managed or poorly localised digital systems into public governance without establishing strong, rights-respecting frameworks for transparency, accountability, and oversight. Instead of empowering citizens, this version of digital transformation risks deepening inequality, centralising control, and undermining public trust in government digital systems.

The State is Struggling to Keep Up

National Action Plans (NAPs) on Business and Human Rights, intended to guide ethical public–private collaboration, have failed to address the unique challenges posed by DPI. Uganda’s NAP barely touches on data governance, algorithmic harms, or surveillance technologies. While Kenya’s NAP mentions the digital economy, it lacks enforceable guardrails for foreign firms managing critical infrastructure. In their current form, these frameworks are insufficiently equipped to respond to the complexity and ethical risks embedded in modern DPI deployments.

Had the Ugandan EPS system been subject to stronger scrutiny under a digitally upgraded NAP, key questions would likely have been raised before implementation:

  • What redress exists for erroneous or abusive fines?
  • Who owns the data and where is it stored?
  • Are the financial terms fair, equitable, and sovereign?

But these questions came too late.

What these failures point to is not just a lack of policy, but a lack of operational mechanisms to design, test and interrogate DPI before rollout. What is needed is a practical bridge that responds to public needs and enforces human rights standards.

Regulatory Sandboxes: A Proactive Approach to DPI

DPI systems such as Uganda’s EPS should undergo rigorous testing before full-scale deployment, in a space where a system’s logic, data flows, human rights implications, and resilience under stress are collectively scrutinised before any harm occurs. This is the purpose of regulatory sandboxes – platforms that offer a structured, participatory, and transparent testbed for innovations.

Thus, a regulatory sandbox could have revealed and resolved core failures of Uganda’s EPS before rollout, including the controversial revenue-sharing arrangement with a foreign contractor.

How Regulatory Sandboxes Work: Regulatory sandboxes provide a transparent environment for testing DPI systems and the governance frameworks around them, such as revenue models, before they go live. In practice, this can involve:

  • Testing revenue models openly, with financial terms disclosed to regulators, civil society, and the general public so that stakeholders can examine their fairness and legality.
  • Running simulated impact analyses before implementation to flag possible public backlash or a decline in trust.
  • Facilitating pre-implementation audits, making vendor selection and contract terms publicly available, and conducting mock procurements to detect errors.
  • Clarifying data governance and accountability by defining data ownership and access guidelines, creating redress channels for data abuse, and supporting inclusive policy reviews with civil society.

This shift from reactive damage control to proactive governance is what regulatory sandboxes offer. If Uganda had employed a sandbox approach, the EPS system might have served as a model for ethical innovation rather than a cautionary tale of rushed deployment, weak oversight, and lost public trust.

Beyond specific systems like EPS or digital ID, the future of Africa’s digital transformation hinges on how digital public infrastructure is conceived, implemented, and governed. Foundational services, such as digital identity, health information platforms, financial services, surveillance mechanisms, and mobility solutions, are increasingly reliant on data and algorithmic decision-making. However, if these systems are designed and deployed without sufficient citizen participation, independent oversight, legal safeguards, and alignment with the public interest, they risk becoming tools of exclusion, exploitation, and foreign dependency. 

Realising the full potential of DPIs as a tool for inclusion, digital sovereignty, and rights-based development demands urgent and deliberate efforts to embed accountability, transparency, and digital rights at every stage of their lifecycle.

Photo Credit – CCTV system in Kampala, Uganda. REUTERS/James Akena (2019)