Now More Than Ever, Africa Needs Participatory AI Regulatory Sandboxes 

By Brian Byaruhanga and Morine Amutorine

As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.

Multi-stakeholder, participatory regulatory sandboxes for testing innovative technology and data practices are among the mechanisms for ensuring ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes, arguing that harmonised approaches which embed multistakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.

As of October 2024, 25 national regulatory sandboxes had been established across 15 African countries, signalling growing interest in this governance mechanism. However, concerns remain about the extent to which African civil society is involved in the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.

At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, hosted a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.

Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are deployed without inclusive testing environments, the consequences can be severe.

CIPESA’s 2025 State of Internet Freedom in Africa report reveals that AI policy processes across the continent are “often opaque and dominated by state actors, with limited multistakeholder participation.” This pattern of exclusion contradicts the continent’s vibrant civil society landscape, where various organisations in 29 African countries are actively working on responsible AI issues and frequently outpacing government efforts to protect human rights.

The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences of that sidelining include bias and exclusion, erosion of public trust, surveillance overreach, and a lack of recourse mechanisms.

However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda, which developed its National AI Policy framework through participatory regulatory processes, serves as a key example.

Case Study: Rwanda’s Participatory AI Policy Development

The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.

By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable. 

Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes. 

Consequences of Excluding CSOs from AI Regulatory Sandbox Development

The CIPESA-DataSphere session explored the various obstacles that civil society faces in the AI regulatory sandbox processes in Africa as it sought to establish ways to advance meaningful participation.

The session noted that CSOs are often simply unaware that regulatory sandboxes exist. While authorities bear responsibility for proactively engaging civil society in such processes, participants emphasised that civil society should also take proactive measures to demand participation rather than passively waiting for an invitation.

These proactive measures require CSOs to move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.

Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?

Perhaps most critically, there is no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.

Case Study: Uganda’s Failures Without Sandbox Testing

Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. Uganda’s rollout of its digital ID, which was never tested in a sandbox, has been marred by controversy; according to the Datasphere Initiative’s analysis, sandbox testing could have made a difference given sandboxes’ potential as trust-building mechanisms for DPI systems. Concerns include the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022, and a 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case on the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.

Similarly, Uganda’s rushed deployment of its Express Penalty Scheme (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.

Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public and erode trust, and costly reversals that delay progress far more than inclusive design processes would have.

Models of Participatory Sandboxes

Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”

However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.

Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential. 

Pathways Forward: Building Truly Inclusive Sandboxes

Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.

Creating the legal foundation

Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.

Building capacity and awareness

Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.

Institutionalising cross-sector learning

Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.

Redesigning governance structures

True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.

The core issue is not whether civil society should engage with regulatory sandboxes, but the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.

Why Civil Society Participation is Practical

Research on regulatory sandboxes demonstrates that participatory design delivers concrete benefits beyond legitimacy. CIPESA’s analysis of digital public infrastructure governance shows that sandboxes incorporating civil society input “make data governance and accountability more clear” through inclusive policy reviews, pre-implementation audits, and transparent examination of financial terms and vendor contracts. 

Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.

Civil society engagement:

  • Surfaces lived experiences regulators often miss;
  • Strengthens the legitimacy of governance frameworks;
  • Pushes for transparency in AI design and data use;
  • Ensures frameworks reflect African values and protect vulnerable communities; and
  • Enables oversight that prevents exploitative arrangements.

While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.

The cost of exclusion can be seen in Uganda’s EPS: the public backlash, eroded trust, and potential system failure ultimately delayed progress far more than an inclusive design process would have. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.

The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.

Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.

Digital Public Infrastructure in Africa: A Looming Crisis of Equitable Access, Digital Rights, and Sovereign Control

CCTV system in Kampala, Uganda. REUTERS/James Akena (2019)

By Brian Byaruhanga

In June 2025, Uganda suspended its Express Penalty Scheme (EPS) for traffic offences less than a week after its launch, citing a “lack of clarity” among government agencies. While this seemed like a routine administrative misstep, it exposed a more significant issue: the brittle foundation upon which many digital public infrastructures (DPI) in Africa are being built. DPI refers to the foundational digital systems and platforms, such as digital identity, payments, and data exchange frameworks, which form the backbone of digital societies, similar to how roads or electricity function in the physical world.

The EPS saga highlighted implementation gaps and illuminated a systemic failure to promote equitable access and public accountability, and to safeguard fundamental rights, in the rollout of DPI.

When the State Forgets the People

The Uganda EPS, established under section 166 of the Traffic and Road Safety Act, Cap 347, serves as a tech-driven improvement to road safety. Its goal is to reduce road accidents and fatalities by encouraging better driver behaviour and compliance with traffic laws. By allowing offenders to pay fines directly without prosecution, the system aims to resolve minor offences quickly and to ease the burden on the judicial system. The move to an automated system was intended to eliminate the challenges that plagued the manual EPS, including corruption: reports of deleted fines, selective enforcement, and theft of collected penalties.

At the heart of the EPS was an automated surveillance and enforcement system, which used Closed Circuit Television (CCTV) cameras and licence plate recognition to issue real-time traffic fines. This system operated with almost complete opacity. A Russian company, Joint Stock Company Global Security, was reportedly entitled to 80% of fine revenues despite making minimal investment, one of several significant legal and procurement irregularities. There was a notable absence of clear contracts, publicly accessible oversight mechanisms, or effective avenues for appeal. Equally concerning, the collection and storage of extensive amounts of sensitive data lacked transparency regarding who had access to it.

Such an arrangement represented a profound breach of public trust and an infringement upon digital rights, including data privacy and access to information. It illustrated the minimal accountability under which foreign-controlled infrastructure can operate within a nation. This was a data-driven governance mechanism that lacked the corresponding data rights safeguards, subjecting Ugandans to a system they could neither comprehend nor contest.

This is Not an Isolated Incident

The situation in Uganda reflects a widespread trend across the continent. In Kenya, the 2024 Microsoft–G42 data centre agreement – announced as a partnership with the government to build a state-of-the-art green facility aimed at advancing infrastructure, research and development, innovation, and skilling in Artificial Intelligence (AI) – has raised serious concerns about data sovereignty and long-term control over critical digital infrastructure.

In Uganda, the National Digital ID system (Ndaga Muntu) became a case study in how poorly governed DPI deepens structural exclusion and undermines equitable access to public services. A 2021 report by the Centre for Human Rights and Global Justice found that rigid registration requirements, technical failures, and a lack of recourse mechanisms denied millions of citizens access to healthcare, education, and social protection. Those most affected were the elderly, women, and rural communities. However, a 2025 High Court ruling disregarded evidence and expert opinions about the ID system’s exclusionary effects and human rights implications.

Studies estimate that most e-government projects in Africa end in partial or total failure, often due to poor project design, lack of infrastructure, weak accountability frameworks, and insufficient citizen engagement. Many of these projects are built on imported technologies and imposed models that do not reflect the realities or governance contexts of African societies.

A clear pattern is emerging across the continent: countries are integrating complex, often foreign-managed or poorly localised digital systems into public governance without establishing strong, rights-respecting frameworks for transparency, accountability, and oversight. Instead of empowering citizens, this version of digital transformation risks deepening inequality, centralising control, and undermining public trust in government digital systems.

The State is Struggling to Keep Up

National Action Plans (NAPs) on Business and Human Rights, intended to guide ethical public–private collaboration, have failed to address the unique challenges posed by DPI. Uganda’s NAP barely touches on data governance, algorithmic harms, or surveillance technologies. While Kenya’s NAP mentions the digital economy, it lacks enforceable guardrails for foreign firms managing critical infrastructure. In their current form, these frameworks are insufficiently equipped to respond to the complexity and ethical risks embedded in modern DPI deployments.

Had the Ugandan EPS system been subject to stronger scrutiny under a digitally upgraded NAP, key questions would likely have been raised before implementation:

  • What redress exists for erroneous or abusive fines?
  • Who owns the data and where is it stored?
  • Are the financial terms fair, equitable, and sovereign?

But these questions came too late.

What these failures point to is not just a lack of policy, but a lack of operational mechanisms to design, test, and interrogate DPI before rollout. What is needed is a practical bridge that responds to public needs and enforces human rights standards.

Regulatory Sandboxes: A Proactive Approach to DPI

DPI systems such as Uganda’s EPS should undergo rigorous testing before full-scale deployment. This is the purpose of regulatory sandboxes: platforms that offer a structured, participatory, and transparent testbed for innovations, in which a system’s logic, data flows, human rights implications, and resilience under stress are collectively scrutinised before any harm occurs.

Thus, a regulatory sandbox could have revealed and resolved core failures of Uganda’s EPS before rollout, including the controversial revenue-sharing arrangement with a foreign contractor.

How Regulatory Sandboxes Work

Regulatory sandboxes are useful for testing DPI systems and their governance frameworks in a transparent manner:

  • Revenue models can be examined for fairness and legality, with financial terms publicly revealed to regulators, civil society, and the general public.
  • Simulated impact analyses, conducted before implementation, can highlight possible public backlash or a decline in trust.
  • Sandboxes can facilitate pre-implementation audits, make vendor selection and contract terms publicly available, and support mock procurements to detect errors.
  • By defining data ownership and access guidelines, creating redress channels for data abuse, and supporting inclusive policy reviews with civil society, sandboxes make data governance and accountability clearer.

This shift from reactive damage control to proactive governance is what regulatory sandboxes offer. If Uganda had employed a sandbox approach, the EPS system might have served as a model for ethical innovation rather than a cautionary tale of rushed deployment, weak oversight, and lost public trust.

Beyond specific systems like EPS or digital ID, the future of Africa’s digital transformation hinges on how digital public infrastructure is conceived, implemented, and governed. Foundational services, such as digital identity, health information platforms, financial services, surveillance mechanisms, and mobility solutions, are increasingly reliant on data and algorithmic decision-making. However, if these systems are designed and deployed without sufficient citizen participation, independent oversight, legal safeguards, and alignment with the public interest, they risk becoming tools of exclusion, exploitation, and foreign dependency. 

Realising the full potential of DPIs as a tool for inclusion, digital sovereignty, and rights-based development demands urgent and deliberate efforts to embed accountability, transparency, and digital rights at every stage of their lifecycle.

Photo Credit – CCTV system in Kampala, Uganda. REUTERS/James Akena (2019)