Now More Than Ever, Africa Needs Participatory AI Regulatory Sandboxes 

By Brian Byaruhanga and Morine Amutorine

As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.

Multi-stakeholder and participatory regulatory sandboxes to test innovative technology and data practices are among the mechanisms to ensure ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes, arguing that harmonised approaches which embed multistakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.

As of October 2024, 25 national regulatory sandboxes had been established across 15 African countries, signalling growing interest in this governance mechanism. However, concerns remain about the extent to which African civil society is involved in contributing towards the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.

At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, convened a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.

Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are deployed without inclusive testing environments, the consequences can be severe.

CIPESA’s 2025 State of Internet Freedom in Africa report reveals that AI policy processes across the continent are “often opaque and dominated by state actors, with limited multistakeholder participation.” This pattern of exclusion contradicts the continent’s vibrant civil society landscape, where organisations in 29 African countries are actively working on responsible AI issues, frequently outpacing government efforts to protect human rights.

The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences of this sidelining include bias and exclusion, erosion of public trust, surveillance overreach, and the absence of recourse mechanisms.

However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda offers a key example, having developed its National AI Policy framework through participatory regulatory processes.

Case Study: Rwanda’s Participatory AI Policy Development

The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.

By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable. 

Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes. 

Consequences of Excluding CSOs from AI Regulatory Sandbox Development

The CIPESA-Datasphere session explored the various obstacles that civil society faces in AI regulatory sandbox processes in Africa as it sought to establish ways to advance meaningful participation.

The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation rather than passively waiting for an invitation.

These proactive measures require CSOs to move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.

Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?

Perhaps most critically, there is no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.

Case Study: Uganda’s Failures Without Sandbox Testing

Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. The country’s digital ID was never tested in a sandbox, which, according to the Datasphere Initiative’s analysis, could have made a difference given sandboxes’ potential as trust-building mechanisms for DPI systems. Its rollout has been marred by controversy, including concerns about the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022. A 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case over the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.

Similarly, Uganda’s rushed deployment of its Express Penalty Scheme (EPS) for traffic offences in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.

Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public, eroded trust, and costly reversals that delay progress far more than inclusive design processes would have.

Models of Participatory Sandboxes

Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”

However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.

Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential. 

Pathways Forward: Building Truly Inclusive Sandboxes

Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.

Creating the legal foundation

Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.

Building capacity and awareness

Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.

Institutionalising cross-sector learning

Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.

Redesigning governance structures

True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.

The core issue is not whether civil society should engage with regulatory sandboxes, but rather the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.

Why Civil Society Participation is Practical

Research on regulatory sandboxes demonstrates that participatory design delivers concrete benefits beyond legitimacy. CIPESA’s analysis of digital public infrastructure governance shows that sandboxes incorporating civil society input “make data governance and accountability more clear” through inclusive policy reviews, pre-implementation audits, and transparent examination of financial terms and vendor contracts. 

Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.

Civil society engagement:

  • Surfaces lived experiences that regulators often miss.
  • Strengthens the legitimacy of governance frameworks.
  • Pushes for transparency in AI design and data use.
  • Ensures frameworks reflect African values and protect vulnerable communities.
  • Enables oversight that prevents exploitative arrangements.

While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.

The cost of exclusion can be seen in Uganda’s EPS: public backlash, eroded trust, and potential system failure, ultimately delaying progress far more than inclusive design processes would have. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.

The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.

Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.

2025 State of Internet Freedom in Africa Report Documents the Implications of AI on Digital Democracy in Africa

By Juliet Nanfuka

The 2025 edition of the Forum on Internet Freedom in Africa (FIFAfrica25) concluded on a high note with the unveiling of the latest State of Internet Freedom in Africa (SIFA) report. Titled Navigating the Implications of AI on Digital Democracy in Africa, this landmark study unpacks how artificial intelligence is shaping, disrupting, and reimagining civic space and digital rights across the continent.

Drawing on research from 14 countries (Cameroon, Egypt, Ethiopia, Ghana, Kenya, Mozambique, Namibia, Nigeria, Rwanda, Senegal, South Africa, Tunisia, Uganda, and Zimbabwe), the report documents both the immense promise and the urgent perils of AI in Africa. It highlights AI’s potential to strengthen democratic participation, improve public services, and drive innovation, while also warning of its role in amplifying surveillance, disinformation, and exclusion. 

Using a qualitative approach, including literature review and key informant interviews, the report shows that AI is rapidly transforming how Africans interact with technology, yet AI also amplifies existing vulnerabilities, introduces new challenges that undermine fundamental freedoms, and deepens existing inequalities.

The report notes that the political environment is a crucial determinant of AI’s trajectory, with strong democracies generally enabling positive outcomes. Top performers in freedom and governance indices such as South Africa, Ghana, Namibia, and Senegal are more likely to set the standard for AI rollout in Africa. Conversely, countries with lower democratic credentials such as Cameroon, Egypt, Ethiopia, and Rwanda risk constraining AI’s potential or deploying it to amplify digital authoritarianism and political repression.

Countries such as South Africa, Tunisia and Egypt, which have higher levels of internet access and technological development, higher Gross Domestic Product (GDP) per capita, and high scores on the Human Development Index (HDI), are more likely to lead in AI. Meanwhile, countries with weaker digital infrastructure, such as Cameroon, Mozambique and Uganda, face greater challenges and higher risks of AI replicating and worsening existing divides.

In sum, the political environment is a crucial determinant of AI’s trajectory, while economic and developmental status dictates the capacity for AI development and adoption.

Despite these challenges, the report documents that AI offers substantial value to the public sector by improving service delivery and enhancing transparency. Governments are leveraging AI tools for efficiency, such as the South African Revenue Service (SARS) AI Assistant for tax assessments and Nigeria’s Service-Wise GPT for streamlined governance document access. In Kenya, the Sauti ya Bajeti (Voice of the Budget) platform fosters fiscal transparency by allowing citizens to query and track government expenditures. Furthermore, countries like Tunisia and Uganda are using AI models within tax bodies to detect fraud, while Rwanda is deploying AI for judicial system improvements and identity management at borders.

The private sector and academic institutions are driving AI-inspired innovation, particularly in the areas of FinTech, AgriTech, and Natural Language Processing (NLP). For the latter, notable efforts to localise AI include Tunisia’s TUNBERT model for Tunisian Arabic, and Ghana’s Khaya, an open-source AI-powered translator tailored for local languages. Also in Ghana, DeafCanTalk, an AI-powered app, enables bidirectional translation between sign language and spoken language and has enhanced accessibility for deaf users. Rwanda has integrated AI into healthcare using drone delivery systems for medical supplies, while Cameroon and Uganda use AI to assist farmers with pest identification.

However, despite increasing investment, such as the ongoing USD 720 million commitment to compute power by Cassava Technologies across hubs in South Africa, Egypt, Kenya, Morocco, and Nigeria, Africa receives significantly lower AI funding than global counterparts.

Moreover, while AI is gaining traction across many sectors, the proliferation of AI-generated misinformation and disinformation is a pervasive and growing challenge that poses a critical threat to electoral integrity. During South Africa’s 2024 elections, deepfake videos were circulated to manipulate perceptions and endorse political entities. Similarly, during elections and protests in Kenya and Namibia, deepfake technology and automated campaigns were used to discredit opponents. 

The report also documents that governments are deploying AI-powered surveillance technologies, which has led to widespread privacy violations and a chilling effect on freedoms. For example, pro-government propagandists in Rwanda utilised Large Language Models (LLMs) to mass-produce synthetic messages on social media, simulating authentic support and suppressing dissenting voices. Meanwhile, algorithmic bias and exclusion are producing discriminatory outcomes, particularly against low-resource African languages. Also, AI-based content moderation is often ineffective because it lacks contextual understanding and fails to capture local nuance.

A key finding in the report is that across the continent, the pace of AI development far outstrips regulatory readiness. None of the 14 study countries has AI-specific legislation. Instead, fragmented laws on data protection, cybercrime, and copyright are stretched to cover AI, but remain inadequate. Data protection authorities are under-resourced, under-staffed, and often lack the technical expertise required to audit or govern complex AI systems.

Although many national AI strategies are emerging, they prioritise economic growth while neglecting human rights and accountability. This is also fuelled by policy processes that are often opaque and dominated by state actors, with limited multistakeholder participation.

The report stresses that without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities.

To avert this trajectory, the report calls for a human-centred AI governance framework built on inclusivity, transparency, and context.

It also makes recommendations, including enacting comprehensive AI-specific legislation, instituting mandatory human rights impact assessments, establishing empowered AI and data governance institutions, and promoting rights-based advocacy. Others are building technical capacity across governments, civil society and media, and developing policies that prioritise equity and human dignity alongside innovation.

AI offers Africa the opportunity to foster innovation, strengthen democracy, and drive sustainable development. This edition of the State of Internet Freedom in Africa report provides an evidence-based roadmap to ensure that Africa’s digital future remains open, inclusive, and rights-respecting. Find the report here.

Lesotho Charts a Progressive Path on Data Governance

By Patricia Ainembabazi

From July 28-31, Lesotho’s highland capital of Maseru buzzed with the energy of a data governance sprint. Government officials, academics, civil society, and the private sector assembled for an intensive workshop convened by the African Union Development Agency-NEPAD and AU-GIZ with support from the Lesotho Ministry of Information, Communications, Science, Technology and Innovation (MICSTI), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) and the Kenya ICT Action Network (KICTANet). 

The gathering drew over 60 participants with a shared proposition: get everyone on the same page about what good data governance should look like in Lesotho and how to achieve it. The workshop charted a pathway for Lesotho to domesticate the African Union Data Policy Framework (AUDPF), which was adopted by the African Union in 2022. The framework provides guidance to Member States on building harmonised, rights-respecting data governance policies that support digital transformation, innovation, and secure cross-border data flows. 

Lesotho’s efforts to domesticate this framework come at a crucial time, as the country seeks to modernise its digital policy environment and position itself within the continent’s increasingly data-driven economy. These efforts, including the workshop, demonstrate Lesotho’s political will to align with continental digitalisation and data governance blueprints such as the AUDPF and to build an inclusive digital future.

Lesotho’s Minister for Information, Communications, Science, Technology and Innovation, Nthati Moorosi, stated that the various workshop sessions aimed to “instill more understanding on data governance for clear domestication of the [AUDPF] policy framework.” Sessions saw participants introduced to key concepts such as data as a public good, the importance of ethics and accountability, and the need for harmonised cross-border data policies. Case studies from across Africa helped illustrate how sound data governance could unlock value in sectors such as agriculture, health, and education.

CIPESA led practical sessions during which participants examined Lesotho’s Data Protection Act and the draft Data Management Policy (2025) in relation to key African instruments, including the AUDPF, the Malabo Convention, the African Continental Free Trade Area (AfCFTA), and the Digital Transformation Strategy for Africa. Participants noted areas of alignment between the national and continental frameworks but also identified gaps in the national data governance framework.

As such, effort went into mapping out where Lesotho is already aligned with the AUDPF and which specific areas need to be prioritised in revising the country’s legal and policy framework. Recommendations from this mapping exercise included establishing an independent Data Protection Commission, clarifying categories of personal and sensitive data, improving inter-agency coordination, and investing in digital literacy and data privacy skills across the public and private sectors. 

The workshop was complemented by a strategic stakeholder survey to assess perceptions of Lesotho’s data governance framework. The survey revealed minimal understanding of the country’s data governance frameworks. Over half of the respondents stated that existing policies were outdated and unable to address current challenges such as cross-border data flows, cloud computing, and evolving digital privacy threats. Respondents identified digital trade, innovation, health research, and public trust as key benefits of robust data governance. Responses further emphasised the importance of inclusive policymaking, public awareness campaigns on data governance in both English and Sesotho, and targeted support for rural and marginalised communities.

By the conclusion of the meeting, participants had agreed on a national data governance strengthening roadmap. With contributions from a broad spectrum of participants, ranging from Princess Senate Mohato Seeiso to the MICSTI Principal Secretary, Kanono Leronti Ramashamole, who noted that “data is no longer a by-product of administration but a strategic national asset”, the meeting reinforced the value that multistakeholderism holds in digital governance.

Undeniably, no single institution can carry the policymaking load by itself. The real story from Maseru is the multistakeholder collaboration, with the MICSTI anchoring at home, the AUDA-NEPAD and AU-GIZ bringing continental scaffolding, civic and technical communities translating frameworks into practice, and a steady emphasis that data governance has to work for everyone. Ultimately, this process could enable Lesotho to not just catch up with the AUDPF, but to help show what a people-centred, innovation-friendly data ecosystem looks like in the region. 

As a regional leader in digital rights and digital governance, CIPESA is committed to providing continued support to Lesotho as it works to reform its policies and institutionalise stronger data governance mechanisms. Our involvement helps to ensure that national efforts are grounded in best practices and aligned with continental and global standards, such as those set by the African Union.

Digital Public Infrastructure in Africa: A Looming Crisis of Equitable Access, Digital Rights, and Sovereign Control


By Brian Byaruhanga

In June 2025, Uganda suspended its Express Penalty Scheme (EPS) for traffic offences, less than a week after its launch, citing a “lack of clarity” among government agencies. While this seemed like a routine administrative misstep, it exposed a more significant issue: the brittle foundation upon which many digital public infrastructures (DPI) in Africa are being built. DPI refers to the foundational digital systems and platforms, such as digital identity, payments, and data exchange frameworks, which form the backbone of digital societies, similar to how roads or electricity function in the physical world.

This EPS saga highlighted implementation gaps and illuminated a systemic failure to promote equitable access and public accountability, and to safeguard fundamental rights, in the rollout of DPI.

When the State Forgets the People

The Uganda EPS, established under section 166 of the Traffic and Road Safety Act, Cap 347, serves as a tech-driven improvement to road safety. Its goal is to reduce road accidents and fatalities by encouraging better driver behaviour and compliance with traffic laws. By allowing offenders to pay fines directly without prosecution, the system aims to resolve minor offences quickly and to ease the burden on the judicial system. The move to an automated system aimed to eliminate challenges faced by the manual EPS, notably corruption, including reports of deleted fines, selective enforcement, and theft of collected penalties.

At the heart of the EPS was an automated surveillance and enforcement system, which used Closed Circuit Television (CCTV) cameras and license plate recognition to issue real-time traffic fines. This system operated with almost complete opacity. A Russian company, Joint Stock Company Global Security, was reportedly entitled to 80% of fine revenues, despite making minimal investment, among other significant legal and procurement irregularities. There was a notable absence of clear contracts, publicly accessible oversight mechanisms, or effective avenues for appeal. Equally concerning, the collection and storage of extensive amounts of sensitive data lacked transparency regarding who had access to it.

Such an arrangement represented a profound breach of public trust and an infringement upon digital rights, including data privacy and access to information. It illustrated the minimal accountability under which foreign-controlled infrastructure can operate within a nation. This was a data-driven governance mechanism that lacked the corresponding data rights safeguards, subjecting Ugandans to a system they could neither comprehend nor contest.

This is Not an Isolated Incident

The situation in Uganda reflects a widespread trend across the continent. In Kenya, the 2024 Microsoft–G42 data centre agreement – announced as a partnership with the government to build a state-of-the-art green facility aimed at advancing infrastructure, research and development, innovation, and skilling in Artificial Intelligence (AI) –  has raised serious concerns about data sovereignty and long-term control over critical digital infrastructure. 

In Uganda, the National Digital ID system (Ndaga Muntu) became a case study in how poorly-governed DPI deepens structural exclusion and undermines equitable access to public services. A 2021 report by the Centre for Human Rights and Global Justice found that rigid registration requirements, technical failures, and a lack of recourse mechanisms denied millions of citizens access to healthcare, education, and social protection. Those most affected were the elderly, women, and rural communities. However, a 2025 High Court ruling ignored evidence and expert opinions about the ID system’s exclusionary impact and its implications for human rights.

Studies estimate that most e-government projects in Africa end in partial or total failure, often due to poor project design, lack of infrastructure, weak accountability frameworks, and insufficient citizen engagement. Many of these projects are built on imported technologies and imposed models that do not reflect the realities or governance contexts of African societies.

A clear pattern is emerging across the continent: countries are integrating complex, often foreign-managed or poorly localised digital systems into public governance without establishing strong, rights-respecting frameworks for transparency, accountability, and oversight. Instead of empowering citizens, this version of digital transformation risks deepening inequality, centralising control, and undermining public trust in government digital systems.

The State is Struggling to Keep Up

National Action Plans (NAPs) on Business and Human Rights, intended to guide ethical public–private collaboration, have failed to address the unique challenges posed by DPI. Uganda’s NAP barely touches on data governance, algorithmic harms, or surveillance technologies. While Kenya’s NAP mentions the digital economy, it lacks enforceable guardrails for foreign firms managing critical infrastructure. In their current form, these frameworks are insufficiently equipped to respond to the complexity and ethical risks embedded in modern DPI deployments.

Had the Ugandan EPS system been subject to stronger scrutiny under a digitally upgraded NAP, key questions would likely have been raised before implementation:

  • What redress exists for erroneous or abusive fines?
  • Who owns the data and where is it stored?
  • Are the financial terms fair, equitable, and sovereign?

But these questions came too late.

What these failures point to is not just a lack of policy, but a lack of operational mechanisms to design, test and interrogate DPI before rollout. What is needed is a practical bridge that responds to public needs and enforces human rights standards.

Regulatory Sandboxes: A Proactive Approach to DPI

DPI systems, such as Uganda’s EPS, should undergo rigorous testing in a controlled environment before full-scale deployment. In such a space, a system’s logic, data flows, human rights implications, and resilience under stress are collectively scrutinised before any harm occurs. This is the purpose of regulatory sandboxes: platforms that offer a structured, participatory, and transparent testbed for innovations.

Thus, a regulatory sandbox could have revealed and resolved core failures of Uganda’s EPS before rollout, including the controversial revenue-sharing arrangement with a foreign contractor.

How Regulatory Sandboxes Work

Regulatory sandboxes are useful for testing DPI systems and governance frameworks, such as revenue models, in a transparent manner, enabling stakeholders to examine a model’s fairness and legality. This entails publicly disclosing financial terms to regulators, civil society, and the general public. Before implementation, simulated impact analyses can also highlight possible public backlash or a decline in trust. Sandboxes can further facilitate pre-implementation audits, make vendor selection and contract terms publicly available, and support mock procurements to detect errors. By defining data ownership and access guidelines, creating redress channels for data abuse, and supporting inclusive policy reviews with civil society, regulatory sandboxes make data governance and accountability clearer.

This shift from reactive damage control to proactive governance is what regulatory sandboxes offer. If Uganda had employed a sandbox approach, the EPS system might have served as a model for ethical innovation rather than a cautionary tale of rushed deployment, weak oversight, and lost public trust.

Beyond specific systems like EPS or digital ID, the future of Africa’s digital transformation hinges on how digital public infrastructure is conceived, implemented, and governed. Foundational services, such as digital identity, health information platforms, financial services, surveillance mechanisms, and mobility solutions, are increasingly reliant on data and algorithmic decision-making. However, if these systems are designed and deployed without sufficient citizen participation, independent oversight, legal safeguards, and alignment with the public interest, they risk becoming tools of exclusion, exploitation, and foreign dependency. 

Realising the full potential of DPIs as a tool for inclusion, digital sovereignty, and rights-based development demands urgent and deliberate efforts to embed accountability, transparency, and digital rights at every stage of their lifecycle.

Photo Credit – CCTV system in Kampala, Uganda. REUTERS/James Akena (2019)

The Opportunity for Africa to Take a Leadership Role in the WSIS+20 Review Process

By Elonnai Hickok (GNI), Anriette Esterhuysen (APC), and Lillian Nalwoga (CIPESA)

Alongside the Africa School for Internet Governance (AfriSIG) and the regional Africa Internet Governance Forum (AfricaIGF) that took place in late May in Dar-es-Salaam, Tanzania, the Global Network Initiative (GNI), the Association for Progressive Communications (APC), and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) held several meetings that brought together civil society, governments, parliamentarians, and the private sector from across the continent to reflect on Africa’s role in the World Summit on the Information Society +20 (WSIS+20) Review Process. This included a session at the AfriSIG, the regional workshop “The Road to WSIS+20”, and the session “Forging connections between Internet Governance, human rights, and development through the WSIS+20 process” at the AfricaIGF. The meetings highlighted key policy priorities across countries that stakeholders would like reflected in the WSIS+20 review process, surfaced challenges in past implementation of the WSIS framework and action lines with forward-looking recommendations, and emphasized the opportunity for Africa to play a leadership role in the WSIS+20 review process going forward.

In 2025, the world faces an important moment for digital governance. The WSIS+20 review — marking two decades since the World Summit on the Information Society — will not only evaluate past progress but also shape the future of Internet governance, rights, and development as it considers how to align the Global Digital Compact (GDC) with the WSIS process and evaluates the renewal of the Internet Governance Forum (IGF). For Africa, this is a pivotal opportunity to lead, to center the continent’s priorities in global digital discourse, and to champion a people-centered, equitable information society.

Since its founding documents — the Geneva Declaration, the Plan of Action, and the Tunis Agenda — the WSIS has put forward a vision rooted in multistakeholderism, human rights, and inclusive digital development. But nearly 20 years on, that vision is under question amid accelerating technological shifts, geopolitical tensions, billions of people without meaningful connectivity, and the marginalization of voices from the Global Majority. Africa’s leadership in the WSIS+20 review process will be an essential counterbalance to these challenges.

Africa has always participated strongly in WSIS, with robust contributions from the technical community, civil society, many governments, and the WSIS prize winners who have taken high-level action lines and worked to implement them at the local community level. Recent months have seen growing momentum across the continent for the WSIS+20 review process. From Dar-es-Salaam to Cotonou, the United Nations Economic Commission for Africa (UNECA) has convened civil society, governments, parliamentarians, and the private sector to reflect on Africa’s role in the WSIS. This has been complemented by national-level dialogues driven by civil society with participation from the technical community, including in Zambia, Ghana, and South Africa. These dialogues reveal national-level priorities and the potential for Africa to shape the future of the WSIS.

Two major declarations — the Dar es Salaam Declaration and the Cotonou Declaration — highlight Africa’s vision for the WSIS. They underscore issues central to the region: bridging the digital divide, fostering AI innovation, building resilient digital public infrastructure, ensuring data governance, and using the WSIS as a catalyst for Agenda 2063 and the Sustainable Development Goals (SDGs). Crucially, both declarations reaffirm the importance of the IGF and call for its strengthening.

The global WSIS+20 Preparatory and Stocktaking Meeting held on May 30 gave us some insight into the positions that African countries will take. Statements by Uganda, South Africa, and Morocco aligned with the G77’s call for digital sovereignty and technology transfer, recognized the importance of leveraging the WSIS to achieve the 2030 Agenda, called for aligning the GDC with the WSIS, and highlighted new challenges such as AI, but stopped short of unanimously advocating for the renewal and strengthening of the IGF and for making its mandate permanent.

The geopolitical landscape only heightens the urgency of strong participation from African countries. The United States’ controversial stance during the 28th session of the Commission on Science and Technology for Development (CSTD) — particularly its resistance to language on climate, the SDGs, and diversity, equity, and inclusion — has raised alarms. While this may signal a shift in the U.S. government’s approach, it also can be seen as opening space for Africa and other actors to step into leadership roles and push for a rights-based digital future that reflects national priorities.

To do so, Africa must bring vision and coordinated diplomacy. The continent’s key regional frameworks and strategies, including the African Digital Compact, the Digital Transformation Strategy for Africa, the African Union Convention on Cyber Security, and the Continental Artificial Intelligence Strategy, offer policy blueprints for its digital development. The African Commission on Human and Peoples’ Rights has also adopted several important instruments, including the Resolution on Access to Data 2024 and the Declaration of Principles on Freedom of Expression 2019. There is an opportunity to actively inject these priorities and values into global processes like the WSIS+20 and the GDC.

Nationally, progress is tangible. Countries are expanding digital public infrastructure, reforming cybersecurity laws, and working to reduce connectivity gaps. At the same time, challenges persist — from Internet shutdowns and online surveillance to shrinking civic space and rising digital authoritarianism, as highlighted in Paradigm Initiative’s 2024 Londa report. These challenges underscore why a rights-respecting, multistakeholder framework is essential for Africa’s digital future.

As the WSIS+20 review process continues, it will be critical that African countries actively engage in the process, emphasizing inclusive multistakeholder participation as articulated by a cross-stakeholder group in the Five-Point Plan for an Inclusive WSIS+20 Review and a further set of eight recommendations. Going forward, the UNECA and the African Union will play an essential role not only in coordinating regional positions but also in ensuring this participation.

The WSIS+20 presents a timely chance for Africa to take forward the original spirit of the WSIS: a digital world built by and for the people, across sectors and borders. To seize this moment, it will be important for African governments and regional bodies to:

  1. Participate robustly and cohesively in the WSIS+20 review process, ensuring Africa’s priorities are reflected.
  2. Promote inclusive multistakeholder engagement, proactively engaging with and empowering civil society, academia, and the technical community to robustly participate in the WSIS+20 process and inform the position of African governments.
  3. Advance a shared agenda rooted in human rights, sustainable development, the renewal of the IGF, the alignment of the GDC with the WSIS, and Africa-centric innovation and development.

The discussions at the AfricaIGF pointed to an important opportunity for Africa to shape the future of the WSIS process: to ensure that country-level and regional priorities are reflected in the review and its implementation, that the review process is truly multistakeholder, and that it results in implementation that is meaningful and effective.