How CIPESA Is Supporting Harmonised Data Governance in African Countries

By Juliet Nanfuka |

Across the world, more data is being collected than ever before. For instance, massive volumes of data are being collected by national identity systems and mandatory SIM card registration exercises, as well as by private actors, including through online platforms and mobile devices. However, in many African countries data governance structures remain weak, fuelling concerns such as data breaches and surveillance.

Over the course of 2025, CIPESA has undertaken extensive work alongside the GIZ DataCipation programme and the African Union to support countries and Regional Economic Communities (RECs) to collaboratively develop data governance policies that are progressive and rights-respecting.

The various engagements, which were guided by the African Union Data Policy Framework (AUDPF), also involved building the capacity of regulators, policymakers, and other stakeholders in devising and implementing data governance policies that promote socio-economic transformation and regional integration.

Adopted in 2022, the AUDPF offers a harmonised set of principles to guide African states in governing data safely, fairly, and effectively, as it provides a continental vision for protecting personal rights, enabling cross-border data flows, unlocking socio-economic value, and fostering interoperable digital systems. CIPESA has long advocated for African countries to adopt the AUDPF as a common benchmark to guide data policies that strengthen accountability and foster trust between governments and citizens.

The inaugural capacity building workshop, which trained judges and senior staff of the East African Court of Justice (EACJ) on data governance, was held in March 2025 in Kigali, Rwanda. The training aimed to enhance court officials’ understanding of the AUDPF and its implications for national and regional data governance, as well as the need for harmonised data governance policies within the East African Community (EAC).

“As East Africa moves into a regional economy, the EACJ might be faced with a number of challenges in its operations. There are cases in national courts related to data governance, and if the EACJ is not aware of what is going on in the digital space, it might not be able to handle such cases should they come before the court.” – Hon. Justice Nestor Kayobera, President of the EACJ

This was followed by another training in April 2025 in Kampala, Uganda for members of the East African Legislative Assembly (EALA). At a time when the eight-member regional bloc was developing harmonised data policy legislation, the training strengthened the capacity of members and staff of the regional parliament in the areas of data governance, data protection, and related legislative and policy issues.

The Southern African Development Community (SADC) has similarly embarked on developing a Regional Data Governance Framework. In September, CIPESA supported training for more than 50 regulators and policymakers from 16 SADC countries in Madagascar, on harmonising data protection frameworks to support cross-border data flows and regional trade.

At the country level, CIPESA has supported capacity development as well as data governance policy development. In July 2025, a consultative workshop in Lesotho’s capital, Maseru, brought together more than 60 stakeholders from the government, civil society, academia, and the private sector to review the country’s draft Data Management Policy and align it with the AUDPF. The workshop developed a roadmap towards building a more progressive data governance policy framework, with various revisions being made to the Data Management Policy. In October, the policy was validated at a multi-stakeholder engagement led by the Ministry of Information, Communications, Science, Technology and Innovation, alongside the AU, GIZ, and CIPESA.

In November 2025, CIPESA supported capacity building in Liberia for government ministries, civil society organisations, and private sector representatives at a two-day workshop in Monrovia. The engagement, which was convened by the Ministry of Posts and Telecommunications, CIPESA, and the AU, explored how data could support Liberia’s digital transformation and the need to align the country’s laws and policies with continental and global frameworks.

Additionally, CIPESA is supporting the government of Liberia to develop a Data Governance Policy that is aligned to the AUDPF. In this regard, a separate two-day multi-stakeholder consultation was held to inform the content of the prospective policy, which is anticipated to be completed early in 2026. The consultation marked a critical step in Liberia’s ongoing efforts to establish a coherent national framework for data governance, protection, and utilisation.

Also in November, CIPESA supported capacity building in Kampala, Uganda for 81 policymakers, regulators, civil society, and private sector actors, in partnership with the Ministry of ICT and National Guidance, the Personal Data Protection Office, GIZ, and the AU. Participants explored foundational elements of data governance, including data infrastructure, data value creation, standards, trust mechanisms, and institutional arrangements, and discussed the regulatory approaches, institutional structures, and capacity-building strategies necessary for Uganda to harness data responsibly and efficiently.

Meanwhile, various global settings have also served as platforms to further deliberate and contribute to the global discourse on data governance in Africa. At the June 2025 Internet Governance Forum held in Norway, a collaborative session hosted by CIPESA, GIZ, and the Republic of The Gambia saw discussions on how fragmented national regulations and inconsistent privacy and cybersecurity standards pose challenges to regional and global cooperation.

Similarly at the September 2025 Forum on Internet Freedom in Africa (FIFAfrica25) hosted by CIPESA, various sessions discussed data governance as central to Africa’s digitalisation efforts. Across multiple sessions, speakers underscored the growing recognition that how data is governed will shape the continent’s democratic, economic, and social futures. Notably, the European Union (EU) Delegation to Namibia emphasised its continued commitment to investing in digital infrastructure, strengthening democratic governance, and advancing a human-centric digital transformation through the Global Gateway strategy.

Addressing Online Harms Ahead of Rwanda’s 2026 UPR Review

By Patricia Ainembabazi |

As the world commemorates the 16 Days of Activism Against Gender-Based Violence (November 25 to December 10), global attention is drawn to the rising risks women and girls face in digital environments. These harms increasingly undermine political participation, public discourse, and the safety of women across Africa.

Accordingly, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) and the Association for Progressive Communications (APC) have stressed the urgent need to address technology-facilitated gender-based violence (TFGBV) in Rwanda in written and oral submissions to the Universal Periodic Review (UPR) 51st pre-session for Rwanda at the United Nations Human Rights Council in Geneva. In a joint CIPESA–APC fact sheet on human rights, the two organisations highlighted critical gaps in legal protections, online safety, and digital inclusion in Rwanda.

The joint UPR report notes that TFGBV has become a major deterrent to Rwandan women’s participation online, affecting women in politics, journalism, activism, and advocacy. The 2024 online smear campaign against opposition figure Victoire Ingabire Umuhoza illustrates the gendered nature of digital disinformation and harassment. Such attacks rely on misogynistic narratives designed to humiliate, silence, and delegitimise women’s public engagement. This pattern is not only a violation of rights; it also reinforces structural inequalities and dissuades other women from engaging in civic or political life.

These concerns reflect global trends. UN Women has warned of the rapid escalation of deepfake pornography, a form of digitally manipulated sexualised content disproportionately deployed against women and girls. Deepfakes can cause severe psychological, reputational, and professional harm, often leaving survivors without effective avenues for redress. They are increasingly used to silence women, distort electoral participation, and discourage women from entering political leadership. Such harms undermine democratic processes, distort public debate, and entrench gender inequality.

Rwanda’s obligations under the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) require the state to take comprehensive measures to eliminate discrimination (Articles 2 and 3) and ensure women’s full participation in political and public life (Article 7). However, as documented in the joint UPR report and fact sheet, gaps persist. The 2018 Cybercrime Law lacks survivor-centred provisions, and its broad definitions have on occasion been applied in ways that disadvantage victims.

Moreover, enforcement remains inconsistent, and the absence of specialised mechanisms for investigating and prosecuting online violence limits accountability. In this context, TFGBV is not merely a digital phenomenon; it is a direct barrier to fulfilling Rwanda’s CEDAW obligations and achieving SDGs 5 and 16.

The gender digital divide further compounds these harms. Internet penetration in Rwanda stands at 34.2%, with women representing just 38.2% of social media users. Structural inequalities, including device affordability, income disparities, and limited digital literacy, restrict women’s participation in digital spaces. These inequalities heighten vulnerability to online harm and restrict access to safety tools, reporting mechanisms, and digital rights resources. As the joint CIPESA–APC evidence indicates, without targeted investment in digital literacy, device access, and connectivity for women, Rwanda risks deepening existing socio-economic and civic inequalities.

During the UPR pre-session, CIPESA and APC presented a set of recommendations aimed at promoting rights-respecting digital governance. These included adopting survivor-centred TFGBV protections aligned with CEDAW, strengthening investigative and prosecutorial capacities to effectively respond to online harms, and compelling technology platforms to improve reporting, moderation, and accountability mechanisms. The submission also called for amending restrictive provisions in the Penal Code and Cybercrime Law, establishing independent oversight over surveillance operations, and addressing the gender digital divide through targeted digital literacy and affordability initiatives.

The 16 Days of Activism provide an important reminder that violence against women is evolving in both form and reach. Digital technologies have expanded the avenues through which women are targeted, often enabling harm that is faster, more pervasive, and harder to remedy. Ending violence against women, therefore, requires recognising online spaces as critical sites of protection.

Rwanda enters its fourth UPR cycle with a number of unaddressed commitments. During the 2021 review, the Rwandan government received 32 recommendations on freedom of expression and media freedom, including 24 urging reforms to restrictive speech provisions and 17 calling for enhanced protections for journalists and human rights defenders. Yet implementation has been limited. Provisions in Rwanda’s 2018 Penal Code and 2018 Cybercrime Law continue to criminalise “false information”, edited content, and criticism of public authorities, enabling arrests of journalists and discouraging dissenting expression.

These laws have contributed to widespread self-censorship, shrinking civic space, and undermining public participation in digital environments. At the same time, reports of intrusive surveillance, such as the documented use of Pegasus spyware targeting thousands of journalists, activists, and diaspora members, further erode trust and violate privacy rights. The absence of independent oversight in surveillance practices intensifies this concern.

The country’s ongoing engagement with the UPR process and its upcoming review, scheduled for January 21, 2026, offers a timely opportunity to address these challenges. During the 51st pre-session, held from November 26 to 27, 2025 in Geneva, several permanent missions expressed eagerness to advance strong recommendations for Rwanda, and there is hope that these delegations will amplify the CIPESA–APC proposals during the formal review.

CIPESA and APC remain committed to supporting evidence-based reforms that strengthen digital rights protections across Africa. Rwanda’s review presents a defining moment for the government to adopt meaningful, future-focused reforms that uphold human rights, ensure accountability, and create a digital environment where all citizens, especially women, can participate safely, freely, and equally in shaping the country’s democratic and digital future.

Towards Inclusive AI Policies in Africa’s Digital Transformation

By CIPESA Writer |

On November 13, 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) took part in the global PILnet summit on Artificial Intelligence (AI) and its impact on the work of civil society organisations (CSOs). Over three days, the summit assembled stakeholders from across the world in Rome, Italy, to deliberate on various topics under the theme, “Amplifying Impact: Pro Bono & Public Interest Law in a Shifting World.”

CIPESA contributed to a session titled, “Pro bono for AI: Addressing legal risks and enhancing opportunities for CSOs”. The session focused on AI and its potential impacts on the work and operations of CSOs. CIPESA emphasised the need for a universally acceptable and adaptable framework to guide the increased application of AI in the fast-evolving technological era. Furthermore, CIPESA highlighted its efforts in developing a model policy on AI for CSOs in Africa, which is being undertaken with the support of the Thomson Reuters Foundation through its global pro bono network.

Edrine Wanyama, Programme Manager Legal at CIPESA, centred his remarks on ethical and rights-respecting AI adoption, and emphasised the need for CSOs to enhance their knowledge and accountability measures while navigating the AI ecosystem.

Mara Puacz, the Head of Impact at Tech To The Rescue, Ana de la Cruz Cubeiro, a Legal Officer at PILnet, and Megan King, a Senior Associate at Norton Rose Fulbright, shared similar sentiments on the benefits of AI, which include expanding advocacy work and initiatives of CSOs.

They noted the increased demand for transparency and accountability in AI development and use, and the need to minimise the harms that marginalised communities face from AI-enabled analysis of datasets that often perpetuate bias, suffer from data gaps, and draw on limited or poorly digitised language resources.

The session cited various benefits of AI for CSOs, such as enabling human rights monitoring, documentation, and reporting in forums like the Universal Periodic Review; aiding democratic participation; and tracking and documenting trends. Others include facilitating and enhancing environmental protection, such as through monitoring pollution, and providing real-time support to agri-business and the health sector by facilitating pest and disease identification and diagnosis.

However, funding constraints not only affect AI deployment but also capacity building to address the limited skills and expertise in AI deployment. In Africa, the inadequacy of relevant infrastructure, data sovereignty fears amongst states, and the irresponsible use of AI and related technologies present additional challenges.

Meanwhile, between October 23 and 24, 2025, CIPESA joined KTA Advocates and the Centre for Law, Policy and Innovation Initiative (CeLPII), to co-host the 8th Annual Symposium under the theme of “Digital Trade, AI and the Creative Economy as Drivers for Digital Transformation”.

The symposium explored the role of AI in misinformation and disinformation, as well as its potential to transform Uganda’s creative economy and digital trade. CIPESA emphasised the need to make AI central in discussions across relevant sectors, bringing together governments, innovators, CSOs, and the private sector, among others, to identify strategies, such as policy formulation and adoption, to check the potential excesses of AI.

Conversations at the PILnet summit and the KTA Symposium align with CIPESA’s ongoing initiatives across the continent, where countries and regional blocs are developing AI strategies and policies to inform national adoption and application. At the continental level, in 2024, the African Union (AU) adopted the Continental AI Strategy, which provides a unified framework for using AI to drive Africa’s digital transformation and socio-economic development.

Amongst the key recommendations from the discussions is the need for:

  • Wide adoption of policies guiding the use of AI by civil society organisations, firms, the private sector, and innovators.
  • Nationwide and global participation of individuals and stakeholders, including governments, CSOs, the private sector, and innovators, in AI processes and how they work, to ensure inclusive participation and that no one is left behind.
  • Awareness creation and continuous education of citizens, CSOs, innovators, firms, and the private sector on the application and value of AI in their work.
  • The adoption of policies and laws that specifically address the application of AI at national, regional and international levels and at organisational and institutional levels to mitigate the potential adverse impacts of AI rollout.

#BeSafeByDesign: A Call To Platforms To Ensure Women’s Online Safety

By CIPESA Writer |

Across Eastern and Southern Africa, activists, journalists, and women human rights defenders (WHRDs) are leveraging online spaces to mobilise for justice, equality, and accountability. However, the growth of online harms such as Technology-Facilitated Gender-Based Violence (TFGBV), disinformation, digital surveillance, and Artificial Intelligence (AI)-driven discrimination and attacks has outpaced the development of robust protections.

Notably, human rights defenders, journalists, and activists face unique and disproportionate digital security threats, including harassment, doxxing, and data breaches, that limit their participation and silence dissent.

It is against this background that the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with Irene M. Staehelin Foundation, is implementing a project aimed at combating online harms so as to advance digital rights. Through upskilling, advocacy, research, and movement building, the initiative addresses the growing threats in digital spaces, particularly affecting women journalists and human rights defenders.

The first of the upskilling engagements kicked off in Nairobi, Kenya, at the start of December 2025, with 25 women human rights defenders and activists attending a three-day digital resilience skills-share workshop hosted by CIPESA and Digital Society Africa. Participants came from the Democratic Republic of Congo, Madagascar, Malawi, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe. The workshop coincided with the 16 Days of Activism campaign, which this year is themed “Unite to End Digital Violence against All Women and Girls”.

According to the United Nations Population Fund (UNFPA), TFGBV is “an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated, and amplified in part or fully by the use of information and communication technologies or digital media against a person based on their gender.” It includes cyberstalking, doxing, non-consensual sharing of intimate images, cyberbullying, and other forms of online harassment.

Women in Sub-Saharan Africa are 32% less likely than men to use the internet, with the key impediments being literacy and digital skills, affordability, safety, and security. On top of this gender digital divide, more women than men face various forms of digital violence. Accordingly, the African Commission on Human and Peoples’ Rights (ACHPR) Resolution 522 of 2022 has underscored the urgent need for African states to address online violence against women and girls.

Women who advocate for gender equality, feminism, and sexual minority rights face higher levels of online violence. Indeed, women human rights defenders, journalists and politicians are the most affected by TFGBV, and many of them have withdrawn from the digital public sphere due to gendered disinformation, trolling, cyber harassment, and other forms of digital violence. The online trolling of women is growing exponentially and often takes the form of gendered and sexualised attacks and body shaming.

Several specific challenges must be considered when designing interventions to combat TFGBV. These challenges are shaped by legal, social, technological, and cultural factors, which affect both the prevalence of digital harms and violence and the ability to respond effectively. They include weak and inadequate legal frameworks; a lack of awareness about TFGBV among policymakers, law enforcement officers, and the general public; the gender digital divide; and normalised online abuse against women, with victims often blamed rather than supported.

Moreover, there is a shortage of comprehensive response mechanisms and support services for survivors of online harassment, such as digital security helplines, psychosocial support, and legal aid. On the other hand, there is limited regional and cross-sector collaboration between CSOs, government agencies, and the private sector (including tech companies).

A guiding strand for these efforts will be the #BeSafeByDesign campaign, which highlights the necessity of safe platforms for women as well as the consequences when safety is missing. #BeSafeByDesign shifts the burden of ensuring safety in online spaces away from women and places it on platforms, which are required to step up risk assessments, accessible and stronger reporting pathways, proactive detection of abuse, and transparent accountability mechanisms. The initiative will also involve hands-on upskilling of at-risk women in practical cybersecurity.

Now More Than Ever, Africa Needs Participatory AI Regulatory Sandboxes 

By Brian Byaruhanga and Morine Amutorine |

As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.

Multi-stakeholder and participatory regulatory sandboxes to test innovative technology and data practices are among the mechanisms to ensure ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes and how harmonised approaches that embed multistakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.

As of October 2024, 25 national regulatory sandboxes had been established across 15 African countries, signalling growing interest in this governance mechanism. However, there remain concerns about the extent to which African civil society is involved in contributing to the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.

At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, hosted a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.

Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet, when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are being deployed without inclusive testing environments, the consequences can be severe. 

CIPESA’s 2025 State of Internet Freedom in Africa report reveals that AI policy processes across the continent are “often opaque and dominated by state actors, with limited multistakeholder participation.” This pattern of exclusion contradicts the continent’s vibrant civil society landscape, where various organisations in 29 African countries are actively working on responsible AI issues and frequently outpacing government efforts to protect human rights.

The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences of this sidelining include bias and exclusion, erosion of public trust, surveillance overreach, and a lack of recourse mechanisms.

However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda serves as a key example in the development of a National AI Policy framework through participatory regulatory processes.

Case Study: Rwanda’s Participatory AI Policy Development

The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.

By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable. 

Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes. 

Consequences of Excluding CSOs in AI Regulatory Sandbox Development

The CIPESA-DataSphere session explored the various obstacles that civil society faces in the AI regulatory sandbox processes in Africa as it sought to establish ways to advance meaningful participation.

The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation rather than passively waiting for an invitation.

In taking such measures, CSOs must move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.

Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?

Perhaps most critically, there’s no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.

Case Study: Uganda’s Failures Without Sandbox Testing

Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. Although not tested in a sandbox (which, according to the Datasphere Initiative’s analysis, could have made a difference given sandboxes’ potential as trust-building mechanisms for digital public infrastructure systems), Uganda’s rollout of its digital ID has been marred by controversy. Concerns include the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022. A 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case over the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.

Similarly, Uganda’s rushed deployment of its Electronic Payment System (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.

Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public, eroded trust, and costly reversals that delay progress far more than inclusive design processes would have.

Models of Participatory Sandboxes

Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”

However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.

Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential. 

Pathways Forward: Building Truly Inclusive Sandboxes

Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.

Creating the legal foundation

Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.

Building capacity and awareness

Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.

Institutionalising cross-sector learning

Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.

Redesigning governance structures

True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.

The core issue is not whether civil society should engage with regulatory sandboxes, but the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.

Why Civil Society Participation is Practical

Research on regulatory sandboxes demonstrates that participatory design delivers concrete benefits beyond legitimacy. CIPESA’s analysis of digital public infrastructure governance shows that sandboxes incorporating civil society input “make data governance and accountability more clear” through inclusive policy reviews, pre-implementation audits, and transparent examination of financial terms and vendor contracts. 

Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.

Civil society engagement:

  • Surfaces lived experiences regulators often miss;
  • Strengthens the legitimacy of governance frameworks;
  • Pushes for transparency in AI design and data use;
  • Ensures frameworks reflect African values and protect vulnerable communities; and
  • Enables oversight that prevents exploitative arrangements.

While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.

The cost of exclusion can be seen in Uganda’s EPS rollout: public backlash, eroded trust, and potential system failure, ultimately delaying progress far more than an inclusive design process would have. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.

The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.

Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.