MOSIP Connect 2026 Calls for Scalable, Country-Driven Digital Public Infrastructure

By Milliam Murigi |

African governments have been urged to move beyond pilot projects and develop Digital Public Infrastructure (DPI) that is open, secure, nationally owned and capable of operating at full national scale to truly serve citizens.

This call was made today in Morocco during the launch of the MOSIP Connect 2026 conference. Speaking at the event, Prof. Debabrata Das, Director of the International Institute of Information Technology (IIIT) – Bangalore, emphasized that DPI cannot remain at the pilot stage indefinitely.

“DPI must be scaled to operate at national levels,” he said, highlighting that such systems are not just technology projects but national assets that underpin citizens’ access to services like education, health, social protection, and payments.

DPI refers to government-backed digital systems and platforms designed to provide citizens with secure, accessible, and efficient services. These systems include national digital IDs, e-government portals, digital payment platforms, and health and social service databases.

Across Africa, countries are making significant strides in implementing DPI, though progress varies widely. Ghana’s national ID system, the Ghana Card, has been linked to banking, mobile verification, and social services, while Rwanda has achieved over 90 percent coverage of adults with digital IDs integrated into multiple government services.

Kenya’s Maisha Namba program seeks to consolidate several identity databases into a single, unified platform. Despite these advances, many initiatives remain fragmented or confined to pilot projects, limiting their ability to deliver services nationwide.

Prof. Das stressed that for DPI to deliver real public value, it must be open-source, secure, respect national and data sovereignty, and be designed to evolve with changing policies, technologies, and citizen needs.

“DPI is not regular software development. When embraced, it becomes part of the relationship between citizens and the state. That means it must be based on evidence, transparency, accountability and continuous learning.”

According to him, six principles should guide next-generation DPI development: open-source technology, respect for national and data sovereignty, neutrality in partnerships, reusability of systems, commitment to national-scale deployment and the ability to evolve as policies, technologies and citizens’ needs change.

He added that building successful national digital systems requires three elements working together: strong technology platforms and standards, governance structures that ensure accountability, and institutional and user capacity to adopt the systems. Without all three, pilot programs risk failing to scale.

“Data creates power. Countries must retain control over the data generated through digital systems. This is why sovereignty considerations are central to MOSIP’s approach when working with governments,” he added.

The Modular Open Source Identity Platform (MOSIP), an IIIT-Bangalore project, offers countries modular, open-source technology to build and own their national identity systems. The project aims to provide governments with the tools for meaningful digital transformation, built on a bedrock of good principles and human-centric design.

Speaking at the same event, Abdelhak Harrak, Director of Information Systems and Telecommunications at Morocco's Ministry of the Interior, said that the success of a digital identification system does not rely solely on technical solutions, however advanced they may be; it also depends on strong governance and the sustained mobilization of teams responsible for rigorously managing complex transformations involving numerous field actors. It is this synergy, he noted, that ensures both the security and the sustainability of a national identification system.

“Technology alone cannot drive change; it is the alignment of people, processes, and purpose that turns innovation into lasting impact,” said Harrak.

He also highlighted the role of private-sector and civil-society partners in building sustainable digital ecosystems. He described them as “essential” rather than peripheral, noting that innovation often comes from organizations that build localized solutions on top of open platforms.

This article was first published by Science Africa on February 12, 2026.

Now More Than Ever, Africa Needs Participatory AI Regulatory Sandboxes 

By Brian Byaruhanga and Morine Amutorine |

As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.

Multi-stakeholder and participatory regulatory sandboxes to test innovative technology and data practices are among the mechanisms to ensure ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes, arguing that harmonised approaches that embed multistakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.

As of October 2024, 25 national regulatory sandboxes had been established across 15 African countries, signalling growing interest in this governance mechanism. However, concerns remain about the extent to which African civil society is involved in contributing towards the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.

At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, hosted a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.

Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet, when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are being deployed without inclusive testing environments, the consequences can be severe. 

CIPESA’s 2025 State of Internet Freedom in Africa report reveals that AI policy processes across the continent are “often opaque and dominated by state actors, with limited multistakeholder participation.” This pattern of exclusion contradicts the continent’s vibrant civil society landscape, where various organisations in 29 African countries are actively working on responsible AI issues and frequently outpacing government efforts to protect human rights.

The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences that follow include bias and exclusion, erosion of public trust, surveillance overreach and a lack of recourse mechanisms.

However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda serves as a key example in the development of a National AI Policy framework through participatory regulatory processes.

Case Study: Rwanda’s Participatory AI Policy Development

The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.

By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable. 

Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes. 

Consequences of Excluding CSOs in AI Regulatory Sandbox Development

The CIPESA-DataSphere session explored the various obstacles that civil society faces in the AI regulatory sandbox processes in Africa as it sought to establish ways to advance meaningful participation.

The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation rather than passively waiting for an invitation.

In taking such proactive measures, CSOs must move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.

Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?

Perhaps most critically, there’s no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.

Case Study: Uganda’s Failures Without Sandbox Testing

Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. Although not tested in a sandbox (which, according to the Datasphere Initiative’s analysis, could have made a difference given sandboxes’ potential as trust-building mechanisms for DPI systems), Uganda’s rollout of its digital ID has been marred by controversy. Concerns include the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022. A 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case on the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.

Similarly, Uganda’s rushed deployment of its Electronic Payment System (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.

Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public, eroded trust, and costly reversals that delay progress far more than inclusive design processes would have.

Models of Participatory Sandboxes

Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”

However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.

Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential. 

Pathways Forward: Building Truly Inclusive Sandboxes

Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.

Creating the legal foundation

Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.

Building capacity and awareness

Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.

Institutionalising cross-sector learning

Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.

Redesigning governance structures

True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.

The core issue is not if civil society should engage with regulatory sandboxes, but rather the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.

Why Civil Society Participation is Practical

Research on regulatory sandboxes demonstrates that participatory design delivers concrete benefits beyond legitimacy. CIPESA’s analysis of digital public infrastructure governance shows that sandboxes incorporating civil society input “make data governance and accountability more clear” through inclusive policy reviews, pre-implementation audits, and transparent examination of financial terms and vendor contracts. 

Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.

Civil society engagement:

  • Surfaces lived experiences regulators often miss;
  • Strengthens legitimacy of governance frameworks;
  • Pushes for transparency in AI design and data use;
  • Ensures frameworks reflect African values and protect vulnerable communities; and
  • Enables oversight that prevents exploitative arrangements.

While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.

The cost of exclusion can be seen in Uganda’s EPS system: public backlash, eroded trust, and potential system failure ultimately delayed progress far more than an inclusive design process would have. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.

The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.

Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.

Applications are Open for a New Round of Africa Digital Rights Funding!

Announcement |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is calling for proposals to support digital rights work across Africa.

This call for proposals is the 10th under the CIPESA-run Africa Digital Rights Fund (ADRF) initiative that provides rapid response and flexible grants to organisations and networks to implement activities that promote digital rights and digital democracy, including advocacy, litigation, research, policy analysis, skills development, and movement building.

 The current call is particularly interested in proposals for work related to:

  • Data governance including aspects of data localisation, cross-border data flows, biometric databases, and digital ID.
  • Digital resilience for human rights defenders, other activists and journalists.
  • Censorship and network disruptions.
  • Digital economy.
  • Digital inclusion, including aspects of accessibility for persons with disabilities.
  • Disinformation and related digital harms.
  • Technology-Facilitated Gender-Based Violence (TFGBV).
  • Platform accountability and content moderation.
  • Implications of Artificial Intelligence (AI).
  • Digital Public Infrastructure (DPI).

Grant amounts available range between USD 5,000 and USD 25,000 per applicant, depending on the need and scope of the proposed intervention. Cost-sharing is strongly encouraged, and the grant period should not exceed eight months. Applications will be accepted until November 17, 2025. 

Since its launch in April 2019, the ADRF has provided initiatives across Africa with more than one million US dollars and contributed to building capacity and traction for digital rights advocacy on the continent.

Application Guidelines

Geographical Coverage

The ADRF is open to organisations/networks based or operational in Africa and with interventions covering any country on the continent.

Size of Grants

Grant size shall range from USD 5,000 to USD 25,000. Cost sharing is strongly encouraged.

Eligible Activities

The activities that are eligible for funding are those that protect and advance digital rights and digital democracy. These may include but are not limited to research, advocacy, engagement in policy processes, litigation, digital literacy and digital security skills building. 

Duration

The grant funding shall be for a period not exceeding eight months.

Eligibility Requirements

  • The Fund is open to organisations and coalitions working to advance digital rights and digital democracy in Africa. This includes but is not limited to human rights defenders, media, activists, think tanks, legal aid groups, and tech hubs. Entities working on women’s rights, or with youth, refugees, persons with disabilities, and other marginalised groups are strongly encouraged to apply.
  • The initiatives to be funded will preferably have formal registration in an African country, but in some circumstances, organisations and coalitions that do not have formal registration may be considered. Such organisations need to show evidence that they are operational in a particular African country or countries.
  • The activities to be funded must be in/on an African country or countries.

Ineligible Activities

  • The Fund shall not fund any activity that does not directly advance digital rights or digital democracy.
  • The Fund will not support travel to attend conferences or workshops, except in exceptional circumstances where such travel is directly linked to an activity that is eligible.
  • Costs that have already been incurred are ineligible.
  • The Fund shall not provide scholarships.
  • The Fund shall not support equipment or asset acquisition.

Administration

The Fund is administered by CIPESA. An internal and external panel of experts will make decisions on beneficiaries based on the following criteria:

  • Whether the proposed intervention fits within the Fund’s digital rights priorities.
  • The relevance to the given context/country.
  • Commitment and experience of the applicant in advancing digital rights and digital democracy.
  • Potential impact of the intervention on digital rights and digital democracy policies or practices.

The deadline for submissions is Monday, November 17, 2025. The application form can be accessed here.

Strengthening Media Reporting on Digital Public Infrastructure in Eastern Africa

By Juliet Nanfuka |

On October 13-15, 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with Co-Develop, hosted 20 journalists in a workshop as part of the Digital Public Infrastructure (DPI) Journalism Fellowship for Eastern Africa, a regional initiative aimed at strengthening journalists’ capacity to report knowledgeably and critically on DPI and Digital Public Goods (DPGs) in the region.

The workshop took place in Nairobi, Kenya, and brought together journalists from Burundi, the Democratic Republic of the Congo, Ethiopia, Kenya, Rwanda, Somalia, South Sudan, Tanzania, and Uganda, who are receiving both knowledge- and skills-based training alongside a reporting grant to produce in-depth DPI stories.

At an inaugural virtual workshop held in August 2025, the Fellows examined, among other topics: digitalisation and digital rights in Eastern Africa; UN and African Union frameworks on DPI; the DPI ecosystem in Eastern Africa; and media coverage of DPI across nine countries, based on CIPESA’s ongoing research. The workshop also provided practical training in journalism skills, including technology beat reporting, conceptualising story ideas, writing effective pitches, data storytelling, and the use of AI in storytelling.

Report Launch

Following the workshop, a regional public event was hosted on October 16, 2025, which served to showcase findings from a multi-country media monitoring study on DPI coverage conducted by CIPESA.

The report presents the findings of a baseline study on media coverage of Digital Public Infrastructure (DPI) and Digital Public Goods (DPGs) across seven Eastern African countries in 2024: Democratic Republic of Congo (DRC), Ethiopia, Kenya, Rwanda, South Sudan, Tanzania, and Uganda. Using a mixed-methods approach that combined quantitative content analysis and key informant interviews, the study analysed 680 DPI- and DPG-related stories published in 28 major print and online outlets between January and December 2024. 

The study assessed the volume, prominence, themes, sourcing patterns, and framing of stories and complemented the findings with interviews and focus group discussions involving journalists, editors, and experts. The study reveals that while media in the region are actively reporting on digital transformation, the coverage is largely event-driven, government-centric, and male-dominated. It focuses primarily on the functional benefits of DPI—such as service delivery and innovation—while giving limited attention to critical issues of governance, data privacy, equity, and citizen inclusion.

Find the report summary here

The G20 Should Challenge the Power Dynamics in Digital Public Infrastructure

By Juliet Nanfuka |

Data plays a crucial role in T20 discussions at the G20, influencing online interaction and civic engagement. The G20 should use its influence to create a multi-stakeholder agenda for Digital Public Infrastructure design.

Data is at the heart of T20 discussions around the G20, as it informs the architecture of online interaction, civic participation (and exclusion) and the governance of digital society. As such, it is also central to digital public infrastructure (DPI), serving as a foundational requirement and an enabler of new data generation and data mobility. Data drives the three key pillars of DPI – digital identification, digital payments and data exchange – in addition to other emerging features such as geospatial data and data aggregation. However, the expanding role of DPI raises questions about its alignment with constitutional guarantees, data protection frameworks and the lived realities of end users across Africa.

In 2023, India’s G20 presidency laid the foundation for discourse on DPI with great precision. A year later, the 2024 G20 Rio de Janeiro Leaders’ Declaration acknowledged ‘the contribution of digital public infrastructure to an equitable digital transformation’. It went on to note ‘the transformative power of digital technologies to bridge existing divides and empower societies and individuals including all women and girls and people in vulnerable situations’.

Consequently, DPI has been positioned as a necessary tool for international trade facilitation and industrialisation in developing countries. In Africa, this momentum has been supported by strategies such as the AU’s Digital Transformation Strategy for Africa (2020–2030), the African Continental Free Trade Area (AfCFTA) and the 2024 adoption of the Continental AI Strategy. Various countries across the continent have integrated DPI into their national strategies.

The pace of DPI integration is mirrored by growing financial investment in DPI. Examples include the $200 million Ghana Digital Acceleration Project by the World Bank in 2022 to expand broadband access and strengthen digital innovation ecosystems. In June 2025, the AfCFTA Adjustment Fund Credit Facility provided $10 million to support private sector adaptation to AfCFTA frameworks, with initial commitments to Telecel Global Services to enhance connectivity and regional integration. The company provides wholesale voice and SMS services and enterprise connectivity solutions to more than 250 telecom operators across Africa and globally.

While the expansion of DPI is often framed as a progressive step, it also carries significant governance trade-offs. The expansion of DPI in countries with weak democratic safeguards heightens the risk of state overreach, mass surveillance and reduced civic freedoms, making it essential to set clear limits on state access to citizens’ data to safeguard participation and accountability. Concerns over data sovereignty also loom.

Other T20 commentaries have stressed the urgent need for multi-stakeholder engagement to align DPI with the realities of developing countries. Without this alignment, DPI could widen existing regulatory gaps that compromise civic rights, consumer protection, fraud prevention and privacy. Meanwhile, the current wave of DPI design could exclude smaller economies that lack the capacity to engage in complex cross-border arrangements, such as those established between India’s Unified Payments Interface and Singapore’s PayNow. However, efforts such as the East African Community’s Cross-Border Payment System Masterplan, aimed at inclusive, secure, efficient and interoperable cross-border payments in the region, are underway.

If DPI is deployed without further interrogation, especially within the contexts of lower-income and developing countries that are often still navigating authoritarian systems, there is a risk of introducing yet another form or layer of digital exclusion from the global ecosystem. This could amplify existing national exclusions emerging from lack of access to the basics promised by DPI, such as national identity documents as keys to financial inclusion or access to basic services and civic rights.

When governments replace human interaction with automated systems, they risk ignoring the real-life experiences and needs of people who use – or could use – DPI. Thus, while DPI is being positioned as a solution to the challenges many developing countries are facing, it is important to keep in mind that infrastructure is not neutral. Its built-in biases, risks and design choices will ultimately impact citizens. For the real impact of DPI to be realised, the G20 needs to address concerns on:

  • Address the power affordances embedded in DPI design. The architecture of DPI prioritises the interests of those who design and fund it. The G20 should require that DPI initiatives undergo power mapping to identify who holds decision-making authority, how data flows are controlled and which actors stand to benefit or be marginalised by the design and deployment of DPI.
  • Institutionalise regulatory sandboxing. Regulatory sandboxes offer a controlled, transparent environment where DPI tools and policies can be tested for fairness, legality, inclusivity and public interest alignment before full-scale implementation. The G20 should promote the use of regulatory sandboxes as a mechanism to scrutinise DPI systems and their governance frameworks.
  • Strengthen multi-stakeholder inclusion. DPI needs to be built with the participation of more stakeholders – including civil society, private sector actors, academia and marginalised communities – in decision-making. The G20 should use its convening power to set the multi-stakeholder agenda in the design of DPI interventions. 
  • Safeguard data sovereignty. African countries developing data governance frameworks need to balance sovereignty with interoperability, and prevent a dependency on foreign-controlled systems.
  • Enhance public awareness interventions. Despite significant DPI developments, many citizens remain unaware of their implications. The media plays a critical role in bridging this gap. There should be more integration with media partners in furthering public awareness of DPI, its functions and consequences. The G20 should not negate the role of the media in driving public awareness on DPI interventions.

This commentary was first published on the T20 website on October 06, 2025.
