Digital Public Infrastructure in Africa: A Looming Crisis of Equitable Access, Digital Rights, and Sovereign Control


By Brian Byaruhanga

In June 2025, Uganda suspended its Express Penalty Scheme (EPS) for traffic offences less than a week after its launch, citing a “lack of clarity” among government agencies. While this seemed like a routine administrative misstep, it exposed a more significant issue: the brittle foundation upon which much digital public infrastructure (DPI) in Africa is being built. DPI refers to the foundational digital systems and platforms, such as digital identity, payments, and data exchange frameworks, that form the backbone of digital societies, much as roads or electricity do in the physical world.

The EPS saga highlighted implementation gaps and illuminated a systemic failure to promote equitable access, ensure public accountability, and safeguard fundamental rights in the rollout of DPI.

When the State Forgets the People

The Uganda EPS, established under section 166 of the Traffic and Road Safety Act, Cap 347, serves as a tech-driven improvement to road safety. Its goal is to reduce road accidents and fatalities by encouraging better driver behaviour and compliance with traffic laws. By allowing offenders to pay fines directly without prosecution, the system aims to resolve minor offences quickly and to ease the burden on the judicial system. The automated system was also meant to eliminate the corruption that plagued the manual EPS, including reports of deleted fines, selective enforcement, and theft of collected penalties.

At the heart of the EPS was an automated surveillance and enforcement system, which used Closed Circuit Television (CCTV) cameras and licence plate recognition to issue real-time traffic fines. This system operated with almost complete opacity. A Russian company, Joint Stock Company Global Security, was reportedly entitled to 80% of fine revenues despite making minimal investment, among other significant legal and procurement irregularities. There was a notable absence of clear contracts, publicly accessible oversight mechanisms, or effective avenues for appeal. Equally concerning, extensive amounts of sensitive data were collected and stored with no transparency about who had access to them.

Such an arrangement represented a profound breach of public trust and an infringement upon digital rights, including data privacy and access to information. It illustrated the minimal accountability under which foreign-controlled infrastructure can operate within a nation. This was a data-driven governance mechanism that lacked the corresponding data rights safeguards, subjecting Ugandans to a system they could neither comprehend nor contest.
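To make concrete what such missing safeguards could look like, consider a minimal, hypothetical sketch of an automated plate-recognition fine pipeline. Every name, amount, and rule below is an illustrative assumption, not a detail of the actual EPS; the point is that an audit trail and an appeal reference can be built into such a system from the start.

```python
# Hypothetical sketch of an automated traffic-fine pipeline; all names,
# amounts, and rules are illustrative assumptions, not the actual EPS.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateReading:
    plate: str          # output of licence plate recognition on a CCTV frame
    camera_id: str
    offence_code: str   # e.g. "SPEEDING"
    captured_at: datetime

@dataclass
class Fine:
    plate: str
    offence_code: str
    amount_ugx: int
    issued_at: datetime
    appeal_ref: str     # safeguard: every fine carries a contestable reference

# Assumed fine schedule, for illustration only.
FINE_SCHEDULE_UGX = {"SPEEDING": 200_000, "RED_LIGHT": 100_000}

def issue_fine(reading: PlateReading, audit_log: list) -> Fine:
    """Issue a fine from an automated reading while recording an audit trail,
    modelling the transparency and redress safeguards the article says were
    absent from the EPS."""
    fine = Fine(
        plate=reading.plate,
        offence_code=reading.offence_code,
        amount_ugx=FINE_SCHEDULE_UGX[reading.offence_code],
        issued_at=datetime.now(timezone.utc),
        appeal_ref=f"APPEAL-{reading.camera_id}-{reading.captured_at:%Y%m%d%H%M%S}",
    )
    # Safeguard: record which camera and evidence produced the fine, so the
    # decision (and any access to the underlying data) can later be contested.
    audit_log.append({
        "event": "fine_issued",
        "camera_id": reading.camera_id,
        "plate": reading.plate,
        "offence": reading.offence_code,
        "appeal_ref": fine.appeal_ref,
    })
    return fine
```

As described above, the EPS offered neither the audit trail nor the appeal channel; that absence is precisely what made the system impossible to comprehend or contest.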

This is Not an Isolated Incident

The situation in Uganda reflects a widespread trend across the continent. In Kenya, the 2024 Microsoft–G42 data centre agreement – announced as a partnership with the government to build a state-of-the-art green facility aimed at advancing infrastructure, research and development, innovation, and skilling in Artificial Intelligence (AI) – has raised serious concerns about data sovereignty and long-term control over critical digital infrastructure.

In Uganda, the National Digital ID system (Ndaga Muntu) became a case study in how poorly governed DPI deepens structural exclusion and undermines equitable access to public services. A 2021 report by the Centre for Human Rights and Global Justice found that rigid registration requirements, technical failures, and a lack of recourse mechanisms denied millions of citizens access to healthcare, education, and social protection. Those most affected were the elderly, women, and rural communities. However, a 2025 High Court ruling disregarded evidence and expert opinions about the ID system’s exclusionary effects and its implications for human rights.

Studies estimate that most e-government projects in Africa end in partial or total failure, often due to poor project design, lack of infrastructure, weak accountability frameworks, and insufficient citizen engagement. Many of these projects are built on imported technologies and imposed models that do not reflect the realities or governance contexts of African societies.

A clear pattern is emerging across the continent: countries are integrating complex, often foreign-managed or poorly localised digital systems into public governance without establishing strong, rights-respecting frameworks for transparency, accountability, and oversight. Instead of empowering citizens, this version of digital transformation risks deepening inequality, centralising control, and undermining public trust in government digital systems.

The State is Struggling to Keep Up

National Action Plans (NAPs) on Business and Human Rights, intended to guide ethical public–private collaboration, have failed to address the unique challenges posed by DPI. Uganda’s NAP barely touches on data governance, algorithmic harms, or surveillance technologies. While Kenya’s NAP mentions the digital economy, it lacks enforceable guardrails for foreign firms managing critical infrastructure. In their current form, these frameworks are insufficiently equipped to respond to the complexity and ethical risks embedded in modern DPI deployments.

Had the Ugandan EPS system been subject to stronger scrutiny under a digitally upgraded NAP, key questions would likely have been raised before implementation:

  • What redress exists for erroneous or abusive fines?
  • Who owns the data and where is it stored?
  • Are the financial terms fair, equitable, and sovereign?

But these questions came too late.

What these failures point to is not just a lack of policy, but a lack of operational mechanisms to design, test, and interrogate DPI before rollout. What is needed is a practical bridge that responds to public needs and enforces human rights standards.

Regulatory Sandboxes: A Proactive Approach to DPI

DPI systems such as Uganda’s EPS should undergo rigorous testing before full-scale deployment. This is the purpose of regulatory sandboxes: platforms that offer a structured, participatory, and transparent testbed for innovations, in which a system’s logic, data flows, human rights implications, and resilience under stress can be collectively scrutinised before any harm occurs.

Thus, a regulatory sandbox could have revealed and resolved core failures of Uganda’s EPS before rollout, including the controversial revenue-sharing arrangement with a foreign contractor.

How Regulatory Sandboxes Work

Regulatory sandboxes allow DPI systems and their governance frameworks, such as revenue models, to be tested transparently, enabling stakeholders to examine a model’s fairness and legality before it goes live. In practice, this means:

  • Publicly disclosing financial terms to regulators, civil society, and the general public.
  • Running simulated impact analyses before implementation to surface possible public backlash or a decline in trust.
  • Facilitating pre-implementation audits, making vendor selection and contract terms publicly available, and conducting mock procurements to detect errors.
  • Defining data ownership and access guidelines, creating redress channels for data abuse, and supporting inclusive policy reviews with civil society.

Together, these steps make data governance and accountability far clearer, as the sketch below illustrates.
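As a thought experiment, the checks a sandbox review might run against a proposed DPI deployment can be written down directly. The sketch below is a minimal illustration under assumed field names and an assumed 50% revenue-share threshold, not any regulator’s actual tooling.

```python
# Hypothetical pre-deployment sandbox checklist for a DPI system.
# Field names and the 50% revenue-share threshold are assumptions.

def sandbox_review(proposal: dict) -> list[str]:
    """Return the findings a sandbox review would surface before rollout."""
    findings = []
    # Fairness and legality of the revenue model: flag lopsided vendor shares.
    if proposal.get("vendor_revenue_share", 0) > 0.5:
        findings.append("Vendor revenue share exceeds 50%: justify or renegotiate.")
    # Transparency: financial and contract terms must be publicly disclosed.
    if not proposal.get("contract_public", False):
        findings.append("Contract and financial terms are not public.")
    # Data governance: ownership and access rules must be defined up front.
    if not proposal.get("data_ownership_defined", False):
        findings.append("Data ownership and access rules are undefined.")
    # Redress: citizens need a channel to contest automated decisions.
    if not proposal.get("appeal_mechanism", False):
        findings.append("No redress channel for erroneous or abusive outcomes.")
    return findings

# The EPS, as described in this article, would have failed every check:
eps_proposal = {"vendor_revenue_share": 0.8, "contract_public": False}
for finding in sandbox_review(eps_proposal):
    print("-", finding)
```

Run against the EPS as reported, such a review would have flagged the 80% revenue share, the opaque contract, the undefined data governance, and the missing appeal channel before a single fine was issued.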

This shift from reactive damage control to proactive governance is what regulatory sandboxes offer. If Uganda had employed a sandbox approach, the EPS system might have served as a model for ethical innovation rather than a cautionary tale of rushed deployment, weak oversight, and lost public trust.

Beyond specific systems like EPS or digital ID, the future of Africa’s digital transformation hinges on how digital public infrastructure is conceived, implemented, and governed. Foundational services, such as digital identity, health information platforms, financial services, surveillance mechanisms, and mobility solutions, are increasingly reliant on data and algorithmic decision-making. However, if these systems are designed and deployed without sufficient citizen participation, independent oversight, legal safeguards, and alignment with the public interest, they risk becoming tools of exclusion, exploitation, and foreign dependency. 

Realising the full potential of DPIs as a tool for inclusion, digital sovereignty, and rights-based development demands urgent and deliberate efforts to embed accountability, transparency, and digital rights at every stage of their lifecycle.

Photo Credit – CCTV system in Kampala, Uganda. REUTERS/James Akena (2019)

CIPESA Welcomes Namibia Ministry of ICT and the Namibia IGF as Co-Hosts of FIFAFrica25

By FIFAfrica

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is pleased to announce that the 2025 edition of the Forum on Internet Freedom in Africa (FIFAfrica25) will be co-hosted in partnership with the Namibian Ministry of Information and Communication Technology (MICT) and the Namibia Internet Governance Forum (NamIGF).

Set to take place in Windhoek, Namibia, from September 25–27, 2025, this year’s Forum will build on FIFAfrica’s 12-year history of assembling digital rights defenders, policymakers, technologists, academics, regulators, journalists, and the donor community, all of whom share the vision of advancing internet freedom in Africa.

With its strong commitments to democratic governance, press freedom, and inclusive digital development, Namibia offers fertile ground for rich dialogues on the future of internet freedom in Africa. The country holds a powerful legacy in the global media and information landscape, being the birthplace of the 1991 Windhoek Declaration on promoting independent and pluralistic media. In a digital age where new challenges are emerging – from information integrity and Artificial Intelligence (AI) governance to connectivity gaps and platform accountability – hosting FIFAfrica in Namibia marks a key moment for the movement toward trusted information as a public good.

“Through the Ministry of Information and Communication Technology, Namibia is proud to co-host FIFAfrica25 as a demonstration of our commitment to advancing technology for inclusive social and economic development. This Forum comes at a critical moment for Africa’s digital future, and we welcome the opportunity to engage with diverse voices from across the continent and beyond in shaping a rights-respecting, secure, and innovative digital landscape,” said the Minister of Information and Communication Technology (ICT), Emma Inamutila Theofelus.

This sentiment is shared by the NamIGF Chairperson, Albertine Shipena. “We are honoured to co-host the FIFAfrica25 here in Namibia. This partnership with MICT and CIPESA marks a significant step in advancing digital rights, open governance, and meaningful multistakeholder engagement across the continent. As the NamIGF, we are proud to contribute to shaping a more inclusive and secure internet ecosystem, while spotlighting Namibia’s growing role in regional and global digital conversations.”

The NamIGF was established in September 2017, through a Cabinet decision, as a multistakeholder platform that facilitates public policy discussion on issues pertaining to the internet in Namibia.

Dr. Wairagala Wakabi, the CIPESA Executive Director, noted that FIFAfrica25 presents a timely opportunity to advance progressive digital policy agendas that uphold fundamental rights and promote digital democracy in Africa. “As global debates on internet governance, data sovereignty, and platform accountability intensify, it is essential that Africans inform and shape the frameworks that govern our digital spaces. We are honoured to partner with the Namibian government and NamIGF to convene this critical conversation on the continent,” he said.

Since its inception in 2014, FIFAfrica has grown to become the continent’s leading assembly of actors instrumental in shaping conversations and actions at the intersection of technology with democracy, society and the economy. It has become the stage for concerted efforts to advance digital rights and digital inclusion. These issues, and new emerging themes such as mental health, climate and the environment, and the content economy, will take centre stage at FIFAfrica25, which will feature a mix of plenaries, workshops, exhibitions, and a series of pre-events.

Meanwhile, FIFAfrica will also recognise the International Day for Universal Access to Information (IDUAI), celebrated annually on September 28. The commemoration serves to underscore the fundamental role of access to information in empowering individuals, supporting informed decision-making, fostering innovation, and advancing inclusive and sustainable development – tenets which resonate with the Forum. This year’s celebration is themed, “Ensuring Access to Environmental Information in the Digital Age”.

At the heart of the Forum is a Community of Allies that have, over the years, stood alongside CIPESA in its pursuit of effective and inclusive digital governance in Africa.

Feedback on Session Proposals and Travel Support Applications

All successful session proposals and travel support applicants have been contacted directly. See the list of successful sessions here. Thank you for your patience and for contributing to what promises to be an exciting FIFAfrica25.  

Prepare for FIFAfrica25: Travel and Logistics

Everything you need to plan your attendance at the Forum can be found here – visit this page for key logistical details and tips to help you make the most of your experience!

Ugandan Regulator Finds Google in Breach of Country’s Data Protection Law, Orders Local Registration

By Edrine Wanyama

In a July 18, 2025 decision, Uganda’s Personal Data Protection Office (PDPO) found Google LLC in breach of the country’s data protection law and ordered the global tech giant to register with the local data protection office within 30 days.

The decision would place the popular search engine under the ambit of Uganda’s Data Protection and Privacy Act, whose provisions it would have to comply with. In particular, the PDPO has ordered Google to provide – within 30 days – documentary evidence of how it is complying with requirements for transferring the personal data of Ugandan citizens outside of the country’s borders. Google also has to explain the legal basis for making those cross-border data transfers and the accountability measures in place to ensure that such transfers respect Uganda’s laws.

The orders followed a November 2024 complaint by four Ugandans, who argued that as a data collector, controller, and processor, Google had failed to register with the PDPO as required by local laws. They also contended that Google unlawfully transferred their personal data outside Uganda without meeting the legal conditions enshrined in the law, and claimed these actions infringed their data protection and privacy rights and caused them distress.

The PDPO ruled that Google was indeed collecting and processing personal data of the complainants without being registered with the local data regulator, which contravened section 29 of the Data Protection and Privacy Act. Google was also found liable for transferring the complainants’ data across Uganda’s borders without taking the necessary safeguards, in breach of section 19 of the Act.

This section provides that, where a data processor or data controller based in Uganda processes or stores personal data outside Uganda, they must ensure that the country in which the data is processed or stored has adequate measures for protecting the data. Those measures should at least be equivalent to the protection provided for under the Ugandan law. The consent of the data subject should also be obtained for their data to be stored outside Uganda.
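In code terms, and following this article’s reading of section 19, the provision amounts to a two-condition gate on any outbound transfer. The sketch below illustrates that logic only; the adequacy list is a hypothetical placeholder, not an official list published under the Act.

```python
# Minimal sketch of the section 19 logic as described above: a transfer
# requires both adequate protection abroad and the data subject's consent.
# The adequacy list is a hypothetical placeholder for illustration.

ADEQUATE_JURISDICTIONS = {"KE", "TZ"}  # assumed, not an official list

def may_transfer_abroad(destination_country: str, subject_consented: bool) -> bool:
    """Permit a transfer only if the destination offers protection at least
    equivalent to Ugandan law AND the data subject has consented."""
    adequate = destination_country in ADEQUATE_JURISDICTIONS
    return adequate and subject_consented

# Neither adequacy nor consent alone suffices under this reading:
assert may_transfer_abroad("KE", subject_consented=True) is True
assert may_transfer_abroad("KE", subject_consented=False) is False
assert may_transfer_abroad("US", subject_consented=True) is False
```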

In its defence, Google argued that since it was not based in Uganda and had no physical presence in the country, it was not obliged to register with the PDPO, and the rules on cross-border transfers of personal data did not apply to it. However, the regulator rejected this argument, determining that Google is a local data controller since it collects data from users in Uganda and decides how that data is processed.

The regulator further determined that the local data protection law has extra-territorial application, as it states in section 1 that it applies to a person, institution or public body outside Uganda who collects, processes, holds or uses personal data relating to Ugandan citizens. Accordingly, the regulator stated, the law places obligations “not only to entities physically present in Uganda but to any entity handling personal data of Ugandan citizens, including those established abroad, provided they collect or process such data.”

The implication of this decision is that all entities that collect Ugandans’ data, including tech giants such as Meta, TikTok, and X, must register with the Ugandan data regulator. The decision echoes global calls to hold Big Tech more accountable, and for African countries to have strong laws in line with the African Union (AU) Convention on Cyber Security and Personal Data Protection (Malabo Convention) and the AU Data Policy Framework.

However, enforcement of these orders remains a challenge. For instance, Uganda’s PDPO does not make binding decisions; it only issues declaratory orders. Additionally, the regulator has no power to order compensation for aggrieved parties, and indeed did not do so in the current decision. It can only recommend that the complainants engage a court of competent jurisdiction, in accordance with section 33(1) of the Act.

By contrast, the Office of the Data Protection Commissioner of Kenya, established by section 5 of the Data Protection Act, 2019, and the Personal Data Protection Commission of Tanzania, established by section 6 of the Protection of Personal Information Act, 2022, have the power to issue administrative fines under section 9(1)(f) and section 47 respectively.

The dilemma surrounding the Uganda PDPO raises major concerns about its capacity to remedy wrongs by global data collectors, controllers, and processors. Among its declarations in the July 2025 decision was that it would not issue an order for data localisation “at this stage”, but that “Google LLC is reminded that all cross-border transfers of personal data must comply fully with Ugandan law”. This leaves unanswered questions about data sovereignty and respect for individuals’ data rights, given the handicaps faced by data regulators in countries such as Uganda and the practical realities of the global digital economy.

In these circumstances, Uganda’s Data Protection and Privacy Act should be amended to expand the powers of PDPO to impose administrative fines so as to add weight and enforceability to its decisions.

Elevating Children’s Voices and Rights in AI Design and Online Spaces in Africa

By Patricia Ainembabazi

As Artificial Intelligence (AI) reshapes digital ecosystems across the globe, one group remains consistently overlooked in discussions around AI design and governance: children. This gap was keenly highlighted at the Internet Governance Forum (IGF) held in June 2025 in Oslo, Norway, where experts, policymakers, and child-focused organisations called for more inclusive AI systems that protect and empower young users.

Children today are not just passive users of digital technologies; they are among the most active and most vulnerable user groups. In Africa, internet use among youths aged 15 to 24 grew rapidly, fuelled in part by the Covid-19 pandemic, deepening their reliance on digital platforms for learning, play, and social interaction. New research by the Digital Rights Alliance Africa (DRAA), a consortium hosted by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), shows that this rapid connectivity has amplified exposure to risks, such as harmful content, data misuse, and algorithmic manipulation, that are especially pronounced for children.

The research notes that AI systems have become deeply embedded in the platforms that children engage with daily, including educational software, entertainment platforms, health tools, and social media. Nonetheless, Africa’s emerging AI strategies remain overwhelmingly adult-centric, often ignoring the distinct risks these technologies pose to minors. At the 2025 IGF, the urgency of integrating children’s voices into AI policy frameworks was made clear through a session supported by the LEGO Group, the Walt Disney Company, the Alan Turing Institute, and the Family Online Safety Institute. Their message was simple but powerful: “If AI is to support children’s creativity, learning, and safety, then children must be included in the conversation from the very beginning”.

The forum drew insights from recent global engagements such as the Children’s AI Summit of February 2025 held in the UK and the Paris AI Action Summit 2025. These events demonstrated that while children are excited about AI’s potential to enhance learning and play, they are equally concerned about losing creative autonomy, being manipulated online, and having their privacy compromised. A key outcome of these discussions was the need to develop AI systems that children can trust; systems that are safe by design, transparent, and governed with accountability.

This global momentum offers important lessons for Africa as countries across the continent begin to draft national AI strategies. While many such strategies aim to spur innovation and digital transformation, they often lack specific protections for children. According to DRAA’s 2025 study on child privacy in online spaces, only a handful of African countries have enacted child-specific privacy laws in the digital realm. Although instruments like the African Charter on the Rights and Welfare of the Child recognise the right to privacy, regional frameworks such as the Malabo Convention, and even national data protection laws, rarely offer enforceable safeguards against AI systems that profile or influence children.

Failure to address these gaps will leave African children vulnerable to a host of AI-driven harms ranging from exploitative data collection and algorithmic profiling to exposure to biased or inappropriate content. These harms can deprive children of autonomy and increase their risk of online abuse, particularly when AI-powered systems are deployed in schools, healthcare, or entertainment without adequate oversight.

To counter these risks and ensure AI becomes a tool of empowerment rather than exploitation, African governments, policymakers, and developers must adopt child-centric approaches to AI governance. This could start with mainstreaming children’s rights, such as privacy, protection, education, and participation, into AI policies. International instruments like the UN Convention on the Rights of the Child and General Comment No. 25 provide a solid foundation upon which African governments can build desirable policies.

Furthermore, African countries should draw inspiration from emerging practices such as the “Age-Appropriate AI” frameworks discussed at IGF 2025. These practices propose clear standards for limiting AI profiling, nudging, and data collection among minors. Given that only 36 out of 55 African countries currently have data protection laws, with few of them containing child-specific provisions, policymakers must work to strengthen these frameworks. Such reforms should require AI tools targeting children to adhere to strict data minimisation, transparency, and parental consent requirements, as the sketch below illustrates.
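To illustrate what such requirements could look like in practice, here is a minimal hypothetical sketch of a consent-and-minimisation gate for a child-facing AI tool. The allow-list and field names are assumptions for illustration, not any framework’s actual specification.

```python
# Hypothetical data-minimisation and parental-consent gate for a
# child-facing AI tool. The allow-list and field names are assumptions.

ALLOWED_FIELDS = {"age_band", "language", "lesson_progress"}  # assumed minimum

def prepare_child_record(raw: dict, parental_consent: bool) -> dict:
    """Refuse to process without verified parental consent, and keep only
    the fields strictly needed for the service (data minimisation)."""
    if not parental_consent:
        raise PermissionError("Parental consent is required before processing.")
    # Location, contacts, and behavioural profiles fall outside the
    # allow-list and are discarded rather than stored.
    return {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}

record = prepare_child_record(
    {"age_band": "9-12", "language": "sw", "location": "Kampala"},
    parental_consent=True,
)
print(record)  # {'age_band': '9-12', 'language': 'sw'}
```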

Importantly, digital literacy initiatives must evolve beyond basic internet safety to include AI awareness. Equipping children and caregivers with the knowledge to critically engage with AI systems will help them navigate and question the technology they encounter. At the same time, platforms similar to the Children’s AI Summit 2025 should be replicated at national and regional levels to ensure that African children’s lived experiences, hopes, and concerns shape the design and deployment of AI technologies.

Transparency and accountability must remain central to this vision. AI tools that affect children, whether through recommendation systems, automated decision-making, or learning algorithms, should be independently audited and publicly scrutinised. Upholding the values of openness, fairness, and inclusivity within AI systems is essential not only for protecting children’s rights but for cultivating a healthy, rights-respecting digital environment.

As the African continent’s digital infrastructure expands and AI becomes more pervasive, the choices made today will define the digital futures of generations to come. The IGF 2025 stressed that children must be central to these choices, not as an afterthought, but as active contributors to a safer and more equitable AI ecosystem. By elevating children’s voices in AI design and governance, African countries can lay the groundwork for an inclusive digital future that truly serves the best interests of all.

Human Rights in the Digital Context in Rwanda

Universal Periodic Review

The joint stakeholder report by the Association for Progressive Communications (APC) and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) focuses on key issues relating to human rights in the digital context in Rwanda, including digital connectivity and inclusion, freedom of expression online, surveillance and technology-facilitated gender-based violence, particularly its impact on human and women’s rights defenders.

Context of the human rights situation online in Rwanda

Since its Universal Periodic Review during the third cycle, Rwanda has made progress towards implementing some of the recommendations received. The digital infrastructure in Rwanda has expanded and access to information and communication technologies (ICTs) has improved significantly. However, authoritarianism and censorship of online criticism intensified in the lead-up to the 2024 general elections. Violations of user rights, strict censorship, increased surveillance, and infrastructure limitations led Freedom House to lower Rwanda’s Freedom on the Net score to 36/100 in 2024, a rating of “Not Free”.

Digital connectivity and inclusion

Rwanda’s internet penetration rate was estimated to be 34.2% at the start of 2025. There is gender imbalance in internet access and use, with about 38.2% of active social media user identities estimated to be female in January 2025. Price and lack of appropriate devices, the main reasons for not accessing ICTs, affect women more due to fundamental gender disparities, particularly in education and income. There is a need for an effective framework to regulate Rwanda’s national ID with a unique identifier number, which is becoming popular as a key to access services and effect transactions electronically.

Freedom of speech and expression online

Rwandan laws still contain provisions which unduly restrict free speech and expression online, including provisions of the 2018 Penal Code relating to dissemination of edited words or images, spreading false information, causing hostile international opinion, humiliating national authorities, and refusing to answer questions by intelligence officers. Meanwhile, troubling incidents of arrests, intimidation, abduction and killings of journalists and human rights defenders for exercising their rights to free speech continue to be reported. Self-censorship is widespread due to social pressure to support the government and fear of reprisals for criticising authorities. Restrictions on disinformation in the 2018 Cybercrime law are characterised by vague definitions and prescribe long periods of imprisonment. In June 2025, Rwanda’s Supreme Court rejected a challenge arguing that the 2018 Cybercrime law violates constitutional guarantees of free speech.

Online surveillance, transnational repression and right to privacy

The law on Protection of Personal Data enacted by the Rwandan Government in October 2021 lacks a public-interest exception for digital and traditional media outlets and imposes very strict data localisation requirements. Mass surveillance is institutionalised within Rwanda, with Law 60/2013 requiring service providers to ensure that their systems are technically capable of supporting interceptions at all times. Credible reports indicate the government has acquired and deployed Pegasus, a powerful spyware, against political opponents and human rights defenders, including members of the diaspora.

Technology-facilitated gender-based violence (TFGBV) against women human rights defenders

Rwanda’s 2018 Cybercrime Law aims to address some forms of TFGBV, but falls short in its implementation. There are concerns about the law being misused, for instance, to criminalise TFGBV survivors in cases where content is created and shared without consent. Gendered disinformation in Rwanda has been used to target women politicians and human rights defenders with image-based disinformation, sexualise them and create false narratives, shifting public focus from their main political discourse. Online and offline attacks based on gender identity and sexual orientation are prevalent in Rwanda, including online harassment, government surveillance and posting of images without consent.

Key recommendations to the government of Rwanda

  • Ensure that digital access is inclusive and equitable for all by removing access barriers for marginalised communities, including rural communities, women and persons with disabilities.
  • Repeal provisions which unduly criminalise free speech including articles 157, 164, 194, 233 and 253 of the 2018 Penal Code; amend the 2018 Cybercrime law to ensure that all provisions comply with international human rights standards relating to free speech and expression.
  • Withdraw all cases against individuals facing harassment, intimidation and prosecution from state authorities for legitimate expression of dissent against the government.
  • Refrain from or cease the use of artificial intelligence applications and spyware that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights, unless and until adequate safeguards to protect human rights and fundamental freedoms are in place.
  • Guarantee adequate independent oversight mechanisms to ensure state surveillance practices are limited and proportional in accordance with international human rights standards.
  • Enhance measures and policies to prohibit, investigate and prosecute TFGBV in line with international human rights standards.
  • Amend the 2018 Cybercrime law to ensure that restrictions to freedom of expression as a response to TFGBV are necessary and proportionate, not overly broad or vague in terms of what speech is restricted, and do not over-penalise.
  • Provide redress and reparation as an effective, efficient and meaningful way of aiding victims of TFGBV and ensuring that justice is achieved.
  • Develop appropriate and effective accountability mechanisms for social media platforms and other technology companies focused on ensuring company transparency and remediation to ensure that hate speech and TFGBV is appropriately addressed on their platforms.

Read full report here.