Now More Than Ever, Africa Needs Participatory AI Regulatory Sandboxes 

By Brian Byaruhanga and Morine Amutorine |

As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.

Multi-stakeholder and participatory regulatory sandboxes for testing innovative technology and data practices are among the mechanisms for ensuring ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes, arguing that harmonised approaches which embed multi-stakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.

As of October 2024, 25 national regulatory sandboxes had been established across 15 African countries, signalling growing interest in this governance mechanism. However, concerns remain about the extent to which African civil society is involved in shaping responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.

At DataFest25, a data rights event hosted annually by the Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, hosted a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.

Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are deployed without inclusive testing environments, the consequences can be severe.

CIPESA’s 2025 State of Internet Freedom in Africa report reveals that AI policy processes across the continent are “often opaque and dominated by state actors, with limited multistakeholder participation.” This pattern of exclusion contradicts the continent’s vibrant civil society landscape, where various organisations in 29 African countries are actively working on responsible AI issues and frequently outpacing government efforts to protect human rights.

The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences of that sidelining include bias and exclusion, erosion of public trust, surveillance overreach, and a lack of recourse mechanisms.

However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda offers a key example, having developed its National AI Policy framework through participatory regulatory processes.

Case Study: Rwanda’s Participatory AI Policy Development

The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and the Rwanda Utilities Regulatory Authority (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.

By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable. 

Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes. 

Consequences of Excluding CSOs from AI Regulatory Sandbox Development

The CIPESA-Datasphere session explored the obstacles that civil society faces in AI regulatory sandbox processes in Africa, and sought to establish ways to advance meaningful participation.

The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation rather than passively waiting for an invitation.

In taking such measures, CSOs must move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.

Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?

Perhaps most critically, there is no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.

Case Study: Uganda’s Failures Without Sandbox Testing

Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. Uganda’s digital ID rollout was never trialled in a sandbox, an omission that, according to the Datasphere Initiative’s analysis, could have made a difference given sandboxes’ potential as trust-building mechanisms for digital public infrastructure (DPI) systems. The rollout has been marred by controversy, including concerns over the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022, and a 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case on the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.

Similarly, Uganda’s rushed deployment of its automated Express Penalty System (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.

Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and the absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public and erode trust, and costly reversals that delay progress far more than inclusive design processes would have.

Models of Participatory Sandboxes

Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”

However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.

Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential. 

Pathways Forward: Building Truly Inclusive Sandboxes

Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.

Creating the legal foundation

Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.

Building capacity and awareness

Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.

Institutionalising cross-sector learning

Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.

Redesigning governance structures

True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.

The core issue is not whether civil society should engage with regulatory sandboxes, but rather the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.

Why Civil Society Participation is Practical

Research on regulatory sandboxes demonstrates that participatory design delivers concrete benefits beyond legitimacy. CIPESA’s analysis of digital public infrastructure governance shows that sandboxes incorporating civil society input “make data governance and accountability more clear” through inclusive policy reviews, pre-implementation audits, and transparent examination of financial terms and vendor contracts. 

Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.

Civil society engagement:

  • Surfaces lived experiences regulators often miss;
  • Strengthens the legitimacy of governance frameworks;
  • Pushes for transparency in AI design and data use;
  • Ensures frameworks reflect African values and protect vulnerable communities; and
  • Enables oversight that prevents exploitative arrangements.

While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.

The cost of exclusion can be seen in Uganda’s EPS rollout: public backlash, eroded trust, and potential system failure, which ultimately delay progress far more than inclusive design processes would. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.

The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.

Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhamad |

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. That integration would encourage companies to uphold their responsibility to respect human rights by ensuring transparency and internal accountability mechanisms.

According to Nadhifah Muhamad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as the Kenyan workers who trained OpenAI’s ChatGPT, are characterised by low pay and the absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited the frameworks of Chile, Costa Rica, and South Korea as examples of striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency, and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of perspectives for minority and marginalised groups, including women, youth, children, indigenous communities, and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses map their AI use, harness opportunities, and mitigate operational risk. These efforts would complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite this progress, Onyango added that the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.

To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI; collaborating and sharing resources to support local digital startups for sustainable solutions; investing in digital infrastructure; and undertaking robust literacy and capacity-building campaigns for both duty bearers and rights holders. Other recommendations were the development of evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights, and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF), and the UN Global Compact. It brought together more than 500 individuals from over 75 countries, 32 of them African. The event built on the achievements of the previous ABHR Forums in Ghana (2022), Ethiopia (2023), and Kenya (2024).

Advancing African-Centred AI is a Priority for Development in Africa

By Patricia Ainembabazi |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in the annual DataFest Africa event held on 30-31 October 2025. Hosted by Pollicy, the event celebrates data use in Africa by bringing together stakeholders from diverse backgrounds, such as government, civil society, donors, academics, students, and private industry experts, under one roof and theme. The event provided a timely platform to advance discussions on how Africa can harness AI and data-driven systems in ways that centre human rights, accountability, and social impact.

CIPESA featured in various sessions at the event, one of which was the launch of the ‘Made in Africa AI for Monitoring, Evaluation, Research and Learning (MERL)’ Landscape Study by the MERL Tech Initiative. At the session, CIPESA provided reflections on the role of AI in development across several humanitarian sectors in Africa.

CIPESA’s contributions complemented insights from the study, which explored African approaches to AI in data-driven evidence systems and emphasised responsive and inclusive design, contextual relevance, and ethical deployment. The study resonated with insights from the CIPESA 2025 State of Internet Freedom in Africa report, which highlights the role of AI as Africa navigates digital democracy.

According to the CIPESA report, AI technologies hold significant potential to improve civic engagement, extend access to public services, scale multilingual communication tools, and support fact-checking and content moderation. On the flip side, the MERL study underscores the risks posed by AI systems that lack robust governance frameworks, including increased surveillance capacity, algorithmic bias, the spread of misinformation, and deepening digital exclusion. These risks raise major concerns regarding readiness, accountability, and institutional capacity, given the nascent and fragmented legal and regulatory landscape for AI in the majority of African countries.

Sam Kuuku, Head of the GIZ-African Union AI Made in Africa Project, noted that it is important for countries and stakeholders to reflect on how well Africa can measure the impact of AI and evaluate the role and potential of AI use in improving livelihoods across the continent. He further reiterated the value of various European Union (EU) frameworks in providing useful guidance for African countries seeking to develop AI policies that promote both innovation and safety, to ensure that technological developments align with public interest, legal safeguards, and global standards.

The session underscored the need for African governments and stakeholders to benchmark global regulatory practices that are grounded in human rights principles for the progressive adoption and deployment of AI. CIPESA pointed to the EU AI Act of 2024, which offers a structured, risk-based model that categorises AI systems according to the level of potential harm and establishes controls for transparency, safety, and non-discrimination.
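To make the risk-based logic concrete, below is a minimal sketch, in Python, of the Act’s four tiers expressed as a simple classification structure. The tier names and their broad obligations reflect the Act; the example systems and the mapping are illustrative assumptions, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four-tier, risk-based classification (simplified)."""
    UNACCEPTABLE = "prohibited outright (e.g. social scoring by public authorities)"
    HIGH = "strict controls: risk management, data governance, human oversight"
    LIMITED = "transparency duties (e.g. disclosing that users face a chatbot)"
    MINIMAL = "no new obligations (e.g. spam filters)"

# Illustrative mapping of hypothetical use cases to tiers; real-world
# classification depends on the Act's annexes and legal analysis.
EXAMPLE_SYSTEMS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "credit_scoring_for_loans": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def required_controls(system: str) -> str:
    """Look up the (simplified) obligations attached to a system's tier."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(required_controls(name))
```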

Key considerations for labour rights, economic justice, and the future of work were highlighted, particularly in relation to the growing role of African data annotators and platform workers within global AI supply chains. Investigations into outsourced data labelling, such as the case of Kenyan workers contracted by tech platforms to train AI models under precarious economic conditions, underline the need for stronger labour protections and ethical AI sourcing practices.

Through platforms such as DataFest Africa, there is a growing community dedicated to shaping a forward-looking narrative in which AI is not only applied to solve African problems but is also developed, regulated, and critiqued by African actors. The pathway to an inclusive and rights-respecting digital future will rely on working collectively to embed accountability, transparency, and local expertise within emerging AI and data governance frameworks.

Safeguarding African Democracies Against AI-Driven Disinformation

ADRF Impact Series |

As Africa’s digital ecosystems expand, so too do the threats to its democratic spaces. From deepfakes to synthetic media and AI-generated misinformation, electoral processes are increasingly vulnerable to technologically sophisticated manipulation. Against this backdrop, THRAETS, a civic-tech pro-democracy organisation, implemented the Africa Digital Rights Fund (ADRF)-supported project, “Safeguarding African Elections – Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy.”

The initiative aimed to build digital resilience by equipping citizens, media practitioners, and civic actors with the knowledge and tools to detect and counter disinformation during elections across Africa, with a focus on disinformation driven by artificial intelligence (AI).

At the heart of the project was a multi-pronged strategy to create sustainable solutions, built around three core pillars: public awareness, civic-tech innovation, and community engagement.

The project resulted in innovative civic-tech tools, each addressing a unique facet of AI misinformation. These include Spot the Fakes, a gamified, interactive quiz that trains users to differentiate between authentic and manipulated content. Designed for accessibility, it became a key entry point for public digital literacy, particularly among youth. The foundation for an open-source AI tracking hub was also developed: the “Expose the AI” portal will offer free educational resources to help citizens evaluate digital content and understand the mechanics of generative AI.

A third tool, “Community Fakes”, is a dynamic crowdsourcing platform for cataloguing and analysing AI-altered media that combines human intelligence and machine learning. Its goal is to support journalists, researchers, and fact-checkers in documenting regional AI disinformation. The inclusion of an API enables external organisations to access verified datasets, a unique contribution to the study of AI and misinformation in the Global South. However, THRAETS notes that the effectiveness of public-facing tools such as Spot the Fakes and Community Fakes is limited by wider digital literacy gaps in Africa.
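As an illustration of how an external organisation might consume such an API, the sketch below queries a hypothetical REST endpoint for verified records. The URL, parameters, and response fields are assumptions made for illustration; this report does not document Community Fakes’ actual API specification.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical base URL and schema; Community Fakes' real API may differ.
BASE_URL = "https://api.example.org/community-fakes/v1"

def fetch_verified_media(country: str, limit: int = 20) -> list:
    """Fetch verified AI-altered media records for one country (illustrative)."""
    response = requests.get(
        f"{BASE_URL}/records",
        params={"country": country, "status": "verified", "limit": limit},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors early
    return response.json()["records"]

if __name__ == "__main__":
    for record in fetch_verified_media("KE"):
        print(record["media_url"], record["verdict"])
```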

Meanwhile, to demonstrate how disinformation intersects with politics and public discourse, THRAETS documented case studies that contextualised digital manipulation in real time. A standout example is “Ruto Lies: A Digital Chronicle of Public Discontent”, which analysed over 5,000 tweets related to Kenya’s #RejectTheFinanceBill protests of 2024. The analysis revealed patterns in coordinated online narratives and disinformation tactics, achieving more than 100,000 impressions. This initiative provided a data-driven foundation for understanding digital mobilisation, narrative distortion, and civic resistance in the age of algorithmic influence.
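As a flavour of what such pattern analysis can involve, here is a minimal sketch of one common coordination signal: near-identical text posted by multiple distinct accounts. The sample tweets, field names, and normalisation rule are illustrative assumptions, not THRAETS’ actual methodology.

```python
from collections import defaultdict

# Toy sample; a real analysis would load thousands of collected posts.
tweets = [
    {"user": "a1", "text": "Reject the Finance Bill now!"},
    {"user": "b2", "text": "reject the finance bill NOW"},
    {"user": "c3", "text": "Fuel prices are up again"},
]

def normalise(text: str) -> str:
    """Collapse case and punctuation so trivially edited copies match."""
    return "".join(c.lower() for c in text if c.isalnum() or c.isspace()).strip()

# Group the accounts by the normalised message they posted.
clusters = defaultdict(list)
for tweet in tweets:
    clusters[normalise(tweet["text"])].append(tweet["user"])

# Flag messages pushed verbatim (after normalisation) by several accounts.
for text, users in clusters.items():
    if len(set(users)) > 1:
        print(f"possible coordination ({len(set(users))} accounts): {text}")
```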

THRAETS went beyond these tools and embarked on a capacity-building drive through which journalists, technologists, and civic leaders were trained in open-source intelligence (OSINT), fact-checking, and digital security.

In October 2024, THRAETS partnered with eLab Research to conduct an intensive online training programme for 10 Tunisian journalists ahead of their national elections. The sessions focused on equipping participants with tools to identify and counter tactics used to sway public opinion, such as detecting cheap fakes and deepfakes, with hands-on experience provided through an engaging fake-content identification quiz. The training not only helped the journalists prepare for election coverage but also equipped them to protect democratic processes and maintain public trust in the long run.

This training served as a template for a subsequent training held in August 2025 as part of the Democracy Fellowship, a programme funded by USAID and implemented by the African Institute for Investigative Journalism (AIIJ), which aimed to enhance media capacity to leverage OSINT tools in reporting.

The THRAETS project enhanced regional collaboration and strengthened local investigative capacity to expose and counter AI-driven manipulation. It demonstrates the vital role of civic-tech innovation that integrates participation and informed design. As numerous African countries navigate elections, initiatives like this provide a roadmap for how digital tools can safeguard truth, participation, and democracy.

Find the full project insights report here.

Commentary: Africa’s Endless Struggle for Internet Freedom Is Always in Motion, But Rarely Forward

By Jimmy Kainja |

In September 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) hosted the 12th edition of the Forum on Internet Freedom in Africa (FIFAfrica) in Windhoek, Namibia. I have attended six of these Forums over the years, my first being in 2017, when the event was held in Johannesburg, South Africa. I have also contributed to several editions of FIFAfrica’s flagship report, the State of Internet Freedom in Africa, and through these activities have witnessed CIPESA’s role in contributing to and shaping the continent’s digital policy conversations.

Each year, FIFAfrica provides a platform for governments, civil society, private sector actors, and researchers to reflect on emerging challenges and opportunities around digital rights and internet governance in Africa. Over time, the Forum has engaged with themes that mirror global technological and policy shifts, including internet shutdowns, data privacy and surveillance, digital inclusion, disinformation and, more recently, Artificial Intelligence (AI) and Digital Public Infrastructure (DPI). This adaptability demonstrates how FIFAfrica continues to engage with the evolving digital ecosystem and the continent’s responses to emerging digital and internet governance shifts. Yet beneath this progress lies a paradox: Africa keeps moving on to the latest trends in internet freedom and internet governance, but the foundational problems remain unresolved.

When FIFAfrica began over a decade ago, Africa’s internet freedom challenges were clear and urgent: limited access, prohibitive data costs, state surveillance, weak legal protections, and rampant censorship. Governments often justified internet restrictions in the name of “national security” or “public order”. The term “fake news” soon emerged as another pretext for silencing critics and regulating online speech. Fast forward to 2025, and while the vocabulary of digital repression has evolved, the logic remains the same. Several African states continue to shut down internet access, particularly during times of public protest and elections, with Ethiopia, Sudan, Senegal, Uganda, and most recently Tanzania being prominent examples. Across the continent, privacy and data protection laws exist on paper but are inconsistently enforced or manipulated to align with political interests.

In essence, Africa has not yet achieved the baseline of internet freedom that would allow citizens to safely express themselves, access information, and participate fully in digital spaces. Instead, the continent’s policy agenda has become increasingly aspirational, focused on AI ethics, big data, and digital transformation, while the fundamental guarantees of access, security, and expression remain precarious.

Moving on Without Fixing the Old

The evolution of FIFAfrica’s agenda, from internet shutdowns to AI governance and digital identity, is both natural and necessary and might signal thought leadership, but it can also obscure the persistence of unresolved injustices. Take, for example, personal data and identity systems, which have been popular topics of discussion at FIFAfrica. Across Africa, governments have introduced biometric ID programmes to modernise administration and improve service delivery. Yet these systems are deeply entangled with long-standing concerns: surveillance, exclusion, and control, issues that FIFAfrica has grappled with since its inception. The technology has changed, but the regulatory dynamics have remained the same.

Similarly, AI ethics and data governance frameworks are now fashionable discussion points. However, how meaningful are these debates in countries where citizens still lack affordable, reliable internet access, or where independent journalists risk arrest for their online commentary? Can we genuinely talk about algorithmic bias when freedom of expression itself is under threat? The danger, then, lies in what might be called “thematic displacement”: the tendency to move on to emerging global trends without consolidating progress on foundational freedoms. This displacement risks turning digital rights discourse into a treadmill: always in motion but not moving forward.

The persistence of old internet freedom problems is not accidental. It reflects deeper structural continuities in African digital governance and political economy. States continue to see the internet as both a tool of modernisation and a threat to political interests. Digital technologies are embraced for economic growth, service delivery, and image-building, but their democratic potential remains tightly controlled. This is especially true of authoritarian states. This duality produces a familiar pattern: governments invest in connectivity infrastructure while simultaneously tightening control over civic engagement and digital expression. Regulatory authorities are strengthened, but often in ways that expand state power rather than protect citizens’ rights. Surveillance capacities grow, but transparency and accountability shrink. The internet, once hailed as a space of liberation, increasingly mirrors the offline hierarchies of control, privilege, and exclusion.

In this sense, the continuity of control outweighs the rhetoric of freedom. The instruments may change, from content filtering to biometric registration and AI-enabled surveillance, but the underlying power relations remain largely intact.

Towards a More Grounded Internet Freedom Agenda

As FIFAfrica continues to play a role in convening a diverse spectrum of stakeholders with vested interests in a progressive internet freedom landscape in Africa, perhaps the most urgent task is to reconnect Africa’s digital policy discourse to its unresolved foundations. The continent does not need to reject new topics like AI or digital identity, but rather to approach them through the lens of continuity, recognising how they reproduce or intensify older struggles for rights, accountability, and inclusion. An agenda for the next decade of internet freedom in Africa must therefore balance innovation with introspection. It must ask: Who still lacks meaningful access to the internet, and why? How are digital laws being weaponised against journalists and citizens? Who benefits from datafication and AI, and who is being left out or surveilled? How can the African Union and sub-regional bodies ensure genuine enforcement of digital rights commitments?

Africa’s journey with internet freedom mirrors its broader democratic trajectory, marked by aspiration, innovation, and resilience, yet haunted by persistent constraints. The Forum has provided a vital mirror to this journey, reflecting both progress and contradiction. But as the themes evolve, one truth endures: Africa cannot truly move forward without resolving its unfinished struggles for internet freedom. Until access becomes equitable, laws become just, and expression becomes truly free, the continent’s digital future will remain suspended between promise and paradox.

About the author:

Jimmy Kainja is a Senior Lecturer at the University of Malawi and a PhD candidate at the Wits Centre for Journalism, University of the Witwatersrand. He researches media and communications policy, journalism, digital rights, freedom of expression, and the intersection of telecommunications, democracy, and development.