This year, the African Economic Research Consortium (AERC) is holding its first Summit in Nairobi, Kenya, in the context of its new 10-year Strategic Plan (2025-2035). The three-day Summit, themed ‘A Renewed AERC for Africa’s New Development Priorities’, is designed to strengthen the research-policy bridge.
The event takes place from November 30 to December 2, 2025. For more information, click here.
As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.
Multi-stakeholder and participatory regulatory sandboxes for testing innovative technology and data practices are among the mechanisms for ensuring ethical and rights-respecting AI governance. Indeed, the African Union (AU) Continental AI Strategy makes the case for participatory sandboxes, arguing that harmonised approaches which embed multi-stakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.
As of October 2024, 25 national regulatory sandboxes have been established across 15 African countries, signalling growing interest in this governance mechanism. However, there remain concerns on the extent to which African civil society is involved in contributing towards the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.
At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, hosted a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.
Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet, when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are being deployed without inclusive testing environments, the consequences can be severe.
The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences include bias and exclusion, erosion of public trust, surveillance overreach, and a lack of recourse mechanisms.
However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda serves as a key example in the development of a National AI Policy framework through participatory regulatory processes.
Case Study: Rwanda’s Participatory AI Policy Development
The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.
By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable.
Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes.
Consequences of Excluding CSOs in AI Regulatory Sandbox Development
The CIPESA-DataSphere session explored the various obstacles that civil society faces in the AI regulatory sandbox processes in Africa as it sought to establish ways to advance meaningful participation.
The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation as opposed to passively waiting for an invitation.
These proactive measures require CSOs to move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.
Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?
Perhaps most critically, there is no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.
Case Study: Uganda’s Failures Without Sandbox Testing
Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. Uganda’s rollout of its Digital ID has been marred by controversy, although it was not tested in a sandbox, which, according to the Datasphere Initiative’s analysis, could have made a difference given sandboxes’ potential as trust-building mechanisms for Digital Public Infrastructure (DPI) systems. Concerns include the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022. A 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case on the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.
Similarly, Uganda’s rushed deployment of its Electronic Payment System (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.
Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and absence of legal frameworks mandating civil society participation. The result: public systems that fail to serve the public, eroded trust, and costly reversals that delay progress far more than inclusive design processes would have.
Models of Participatory Sandboxes
Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”
However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.
Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential.
Pathways Forward: Building Truly Inclusive Sandboxes
Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.
Creating the legal foundation
Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.
Building capacity and awareness
Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.
Institutionalising cross-sector learning
Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.
Redesigning governance structures
True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.
The core issue is not whether civil society should engage with regulatory sandboxes, but rather the urgent need to establish the legal, institutional, and capacity frameworks that guarantee such participation is both meaningful and effective.
Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.
Civil society engagement:
Surfaces lived experiences regulators often miss;
Strengthens the legitimacy of governance frameworks;
Pushes for transparency in AI design and data use;
Ensures frameworks reflect African values and protect vulnerable communities; and
Enables oversight that prevents exploitative arrangements.
While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.
The cost of exclusion can be seen in Uganda’s EPS rollout: public backlash, eroded trust, and potential system failure, ultimately delaying progress far more than an inclusive design process would have. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.
The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.
Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.
The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”
The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. That integration would encourage companies to uphold their responsibility to respect human rights through ensuring transparency and internal accountability mechanisms.
According to Nadhifah Muhammad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition for crime prevention was leading to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as OpenAI ChatGPT workers in Kenya, were characterised by low pay and an absence of social welfare protections.
To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.
For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of perspectives for minority and marginalised groups including women, youth, children, indigenous communities and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”
Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.
NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI) supported by TRF and UNESCO aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses to map their AI use, harness opportunities and mitigate operational risk. These efforts would complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite the progress, Onyango added that the initiative still grapples with convincing some companies to embrace support in mitigating the risks of AI.
To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI, collaboration and sharing of resources to support local digital startups for sustainable solutions, investment in digital infrastructure, and undertaking robust literacy and capacity building campaigns targeting both duty bearers and rights holders. Other recommendations were the development of evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies that are responsible for regulating data protection.
The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF) and the UN Global Compact. It brought together more than 500 individuals from over 75 countries, 32 of them African. The event built on the achievements of the previous Africa ABHR Forums in Ghana (2022), Ethiopia (2023) and Kenya (2024).
The Forum on Internet Freedom in Africa (FIFAfrica25) is fast approaching, and the excitement is building. This year’s edition, hosted by the Collaboration on International ICT Policy in East and Southern Africa (CIPESA) in partnership with the Namibian Ministry of Information and Communication Technology (MICT) and the Namibia Internet Governance Forum (NamIGF), will take place in Windhoek, Namibia, from 22–26 September 2025. We are pleased to announce that the speaker line-up and full agenda is now available, offering a comprehensive line-up of conversations, experiences, and networking opportunities.
This year’s Forum will serve as yet another edition in FIFAfrica’s 12-year history of assembling digital rights defenders, policymakers, technologists, academics, regulators, journalists, and the donor community, who all have the shared vision of advancing internet freedom in Africa. FIFAfrica25 promises to build on this legacy, with an agenda that is engaging and inclusive of the many shifts we have witnessed since last year.
Here is What You Can Expect
Pre-Events by invitation (September 22-24, 2025)
FIFAfrica25 kicks off with a series of pre-events designed to engage allies, stock-take and build skills and knowledge ahead of the main programme. These include community-driven workshops, closed-door strategic dialogues, and network member meetings that allow participants to dive deeper into niche areas of digital rights and governance. Over the years, pre-events have served as an exciting avenue for various organisations to connect early, showcase their work, sharpen ideas, and prepare new communities for the various sessions at the main event.
The pre-events are by invitation only. However, limited spots are available for additional participants in select events. Are you arriving in Windhoek early and interested? Express interest in attending a pre-event here.
A Diverse Main Programme (September 25-26, 2025)
The main agenda features plenary sessions, workshops, consultations, breakout discussions, and networking moments. Sessions cover a wide array of topics reflecting themes that emerged from our public call for proposals, including digital inclusion, digital resilience and safety; freedom of expression & access to information; platform accountability; the implications of AI; digital economy; and digital democracy.
Strategic Consultations
As part of the Forum, various strategic meetings have been set up with the goal of deeper-level discussion and interrogation of specific issues and processes. These include efforts aimed at influencing action on areas such as various resolutions of the African Commission on Human and People’s Rights (ACHPR) and Digital Public Infrastructure. Additionally, this year a book launch on internet shutdowns features amongst the strategic engagements. These engagements will allow for frank exchanges on some of the challenges in the digital ecosystem, but will likely also cast a light on more opportunities for collaboration across Africa and beyond.
Immersive Experience
This year, we want participants at FIFAfrica to “Be the experience!” Accordingly, the Forum will encourage attendees, onsite or participating remotely, to partake in various interactions that bring digital rights issues to life. These experiences aim to break down barriers between complex digital rights policy concepts and real-world lived experiences.
These include:
An interactive FIFAfrica25 exhibition showcasing research, campaigns, and various digital resilience tools.
Storytelling spaces where the #InternetFreedomAfrica community can share personal accounts of resilience and advocacy in digital spaces.
An art and activism installation reflecting FIFAfrica’s tradition of merging creativity with digital justice.
For the third time, Digital Security on Wheels is back on a one-of-a-kind journey that will see digital security expert and biker Andrew Gole set off from Kampala, Uganda, weaving through Kenya, Tanzania, Malawi, Mozambique, Zimbabwe, and Botswana before crossing the finish line in Windhoek, Namibia, just in time for the opening of FIFAfrica25. His route back to Uganda will also include Zambia and Rwanda.
The Run for Internet Freedom in Africa aims to bring together participants to jog or walk in solidarity with the call for a free, fair and open internet. More details to follow.
Celebrating the International Day for Universal Access to Information
FIFAfrica25 will continue the practice of commemorating the International Day for Universal Access to Information (IDUAI), celebrated annually on September 28. While the Forum this year precedes the IDUAI, many sessions and plenary discussions will highlight the essential role of access to information in enabling civic participation, inclusion, and digital democracy. This year’s global theme, “Ensuring Access to Environmental Information in the Digital Age”, focuses on the vital importance of timely, comprehensive, and cross-border access to environmental information in an increasingly digital world and resonates deeply with many of the Forum’s discussions.
Join Us in Windhoek
We invite you to explore the full agenda on the FIFAfrica website and begin planning your journey through the sessions, meetings, and immersive experiences (remember to register on the event platform and join the community there). Whether you are a policymaker, activist, journalist, academic, technologist, or artist, FIFAfrica25 will have a space for you to contribute.
In the lead-up to the Forum on Internet Freedom in Africa (FIFAfrica25), a series of pre-events will set the stage for engaging discussions and actions. These sessions set to be held between September 22-24, 2025, serve as a build-up to the Forum by creating avenues for deeper engagement with critical themes that resonate with the content of the Forum. This year, a diverse range of partners have established a series of pre-event sessions focused on various aspects of digital rights, governance, and advocacy across Africa with the goal of addressing the evolving digital landscape. Several common themes emerge from the upcoming pre-event sessions, which are by invitation or by registration.
Find the full list of Pre-events below (Some limited slots are open for registration) | Find the full FIFAfrica Agenda here
Various pre-event sessions include inter-organisational collaboration and capacity building within African networks. These include meetings to be hosted by the African Internet Rights Alliance (AIRA), Digital Rights Alliance Africa (DRAA), and the Association for Progressive Communications (APC).
Some sessions place a significant focus on understanding and influencing digital rights and data governance. This includes training National Human Rights Institutions (NHRIs) on Artificial Intelligence (AI) and human rights, preparing them for regional consultations on protecting digital civic space from human rights harms. Meanwhile, a session on the “United Voices: Media & Civil Society for African Data Governance” will address how the pervasive nature of datafication has fractured the symbiotic relationship between media and civil society. A session on “Gender Transformative Data Governance in Africa” will highlight the need for a gender-responsive approach to data governance, built upon addressing the minimal representation of diverse gender perspectives and the dominance of private sector interests.
A series of litigation surgeries hosted by Media Defense will be dedicated towards building expertise and capacity among lawyers across Sub-Saharan Africa to protect and advance freedom of expression. Participants will receive expert-led training on international and regional legal frameworks, engage in collaborative case analysis, and strengthen their ability to litigate before national courts and international human rights bodies.
Some pre-events are dedicated towards amplifying the achievements of leveraging advocacy and international mechanisms such as the Universal Periodic Review (UPR) to advance digital rights. These sessions hosted by the Civil Alliance for Digital Empowerment (CADE), Small Media, and CIPESA, aim to build the digital advocacy capacities of civil society and policymakers in Africa. These efforts are also extended to the youth and will see the European Partnership for Democracy (EPD) host an advocacy training for young activists on the African Union system.
An Africa Regional Consultation on Global Policy and Legal Action, collaboratively hosted by the Danish Human Rights Institute, the International Commission of Jurists (ICJ), and CIPESA, aims to provide clarity on state obligations and company responsibilities regarding digitally mediated human rights harms, supporting civil society advocacy against disinformation and the shrinking of democratic/civic space. The “Spaces of Solidarity (SoS) Forum”, hosted by DW Akademie, also focuses on compiling and updating advocacy positions on freedom of expression and media freedom, including the impact of shrinking international funding. The “African MILE Production” workshop, also hosted by DW Akademie further promotes open exchange on media production, digital campaigning, and cross-border collaboration to strengthen regional media impact.
FIFAfrica25 will also be the home for a “Digital Rights Academy” hosted by NamTushwe and Paradigm Initiative (PIN). The Academy aims to raise awareness and knowledge of digital rights and inclusion, enhancing stakeholders’ capacity to foster inclusive and rights-respecting legislation in their countries.
Digital resilience is a key component of the Forum and also features in several pre-event sessions, including the “Africa Cybersecurity Advocacy Workshop” hosted by the Internet Society (ISOC) and a “Digital Security and Localization Workshop” hosted by the Localization Lab. Both sessions are aimed at enhancing digital skills and practices amongst various stakeholders. There is also a session on “From Harm to Justice: Reimagining Digital Safety for Women and Girls in Africa,” which explores the increasing incidence of technology-facilitated gender-based violence (TFGBV), including image-based abuse and algorithmic amplification of harmful content, and how systemic inequality and weak legal enforcement contribute to these harms.
Line-up of Pre-Events at FIFAfrica25 (Full details can be found in the Agenda)
September 22, 2025

Pre-Event Name | Host/s
Litigation Surgery | Media Defense
Safety of Voices Meeting | Association for Progressive Communications (APC)
United Voices: Media & Civil Society for African Data Governance | Data Governance in Africa Research Fund, Media Institute of Southern Africa, Namibia Media Foundation and DW Akademie

September 23, 2025

Pre-Event Name | Host/s
NHRI Training on Artificial Intelligence (AI) and Human Rights