India AI Impact Summit: A Missed Opportunity for Africa’s Voice in Global AI Governance

By Lillian Nalwoga |

The India AI Impact Summit, held February 16-21, 2026, was themed “Sarvajan Hitaya, Sarvajan Sukhaya” (Welfare for all, Happiness for all). It was expected to be a platform for South-to-South cooperation. However, despite Africa’s growing AI ambitions and strategic participation in preparatory working groups, the summit exposed a stark representation gap, raising concerns about Africa’s ability to influence the future of global AI governance.

Artificial Intelligence (AI) presents a transformative opportunity for Africa, with projections indicating it could contribute up to USD 1 trillion to the continent’s Gross Domestic Product (GDP) by 2035. This significant potential underscores Africa’s growing ambition to harness AI for inclusive growth while positioning itself as a key player in global AI governance.

Many African countries are engaging with AI proactively, seeking to harness its benefits across various sectors. Countries such as Rwanda, Nigeria, Kenya, and Egypt have demonstrated strategic foresight in their AI initiatives. Rwanda, for instance, co-chaired the human-capital working group at the Summit, in line with its national AI strategy to become a global hub for AI research and innovation. Nigeria, as Africa’s largest economy, is focused on utilising AI for inclusive growth, while Kenya and Egypt are contributing to broader debates on AI ethics and digital infrastructure.

The African Union’s Continental AI Strategy, adopted in July 2024, further solidifies this commitment. The strategy emphasises an Africa-centric, development-focused approach to AI, promoting ethical, responsible, and equitable practices. Key pillars of this strategy include data sovereignty, ethical frameworks, and inclusive governance.

Across the continent, initiatives are emerging, such as South Africa’s establishment of AI institutes and Ghana’s investments in AI for agriculture and healthcare projects. These efforts highlight a continent actively pursuing AI integration to address its unique challenges and opportunities.

Despite the summit’s promise of inclusivity and South-to-South cooperation, African voices were largely absent from high-level sessions and critical decision-making forums. Only two African heads of state, from Mauritius and Seychelles, and ministers from Rwanda, Kenya, Egypt and Togo, attended the global summit. This limited presence stood in stark contrast to the dominant participation of tech giants and diplomatic delegations from the Global North, undermining the summit’s stated goal of elevating Global South perspectives.

Despite strong enthusiasm from leading African AI startups, who showcased their innovative solutions, the lukewarm African endorsement of the summit’s Impact Document exposed a clear disconnect. Only 11 African countries out of the 92 attending countries endorsed the declaration, which calls for “international cooperation and multistakeholder engagement.” This limited endorsement suggests either inadequate consultation with African stakeholders or a mismatch between the summit agenda and Africa’s priorities.

Notably, African civil society voices, academic experts, and private-sector leaders – those most intimately familiar with the continent’s challenges and opportunities – were largely sidelined at an event meant to champion South-South cooperation. Their absence highlights a significant gap between the summit’s stated commitment to inclusivity and the reality of who was heard.

The under-representation of African voices at global digital governance forums like the India AI Impact Summit has significant implications. As AI becomes increasingly central to economic competitiveness and social development, Africa’s marginalisation could impede its ability to fully harness AI’s potential while protecting its citizens’ interests.

African initiatives, such as Nigeria’s push for data sovereignty and Egypt’s integration of AI into sustainable development, deserve a prominent seat at the global table. Without more equitable representation, Africa’s vision for an ethical and inclusive AI future risks being overshadowed by agendas primarily driven by the Global North.

Africa still faces significant AI governance challenges, including incomplete digital policy frameworks, limited financial resources for consistent participation in global policy meetings, and weak coordination among governments, companies, and civil society. However, these constraints should not preclude its equal representation in global digital governance forums.

These participation challenges are not unique to Africa: members of the Global South Alliance have similarly called for more meaningful and diverse engagement in global digital governance, in their letter to the India AI Summit Organising Committee. Initiatives such as the Multistakeholder Approach to Participation to AI Governance have also stressed the need to ensure that global AI conversations are informed by the “voices and experiences of those who are most impacted by the development and diffusion of AI.”

Africa has enormous AI potential, a clear strategic vision, and growing initiatives to harness AI for sustainable development. The representation gap evident at the India AI Summit highlights the urgent need to ensure that voices from the Global South, including Africa, are not only heard but are influential in shaping global AI governance.

Strengthening the capacity of national regulators and policymakers to craft progressive AI policies and engage effectively in global AI negotiations is essential. Leveraging continental frameworks such as the African Union AI Strategy can help shape common negotiating positions. At the same time, empowering civil society to provide evidence-based, rights-respecting input to national and global AI frameworks will help ensure more citizen-centred policymaking and more equitable participation in national, regional, and international policy processes. The world is preparing for the upcoming UN Global Dialogue on AI Governance in July and the Global AI Summit 2027 in Geneva. In that context, the first annual report of the 40-member UN Independent International Scientific Panel on AI, due in July 2026, will be a crucial test of whether African priorities can be adequately reflected in global AI governance processes.

When Fighting Disinformation Becomes a Threat to Freedom

By Reyhana Masters |

The phrase “misinformation crisis” used to evoke images of shadowy troll farms and bot networks manipulating elections from afar. Today, the crisis is much closer to home – in WhatsApp groups, TikTok reels, and “breaking news” alerts that collapse under scrutiny. The more urgent question is no longer whether Africa faces a polluted information ecosystem but how the continent responds to it.

A February 2026 regional engagement convened by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) gathered members of the judiciary, data protection authorities, communications regulators, law enforcement officers and National Human Rights Institutions (NHRIs) to examine the scale and impact of digital harms.

CIPESA’s Victor Kapiyo set the tone with a reminder that disinformation is not simply about false content; it is about power, intent, amplification, and impact. Discussions focused on responses that separate genuine harm from protected expression.

Disinformation has become sophisticated and professionalised, often backed by political or commercial interests with the resources to manipulate narratives at scale. It moves across borders, shielded by opaque algorithms and corporate structures that complicate national oversight.

Nigeria’s elections illustrate this phenomenon, with political contestation unfolding not only at rallies and ballot boxes, but across encrypted messaging platforms, influencer networks and algorithm-driven feeds.

Fabricated audio recordings, doctored endorsements, and deepfake videos circulated widely. One false claim suggested that President Donald Trump would intervene in Nigeria’s election – a fabrication designed to exploit geopolitical anxieties as well as domestic political and religious tensions.

What makes the Nigerian case instructive is not only the scale of falsehoods, but the architecture behind them. Influencers are reportedly paid significant sums to seed and normalise partisan narratives. Political actors assemble coordinated digital teams to produce, test and amplify content across multiple platforms simultaneously.

“Elections and armed conflicts are key drivers of disinformation. Governments have used both disinformation and the response to it to entrench themselves in power, shrink civic space, and target opponents and critics.” Source: Disinformation Pathways and Effects: Case Studies from Five African Countries.

Even trained journalists, facing financial strain in struggling media markets, are sometimes recruited into propaganda networks that blur the line between professional reporting and political messaging. Moreover, some foreign state actors invest in narrative campaigns to advance their geopolitical interests, viewing African electoral environments as arenas for strategic influence.

A Wider Continental Pattern

Across Africa, disinformation thrives at the intersection of several reinforcing vulnerabilities: intense political competition, widening economic inequality, weak and underfunded media ecosystems, gaps in platform governance, low levels of media literacy and the growing entanglement of foreign geopolitical interests in domestic affairs.

In many contexts, independent newsrooms struggle financially, leaving audiences vulnerable to cheaper, sensationalist content engineered for virality. Regulatory frameworks are often outdated or overly broad, oscillating between under-enforcement and heavy-handed crackdowns that conflate criticism with criminality.

Meanwhile, global technology platforms operate across borders with inconsistent content moderation standards, creating jurisdictional grey zones that undermine accountability.

Beyond Criminalisation

Experience from across the continent suggests that criminalising individual users for “false information” is a blunt and frequently counter-productive response. Without clear legal definitions, disinformation laws can be weaponised against journalists, opposition figures and ordinary citizens exercising legitimate expression.

Indeed, this has been witnessed in countries such as Kenya and Uganda, where laws on “false news” or “computer misuse” have been invoked to arrest and prosecute individuals over what appears to be protected speech.

Effective responses to disinformation require a more layered approach. Clear and precise legal definitions are essential to distinguish between harmful coordinated manipulation and protected speech. Safeguards must be embedded to prevent abuse of disinformation laws for political ends. Platform accountability mechanisms need strengthening, particularly around transparency in political advertising, algorithmic amplification, and coordinated inauthentic behaviour.

Equally critical is sustained investment in media literacy so that citizens are better equipped to interrogate sources and narratives. Independent journalism must be protected and financially supported as a public good. Oversight of coordinated political digital campaigns – including disclosure of funding sources and sponsorship structures – is necessary to illuminate the financial and logistical structures behind viral content.

Following the Money

Focusing on individual users, such as those who forward or share content, misses the deeper architecture of harm. Without tracing and addressing the networks that design, fund and amplify these campaigns, regulatory responses risk treating symptoms rather than causes.

Participants were urged to draw careful distinctions between misinformation (false information shared without harmful intent), disinformation (deliberate deception), and malinformation (genuine information used to cause harm). Yet these distinctions are often blurred in law. As Kapiyo explained, “when legislation uses vague terms like ‘false news’, ‘annoying’, or ‘offensive’, it creates a net so wide that legitimate criticism can be trapped within it.”

Across several African countries, disinformation laws have been invoked not to dismantle coordinated fraud networks, but to prosecute critics, journalists and opposition voices. Such interventions in digital spaces occur most often when governments’ political legitimacy is threatened, when electoral narratives are challenged, or when protest movements emerge.

However, the same urgency is not always visible when harmful misinformation spreads socially, when children are exposed to abuse content, or when online fraud syndicates operate at scale.

Several participants observed that enforcement patterns often mirror political anxieties rather than objective harm assessments. “We must ask ourselves,” one judicial officer reflected during the discussions, “are we responding to harm, or are we responding to discomfort?”

Another participant from an NHRI cautioned that credibility is eroded when states appear animated only by speech that threatens authority. “If citizens see that the law moves fastest against critics but slowest against fraudsters and child exploitation networks, trust collapses,” she noted. “And once trust collapses, regulation itself becomes suspect.”

Kapiyo urged the room to think beyond reactionary fixes and toward structural reform: “Digital harms are real but so are constitutional protections. The challenge is not choosing one over the other but instead the solution lies in designing responses that respect both.”

This tension between legitimate regulation and opportunistic control formed a key undercurrent throughout the engagement. Participants repeatedly returned to the same conclusion: a polluted ecosystem cannot be cleaned with contaminated tools. If the response lacks proportionality, clarity and fairness, it risks becoming part of the problem it seeks to solve.

Participants agreed that responses must balance addressing harm with protecting constitutional rights. The test of legality, legitimacy and proportionality remains essential: if a restriction fails one, it fails entirely.

From Discussion to Duty

As the engagement drew toward its close, the conversation shifted from diagnosis to responsibility. Who, precisely, must act and how?

For legislators, the recommendation was unequivocal: draft narrowly tailored laws grounded in clear definitions. Avoid vague formulations such as “false news” that collapse complex categories into blunt offences. Embed explicit safeguards against abuse, including independent oversight and sunset clauses that require periodic review.

For the judiciary, the charge was equally clear: rigorously interrogate executive claims of harm. Apply constitutional proportionality tests consistently. Insist on evidence of coordinated manipulation rather than speculative assertions of public disorder. Judicial independence, several participants noted, is the difference between regulation and repression.

Communications regulators and data protection authorities were urged to strengthen transparency requirements for political advertising and algorithmic amplification. “If money is shaping narratives,” one regulator observed, “then disclosure must follow the money.” Cross-border cooperation will be essential, particularly where coordinated campaigns operate across jurisdictions.

Law enforcement agencies were encouraged to prioritise organised fraud networks, child exploitation rings and coordinated digital criminal enterprises – areas where harm is demonstrable and urgent – rather than focusing disproportionate energy on individual expression. Capacity-building in digital forensics and evidence preservation was identified as critical.

And for civil society and media institutions, the focus is on resilience: invest in investigative capacity to expose coordinated campaigns, strengthen fact-checking networks, and expand media literacy initiatives so that citizens can interrogate viral narratives without defaulting to cynicism.

CIPESA Public Dialogue Series: Interrogating Digital Public Infrastructure in East Africa

By CIPESA Writer |

Digital transformation is reshaping governance, service delivery, and civic life across Africa. At the centre of this transformation is the growing adoption of Digital Public Infrastructure (DPI) — foundational interoperable digital systems such as digital identity programmes, payment systems, and data exchange frameworks that enable governments to deliver services at scale.

Across Eastern Africa, governments are increasingly investing in DPI as a core pillar of their digital transformation strategies. These systems promise to improve administrative efficiency, expand access to services, and support more inclusive digital economies.

However, DPI is not merely a technical infrastructure. It is also an institutional and political infrastructure. The way these systems are designed, governed, and implemented can shape power relations, accountability structures, privacy protections, and citizen participation in the digital state.

Despite the growing importance of DPI, public debate around these systems remains limited. A study by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) into media coverage of DPI in Eastern Africa shows that reporting is largely government-centric and event-driven, focusing primarily on announcements and service delivery benefits while giving limited attention to governance arrangements, procurement processes, rights protections, and questions of inclusion.

Strengthening informed public discourse around DPI is therefore critical. Greater participation by civil society, journalists, policymakers, technologists, and citizens can help ensure that emerging digital systems are transparent, accountable, inclusive, and aligned with the public interest.

To contribute to this goal, CIPESA is convening a series of public dialogues on DPI in Eastern Africa. Through four in-depth discussions, the CIPESA public dialogue series will explore key dimensions of DPI implementation such as governance and accountability, data protection and trust, inclusion and equity, and cross-regional learning, while bringing together diverse stakeholders to deepen public understanding and encourage more critical engagement with the region’s digital transformation.

The details of the CIPESA Public Dialogues are listed below. Be sure to mark your calendar for each dialogue!

Follow @cipesaug on social media and join the conversation using #DPIAfrica and #DPIJournalism.

Dialogue 1: Interrogating DPI: Governance, Power, and Accountability

Background: As governments across Eastern Africa accelerate the rollout of Digital Public Infrastructure systems, questions of governance, oversight, and accountability are becoming increasingly important.

While DPI initiatives are often presented as tools for efficiency and innovation, they also shape power relations within the digital state. Decisions about who designs these systems, who controls the data they generate, and how procurement and partnerships are structured can significantly influence how public digital systems operate and whom they ultimately serve.

Yet public scrutiny of these governance questions remains limited. Media coverage frequently focuses on the technical benefits of DPI, such as improved service delivery, while giving less attention to governance arrangements, procurement transparency, and mechanisms for accountability when systems fail.

This dialogue will examine the political economy of DPI, focusing on questions of governance, oversight, transparency, and accountability as the region expands its digital infrastructure.

Date: March 24, 2026 | 15:00 Nairobi time | Reserve your seat

Dialogue 2: Interrogating DPI: Data, Privacy, and Trust

Background: Digital Public Infrastructure systems depend heavily on the collection, processing, and exchange of large volumes of personal data. While these systems can improve efficiency and coordination across government services, they also raise significant questions about privacy, surveillance, and data protection.

Public discourse around DPI in Eastern Africa has largely focused on service delivery benefits, with relatively limited attention to the risks associated with data governance and citizen trust.

CIPESA’s media analysis similarly shows that journalists tend to under-report issues of data protection, surveillance, and the enforcement of privacy laws, despite growing public concerns about the misuse of personal data and weak institutional safeguards.

This dialogue will examine whether DPI systems in Eastern Africa are being designed and implemented in ways that protect rights and build public trust.

Date: March 31, 2026 | 15:00 Nairobi time | Reserve your seat

Dialogue 3: Interrogating DPI: Inclusion, Equity, and Gender

Background: Digital Public Infrastructure is often framed as inclusive by design. However, evidence from across Eastern Africa suggests that issues of equity, access, and representation remain underexplored in both policy debates and media coverage.

Media analysis conducted by CIPESA reveals limited reporting on how DPI systems affect citizens differently based on gender, geography, income, and digital access. It also highlights a significant gender imbalance in media sources, with roughly 80 percent of quoted sources being male.

Yet digital systems can inadvertently reinforce existing inequalities if barriers related to connectivity, digital literacy, affordability, identification documents, or social norms are not addressed.

This dialogue will explore whether DPI initiatives are truly delivering on their promise of inclusion, and who may be left behind by digital transformation.

Date: April 7, 2026 | 15:00 Nairobi time | Reserve your seat

Dialogue 4: Interrogating DPI: Cross-Regional Learning Session

Africa is undergoing a profound digital transformation. The African Union Digital Transformation Strategy (2020–2030) encourages member states to develop Digital Public Infrastructure and Digital Public Goods as foundations for inclusive service delivery, digital trade, and economic growth.

However, public participation in shaping these developments remains limited, partly due to insufficient public discourse and limited specialised reporting on DPI and DPGs.

To address this gap, Co-Develop partnered with regional organisations, including the Media Foundation for West Africa (MFWA) and CIPESA, to establish journalism fellowships focused on DPI reporting in West and Eastern Africa.

MFWA launched the first fellowship in West Africa in 2023, generating valuable lessons and case studies. CIPESA has since adapted the fellowship model for Eastern Africa, creating opportunities for cross-regional learning among journalists and ecosystem actors.

This session will bring fellows and stakeholders from both regions together to share lessons, experiences, and strategies for strengthening public discourse on DPI and DPGs.

Date: April 14, 2026 | 15:00 Nairobi time | Reserve your seat

Webinar: Advancing Platform Accountability for Women’s Online Safety in Africa

By CIPESA Writer |

The right to freedom of information, access to information, and democratic engagement belongs equally to all citizens. Yet across Africa, women and girls continue to face significant barriers that prevent them from exercising these foundational rights in digital spaces. This calls for urgent legal and enforcement mechanisms to ensure that women and girls can safely access, contribute to, and participate in information ecosystems. Furthermore, social media and Artificial Intelligence (AI) platforms must address the gendered concerns that impact women’s experiences in online spaces, including disproportionate online harassment, algorithmic discrimination, and digital exclusion. See more here.

In support of this year’s International Women’s Day (IWD) commemoration, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is hosting a webinar titled “Advancing Platform Accountability for Women’s Online Safety in Africa” on Thursday, 19 March 2026 (14:00–15:30 EAT). The webinar resonates with this year’s IWD theme: “Rights. Justice. Action. For ALL Women and Girls.”

Register for the webinar here.

The webinar brings together experts to discuss what efforts are being made to enhance the safety of women in online spaces and to hold platforms accountable. It supports the sentiments of United Nations (UN) General Assembly President Annalena Baerbock, who last week, in her opening remarks at the 70th UN Commission on the Status of Women, noted that the gaps in legal rights afforded to women are deliberate. She stated that “these are not oversights but deliberate choices — choices that violate the UN Charter, the Universal Declaration of Human Rights, and 70 years of commitments made in this Commission.”

This webinar serves as a platform for discussion and insight into the various efforts needed to address these gaps. The discussion is open to all and features expert panelists from diverse backgrounds with a vested interest in advancing the safety of women in online spaces.

Meet the Panelists

Barbra Okafor | Founder and Lead Strategist, The Agency Lab

Barbra is the Founder of The Agency Lab, an initiative empowering African creatives and organisations to secure data ownership, protect Intellectual Property (IP), and navigate fair compensation in the AI economy. Drawing on her previous roles as Content Programming Lead at TikTok Sub-Saharan Africa and Senior Producer at BBC Media Action, Barbra translates complex digital transitions into actionable strategies. Her work sits at the critical intersection of the African creator economy, technology, and governance.

Marie-Simone Kadurira | Feminist researcher and communications strategist

Marie-Simone works at the intersection of gender, technology, and social justice. Her work focuses on technology-facilitated gender-based violence (TFGBV), digital rights, and the ways in which online platforms shape access, safety, and agency for women and marginalised communities. She has contributed to research and advocacy efforts examining how online harms disproportionately affect women, particularly in the Global South, and has worked with international organisations to develop policy and communications strategies aimed at strengthening platform accountability and advancing safer digital environments. Her work engages with questions of governance, content moderation, and the structural inequalities embedded within digital ecosystems. She is currently engaged in research on gender-based violence prevention and supports initiatives that centre community-led approaches to justice, safety, and digital inclusion.

Mercy Mutemi | Executive Director, The Oversight Lab Africa

Mercy’s work advocates for fair regulation and deployment of technology across Africa. She focuses on restorative and retributive justice solutions for those who have been harmed by technology. This includes workers who have been exploited to build and maintain technological systems and those who have been harmed by algorithms. She has worked on several cases and initiatives focused on addressing inequality and the unconsidered consequences of tech algorithms for African communities and societies. She currently represents a cohort of content moderators based throughout Africa in a suit over workplace human rights violations.

Abdul Waiswa | Head of Litigation, Prosecution and Legal Advisory, Uganda Communications Commission

Waiswa works as the Head of Litigation, Prosecution and Legal Advisory at the Uganda Communications Commission (UCC), the statutory regulator of the converged communications sector in Uganda, mandated to license, supervise, and facilitate the development of a robust communications sector. He oversees the legal advisory, licensing, and enforcement functions of UCC and supports the implementation of the Government of Uganda’s policies on ICT. Waiswa is a regular participant in national, regional, and international engagements on internet jurisdiction, data, and other ICT-related policy matters.

Lillian Nalwoga | Programme Manager, CIPESA

Lillian has several years of ICT policy research and advocacy experience, having joined CIPESA as a Policy Officer in 2007. She has facilitated and coordinated ICT policy workshops, including coordinating the East African Internet Governance Forum. Lillian holds a Bachelor’s degree in Development Studies (Makerere University, Uganda), a Postgraduate Diploma in Project Management, and advanced training in Internet studies, as well as a Master’s in Digital Media and Society from Uppsala University, Sweden. She is also the former President of the Internet Society (ISOC) Uganda.

The webinar forms part of the #BeSafeByDesign campaign, which calls for improved platform accountability in Africa. The campaign is part of a project supported by the Irene M. Staehelin Foundation. Since December 2025, the project has pursued a series of collaborative activities aimed at improving online safety and governance. These included a convening in Nairobi, Kenya, which served as the launch of the #BeSafeByDesign campaign. The convening assembled human rights defenders and activists from eight African countries for upskilling in digital resilience. In February 2026, a meeting held in Port Louis, Mauritius brought together 30 participants — including judges, magistrates, law enforcement officers, communications regulators, data protection authorities, and National Human Rights Institutions (NHRIs). Participants recommended that African governments strengthen their engagement with big tech companies through regional mechanisms, such as the African Union, to present a more coordinated voice on issues of platform accountability.

CIPESA Builds the Capacity of State Actors to Address Online Harms

By CIPESA Writer |

Digital platforms serve as vital spaces for civic participation, political expression, and social mobilisation throughout Africa, including for women, youths, and human rights advocates. However, there has been a rise in digital harms that threaten online rights, safety, and democratic engagement. Technology-Facilitated Gender-Based Violence (TFGBV), disinformation, digital surveillance, and increasingly complex attacks made possible by Artificial Intelligence (AI) are all on the rise in African online spaces. The majority of those harmed are journalists, activists, women and girls.

To address these challenges, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) last month convened a two-day regional engagement in Mauritius to explore trends in digital harms and equip state actors with practical tools and guidance to monitor, prevent, and respond to online rights violations. It drew 30 participants from seven countries (Kenya, Malawi, Mauritius, South Africa, Tanzania, Uganda and Zimbabwe). They included judges, magistrates, law enforcement officers, communications regulators, and representatives of data protection authorities and National Human Rights Institutions (NHRIs).

Although the internet and digital technologies have enhanced civic participation and broadened the enjoyment of human rights, they have also brought about new risks for individuals and organisations. Accordingly, the discussion addressed the evolving nature of online harms and their impact on digital rights and democratic engagement.

Across all countries in the region, TFGBV is a major concern. Women in public roles, such as politicians, journalists, and activists, face a rising wave of online harassment, sexualised threats, and disinformation campaigns aimed at intimidating and silencing them. In countries like Kenya, Uganda, Zimbabwe, and South Africa, these attacks increase significantly during elections, compelling numerous women to withdraw from public engagement.

Disinformation and AI-driven manipulation present another concern. Coordinated disinformation campaigns, often amplified by bots and increasingly reliant on synthetic media like deepfakes, influence public opinion and target independent and critical voices. In countries where laws are enacted to address these ills, they often fail to target harmful manipulation and are instead weaponised to suppress legitimate expression.

For instance, laws on cybercrime and “false information” in countries like Tanzania, Zimbabwe, Kenya, and Uganda often contain ambiguous provisions, overly broad definitions, and excessive penalties. These laws are frequently applied to detain and prosecute individuals, primarily journalists, bloggers, and social accountability activists. Even where prosecutions are rare, the chilling effect on civic engagement is significant.

The engagement also heard that, in several countries, surveillance-enabling measures, from SIM card registration linked to national IDs, to biometric voter databases and interception technologies, have expanded without proper independent oversight.

Speaking at the engagement, the African Commission on Human and Peoples’ Rights (ACHPR) Commissioner and Special Rapporteur on Freedom of Expression and Access to Information in Africa, Ourveena Geereesha Topsy-Sonoo, discussed how her mandate was addressing digital harms and promoting rights. She highlighted the commission’s resolution against internet disruptions and Resolution 591, which addresses the growing issue of violence against women on digital platforms across the continent.

In 2019, the ACHPR adopted the Declaration of Principles on Freedom of Expression and Access to Information in Africa and has more recently issued resolutions addressing digital violence against women. However, most governments are yet to domesticate and implement these key instruments.

The meeting also underscored how democratic backsliding, shrinking civic space, and the expansion of executive power, as witnessed in several countries, create an environment in which digital harms flourish. While instruments such as the African Charter, the Malabo Convention on Cyber Security and Personal Data Protection, and the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) provide safeguards, their impact depends on independent courts, empowered regulators, and capable NHRIs.

Participants noted that limited capacity, resources, and coordination across government institutions often undermine enforcement, monitoring, and accountability. The Mauritius engagement therefore recommended building stronger institutional capacity, such as inter-ministerial committees on digital rights, to foster collaboration among the key actors responsible for protecting digital rights.

Participants further explored practical approaches to monitoring digital rights violations, supporting survivors of online abuse, and ensuring accountability for harmful online behaviour. These discussions also drew from a handbook developed by the International Center for Not-for-Profit Law (ICNL) and CIPESA, which provides guidance for NHRIs on monitoring and promoting digital rights.

In particular, the convening challenged NHRIs to play a greater role in addressing digital harms through investigating violations, providing remedies to victims, advising on legislation and standards, and conducting public education.

Another focus of the discussions was the role of technology companies in moderating harmful content. Participants highlighted concerns that major platforms such as Meta and X do not allocate sufficient resources to content moderation in Africa. In many cases, moderation systems rely heavily on automated tools that are poorly adapted to local languages and socio-political contexts, while the number of human moderators covering African content remains limited.

Participants recommended that African governments strengthen their engagement with big tech companies through regional mechanisms such as the African Union, to present a more coordinated voice on issues of platform accountability.

Through this engagement, CIPESA strengthened the capacity of state actors to safeguard digital rights, highlighting that protecting these rights is both their legal mandate and central to building democratic resilience and inclusion across Africa. The engagement was supported by the Irene M. Staehelin Foundation.