CIPESA at the Digital Rights and Inclusion Forum 2026

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is participating in this year’s Digital Rights and Inclusion Forum (DRIF), taking place in Abidjan, Côte d’Ivoire on April 14-16, 2026. Hosted by Paradigm Initiative under the theme, “Building Inclusive and Resilient Digital Futures”, the Forum focuses on strengthening technology to withstand crises and promoting digital rights across the Global South.

At DRIF, CIPESA is contributing to critical conversations that move beyond dialogue to impact. The organisation will host a session titled “Beyond the Microphone – Turning IGF Participation into Policy Influence in West Africa,” exploring how engagement in global internet governance spaces can translate into meaningful policy change at national and regional levels.

CIPESA will also feature in the exhibition space, presenting the African Digital Reality Walk, “Paths, Traps, and Safe Passage.” This immersive experience invites participants to navigate the complexities of Africa’s digital landscape, highlighting the opportunities and risks that define digital rights and freedoms today while encouraging digital resilience.

Where to Find CIPESA at DRIF

April 14

· Image-based TFGBV in the Age of Artificial Intelligence
 10:10 AM – 11:10 AM | Room 5
 Hosted by Digital Rights Alliance Africa (DRAA)

· Beyond the Microphone – Turning IGF Participation into Policy Influence in West Africa
 2:20 PM – 3:20 PM | Room 6
 Hosted by CIPESA

· Reviewing the ACHPR Resolution 631 Draft Guidelines for Universal Access to Public Service Content in Africa
 2:20 PM – 3:20 PM | Room 4
 Hosted by SOS Coalition / UNESCO

April 15

· Shrinking Civic Space and Funding Cuts: How Can We Ensure Digital Resilience?
 10:10 AM – 11:10 AM | Room 4
 Hosted by Oxfam

· Democracy Disconnected: Fighting Against Election Shutdowns in Africa
 10:10 AM – 11:10 AM | Room 5
 Hosted by Access Now

· Fighting Non-Consensual Intimate Image (NCII) Abuse in Africa & Beyond
 1:50 PM – 2:50 PM | Auditorium
 Hosted by Google

April 16

· From Data to Action: Responding to Digital Authoritarianism’s Threat to Civil Society
 11:10 AM – 12:10 PM | Room 3
 Hosted by EU-SEE

· Digital Sovereignty and Inclusive DPI in Africa: A Stakeholder Roundtable
 11:10 AM – 12:10 PM | Room 4
 Hosted by Digital Action

Stakeholders in Kenya Commit to Safeguarding the Country’s 2027 General Elections

By Lyndcey Oriko |

As Kenya looks ahead to the 2027 general elections, the rapid digitisation of the civic space presents both opportunities and risks. A February 2026 multi-stakeholder engagement organised by the National Cohesion and Integration Commission (NCIC), in partnership with the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), reflected on preparedness for the 2027 elections, with a strong emphasis on moving from reactive responses to proactive, coordinated action.

The Nairobi convening brought together electoral bodies, oversight institutions, law enforcement, regulators, and media actors to deliberate on the need to safeguard rights, strengthen coordination, and build trust in an increasingly digital electoral environment.

Across Africa, digital platforms are reshaping how elections unfold. They have opened up participation, especially for young people, but also introduced new challenges. Increased online regulation, network disruptions, hate speech and disinformation are commonplace, while women, particularly those actively involved in politics, face rising levels of technology-facilitated gender-based violence.

This shifting environment highlights a key reality: the same digital tools that enable participation can also erode trust and weaken social cohesion. And what begins online does not stay online. It often carries real consequences offline, and vice versa. Kenya is no exception. The country’s upcoming 2027 elections will be high-stakes and closely contested, unfolding in an environment fraught with disinformation.

More recently, there has been a heightened crackdown on activism, including through the abduction and intimidation of activists and journalists, politically motivated internet censorship, rising disinformation, cyber threats, data breaches, and a decline in freedom.

CIPESA’s Kenya’s Digital Crossroads brief, published in February 2025, offers a detailed account of the scale of this challenge. In June 2024, during the #RejectFinanceBill2024 protests, Kenya experienced its first nationwide internet shutdown, which disrupted mobile payments, health services, and education systems alongside social media.

The Kenya National Commission on Human Rights (KNCHR) documented 50 deaths, 413 injuries, 59 abductions, and 682 arbitrary arrests as of July 2024, with over 82 people subsequently abducted by armed plainclothes officers. The Communications Authority recorded 657.8 million cyber threats in just three months between July and September 2024, while government and media institutions — including KBC, K24 TV, and the DCI’s account on X — faced successful cyberattacks. The Computer Misuse and Cybercrimes Act was deployed to target critics, bloggers, and political activists. And in January 2025, the incoming Cabinet Secretary for ICT publicly pledged readiness to shut down the internet again if national security is threatened. These patterns have direct implications for 2027.

Opening the discussion, Ashnah Kalemera, Programmes Manager at CIPESA, emphasised the importance of balancing electoral integrity and national security with the protection of civic space. She noted that core freedoms such as free speech, access to information, and participation should continue to be prioritised, even as institutions address digital risks. She also highlighted the need for stronger collaboration, responsible content sharing, and inclusive approaches that bring citizens, especially young people, into the conversation.

Commissioner Ken Williams Nyakomitah of the Independent Policing Oversight Authority (IPOA) stressed that the scale and complexity of digital harms require collective action. He noted that institutions must adapt to evolving technological realities and work in complementarity, emphasising that no single actor can effectively address digital threats in isolation. Strengthening coordination, avoiding duplication, and ensuring timely information sharing were highlighted as critical to improving institutional effectiveness.

The NCIC Chief Executive Officer, Dr. Daniel Mutegi Giti, underscored the importance of early and sustained interventions to promote cohesion. He cautioned that elections could amplify existing tensions if not carefully managed, particularly in digital spaces where narratives spread rapidly and shape public perception. He called for vigilance, responsible engagement, and a shared commitment to upholding constitutional values, including inclusivity and respect for human rights.

Bringing in a technological perspective, Daniel Odongo, Technology Lead at Ushahidi, highlighted the speed, coordination, and sophistication with which harmful content spreads online. He pointed out that misinformation often follows predictable patterns across platforms, making early detection, real-time monitoring, and coordinated response critical to preventing escalation. This further underscores the importance of institutions focusing not just on individual incidents but on identifying patterns, trends, and coordinated behaviour over time.

Director Kilian Nyambu of NCIC emphasised the human dimension of digital harms, noting that information shapes perception, and perception shapes action. This is especially significant for vulnerable groups, including women, youth, and persons with disabilities, who are often disproportionately affected by harmful online narratives. Ensuring inclusivity and protection of these groups remains central to building a peaceful digital environment.

The role of the media was also central to the discussion. Leo Mutisya of the Media Council of Kenya (MCK) highlighted both the resilience and challenges within Kenya’s media landscape. While media remains a key pillar in promoting accountability and public awareness, rising disinformation, political pressure, and declining trust continue to shape how citizens consume information, often leading them to turn to less regulated digital spaces.

At the same time, the engagement highlighted the growing challenge of declining public trust in public institutions and information sources. As more citizens turn to digital platforms for news, the line between credible information and manipulation continues to blur, reinforcing the need for strong media literacy and fact-checking ecosystems. Addressing this trust deficit will require transparency, consistency, and sustained public engagement from institutions.

Concerns were also raised about the emerging risks of Artificial Intelligence (AI), such as AI-generated content and deepfakes, which are increasingly difficult to detect and could significantly distort public perception during elections. Stakeholders emphasised the need to proactively address these risks, including advocating for greater transparency and accountability from digital platforms.

Importantly, participants also highlighted that misinformation is no longer random or organic. It is often coordinated, moving rapidly across platforms within minutes, from X to WhatsApp and into community networks, making early detection and response critical. This calls for investment in real-time monitoring systems and stronger partnerships between institutions and technology platforms. It also reinforces the need for institutions to shift from isolated responses to a more connected, system-wide approach that reflects the complexity of the digital ecosystem.

Discussions further underscored the importance of data protection, responsible platform governance, and context-specific solutions. Participants emphasised that Kenya must develop localised frameworks that reflect its unique realities, rather than relying solely on external models. Building effective responses will require grounding solutions in local contexts, strengthening regional collaboration, and investing in homegrown research and knowledge systems.

Key priorities emerging from the engagement included strengthening inter-agency coordination, investing in early warning and response systems, improving strategic communication, safeguarding data and privacy, and ensuring inclusive approaches that protect all groups. There was also a strong call to establish clear inter-institutional protocols for responding to digital threats, ensuring timely, coordinated, and rights-respecting action across agencies. Strengthening collaboration across institutions and aligning mandates will be essential in closing existing gaps. Ultimately, participants agreed that preparedness must begin now. Building resilient systems, strengthening collaboration, and equipping citizens with the tools to navigate digital spaces responsibly will be critical to shaping peaceful, credible elections.

As Kenya prepares for the 2027 general elections, digital platforms will play a decisive role in shaping public discourse and electoral outcomes. The challenge, and opportunity, lies in ensuring these spaces promote trust, inclusion, and informed participation.

The Forum on Internet Freedom in Africa 2026 (FIFAfrica26) – Open For Registration and Session Proposals!

By FIFAfrica |

Registration is now open for the 13th edition of the Forum on Internet Freedom in Africa (FIFAfrica26). The Forum will take place in Mauritius from September 28 to October 1, 2026, and will bring together over 500 participants from across Africa and beyond for critical conversations on digital rights, inclusion, and governance.

Be sure to register here!

FIFAfrica26 will offer a platform for deliberation on the most pressing issues shaping Africa’s digital landscape, including digital democracy and civic participation, data governance and sovereignty, artificial intelligence and emerging technologies, platform accountability, digital inclusion, digital economy and trade, movement building, and digital security and safety.

Submit A Session

In the lead-up to FIFAfrica26, we invite interested parties to submit session proposals. Submissions can include panel discussions, lightning talks, exhibitions, and skills workshops. Successful submissions will help to shape the agenda of the event, which is set to gather policymakers, regulators, human rights defenders, journalists, academics, private sector players, global information intermediaries, bloggers, and developers.

The Forum recognises the importance of ensuring diversity in the voices, backgrounds, viewpoints, and thematic areas represented at the conference. To enable this, limited support for travel and/or accommodation is available to successful applicants.

The call for proposals will close at midnight (Nairobi time) on May 29, 2026.

Join the Community 

Be part of the excitement before, during, and beyond the Forum. We invite you to follow @cipesaug on social media and help amplify the movement by sharing your anticipation, insights, and reflections about the event.

Use #InternetFreedomAfrica and #FIFAfrica26 to join a vibrant community working to shape a more open, inclusive, and secure digital future for the continent.

About FIFAfrica

Since its launch in 2014, the Forum on Internet Freedom in Africa (FIFAfrica) has grown into Africa’s premier multi-stakeholder convening on digital rights, digital democracy, and internet governance. The Forum has consistently shaped continental and global conversations on freedom of expression, access to information, privacy, and data governance, and has integrated more recent shifts in the digital ecosystem, including topics like cryptocurrency, AI, platform accountability, and digital public infrastructure.

Visit the FIFAfrica website for updates: https://internetfreedom.africa/

India AI Impact Summit: A Missed Opportunity for Africa’s Voice in Global AI Governance

By Lillian Nalwoga |

The India AI Impact Summit, held on February 16-21, 2026, was themed “Sarvajan Hitaya, Sarvajan Sukhaya” (Welfare for all, Happiness for all). It was expected to be a platform for South-South cooperation. However, despite Africa’s growing AI ambitions and strategic participation in preparatory working groups, the summit exposed a stark representation gap, raising concerns about Africa’s ability to influence the future of global AI governance.

Artificial Intelligence (AI) presents a transformative opportunity for Africa, with projections indicating it could contribute up to USD 1 trillion to the continent’s Gross Domestic Product (GDP) by 2035. This significant potential underscores Africa’s growing ambition to harness AI for inclusive growth while positioning itself as a key player in global AI governance.

Many African countries are engaging with AI proactively, seeking to harness its benefits across various sectors. Countries such as Rwanda, Nigeria, Kenya, and Egypt have demonstrated strategic foresight in their AI initiatives. Rwanda, for instance, co-chaired the human-capital working group at the Summit, in line with its national AI strategy to become a global hub for AI research and innovation. Nigeria, as Africa’s largest economy, is focused on utilising AI for inclusive growth, while Kenya and Egypt are contributing to broader debates on AI ethics and digital infrastructure.

The African Union’s Continental AI Strategy, adopted in July 2024, further solidifies this commitment. The strategy emphasises an Africa-centric, development-focused approach to AI, promoting ethical, responsible, and equitable practices. Key pillars of this strategy include data sovereignty, ethical frameworks, and inclusive governance.

Across the continent, initiatives are emerging, such as South Africa’s establishment of AI institutes and Ghana’s investments in AI for agriculture and healthcare projects. These efforts highlight a continent actively pursuing AI integration to address its unique challenges and opportunities.

Despite the summit’s promise of inclusivity and South-South cooperation, African voices were largely absent from high-level sessions and critical decision-making forums. Only two African heads of state, from Mauritius and Seychelles, and ministers from Rwanda, Kenya, Egypt and Togo, attended the global summit. This limited presence stood in stark contrast to the dominant participation of tech giants and diplomatic delegations from the Global North, undermining the summit’s stated goal of elevating Global South perspectives.

Despite strong enthusiasm from leading African AI startups, who showcased their innovative solutions, the lukewarm African endorsement of the summit’s Impact Document exposed a clear disconnect. Only 11 African countries, out of the 92 countries that attended, endorsed the declaration calling for “international cooperation and multistakeholder engagement.” This limited endorsement suggests either inadequate consultation with African stakeholders or a mismatch between the summit agenda and Africa’s priorities.

Notably, African civil society voices, academic experts, and private-sector leaders – those most intimately familiar with the continent’s challenges and opportunities – were largely sidelined at an event meant to champion South-South cooperation. Their absence highlights a significant gap between the summit’s stated commitment to inclusivity and the reality of who was heard.

The under-representation of African voices at global digital governance forums like the India AI Impact Summit has significant implications. As AI becomes increasingly central to economic competitiveness and social development, Africa’s marginalisation could impede its ability to fully harness AI’s potential while protecting its citizens’ interests.

African initiatives, such as Nigeria’s push for data sovereignty and Egypt’s integration of AI into sustainable development, deserve a prominent seat at the global table. Without more equitable representation, Africa’s vision for an ethical and inclusive AI future risks being overshadowed by agendas primarily driven by the Global North.

Africa still faces significant AI governance challenges, including incomplete digital policy frameworks, limited financial resources for consistent participation in global policy meetings, and weak coordination among governments, companies, and civil society. However, these constraints should not preclude its equal representation in global digital governance forums.

These participation challenges are not unique to Africa: members of the Global South Alliance have similarly called for more meaningful and diverse engagement in global digital governance, in their letter to the India AI Summit Organising Committee. Initiatives such as the Multistakeholder Approach to Participation to AI Governance have also stressed the need to ensure that global AI conversations are informed by the “voices and experiences of those who are most impacted by the development and diffusion of AI.”

Africa has enormous AI potential, a clear strategic vision, and growing initiatives to harness AI for sustainable development. The representation gap evident at the India AI Summit highlights the urgent need to ensure that voices from the Global South, including Africa, are not only heard but are influential in shaping global AI governance.

Strengthening the capacity of national regulators and policymakers to craft progressive AI policies and to engage effectively in global AI negotiations is essential. Leveraging continental frameworks such as the African Union AI Strategy can help shape common negotiating positions. At the same time, empowering civil society to provide evidence-based, rights-respecting input to national and global AI frameworks will help ensure more citizen-centred policymaking and more equitable participation in national, regional, and international policy processes. As the world prepares for the upcoming UN Global Dialogue on AI Governance in July and the Global AI Summit 2027 in Geneva, the first annual report of the 40-member UN Independent International Scientific Panel on AI, due in July 2026, will be a crucial test of whether African priorities can be adequately reflected in global AI governance processes.

When Fighting Disinformation Becomes a Threat to Freedom

By Reyhana Masters |

The phrase “misinformation crisis” used to evoke images of shadowy troll farms and bot networks manipulating elections from afar. Today, the crisis is much closer to home – in WhatsApp groups, TikTok reels, and “breaking news” alerts that collapse under scrutiny. The more urgent question is no longer whether Africa faces a polluted information ecosystem, but how the continent responds to it.

A February 2026 regional engagement convened by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) gathered members of the judiciary, data protection authorities, communications regulators, law enforcement officers and National Human Rights Institutions (NHRIs) to examine the scale and impact of digital harms.

CIPESA’s Victor Kapiyo set the tone with a reminder that disinformation is not simply about false content; it is about power, intent, amplification, and impact. Discussions focused on responses that separate genuine harm from protected expression.

Disinformation has become sophisticated and professionalised, often backed by political or commercial interests with the resources to manipulate narratives at scale. It moves across borders, shielded by opaque algorithms and corporate structures that complicate national oversight.

Nigeria’s elections illustrate this phenomenon, with political contestation unfolding not only at rallies and ballot boxes, but across encrypted messaging platforms, influencer networks and algorithm-driven feeds.

Fabricated audio recordings, doctored endorsements, and deepfake videos circulated widely. One false claim suggested that President Donald Trump would intervene in Nigeria’s election – a fabrication designed to exploit geopolitical anxieties as well as domestic political and religious tensions.

What makes the Nigerian case instructive is not only the scale of falsehoods, but the architecture behind them. Influencers are reportedly paid significant sums to seed and normalise partisan narratives. Political actors assemble coordinated digital teams to produce, test and amplify content across multiple platforms simultaneously.

“Elections and armed conflicts are key drivers of disinformation. Governments have used both disinformation and the response to it to entrench themselves in power, shrink civic space, and target opponents and critics.” Source: Disinformation Pathways and Effects: Case Studies from Five African Countries.

Even trained journalists, facing financial strain in struggling media markets, are sometimes recruited into propaganda networks that blur the line between professional reporting and political messaging. Moreover, some foreign state actors invest in narrative campaigns to advance their geopolitical interests, viewing African electoral environments as arenas for strategic influence.

A Wider Continental Pattern

Across Africa, disinformation thrives at the intersection of several reinforcing vulnerabilities: intense political competition, widening economic inequality, weak and underfunded media ecosystems, gaps in platform governance, low levels of media literacy and the growing entanglement of foreign geopolitical interests in domestic affairs.

In many contexts, independent newsrooms struggle financially, leaving audiences vulnerable to cheaper, sensationalist content engineered for virality. Regulatory frameworks are often outdated or overly broad, oscillating between under-enforcement and heavy-handed crackdowns that conflate criticism with criminality.

Meanwhile, global technology platforms operate across borders with inconsistent content moderation standards, creating jurisdictional grey zones that undermine accountability.

Beyond Criminalisation

Experience from across the continent suggests that criminalising individual users for “false information” is a blunt and frequently counter-productive response. Without clear legal definitions, disinformation laws can be weaponised against journalists, opposition figures and ordinary citizens exercising legitimate expression.

Indeed, this has been witnessed in countries such as Kenya and Uganda, where laws on “false news” or “computer misuse” have been invoked to arrest and prosecute individuals over what appears to be protected speech.

Effective responses to disinformation require a more layered approach. Clear and precise legal definitions are essential to distinguish between harmful coordinated manipulation and protected speech. Safeguards must be embedded to prevent abuse of disinformation laws for political ends. Platform accountability mechanisms need strengthening, particularly around transparency in political advertising, algorithmic amplification, and coordinated inauthentic behaviour.

Equally critical is sustained investment in media literacy so that citizens are better equipped to interrogate sources and narratives. Independent journalism must be protected and financially supported as a public good. Oversight of coordinated political digital campaigns – including disclosure of funding sources and sponsorship structures – is necessary to illuminate the financial and logistical structures behind viral content.

Following the Money

Focusing on individual users, such as those who forward or share content, misses the deeper architecture of harm. Without tracing and addressing the networks that design, fund, and amplify these campaigns, regulatory responses risk treating symptoms rather than causes.

Participants were urged to draw careful distinctions between misinformation (false information shared without harmful intent), disinformation (deliberate deception), and malinformation (genuine information used to cause harm). Yet these distinctions are often blurred in law. As Kapiyo explained, “when legislation uses vague terms like ‘false news’, ‘annoying’, or ‘offensive’, it creates a net so wide that legitimate criticism can be trapped within it.”

Across several African countries, disinformation laws have been invoked not to dismantle coordinated fraud networks, but to prosecute critics, journalists and opposition voices. This happens most often when governments’ political legitimacy is threatened, when electoral narratives are challenged, or when protest movements emerge.

However, the same urgency is not always visible when harmful misinformation spreads socially, when children are exposed to abuse content, or when online fraud syndicates operate at scale.

Several participants observed that enforcement patterns often mirror political anxieties rather than objective harm assessments. “We must ask ourselves,” one judicial officer reflected during the discussions, “are we responding to harm, or are we responding to discomfort?”

Another participant from an NHRI cautioned that credibility is eroded when states appear animated only by speech that threatens authority. “If citizens see that the law moves fastest against critics but slowest against fraudsters and child exploitation networks, trust collapses,” she noted. “And once trust collapses, regulation itself becomes suspect.”

Kapiyo urged the room to think beyond reactionary fixes and toward structural reform: “Digital harms are real but so are constitutional protections. The challenge is not choosing one over the other but instead the solution lies in designing responses that respect both.”

This tension between legitimate regulation and opportunistic control formed a key undercurrent throughout the engagement. Participants repeatedly returned to the same conclusion: a polluted ecosystem cannot be cleaned with contaminated tools. If the response lacks proportionality, clarity and fairness, it risks becoming part of the problem it seeks to solve.

Participants agreed that responses must balance addressing harm with protecting constitutional rights. The test of legality, legitimacy and proportionality remains essential: if a restriction fails one, it fails entirely.

From Discussion to Duty

As the engagement drew toward its close, the conversation shifted from diagnosis to responsibility. Who, precisely, must act and how?

For legislators, the recommendation was unequivocal: draft narrowly tailored laws grounded in clear definitions. Avoid vague formulations such as “false news” that collapse complex categories into blunt offences. Embed explicit safeguards against abuse, including independent oversight and sunset clauses that require periodic review.

For the judiciary, the charge was equally clear: rigorously interrogate executive claims of harm. Apply constitutional proportionality tests consistently. Insist on evidence of coordinated manipulation rather than speculative assertions of public disorder. Judicial independence, several participants noted, is the difference between regulation and repression.

Communications regulators and data protection authorities were urged to strengthen transparency requirements for political advertising and algorithmic amplification. “If money is shaping narratives,” one regulator observed, “then disclosure must follow the money.” Cross-border cooperation will be essential, particularly where coordinated campaigns operate across jurisdictions.

Law enforcement agencies were encouraged to prioritise organised fraud networks, child exploitation rings and coordinated digital criminal enterprises – areas where harm is demonstrable and urgent – rather than focusing disproportionate energy on individual expression. Capacity-building in digital forensics and evidence preservation was identified as critical.

And for civil society and media institutions, the focus is on resilience: invest in investigative capacity to expose coordinated campaigns, strengthen fact-checking networks, and expand media literacy initiatives so that citizens can interrogate viral narratives without defaulting to cynicism.