Addressing Online Harms Ahead of Rwanda’s 2026 UPR Review

By Patricia Ainembabazi |

As the world commemorates the 16 Days of Activism Against Gender-Based Violence (November 25 to December 10), global attention is drawn to the rising risks women and girls face in digital environments. These harms increasingly undermine political participation, public discourse, and the safety of women across Africa.

Accordingly, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) and the Association for Progressive Communications (APC) have stressed the urgent need to address technology-facilitated gender-based violence (TFGBV) in Rwanda, in written and oral submissions to the 51st pre-session of the Universal Periodic Review (UPR) for Rwanda at the United Nations Human Rights Council in Geneva. In a joint CIPESA–APC fact sheet on human rights, the two organisations highlighted critical gaps in legal protections, online safety, and digital inclusion in Rwanda.

The joint UPR report notes that TFGBV has become a major deterrent to Rwandan women’s participation online, affecting women in politics, journalism, activism, and advocacy. The 2024 online smear campaign against opposition figure Victoire Ingabire Umuhoza illustrates the gendered nature of digital disinformation and harassment. Such attacks rely on misogynistic narratives designed to humiliate, silence, and delegitimise women’s public engagement. This pattern is not only a violation of rights; it also reinforces structural inequalities and dissuades other women from engaging in civic or political life.

These concerns reflect global trends. UN Women has warned of the rapid escalation of deepfake pornography, a form of digitally manipulated sexualised content disproportionately deployed against women and girls. Deepfakes can cause severe psychological, reputational, and professional harm, often leaving survivors without effective avenues for redress. They are increasingly used to silence women, distort electoral participation, and discourage women from entering political leadership. Such harms undermine democratic processes, distort public debate, and entrench gender inequality.

Rwanda’s obligations under the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) require the state to take comprehensive measures to eliminate discrimination (Articles 2 and 3) and ensure women’s full participation in political and public life (Article 7). However, as documented in the joint UPR report and fact sheet, gaps persist. The 2018 Cybercrime Law lacks survivor-centred provisions, and its broad definitions have on occasion been applied in ways that disadvantage victims.

Moreover, enforcement remains inconsistent, and the absence of specialised mechanisms for investigating and prosecuting online violence limits accountability. In this context, TFGBV is not merely a digital phenomenon; it is a direct barrier to fulfilling Rwanda’s CEDAW obligations and achieving SDGs 5 and 16.

The gender digital divide further compounds these harms. Internet penetration in Rwanda stands at 34.2%, with women representing just 38.2% of social media users. Structural inequalities, including device affordability, income disparities, and limited digital literacy, restrict women’s participation in digital spaces. These inequalities heighten vulnerability to online harm and restrict access to safety tools, reporting mechanisms, and digital rights resources. As the joint CIPESA–APC evidence indicates, without targeted investment in digital literacy, device access, and connectivity for women, Rwanda risks deepening existing socio-economic and civic inequalities.

During the UPR pre-session, CIPESA and APC presented a set of recommendations aimed at promoting rights-respecting digital governance. These included adopting survivor-centred TFGBV protections aligned with CEDAW, strengthening investigative and prosecutorial capacities to effectively respond to online harms, and compelling technology platforms to improve reporting, moderation, and accountability mechanisms. The submission also called for amending restrictive provisions in the Penal Code and Cybercrime Law, establishing independent oversight over surveillance operations, and addressing the gender digital divide through targeted digital literacy and affordability initiatives.

The 16 Days of Activism provide an important reminder that violence against women is evolving in both form and reach. Digital technologies have expanded the avenues through which women are targeted, often enabling harm that is faster, more pervasive, and harder to remedy. Ending violence against women, therefore, requires recognising online spaces as critical sites of protection.

Rwanda enters its fourth UPR cycle with a number of unaddressed commitments. During the 2021 review, the Rwandan government received 32 recommendations on freedom of expression and media freedom, including 24 urging reforms to restrictive speech provisions and 17 calling for enhanced protections for journalists and human rights defenders. Yet implementation has been limited. Provisions in Rwanda’s 2018 Penal Code and 2018 Cybercrime Law continue to criminalise “false information”, edited content, and criticism of public authorities, enabling arrests of journalists and discouraging dissenting expression.

These laws have contributed to widespread self-censorship, a shrinking civic space, and diminished public participation in digital environments. At the same time, reports of intrusive surveillance, such as the documented use of Pegasus spyware targeting thousands of journalists, activists, and diaspora members, further erode trust and violate privacy rights. The absence of independent oversight of surveillance practices intensifies this concern.

The country’s ongoing engagement with the UPR process and its upcoming review, scheduled for January 21, 2026, offer a timely opportunity to address these challenges. During the 51st pre-sessions, held from November 26 to 27, 2025, in Geneva, several permanent missions expressed eagerness to advance strong recommendations for Rwanda, and there is hope that these delegations will amplify our proposals during the formal review.

CIPESA and APC remain committed to supporting evidence-based reforms that strengthen digital rights protections across Africa. Rwanda’s review presents a defining moment for the government to adopt meaningful, future-focused reforms that uphold human rights, ensure accountability, and create a digital environment where all citizens, especially women, can participate safely, freely, and equally in shaping the country’s democratic and digital future.

#BeSafeByDesign: A Call To Platforms To Ensure Women’s Online Safety

By CIPESA Writer |

Across Eastern and Southern Africa, activists, journalists, and women human rights defenders (WHRDs) are leveraging online spaces to mobilise for justice, equality, and accountability.  However, the growth of online harms such as Technology-Facilitated Gender-Based Violence (TFGBV), disinformation, digital surveillance, and Artificial Intelligence (AI)-driven discrimination and attacks has outpaced the development of robust protections.

Notably, human rights defenders, journalists, and activists face unique and disproportionate digital security threats, including harassment, doxxing, and data breaches, that limit their participation and silence dissent.

It is against this background that the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with the Irene M. Staehelin Foundation, is implementing a project aimed at combating online harms and advancing digital rights. Through upskilling, advocacy, research, and movement building, the initiative addresses the growing threats in digital spaces, particularly those affecting women journalists and human rights defenders.

The first of the upskilling engagements kicked off in Nairobi, Kenya, at the start of December 2025, where 25 women human rights defenders and activists took part in a three-day digital resilience skills-share workshop hosted by CIPESA and Digital Society Africa. Participants came from the Democratic Republic of Congo, Madagascar, Malawi, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe. The workshop coincided with the 16 Days of Activism campaign, which this year is themed “Unite to End Digital Violence against All Women and Girls”.

According to the United Nations Population Fund (UNFPA), TFGBV is “an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated, and amplified in part or fully by the use of information and communication technologies or digital media against a person based on their gender.” It includes cyberstalking, doxing, non-consensual sharing of intimate images, cyberbullying, and other forms of online harassment.

Women in Sub-Saharan Africa are 32% less likely than men to use the internet, with the key impediments being literacy and digital skills, affordability, safety, and security. On top of this gender digital divide, more women than men face various forms of digital violence. Accordingly, the African Commission on Human and Peoples’ Rights (ACHPR) Resolution 522 of 2022 has underscored the urgent need for African states to address online violence against women and girls.

Women who advocate for gender equality, feminism, and sexual minority rights face higher levels of online violence. Indeed, women human rights defenders, journalists and politicians are the most affected by TFGBV, and many of them have withdrawn from the digital public sphere due to gendered disinformation, trolling, cyber harassment, and other forms of digital violence. The online trolling of women is growing exponentially and often takes the form of gendered and sexualised attacks and body shaming.

Several specific challenges must be considered when designing interventions to combat TFGBV. These challenges are shaped by legal, social, technological, and cultural factors, which affect both the prevalence of digital harms and violence and the ability to respond effectively. They include weak and inadequate legal frameworks; a lack of awareness about TFGBV among policymakers, law enforcement officers, and the general public; the gender digital divide; and normalised online abuse against women, with victims often blamed rather than supported.

Moreover, there is a shortage of comprehensive response mechanisms and support services for survivors of online harassment, such as digital security helplines, psychosocial support, and legal aid. There is also limited regional and cross-sector collaboration between CSOs, government agencies, and the private sector (including tech companies).

A guiding strand for these efforts will be the #BeSafeByDesign campaign, which highlights the necessity of safe platforms for women as well as the consequences when safety is missing. #BeSafeByDesign shifts the burden of ensuring safety in online spaces away from women and onto platforms, which are expected to strengthen risk assessments, provide accessible and effective reporting pathways, proactively detect abuse, and maintain transparent accountability mechanisms. The initiative will also involve practical upskilling of at-risk women in cybersecurity.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhamad |

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives, such as the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI), into National Action Plans (NAPs) on business and human rights. Such integration would encourage companies to uphold their responsibility to respect human rights by ensuring transparency and establishing internal accountability mechanisms.

According to Nadhifah Muhamad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition used for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as the Kenyan workers contracted to train OpenAI’s ChatGPT, are characterised by low pay and an absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of perspectives for minority and marginalised groups including women, youth, children, indigenous communities and persons with disabilities. Furthermore,  it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses to map their AI use, harness opportunities, and mitigate operational risk. This would complement state efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite this progress, Onyango added, the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.

To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI, collaborating and sharing resources to support local digital startups for sustainable solutions, investing in digital infrastructure, and undertaking robust literacy and capacity-building campaigns for both duty bearers and rights holders. Other recommendations were the development of evidence-based research to shape the deployment of new technologies and support for underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights, and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF), and the UN Global Compact. It brought together more than 500 individuals from over 75 countries, 32 of them African. The event built on the achievements of the previous ABHR Forums in Ghana (2022), Ethiopia (2023), and Kenya (2024).

Uganda Data Governance Capacity Building Workshop

Event |

The AU-NEPAD and GIZ, in collaboration with CIPESA, are pleased to convene this three-day capacity-building and stakeholder engagement workshop to support the Government of Uganda in its data governance journey.

The three-day workshop will focus on providing insights into data governance and the transformative potential of data to drive equitable socio-economic development, empower citizens, safeguard collective interests, and protect digital rights in Uganda. It will cover foundational infrastructure, data value creation and markets, legitimate and trustworthy data systems, data standards and categorisation, and data governance mechanisms.

Participants will critically evaluate the regulatory approaches, institutional frameworks, and capacity-building strategies necessary to harness the power of data for socio-economic transformation and regional integration, in line with the African Union Data Policy Framework.

The workshop will take place from November 19 to 21, 2025.

Advancing African-Centred AI is a Priority for Development in Africa

By Patricia Ainembabazi |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in the annual DataFest Africa event held on October 30-31, 2025. Hosted by Pollicy, the event celebrates data use in Africa by bringing together stakeholders from diverse backgrounds, including government, civil society, donors, academics, students, and private industry experts, under one roof and theme. The event provided a timely platform to advance discussions on how Africa can harness AI and data-driven systems in ways that centre human rights, accountability, and social impact.

CIPESA featured in various sessions at the event, one of which was the launch of the ‘Made in Africa AI for Monitoring, Evaluation, Research and Learning (MERL)’ Landscape Study by the MERL Tech Initiative. At the session, CIPESA provided reflections on the role of AI in development across several humanitarian sectors in Africa.

CIPESA’s contributions complemented insights from the study, which explored African approaches to AI in data-driven evidence systems and emphasised responsive and inclusive design, contextual relevance, and ethical deployment. The study resonated with insights from the CIPESA 2025 State of Internet Freedom in Africa report, which highlights the role of AI as Africa navigates digital democracy.

According to the CIPESA report, AI technologies hold significant potential to improve civic engagement, extend access to public services, scale multilingual communication tools, and support fact-checking and content moderation. On the flip side, the MERL study underscores the risks posed by AI systems that lack robust governance frameworks, including increased surveillance capacity, algorithmic bias, the spread of misinformation, and deepening digital exclusion. These risks raise major concerns regarding readiness, accountability, and institutional capacity, given the nascent and fragmented legal and regulatory landscape for AI in the majority of African countries.

Sam Kuuku, Head of the GIZ-African Union AI Made in Africa Project, noted that it is important for countries and stakeholders to reflect on how well Africa can measure the impact of AI and evaluate the role and potential of AI use in improving livelihoods across the continent. He further reiterated the value of various European Union (EU) frameworks in providing useful guidance for African countries seeking to develop AI policies that promote both innovation and safety, to ensure that technological developments align with public interest, legal safeguards, and global standards.

The session underscored the need for African governments and stakeholders to benchmark global regulatory practices grounded in human rights principles for the progressive adoption and deployment of AI. CIPESA pointed to the EU AI Act of 2024, which offers a structured, risk-based model that categorises AI systems according to the level of potential harm and establishes controls for transparency, safety, and non-discrimination.

Key considerations for labour rights, economic justice, and the future of work were highlighted, particularly in relation to the growing role of African data annotators and platform workers within global AI supply chains. Investigations into outsourced data labelling, such as the case of Kenyan workers contracted by tech platforms to train AI models under precarious economic conditions, underline the need for stronger labour protections and ethical AI sourcing practices. Through platforms such as DataFest Africa, there is a growing community dedicated to shaping a forward-looking narrative in which AI is not only applied to solve African problems but is also developed, regulated, and critiqued by African actors. The pathway to an inclusive and rights-respecting digital future will rely on working collectively to embed accountability, transparency, and local expertise within emerging AI and data governance frameworks.