#BeSafeByDesign: A Call To Platforms To Ensure Women’s Online Safety

By CIPESA Writer |

Across Eastern and Southern Africa, activists, journalists, and women human rights defenders (WHRDs) are leveraging online spaces to mobilise for justice, equality, and accountability. However, the growth of online harms such as Technology-Facilitated Gender-Based Violence (TFGBV), disinformation, digital surveillance, and Artificial Intelligence (AI)-driven discrimination and attacks has outpaced the development of robust protections.

Notably, human rights defenders, journalists, and activists face unique and disproportionate digital security threats, including harassment, doxxing, and data breaches, that limit their participation and silence dissent.

It is against this background that the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with the Irene M. Staehelin Foundation, is implementing a project aimed at combating online harms and advancing digital rights. Through upskilling, advocacy, research, and movement building, the initiative addresses the growing threats in digital spaces, which particularly affect women journalists and human rights defenders.

The first of the upskilling engagements kicked off in Nairobi, Kenya, at the start of December 2025, where 25 women human rights defenders and activists took part in a three-day digital resilience skills-share workshop hosted by CIPESA and Digital Society Africa. Participants came from the Democratic Republic of Congo, Madagascar, Malawi, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe. The workshop coincided with the 16 Days of Activism campaign, which this year is themed “Unite to End Digital Violence against All Women and Girls”.

According to the United Nations Population Fund (UNFPA), TFGBV is “an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated, and amplified in part or fully by the use of information and communication technologies or digital media against a person based on their gender.” It includes cyberstalking, doxing, non-consensual sharing of intimate images, cyberbullying, and other forms of online harassment.

Women in Sub-Saharan Africa are 32% less likely than men to use the internet, with the key impediments being literacy and digital skills, affordability, safety, and security. On top of this gender digital divide, more women than men face various forms of digital violence. Accordingly, the African Commission on Human and Peoples’ Rights (ACHPR) Resolution 522 of 2022 has underscored the urgent need for African states to address online violence against women and girls.

Women who advocate for gender equality, feminism, and sexual minority rights face higher levels of online violence. Indeed, women human rights defenders, journalists and politicians are the most affected by TFGBV, and many of them have withdrawn from the digital public sphere due to gendered disinformation, trolling, cyber harassment, and other forms of digital violence. The online trolling of women is growing exponentially and often takes the form of gendered and sexualised attacks and body shaming.

Several specific challenges must be considered when designing interventions to combat TFGBV. These challenges are shaped by legal, social, technological, and cultural factors, which affect both the prevalence of digital harms and violence and the ability to respond effectively. They include weak and inadequate legal frameworks; a lack of awareness about TFGBV among policymakers, law enforcement officers, and the general public; the gender digital divide; and normalised online abuse against women, with victims often blamed rather than supported.

Moreover, there is a shortage of comprehensive response mechanisms and support services for survivors of online harassment, such as digital security helplines, psychosocial support, and legal aid. There is also limited regional and cross-sector collaboration between CSOs, government agencies, and the private sector (including tech companies).

A guiding strand for these efforts will be the #BeSafeByDesign campaign, which highlights the necessity of platforms that are safe for women, as well as the consequences when safety is missing. The #BeSafeByDesign obligation shifts the burden of ensuring safety in online spaces away from women and places it on platforms, which are required to step up efforts on risk assessments, accessible and stronger reporting pathways, proactive detection of abuse, and transparent accountability mechanisms. The initiative will also involve practical upskilling of at-risk women in cybersecurity.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhamad |

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. That integration would encourage companies to uphold their responsibility to respect human rights through ensuring transparency and internal accountability mechanisms.

According to Nadhifah Muhammad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as the Kenyan workers who labelled data for OpenAI’s ChatGPT, are characterised by low pay and an absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored in the inclusion of perspectives from minority and marginalised groups, including women, youth, children, indigenous communities, and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses to map their AI use, harness opportunities, and mitigate operational risk. These efforts would complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite the progress, Onyango added that the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.

To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI; collaborating and sharing resources to support local digital startups in building sustainable solutions; investing in digital infrastructure; and undertaking robust literacy and capacity-building campaigns for both duty bearers and rights holders. Other recommendations were developing evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights, and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF), and the UN Global Compact. It brought together more than 500 individuals from over 75 countries – 32 of them African. The event built on the achievements of the previous ABHR Forums in Ghana (2022), Ethiopia (2023), and Kenya (2024).

Uganda Data Governance Capacity Building Workshop

Event |

The AU-NEPAD and GIZ in collaboration with CIPESA are pleased to convene this three-day capacity-building and stakeholder engagement workshop to support the Government of Uganda in its data governance journey.

The three-day workshop will focus on providing insights into data governance and the transformative potential of data to drive equitable socio-economic development, empower citizens, safeguard collective interests, and protect digital rights in Uganda. This will include aspects on foundational infrastructure, data value creation and markets, legitimate and trustworthy data systems, data standards and categorisation, and data governance mechanisms.

Participants will critically evaluate regulatory approaches, institutional frameworks, and capacity-building strategies necessary to harnessing the power of data for socio-economic transformation and regional integration, in line with the African Union Data Policy Framework.

The workshop will take place from November 19th to 21st, 2025.

Advancing African-Centred AI is a Priority for Development in Africa

By Patricia Ainembabazi |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in the annual DataFest Africa event held on October 30-31, 2025. Hosted by Pollicy, the event celebrates data use in Africa by bringing together stakeholders from diverse backgrounds, such as government, civil society, donors, academics, students, and private industry experts, under one roof and theme. The event provided a timely platform to advance discussions on how Africa can harness AI and data-driven systems in ways that centre human rights, accountability, and social impact.

CIPESA featured in various sessions at the event, one of which was the launch of the ‘Made in Africa AI for Monitoring, Evaluation, Research and Learning (MERL)’ Landscape Study by the MERL Tech Initiative. At the session, CIPESA provided reflections on the role of AI in development across several humanitarian sectors in Africa.

CIPESA’s contributions complemented insights from the study, which explored African approaches to AI in data-driven evidence systems and emphasised responsive and inclusive design, contextual relevance, and ethical deployment. The study resonated with insights from the CIPESA 2025 State of Internet Freedom in Africa report, which highlights the role of AI as Africa navigates digital democracy.

According to the CIPESA report, AI technologies hold significant potential to improve civic engagement, extend access to public services, scale multilingual communication tools, and support fact-checking and content moderation. On the flip side, the MERL study underscores the risks posed by AI systems that lack robust governance frameworks, including increased surveillance capacity, algorithmic bias, the spread of misinformation, and deepening digital exclusion. These risks pose major concerns regarding readiness, accountability, and institutional capacity, given the nascent and fragmented legal and regulatory landscape for AI in the majority of African countries.

Sam Kuuku, Head of the GIZ-African Union AI Made in Africa Project, noted that it is important for countries and stakeholders to reflect on how well Africa can measure the impact of AI and evaluate the role and potential of AI use in improving livelihoods across the continent. He further reiterated the value of various European Union (EU) frameworks in providing useful guidance for African countries seeking to develop AI policies that promote both innovation and safety, to ensure that technological developments align with public interest, legal safeguards, and global standards.

The session underscored the need for African governments and stakeholders to benchmark global regulatory practices that are grounded in human rights principles for the progressive adoption and deployment of AI. CIPESA pointed to the EU AI Act of 2024, which offers a structured, risk-based model that categorises AI systems according to the level of potential harm and establishes controls for transparency, safety, and non-discrimination.
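The risk-based model described above can be sketched in a few lines of code. The tier names below follow the EU AI Act's broad categories, but the example use cases, their tier assignments, and the `obligations_for` helper are simplified illustrations, not a statement of what the Act actually requires for any given system.

```python
# Illustrative sketch of a risk-based AI classification model, loosely
# modelled on the EU AI Act's tiers. Tier-to-obligation mappings and the
# example use cases are hypothetical simplifications for illustration.

RISK_TIERS = {
    "unacceptable": {"permitted": False, "obligations": ["prohibited outright"]},
    "high": {
        "permitted": True,
        "obligations": ["risk management", "transparency", "human oversight"],
    },
    "limited": {"permitted": True, "obligations": ["transparency notices"]},
    "minimal": {"permitted": True, "obligations": []},
}

# Hypothetical mapping of example systems to risk tiers.
EXAMPLE_SYSTEMS = {
    "social scoring": "unacceptable",
    "biometric identification": "high",
    "chatbot": "limited",
    "spam filter": "minimal",
}

def obligations_for(use_case: str) -> list:
    """Return the obligations attached to a use case's risk tier."""
    tier = EXAMPLE_SYSTEMS.get(use_case, "minimal")
    return RISK_TIERS[tier]["obligations"]
```

The design point the Act makes, and that the sketch mirrors, is that controls scale with potential harm: a minimal-risk system carries no special obligations, while a high-risk one triggers oversight and transparency requirements, and an unacceptable-risk one is barred entirely.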

Key considerations for labour rights, economic justice, and the future of work were highlighted, particularly in relation to the growing role of African data annotators and platform workers within global AI supply chains. Investigations into outsourced data labelling, such as the case of Kenyan workers contracted by tech platforms to train AI models under precarious economic conditions, underline the need for stronger labour protections and ethical AI sourcing practices. Through platforms such as DataFest Africa, there is a growing community dedicated to shaping a forward-looking narrative in which AI is not only applied to solve African problems but is also developed, regulated, and critiqued by African actors. The pathway to an inclusive and rights-respecting digital future will rely on working collectively to embed accountability, transparency, and local expertise within emerging AI and data governance frameworks.

Safeguarding African Democracies Against AI-Driven Disinformation

ADRF Impact Series |

As Africa’s digital ecosystems expand, so too do the threats to its democratic spaces. From deepfakes to synthetic media and AI-generated misinformation, electoral processes are increasingly vulnerable to technologically sophisticated manipulation. Against this backdrop, THRAETS, a civic-tech pro-democracy organisation, implemented the Africa Digital Rights Fund (ADRF)-supported project, “Safeguarding African Elections – Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy.”

The initiative aimed to build digital resilience by equipping citizens, media practitioners, and civic actors with the knowledge and tools to detect and counter disinformation with a focus on that driven by artificial intelligence (AI) during elections across Africa.

At the heart of the project was a multi-pronged strategy to create sustainable solutions, built around three core pillars: public awareness, civic-tech innovation, and community engagement.

The project resulted in innovative civic-tech tools, each of which addresses a unique facet of AI misinformation. These tools include Spot the Fakes, a gamified, interactive quiz that trains users to differentiate between authentic and manipulated content. Designed for accessibility, it became a key entry point for public digital literacy, particularly among youth. The foundation for an open-source AI tracking hub was also developed: the “Expose the AI” portal will offer free educational resources to help citizens evaluate digital content and understand the mechanics of generative AI.

A third tool, “Community Fakes”, is a dynamic crowdsourcing platform for cataloguing and analysing AI-altered media that combines human intelligence and machine learning. Its goal is to support journalists, researchers, and fact-checkers in documenting regional AI disinformation. The inclusion of an API enables external organisations to access verified datasets, a unique contribution to the study of AI and misinformation in the Global South. However, THRAETS notes that the effectiveness of public-facing tools such as Spot the Fakes and Community Fakes is limited by wider digital literacy gaps in Africa.

Meanwhile, to demonstrate how disinformation intersects with politics and public discourse, THRAETS documented case studies that contextualised digital manipulation in real time. A standout example is “Ruto Lies: A Digital Chronicle of Public Discontent”, which analysed over 5,000 tweets related to Kenya’s #RejectTheFinanceBill protests of 2024. The project revealed patterns in coordinated online narratives and disinformation tactics, achieving more than 100,000 impressions. This initiative provided a data-driven foundation for understanding digital mobilisation, narrative distortion, and civic resistance in the age of algorithmic influence.
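A minimal sketch can illustrate the kind of pattern detection such a study performs: counting recurring hashtags and flagging near-duplicate posts as a rough signal of coordinated amplification. The sample posts and the duplicate heuristic below are invented for illustration; the actual analysis of the #RejectTheFinanceBill dataset would use far richer methods.

```python
# Toy narrative analysis: hashtag frequency plus near-duplicate detection
# as a crude proxy for coordinated posting. Sample posts are invented.
from collections import Counter
import re

posts = [
    "We say no! #RejectTheFinanceBill #Kenya",
    "We say no! #RejectTheFinanceBill",
    "Taxes without services #RejectTheFinanceBill",
]

def hashtags(text):
    """Extract lowercase hashtags from a post."""
    return [tag.lower() for tag in re.findall(r"#\w+", text)]

# Which hashtags dominate the conversation?
tag_counts = Counter(tag for post in posts for tag in hashtags(post))

def normalise(text):
    """Strip hashtags and whitespace so near-identical posts collapse."""
    return re.sub(r"#\w+", "", text).strip().lower()

# Identical normalised texts posted repeatedly can hint at coordination.
dupes = Counter(normalise(post) for post in posts)
coordinated = [text for text, count in dupes.items() if count > 1]
```

On real data, duplicate text alone is a weak signal; analysts typically combine it with posting-time clustering and account-network features before inferring coordination.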

THRAETS went beyond these tools and embarked upon a capacity building drive through which journalists, technologists, and civic leaders were trained in open-source intelligence (OSINT), fact-checking, and digital security.

In October 2024, THRAETS partnered with eLab Research to conduct an intensive online training program for 10 Tunisian journalists ahead of their national elections. The sessions focused on equipping participants with tools to identify and counter tactics used to sway public opinion, such as detecting cheap fakes and deepfakes. Journalists gained hands-on experience through an engaging fake-content identification quiz. The training not only helped them prepare for election coverage but also equipped them to protect democratic processes and maintain public trust in the long run.

This training served as a model for a subsequent training held in August 2025 as part of the Democracy Fellowship, a program funded by USAID and implemented by the African Institute for Investigative Journalism (AIIJ), which aimed to enhance media capacity to leverage OSINT tools in reporting.

The THRAETS project enhanced regional collaboration and strengthened local investigative capacity to expose and counter AI-driven manipulation. This project demonstrates the vital role of civic-tech innovation that integrates participation and informed design. As numerous African countries navigate elections, initiatives like THRAETS provide a roadmap for how digital tools can safeguard truth, participation, and democracy.

Find the full project insights report here.