By Alice Aparo
Africa’s rapid digitalisation, spanning e-commerce, online services, and digital infrastructure, has been accompanied by a persistent rise in Technology-Facilitated Gender-Based Violence (TFGBV). African women and girls are exposed to several forms of TFGBV, including online harassment, algorithmic discrimination, and deepfakes, which prevent their equal participation in online spaces.
To commemorate this year’s International Women’s Day, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) convened a webinar themed Advancing platform accountability for women’s online safety in Africa to discuss efforts to enhance women’s online safety and hold digital platforms accountable.
A key insight from the discussion was that the rise of TFGBV in Africa is amplified by platform design, limited legal enforcement, weak content moderation by digital platforms, and poor systems for reporting abuse and harmful content. Low levels of digital literacy, inadequate redress and appeals mechanisms, and a lack of awareness among policymakers were also cited. Many of those who experience online abuse struggle to obtain justice and, in most cases, resort to self-censorship instead, which ultimately diminishes women’s voices in public discourse.
Further, online harm is often misframed as an individual responsibility, whereas it is largely enabled by platform design features, such as anonymity and algorithm-driven content amplification, that facilitate abusive behaviour and accelerate the spread of harmful content. In her remarks, Barbra Okafor, founder and Lead Strategist at The Agency Lab, said major digital platforms prioritise “profit and scale over user safety”, adding that features like reposting and seamless sharing are built for viral amplification, not user protection.
Okafor added that when content that qualifies as harassment is posted, algorithms interpret the resulting engagement as “interest” and accelerate the distribution of the abuse rather than introducing safeguards. She described these platforms as “mini-gods” that have assumed regulatory power without corresponding accountability, making online user safety secondary to profit.
Gaps in content moderation, the limited inclusion of African linguistic expertise, and weaknesses in platform design and legal frameworks raise serious concerns about technology companies’ capacity to respond to harmful content in a timely and context-sensitive manner.
Content moderation increasingly relies on Artificial Intelligence (AI), yet because moderation systems are largely trained on Western datasets, they continue to struggle to detect harassment expressed in African languages or to interpret culturally specific slurs. This leaves women participating in public discourse exposed to unchecked, gendered insults and coordinated digital attacks.
While AI-based safety features exist, such as deepfake detection, content filters, and automated tools like Safety Mode and Limits, their effectiveness is uneven across African contexts. These measures are further constrained by structural challenges, including limited investment in local content moderation and weak legal enforcement systems.
Marie-Simone Kadurira, an independent feminist researcher and panelist, noted that digital violence often mirrors and amplifies offline abuse, reinforcing patriarchal norms through technology. This perpetuates existing gender power imbalances and harmful social norms. She added that African women, particularly those in public-facing roles such as journalism, activism, or politics, face heightened, systemic harassment.
Many African countries have cybersecurity and data protection laws, supported by regional instruments such as the African Commission on Human and Peoples’ Rights (ACHPR) Resolutions on developing Guidelines to assist States monitor technology companies in respect of their duty to maintain information integrity through independent fact-checking (ACHPR/Res. 630 (LXXXII) 2025) and on the protection of women against digital violence in Africa (ACHPR/Res. 522 (LXXII) 2022). Even so, addressing TFGBV remains a persistent problem across the continent. The two resolutions emphasise the obligation of African states to protect individuals, particularly women and girls, from digital harms, including online harassment, cyberstalking, non-consensual sharing of intimate images, and other forms of abuse.
Dr. Abudu Sallam Waiswa, Head of Litigation, Prosecution and Legal Advisory at the Uganda Communications Commission (UCC), said effective legal enforcement remains challenging because most major platforms, such as Meta, Google, and X, are neither based nor registered on the African continent. This creates significant jurisdictional gaps that hinder thorough investigations and accountability.
Several recommendations emerged from the discussion:
- Platforms must hire and train local content moderators with linguistic and cultural expertise across African contexts.
- Governments must shift from reactive legislation to forward-looking, preventive frameworks. This includes mandating that platforms provide transparency on their algorithmic moderation and establishing a local physical presence to facilitate legal accountability.
- Civil society and policymakers need to deepen their understanding of how algorithmic systems work in order to effectively monitor and govern them.
- Fund women’s rights organisations so they can continue to provide survivor support, document abuse, advocate for policy reform, and hold both governments and tech companies accountable in the fight against TFGBV.
- Strengthen the ability of users to recognise, respond to, and recover from online harm.