Rethinking Platform Design and Accountability to Combat TFGBV in Africa

By Alice Aparo |

Africa’s rapid digitalisation, spanning e-commerce, online services, and digital infrastructure, has been accompanied by a persistent rise in Technology-Facilitated Gender-Based Violence (TFGBV). African women and girls are exposed to several forms of TFGBV, including online harassment, algorithmic discrimination, and deepfakes, which undermine their equal participation in online spaces.

To commemorate this year’s International Women’s Day, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) convened a webinar themed Advancing platform accountability for women’s online safety in Africa to discuss efforts to enhance women’s online safety and hold digital platforms accountable.

A key insight from the discussion was that the rise of TFGBV in Africa is amplified by platform design, limited legal enforcement, weak content moderation by digital platforms, and poor systems for reporting abuse and harmful content. Low levels of digital literacy, weak redress and appeals mechanisms, and a lack of awareness among policymakers were also cited. Many of those who experience online abuse struggle to obtain justice and, in most cases, turn to self-censorship instead. This ultimately diminishes women’s voices in public discourse.

Further, online harm is often misframed as an individual responsibility, whereas it is largely enabled by platform design features such as anonymity and algorithm-driven content amplification that support harmful behaviour and accelerate the spread of harmful content. In her remarks, Barbra Okafor, founder and Lead Strategist at The Agency Lab, said major digital platforms prioritise “profit and scale over user safety”, adding that features like reposting and seamless sharing are built for viral amplification, not user protection.

Okafor added that when content that qualifies as harassment is posted, algorithms interpret the resulting engagement as “interest” and accelerate the distribution of the abuse rather than introducing safeguards. She described these platforms as “mini-gods” that have assumed regulatory power without corresponding accountability, making online user safety secondary to profit.

Gaps in content moderation, the limited inclusion of African linguistic expertise, and weaknesses in platform design and legal frameworks raise serious concerns about technology companies’ capacity to respond to harmful content in a timely and context-sensitive manner.

Content moderation increasingly relies on Artificial Intelligence (AI), yet because these systems are largely trained on Western datasets, they continue to struggle to detect harassment expressed in African languages or to interpret culturally specific slurs. This leaves women participating in public discourse exposed to unchecked, gendered insults and coordinated digital attacks.

While AI-based features such as deepfake detection, content filters, and automated tools such as Safety Mode and Limits exist, their effectiveness is uneven across African contexts. These measures are further constrained by structural challenges, including limited investment in local content moderation and weak legal enforcement systems.

Marie-Simone Kadurira, an independent feminist researcher and panelist, noted that digital violence often mirrors and amplifies offline abuse, reinforcing patriarchal norms through technology. This perpetuates existing gender power imbalances and harmful social norms. She added that African women, particularly those in public-facing roles such as journalism, activism, or politics, face heightened, systemic harassment.

Despite the existence of cybersecurity and data protection laws in many African countries, supported by regional instruments such as the African Commission on Human and Peoples’ Rights (ACHPR) Resolution on developing Guidelines to assist States monitor technology companies in respect of their duty to maintain information integrity through independent fact-checking (ACHPR/Res. 630 (LXXXII) 2025) and the Resolution on the protection of women against digital violence in Africa (ACHPR/Res. 522 (LXXII) 2022), addressing TFGBV remains a persistent problem across the continent. The two resolutions emphasise the obligation of African states to protect individuals, particularly women and girls, from digital harms, including online harassment, cyberstalking, non-consensual sharing of intimate images, and other forms of abuse.

Dr. Abudu Sallam Waiswa, Head of Litigation, Prosecution and Legal Advisory at the Uganda Communications Commission (UCC), said effective legal enforcement remains challenging because most major platforms, such as Meta, Google, and X, are neither based nor registered on the African continent. This creates significant jurisdictional gaps that hinder thorough investigations and accountability.

Several recommendations emerged from the discussion:

  1. Platforms must hire and train local content moderators with linguistic and cultural expertise across African contexts.
  2. Governments must shift from reactive legislation to forward-looking, preventive frameworks. This includes mandating that platforms provide transparency on their algorithmic moderation and establishing a local physical presence to facilitate legal accountability.
  3. Civil society and policymakers need to deepen their understanding of how algorithmic systems work in order to effectively monitor and govern them.
  4. Fund women’s rights organisations to continue to provide survivor support, document abuse, advocate for policy reform, and hold both governments and tech companies accountable in the fight against TFGBV.
  5. Strengthen the ability of users to recognise, respond to, and recover from online harm.

Outpaced by Its Own Ambition: Can Kenya Bridge Its AI Regulation Gap?

By Raylenne Kambua |

The paradox at the heart of Kenya’s Artificial Intelligence (AI) moment is that the country is sprinting ahead in AI adoption while grappling with a shrinking space for the very digital voices that AI empowers.

According to the Digital Global Update Report, Kenya recorded the world’s highest usage rate of AI tools in 2025, with 42.1% of internet users aged 16 and above reporting active use of AI-powered technologies. This level of usage indicates that AI is increasingly being woven into the daily life of Kenyans.

However, the Navigating the Implications of AI on Digital Democracy in Kenya report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) highlights that while AI empowers citizens, it also enables unprecedented surveillance and manipulation.

A Nation Leading the Way in AI Adoption

Kenya has made significant investments in digital services, innovation hubs, and connectivity under the National Digital Master Plan 2022–2032.

These developments are also transforming how citizens interact with the government. Tools such as the Office of the Data Protection Commissioner’s Linda Data chatbot and platforms such as Sauti ya Bajeti have expanded access to rights information and budget tracking.

Yet even as AI delivered clear benefits, it also revealed its dual nature, most visibly during the 2024 #RejectFinanceBill protests, when Gen Z protesters mobilised through AI-generated infographics, satire, and short-form videos. At the height of the protests on June 25, a nationwide internet disruption occurred despite assurances from the Communications Authority that access would not be restricted. The disruption was confirmed by network monitors such as Cloudflare and NetBlocks, exposing the fragility of internet freedom in Kenya.

Civil society condemned the internet shutdown as a violation of rights, while telecoms Safaricom and Airtel attributed it to outages in their undersea cable. In the aftermath, reports of abductions and enforced disappearances of digital activists escalated, with the Kenya National Commission on Human Rights documenting at least 82 cases between June and December 2024.

Kenya’s AI Policy Landscape

The launch of the Kenya National AI Strategy 2025–2030 in March 2025 signalled the country’s ambition to position itself as Africa’s leading AI innovation hub. The strategy prioritises governance, ethics, investment, digital infrastructure, data ecosystem development, and support for AI research and innovation.

Kenya has also strengthened its international profile through participation in the United Nations High-Level Advisory Body on AI, membership of the International Network of AI Safety Institutes, and a leadership role in the World Summit on the Information Society (WSIS+20) process.

At the national level, initiatives such as Digital Platforms Kenya (DigiKen) and the Kenya Bureau of Standards’ draft AI Code of Practice reflect growing momentum toward operationalising AI governance and skills building. The government is also developing an AI and Emerging Technologies Policy and a Data Governance Policy, both of which are expected to be in place by July 2026.

However, the gap between ambition and readiness remains wide. Kenya ranks 93rd in the 2025 Government AI Readiness Index, due to persistent weaknesses in infrastructure, implementation, and institutional capacity.

Moreover, Kenya’s legal framework for AI remains fragmented and incomplete. There is currently no standalone AI law in force, although a controversial Artificial Intelligence Bill, 2026, which has raised significant concerns about over-regulation and censorship, is under discussion. Additionally, regulation relies on broader laws such as the Data Protection Act 2019 and the Computer Misuse and Cybercrimes Act 2018, which were not designed to address AI-specific risks such as deepfakes, automated decision-making, algorithmic discrimination, or synthetic disinformation.

As highlighted in the CIPESA report, critical gaps remain in the use of AI. These include the absence of mandatory algorithmic impact assessments, weak safeguards against AI-driven surveillance such as facial recognition, and scant measures to address AI-generated electoral misinformation. Furthermore, regulatory authorities lack sufficient capabilities to audit and monitor sophisticated AI systems, and there are no clear licensing or accountability frameworks for AI creators and deployers.

“Without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities.” (Navigating the Implications of AI on Digital Democracy in Kenya, 2025)

The Way Ahead: AI Governance Focused on Human Rights

The CIPESA report outlines a human rights–centred approach to AI governance that is built on the following key principles:

  1. Life-Centred and Human-Centred Design and Accountability: AI should support and not replace human judgment, with strong oversight to ensure transparency and accountability.
  2. Equity and Fairness: Design AI to prevent bias and expand inclusive access, especially for underrepresented groups.
  3. Transparency and Trust: Ensure AI systems are explainable, well-documented, and open to public scrutiny and challenge.
  4. Safety, Security and Resilience: Build resilient systems with ongoing risk assessments and strong protections against misuse.
  5. International Collaboration and Ethical AI Development: Advance ethical AI through international collaboration while upholding constitutional values and human oversight.
  6. Environmental Sustainability: Align AI development with climate resilience and sustainable resource use.
  7. Inclusive Participation and Cultural Relevance: Reflect local diversity and involve marginalised communities in AI design.
  8. Robust Governance and Adaptive Regulation: Maintain flexible, responsive regulation that keeps pace with technological change.

The report calls for a coordinated, multi-stakeholder approach to AI governance. It recommends that:

  • The government should enact a comprehensive AI law aligned with constitutional and international human rights standards and establish a legally mandated National AI Advisory Council with inclusive representation and strong enforcement powers. It should also introduce clear prohibitions on high-risk practices such as real-time biometric surveillance without judicial oversight.
  • Civil society and the media should strengthen public awareness, promote accountability, and counter AI-driven disinformation.
  • Private sector actors should uphold transparency, fairness, and ethical standards across AI systems, including fair labour practices. Labour protections must be guaranteed for gig workers and data annotators within the AI value chain.
  • Academia and research institutions should continue generating evidence that can guide context-specific policy and regulation.
  • Across all stakeholders, digital literacy must be expanded, especially in underserved and rural communities, so that citizens can understand and challenge AI systems that affect them.

With the ongoing legislative processes on AI, this is a pivotal time for Kenya, as it has the momentum and the attention of the world. But momentum without action will not suffice. The country cannot afford slow, fragmented debates while the technology advances rapidly. Additionally, Kenya must strike a careful balance between regulation and innovation, as overly restrictive rules could limit access, slow local innovation, and lock the country out of AI’s economic and social benefits. The goal should be a flexible, forward-looking framework that protects rights while still enabling growth and opportunity.

Read the full report, Navigating the Implications of AI on Digital Democracy in Kenya.

Tanzania’s Digital Rights Record Faces Fresh Scrutiny at the UPR

By Ainembabazi Patricia |

In November 2026, Tanzania will be up for its fourth-cycle review under the Universal Periodic Review (UPR) mechanism of the United Nations Human Rights Council (HRC). Ahead of the review, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), the Pan African Lawyers Union (PALU), and JamiiAfrica made a joint stakeholder submission on the state of digital rights in Tanzania, including expression, access to information, assembly and association, privacy, data protection, and gender equality in digital spaces.

The submission recognises the positive steps Tanzania has taken to advance its digital ecosystem since its previous assessment in 2021. Internet connectivity has expanded significantly, and the country has adopted important digital governance measures. These include the enactment of the Personal Data Protection Act, 2022, the establishment of the Personal Data Protection Commission, reforms to the Media Services Act, and policy initiatives such as the Digital Economy Strategic Framework 2024–2034 and emerging AI governance frameworks.

However, the report finds that these gains have not translated into a freer or safer digital civic space in the country. Repressive laws and regulations continue to be used to restrict online expression, limit publications, silence dissent, and entrench censorship. The Electronic and Postal Communications (Online Content) Regulations, as amended in 2020 and 2025, still require licensing for online media service providers and grant the Tanzania Communications Regulatory Authority (TCRA) broad powers to suspend or revoke licences. Vague provisions in the Cybercrimes Act and related laws also continue to criminalise legitimate expression, creating a chilling effect for journalists, bloggers, activists, and ordinary users.

According to the CIPESA, PALU, and JamiiAfrica report, several incidents have narrowed the environment for the exercise of online freedoms. Key media and engagement platforms, including Clubhouse, X (formerly Twitter), and JamiiForums, were blocked between 2023 and 2025. In 2024, Tanzania’s media space was marked by intensified restrictions: TCRA suspended Mwananchi Communications’ online platforms for 30 days on October 2, 2024, against a backdrop of broader restrictions on unregistered Virtual Private Networks (VPNs) introduced in October 2023.

Particularly concerning were the restrictions imposed during the 2025 general elections. The report notes that Tanzania implemented a nationwide internet shutdown from October 29 to November 3, 2025, cutting off access to major social media and communication platforms. This disrupted the ability of citizens, journalists, and observers to share information, document events, and participate in public debate during a critical democratic moment.

While Tanzania in 2022 enacted the Personal Data Protection Act, which came into force in 2023, its implementation is weakened by restrictions on anonymity and VPN usage, as well as heightened surveillance. Pronouncements by government officials against sharing certain election-related content have fostered fear and self-censorship. For many Tanzanians, especially critics of government and civic actors, legal protections for personal data remain undermined by surveillance-related risks.

The submission also draws attention to the growing threat of Technology-Facilitated Gender-Based Violence (TFGBV). Women journalists, politicians, activists, and other public-facing women are increasingly subjected to online harassment, sexualised abuse, doxxing, reputational attacks, and coordinated trolling. These harms were particularly acute around the 2025 elections, when gendered disinformation and intimidation converged with broader restrictions on online speech.

To address these concerns, the report recommends that Tanzania repeal or amend vague and overly broad laws, including the Cybercrimes Act and the Online Content Regulations. It calls for compliance with international human rights standards, discouraging restrictions such as internet shutdowns and platform blocking. It also calls for judicial oversight over surveillance, greater transparency in state requests to technology companies, stronger support for the Personal Data Protection Commission, investment in digital rights literacy, and explicit protections against TFGBV.

Read the full submission here.

CIPESA Condemns Zambia’s Cancellation of RightsCon 2026

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) notes with deep concern the Government of Zambia’s decision to postpone RightsCon 2026, which was scheduled to take place in Lusaka next week. The postponement was confirmed by the organisers on April 29, 2026. Civic convenings of this nature thrive precisely because they create a safe space for diverse, sometimes uncomfortable, conversations about rights, technology, and power. Restricting that space undermines the principles of openness, dialogue, and democratic engagement on the continent.

The information provided by the Zambian government suggests that the halting of RightsCon was not a necessary and proportionate measure. It has caused undue financial losses and disrupted the plans of thousands of national and international human rights actors and the local tourism, travel, and conferencing sector, while also denting Zambia’s governance credentials and international standing.

CIPESA has joined over 130 organisations from across the world in expressing concern over the government’s decision, which raises questions about transparency, civic space, and commitment to inclusive global digital governance.

The cancellation of RightsCon 2026 escalates an ongoing crisis of democratic regression and the rise of digital authoritarianism on the continent.

In a related development, the World Press Freedom Day (WPFD) Global Conference, originally scheduled to take place in Lusaka ahead of RightsCon, has also undergone significant changes. UNESCO has announced that the conference will now be held online, while the UNESCO/Guillermo Cano World Press Freedom Prize ceremony will be relocated to the UNESCO Headquarters in Paris, France, at a later date. These developments effectively delist Zambia as the host of this year’s WPFD, although a commemorative event remains scheduled for May 4, 2026.

African Governments are Using “Smart City” Systems to Monitor Dissent and Consolidate State Control

By CIVICUS |

CIVICUS discusses the spread of AI-powered surveillance in Africa with Wairagala Wakabi, executive director of the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) and co-editor of Smart City Surveillance in Africa: Mapping Chinese AI Surveillance Across 11 Countries, the latest report by the African Digital Rights Network (ADRN) and the Institute of Development Studies (IDS).

At least 11 African governments have spent over US$2 billion on Chinese-built surveillance infrastructure that uses AI-powered cameras, biometric data collection and facial recognition to monitor public spaces. Marketed as ‘smart city’ solutions to reduce crime and manage urban growth, these systems have been rolled out with little regulation and no independent evidence of their effectiveness. This technology is instead being used to monitor activists, track protesters and silence dissent, with a chilling effect on freedoms of assembly and expression.

How widespread is AI-powered surveillance in Africa?

Under the guise of reducing crime and fighting terrorism, at least 11 governments have invested over US$2 billion in AI-powered ‘smart city’ surveillance infrastructure: Algeria, Egypt, Kenya, Mauritius, Mozambique, Nigeria, Rwanda, Senegal, Uganda, Zambia and Zimbabwe.

Governments are installing thousands of CCTV cameras linked to central command centres, paired with tools such as automatic number-plate recognition, biometric ID systems and facial recognition to track people and vehicles. The largest known investments are in Nigeria (over US$470 million), Mauritius (US$456 million) and Kenya (US$219 million), though the real total is likely much higher, since surveillance spending is often secret and the report covers only 11 of Africa’s 55 countries.

Despite being presented as tools for crime prevention, counter-terrorism, modernisation and urban management, these are not targeted security measures. They represent a broader shift toward continuous, population-level monitoring of public spaces, rolled out over the past five to ten years almost always without clear legal limits or public debate.

Are these systems achieving their stated purpose?

No, there is no compelling evidence that they have done so in any of the countries studied. Instead, the data points to a pattern of use that raises serious human rights concerns.

In Uganda and Zimbabwe, AI-powered surveillance, including facial recognition, is being used to suppress dissent rather than ensure public safety. Activists, critics of the government, opposition leaders and protesters are identified and monitored through these systems, even after protests have ended. In Mozambique, smart CCTV systems have reportedly been installed in areas of strong political opposition, suggesting targeted rather than neutral surveillance.

In Senegal and Zambia, countries with relatively low terrorism threats, governments have still invested heavily, which calls into question the stated security rationale.

Across the countries studied, the scale of surveillance far exceeds any actual or perceived security threat, and the infrastructure is consistently being used to monitor dissent and consolidate state control rather than address genuine public safety needs.

Who’s supplying this technology?

While firms from Israel, South Korea and the USA supply surveillance technologies, Chinese companies are the primary suppliers and financiers. They typically offer end-to-end ‘smart city’ packages that include cameras, software platforms, data analytics systems, training and ongoing technical support. Many projects are backed by loans from Chinese state-linked banks, which makes them financially accessible in the short term but creates long-term dependencies on external vendors for maintenance, system management and upgrades.

This model undermines transparency. Procurement processes are opaque and civil society, the public and oversight institutions including parliaments rarely have information about how these systems operate, how data is stored or who has access to it. That lack of accountability is what makes abuse not just possible, but hard to detect or challenge.

What impact is this having on civic space?

This large-scale surveillance of public spaces is not legal, necessary or proportionate to the legitimate aim of providing security. Recording, analysing and retaining facial images of people in public without their consent interferes with their right to privacy and, over time, their willingness to move, assemble and speak freely.

The most immediate consequence is a chilling effect, particularly where civic space is already restricted. Knowing they can be identified and tracked, activists and journalists are less willing to attend protests for fear of later arrest or reprisals, and end up self-censoring. Civil society organisations also report heightened anxiety about the risks for their members and partners.

What should governments and civil society do?

None of the 11 countries studied have a legal framework capable of balancing the state’s security needs with its commitments to protect fundamental human rights. That must change. Governments must adopt clear regulations on surveillance, including restrictions on facial recognition and other AI tools, require independent human rights impact assessments before introducing new systems, make procurement and deployment processes transparent and establish strong oversight mechanisms, including judicial and parliamentary scrutiny, to prevent abuse.

Civil society should continue documenting abuses, raising public awareness and advocating for accountability, while also supporting affected people and communities through digital security support and legal assistance.

Technology-exporting states and donors must enforce stricter controls and safeguards on the export and financing of these tools, support rights-based approaches to digital governance and help fund independent monitoring and advocacy across Africa.

Without urgent action, these systems will continue to expand, and the rights of people across Africa will continue to shrink.

CIVICUS interviews a wide range of civil society activists, experts and leaders to gather diverse perspectives on civil society action and current issues for publication on its CIVICUS Lens platform. The views expressed in interviews are the interviewees’ and do not necessarily reflect those of CIVICUS. Publication does not imply endorsement of interviewees or the organisations they represent.

This article was first published on the CIVICUS Lens website on April 7, 2026.