Applications are Open for a New Round of Africa Digital Rights Funding!

Announcement |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is calling for proposals to support digital rights work across Africa.

This call for proposals is the 10th under the CIPESA-run Africa Digital Rights Fund (ADRF) initiative that provides rapid response and flexible grants to organisations and networks to implement activities that promote digital rights and digital democracy, including advocacy, litigation, research, policy analysis, skills development, and movement building.

The current call is particularly interested in proposals for work related to:

  • Data governance including aspects of data localisation, cross-border data flows, biometric databases, and digital ID.
  • Digital resilience for human rights defenders, other activists and journalists.
  • Censorship and network disruptions.
  • Digital economy.
  • Digital inclusion, including aspects of accessibility for persons with disabilities.
  • Disinformation and related digital harms.
  • Technology-Facilitated Gender-Based Violence (TFGBV).
  • Platform accountability and content moderation.
  • Implications of Artificial Intelligence (AI).
  • Digital Public Infrastructure (DPI).

Grant amounts available range between USD 5,000 and USD 25,000 per applicant, depending on the need and scope of the proposed intervention. Cost-sharing is strongly encouraged, and the grant period should not exceed eight months. Applications will be accepted until November 17, 2025. 

Since its launch in April 2019, the ADRF has provided initiatives across Africa with more than USD 1 million and contributed to building capacity and traction for digital rights advocacy on the continent.

Application Guidelines

Geographical Coverage

The ADRF is open to organisations and networks based or operational in Africa, with interventions covering any country on the continent.

Size of Grants

Grant size shall range from USD 5,000 to USD 25,000. Cost sharing is strongly encouraged.

Eligible Activities

The activities that are eligible for funding are those that protect and advance digital rights and digital democracy. These may include but are not limited to research, advocacy, engagement in policy processes, litigation, digital literacy and digital security skills building. 

Duration

The grant funding shall be for a period not exceeding eight months.

Eligibility Requirements

  • The Fund is open to organisations and coalitions working to advance digital rights and digital democracy in Africa. This includes but is not limited to human rights defenders, media, activists, think tanks, legal aid groups, and tech hubs. Entities working on women’s rights, or with youth, refugees, persons with disabilities, and other marginalised groups are strongly encouraged to apply.
  • The initiatives to be funded will preferably have formal registration in an African country, but in some circumstances, organisations and coalitions that do not have formal registration may be considered. Such organisations need to show evidence that they are operational in a particular African country or countries.
  • The activities to be funded must be implemented in, or focused on, an African country or countries.

Ineligible Activities

  • The Fund shall not fund any activity that does not directly advance digital rights or digital democracy.
  • The Fund will not support travel to attend conferences or workshops, except where such travel is directly linked to an eligible activity.
  • Costs that have already been incurred are ineligible.
  • The Fund shall not provide scholarships.
  • The Fund shall not support equipment or asset acquisition.

Administration

The Fund is administered by CIPESA. A panel of internal and external experts will select beneficiaries based on the following criteria:

  • Whether the proposed intervention fits within the Fund’s digital rights priorities.
  • The relevance to the given context/country.
  • Commitment and experience of the applicant in advancing digital rights and digital democracy.
  • Potential impact of the intervention on digital rights and digital democracy policies or practices.

The deadline for submissions is Monday, November 17, 2025. The application form can be accessed here.

State of Internet Freedom In Africa Report

2025 State of Internet Freedom In Africa Report Documents the Implications of AI on Digital Democracy in Africa

By Juliet Nanfuka | 

The 2025 edition of the Forum on Internet Freedom in Africa (FIFAfrica25) concluded on a high note with the unveiling of the latest State of Internet Freedom in Africa (SIFA) report. Titled Navigating the Implications of AI on Digital Democracy in Africa, this landmark study unpacks how artificial intelligence is shaping, disrupting, and reimagining civic space and digital rights across the continent.

Drawing on research from 14 countries (Cameroon, Egypt, Ethiopia, Ghana, Kenya, Mozambique, Namibia, Nigeria, Rwanda, Senegal, South Africa, Tunisia, Uganda, and Zimbabwe), the report documents both the immense promise and the urgent perils of AI in Africa. It highlights AI’s potential to strengthen democratic participation, improve public services, and drive innovation, while also warning of its role in amplifying surveillance, disinformation, and exclusion. 

Using a qualitative approach, including literature review and key informant interviews, the report shows that AI is rapidly transforming how Africans interact with technology, yet AI also amplifies existing vulnerabilities, introduces new challenges that undermine fundamental freedoms, and deepens existing inequalities.

The report notes that the political environment is a crucial determinant of AI’s trajectory, with strong democracies generally enabling positive outcomes. Top performers in freedom and governance indices such as South Africa, Ghana, Namibia, and Senegal are more likely to set the standard for AI rollout in Africa. Conversely, countries with weaker democratic credentials such as Cameroon, Egypt, Ethiopia, and Rwanda risk constraining AI’s potential or deploying it to amplify digital authoritarianism and political repression.

Countries such as South Africa, Tunisia and Egypt, which have higher levels of internet access and technological development, higher Gross Domestic Product (GDP) per capita, and high Human Development Index (HDI) scores, are more likely to lead in AI. Meanwhile, countries with weaker digital infrastructure, including Cameroon, Mozambique and Uganda, face greater challenges and a higher risk of AI replicating and worsening existing divides.

Economic and developmental status likewise shapes the capacity for AI development and adoption.

Despite these challenges, the report documents that AI offers substantial value to the public sector by improving service delivery and enhancing transparency. Governments are leveraging AI tools for efficiency, such as the South African Revenue Service (SARS) AI Assistant for tax assessments and Nigeria’s Service-Wise GPT for streamlined governance document access. In Kenya, the Sauti ya Bajeti (Voice of the Budget) platform fosters fiscal transparency by allowing citizens to query and track government expenditures. Furthermore, countries like Tunisia and Uganda are using AI models within tax bodies to detect fraud, while Rwanda is deploying AI for judicial system improvements and identity management at borders.

The private sector and academic institutions are driving AI-inspired innovation, particularly in the areas of FinTech, AgriTech, and Natural Language Processing (NLP). For the latter, notable efforts to localise AI include Tunisia’s TUNBERT model for Tunisian Arabic, and Ghana’s Khaya, an open-source AI-powered translator tailored for local languages. Also in Ghana, DeafCanTalk is an AI-powered app that enables bidirectional translation between sign language and spoken language, enhancing accessibility for deaf users. Rwanda has integrated AI into healthcare using drone delivery systems for medical supplies, while Cameroon and Uganda use AI to assist farmers with pest identification.

However, despite increasing investment, such as the ongoing USD 720 million investment in compute power by Cassava Technologies across hubs in South Africa, Egypt, Kenya, Morocco, and Nigeria, Africa receives significantly less AI funding than its global counterparts.

Moreover, while AI is gaining traction across many sectors, the proliferation of AI-generated misinformation and disinformation is a pervasive and growing challenge that poses a critical threat to electoral integrity. During South Africa’s 2024 elections, deepfake videos were circulated to manipulate perceptions and endorse political entities. Similarly, during elections and protests in Kenya and Namibia, deepfake technology and automated campaigns were used to discredit opponents. 

The report also documents that governments are deploying AI-powered surveillance technologies, which has led to widespread privacy violations and a chilling effect on freedoms. For example, pro-government propagandists in Rwanda utilised Large Language Models (LLMs) to mass-produce synthetic messages on social media, simulating authentic support and suppressing dissenting voices. Meanwhile, algorithmic bias and exclusion are producing discriminatory outcomes, particularly against low-resource African languages. Also, AI-based content moderation is often ineffective because it lacks contextual understanding and fails to capture local nuance.

A key finding in the report is that across the continent, the pace of AI development far outstrips regulatory readiness. None of the 14 study countries has AI-specific legislation. Instead, fragmented laws on data protection, cybercrime, and copyright are stretched to cover AI, but remain inadequate. Data protection authorities are under-resourced, under-staffed, and often lack the technical expertise required to audit or govern complex AI systems.

Although many national AI strategies are emerging, they prioritise economic growth while neglecting human rights and accountability. This is also fuelled by policy processes that are often opaque and dominated by state actors, with limited multistakeholder participation.

The report stresses that without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities.

To alter this trajectory, the report calls for a human-centred AI governance framework built on inclusivity, transparency, and context.

It also makes recommendations, including enacting comprehensive AI-specific legislation, instituting mandatory human rights impact assessments, establishing empowered AI and data governance institutions, and promoting rights-based advocacy. Others are building technical capacity across governments, civil society and media, and developing policies that prioritise equity and human dignity alongside innovation.

AI offers Africa the opportunity to foster innovation, strengthen democracy, and drive sustainable development. This edition of the State of Internet Freedom in Africa report provides an evidence-based roadmap to ensure that Africa’s digital future remains open, inclusive, and rights-respecting. Find the report here.

Africa’s Digital Dilemma: Platform Regulation Vs Internet Freedom

By Brian Byaruhanga |

Imagine waking up to find Facebook and Instagram inaccessible on your phone – not due to a network disruption, but because the platforms pulled their services out of your country. This scenario now looms over Nigeria, as Meta, the parent company of Facebook and Instagram, may shut down its services in Nigeria over nearly USD 290 million in regulatory fines. The fines stem from allegations of anti-competitive practices, data privacy violations, and unregulated advertising content contrary to national laws. Nigerian authorities insist the company must comply with national laws, especially those governing user data and competition.

While this standoff centres on Nigeria, it signals a deeper struggle across Africa as governments assert digital sovereignty over global tech platforms. At the same time, millions of citizens rely on these platforms for communication, activism, access to health and education, economic livelihood, and self-expression. Striking a balance between regulation and rights in Africa’s evolving digital landscape has never been more urgent.

Meta versus Nigeria: Not Just One Country’s Battle

The tension between Meta and Nigeria is not new, nor is it unique. Similar dynamics have played out elsewhere on the continent:

  • Uganda (2021–Present): The Ugandan government blocked Facebook after the platform removed accounts linked to state actors during the 2021 elections. The block remains in place, effectively cutting off millions from a critical social media service unless they use Virtual Private Networks (VPNs) to circumvent it.
  • Senegal (2023): TikTok was suspended amid political unrest, with authorities citing the app’s use for spreading misinformation and hate speech.
  • Ethiopia (2022): Facebook and Twitter were accused of amplifying hate speech during internal conflicts, prompting pressure for tighter oversight.
  • South Africa (2025): In a February 2025 report, the Competition Commission found that freedom of expression, plurality and diversity of media in South Africa had been severely infringed upon by platforms including Google and Facebook. 

The Double-Edged Sword of Regulation

Governments have legitimate reasons to demand transparency, data protection, and content moderation. Today, over two-thirds of African countries have legislation to protect personal data, and regulators are becoming more assertive. Nigeria’s Data Protection Commission (NDPC), created by a 2023 law, wasted little time in taking on a behemoth like Meta. Kenya also has an active Office of the Data Protection Commissioner, which has investigated and fined companies for data breaches. 

South Africa’s Information Regulator has been especially bold, issuing an enforcement notice to WhatsApp to comply with privacy standards after finding that the messaging service’s privacy policy in South Africa differed from that in the European Union. These actions send a clear message that privacy is a universal right, and Africans should not have weaker safeguards.

These regulatory institutions aim to ensure that citizens’ data is not exploited and that tech companies operate responsibly. Yet, in practice, digital regulation in Africa often walks a thin line between protecting rights and suppressing them.

While governments deserve scrutiny, platforms like Meta, TikTok, and X are not blameless. They are often slow to respond to harmful content that fuels violence or division. Their algorithms can amplify hate, misinformation, and sensationalism, while opaque data harvesting practices continue to exploit users. For instance, Branch, a San Francisco-based microlending app operating in Kenya and Nigeria, collects extensive personal data such as handset details, SMS logs, GPS data, call records, and contact lists in exchange for small loans, sometimes for as little as USD 2. This exploitative business model capitalises on vulnerable socio-economic conditions, effectively forcing users to trade sensitive personal data for minimal financial relief.

Many African regulators are pushing back by demanding localisation of data, adherence to national laws, and greater responsiveness, but platform threats to exit rather than comply raise concerns of digital neo-colonialism where African countries are expected to accept second-tier treatment or risk exclusion.

Beyond privacy, African regulators are increasingly addressing monopolistic behaviour and unfair practices by Big Tech as part of a broader push for digital sovereignty. Nigeria’s USD 290 million fine against Meta is not just about data protection and privacy, but also fair competition, consumer rights, and the country’s authority to govern its digital space. Countries like Nigeria, South Africa and Kenya are asserting their right to regulate digital platforms within their borders, challenging the long-standing dominance of global tech firms. The actions taken against Meta highlight the growing complexity of balancing national interests with the transnational influence of tech giants. 

While Meta’s threat to exit may signal its discomfort with what it views as restrictive regulation, it also exposes the real struggle governments face in asserting control over digital infrastructure that often operates beyond state jurisdiction. Similarly, in other parts of Africa, there are inquiries and new policies targeting the market power of tech giants. For instance, South Africa’s competition authorities have looked at requiring Google and Facebook to compensate news publishers (similar to the News Media and Digital Platforms Mandatory Bargaining Code in Australia). These moves reflect a broader global concern that a few platforms have too much control over markets and need checks to ensure fairness.

The Cost of Disruption: Economic and Social Impacts

When platforms go dark, the consequences are swift:

  • Businesses and entrepreneurs lose access to vital marketing and sales tools.
  • Creators and influencers face income loss and audience disconnection.
  • Activists and journalists find their voices limited, especially during politically charged periods.
  • Citizens are excluded from conversations and from accessing information that could help them make critical decisions affecting their livelihoods.
  • Students and educators experience setbacks in remote learning, particularly in under-resourced communities that rely on social media or messaging apps to coordinate learning.
  • Access to public services is disrupted, from health services to government updates and emergency communications.

A 2023 GSMA report showed that more than 50% of small businesses in Sub-Saharan Africa use social media for customer engagement. In countries such as Nigeria, Uganda, Kenya or South Africa, Facebook and Instagram are lifelines. Losing access even temporarily sets back innovation, erodes trust, and impacts livelihoods.

A Call for Continental Solutions

Africa’s digital future must not hinge on the whims of a single government or a foreign tech giant. Both states and companies should be accountable for protecting rights in digital contexts, ensuring that development and digitisation do not trample on dignity and equity. This requires:

  • Harmonised continental policies on data protection, content regulation, and digital trade.
  • Regional norm-setting mechanisms (like the African Union) to enforce accountability for both governments and corporations.
  • Investments in African tech platforms to offer resilient alternatives.
  • Public education on digital rights to empower users against abuse from both state and corporate actors.
  • Pan-African contextualised business and human rights frameworks to ensure that digital governance aligns with both local realities and global human rights standards. This includes the operationalisation of the UN Guiding Principles on Business and Human Rights, following the examples of countries like Kenya, South Africa and Uganda, which have developed national action plans to embed human rights in corporate practice.

The stakes are high in the confrontation between Nigeria and Meta. If mismanaged, this tension could lead to fragmentation, exclusion, and setbacks for internet freedom, with ordinary users across the continent paying the price. To avoid this, the way forward must be grounded in the multistakeholder model of internet governance, in which governments regulate wisely and transparently, tech companies respect local laws and communities, and civil society remains actively engaged and vigilant. This will contribute to a future where the internet is open, secure, and inclusive, and where innovation and justice thrive.

Social Media 4 Peace: An Initiative to Tackle the Quagmire of Content Moderation

By Juliet Nanfuka |

Content moderation has emerged as a critical global concern in the digital age. In Africa, amid increasing digital penetration and digitalisation, problematic platform responses to harmful content, and retrogressive national legislation, the demand for robust and rights-respecting content moderation has reached a new level of urgency.

Online content platforms and governments are increasingly getting caught in a contentious struggle on how to address content moderation, especially as online hate speech and disinformation become more pervasive. These vices regularly seep into the real world, undermining human rights, social cohesion, democracy, and peace. They have also corroded public discourse and fragmented societies, with marginalised communities often bearing some of the gravest consequences.

The Social Media 4 Peace (SM4P) project run by the United Nations Educational, Scientific and Cultural Organization (UNESCO) was established to strengthen the resilience of societies to potentially harmful content spread online, in particular hate speech inciting violence, while protecting freedom of expression and enhancing the promotion of peace through digital technologies, notably social media. The project was piloted in four countries – Bosnia and Herzegovina, Colombia, Indonesia, and Kenya.

At the Forum on Internet Freedom in Africa (FIFAfrica) held in Tanzania in September 2023, UNESCO hosted a session where panellists interrogated the role of a multi-stakeholder coalition in addressing gaps in content moderation with a focus on Kenya. The session highlighted the importance of multi-stakeholder cooperation, accountability models, and safety by design to address online harmful content, particularly disinformation and hate speech in Africa. 

In March 2023, as a product of the SM4P, UNESCO in partnership with the National Cohesion and Integration Commission (NCIC) launched the National Coalition on Freedom of Expression and Content Moderation in Kenya. The formation of the coalition, whose membership has grown to include various governmental, civil society and private sector entities, is a testament to the need for content moderation efforts based on broader multi-stakeholder collaboration. 

As such, the coalition provided a learning model for the participants at FIFAfrica, who included legislators, regulators, civil society activists and policy makers, whose countries are grappling with establishing effective and rights-based content moderation mechanisms. The session explored good practices from the SM4P project in Kenya for possible replication of the model coalition across African countries. Discussions largely centred on issues around content moderation challenges and opportunities for addressing these issues in the region.

Online content moderation has presented a new regulatory challenge for governments and technology companies. Striking a balance between safeguarding freedom of expression and curtailing harmful and illegal content is particularly difficult, as such decisions have largely fallen within the remit of platforms, whose content moderation practices state actors have often criticised.

This has resulted in governments and platforms being at loggerheads, as was witnessed during the 2021 Uganda elections. Six days before the election, Meta blocked various accounts and removed content for what it termed “coordinated inauthentic behavior”. Most of the accounts affected were related to pro-ruling party narratives or had links to the ruling party. In response, the Ugandan government blocked social media before blocking access to the entire internet. Nigeria similarly suspended Twitter in June 2021 for deleting a post made from the president’s account.

Social media companies are taking various measures to curb such content, including by using artificial intelligence tools, employing human content moderators, collaborating with fact-checking organisations and trusted partner organisations that identify false and harmful content, and relying on user reports of harmful and illegal content. However, these measures by the platforms are often criticised as inadequate. 

On the other hand, some African governments have also been criticised for enacting laws and issuing directives that undermine citizens’ fundamental rights under the guise of combating harmful content.

Indeed, speaking at the 2023 edition of FIFAfrica, Felicia Anthonio, #KeepItOn Campaign Manager at Access Now, stated that weaknesses in content moderation are often cited by governments that shut down the internet. She noted that governments justify shutting down internet access as a tool to control harmful content amidst concerns of hate speech, disinformation, and incitement of violence. This was reaffirmed by Thobekile Matimbe from Paradigm Initiative, who noted that “content moderation is a delicate test of speech”, stressing that if content moderation is not balanced against freedom of expression and access to information, it would result in violations of fundamental human rights.

Beyond governments and social media companies, other stakeholders need to step up efforts to combat harmful content and advocate for balanced content moderation policies. For instance, speakers at the FIFAfrica session were unanimous that civil society organisations, academic institutions, and media regulators need to enhance digital media literacy and increase collaborative efforts. Further, they stressed the need for these stakeholders to regularly conduct research to underpin evidence-based advocacy, and to support the development of human rights-centred strategies in content moderation.