State of Internet Freedom In Africa Report

2025 State of Internet Freedom In Africa Report Documents the Implications of AI on Digital Democracy in Africa

By Juliet Nanfuka

The 2025 edition of the Forum on Internet Freedom in Africa (FIFAfrica25) concluded on a high note with the unveiling of the latest State of Internet Freedom in Africa (SIFA) report. Titled Navigating the Implications of AI on Digital Democracy in Africa, this landmark study unpacks how artificial intelligence is shaping, disrupting, and reimagining civic space and digital rights across the continent.

Drawing on research from 14 countries (Cameroon, Egypt, Ethiopia, Ghana, Kenya, Mozambique, Namibia, Nigeria, Rwanda, Senegal, South Africa, Tunisia, Uganda, and Zimbabwe), the report documents both the immense promise and the urgent perils of AI in Africa. It highlights AI’s potential to strengthen democratic participation, improve public services, and drive innovation, while also warning of its role in amplifying surveillance, disinformation, and exclusion. 

Using a qualitative approach, including a literature review and key informant interviews, the report shows that AI is rapidly transforming how Africans interact with technology. Yet it also amplifies existing vulnerabilities, introduces new challenges that undermine fundamental freedoms, and deepens inequalities.

The report notes that the political environment is a crucial determinant of AI’s trajectory, with strong democracies generally enabling positive outcomes. Top performers in freedom and governance indices, such as South Africa, Ghana, Namibia, and Senegal, are more likely to set the standard for AI rollout in Africa. Conversely, countries with weaker democratic credentials, such as Cameroon, Egypt, Ethiopia, and Rwanda, risk constraining AI’s potential or deploying it to amplify digital authoritarianism and political repression.

Countries such as South Africa, Tunisia and Egypt, which have higher levels of internet access and technological development, higher Gross Domestic Product (GDP) per capita, and stronger scores on the Human Development Index (HDI), are more likely to lead in AI. Meanwhile, countries with weaker digital infrastructure, such as Cameroon, Mozambique and Uganda, face greater challenges and a higher risk of AI replicating and worsening existing divides.

In short, both a country’s political environment and its economic and developmental status shape its capacity for AI development and adoption.

Despite these challenges, the report documents that AI offers substantial value to the public sector by improving service delivery and enhancing transparency. Governments are leveraging AI tools for efficiency, such as the South African Revenue Service (SARS) AI Assistant for tax assessments and Nigeria’s Service-Wise GPT for streamlined access to governance documents. In Kenya, the Sauti ya Bajeti (Voice of the Budget) platform fosters fiscal transparency by allowing citizens to query and track government expenditure. Furthermore, countries like Tunisia and Uganda are using AI models within tax bodies to detect fraud, while Rwanda is deploying AI for judicial system improvements and identity management at borders.

The private sector and academic institutions are driving AI innovation, particularly in FinTech, AgriTech, and Natural Language Processing (NLP). On the NLP front, notable efforts to localise AI include Tunisia’s TUNBERT model for Tunisian Arabic and Ghana’s Khaya, an open-source AI-powered translator tailored for local languages. Also in Ghana, DeafCanTalk is an AI-powered app that enables bidirectional translation between sign language and spoken language, enhancing accessibility for deaf users. Rwanda has integrated AI into healthcare through drone delivery systems for medical supplies, while Cameroon and Uganda use AI to help farmers identify pests.

However, despite growing investment, such as Cassava Technologies’ ongoing USD 720 million investment in compute power across hubs in South Africa, Egypt, Kenya, Morocco, and Nigeria, Africa receives significantly less AI funding than its global counterparts.

Moreover, while AI is gaining traction across many sectors, the proliferation of AI-generated misinformation and disinformation is a pervasive and growing challenge that poses a critical threat to electoral integrity. During South Africa’s 2024 elections, deepfake videos were circulated to manipulate perceptions and endorse political entities. Similarly, during elections and protests in Kenya and Namibia, deepfake technology and automated campaigns were used to discredit opponents. 

The report also documents that governments are deploying AI-powered surveillance technologies, leading to widespread privacy violations and a chilling effect on freedoms. In Rwanda, for instance, pro-government propagandists utilised Large Language Models (LLMs) to mass-produce synthetic messages on social media, simulating authentic support and suppressing dissenting voices. Meanwhile, algorithmic bias and exclusion are producing discriminatory outcomes, particularly against low-resource African languages, and AI-based content moderation is often ineffective because it lacks contextual understanding and fails to capture local nuance.

A key finding in the report is that across the continent, the pace of AI development far outstrips regulatory readiness. None of the 14 study countries has AI-specific legislation. Instead, fragmented laws on data protection, cybercrime, and copyright are stretched to cover AI, but remain inadequate. Data protection authorities are under-resourced, under-staffed, and often lack the technical expertise required to audit or govern complex AI systems.

Although many national AI strategies are emerging, they prioritise economic growth while neglecting human rights and accountability. This is also fuelled by policy processes that are often opaque and dominated by state actors, with limited multistakeholder participation.

The report stresses that without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities. To change this trajectory, it calls for a human-centred AI governance framework built on inclusivity, transparency, and context.

It also makes recommendations, including enacting comprehensive AI-specific legislation, instituting mandatory human rights impact assessments, establishing empowered AI and data governance institutions, and promoting rights-based advocacy. Other recommendations include building technical capacity across government, civil society and the media, and developing policies that prioritise equity and human dignity alongside innovation.

AI offers Africa the opportunity to foster innovation, strengthen democracy, and drive sustainable development. This edition of the State of Internet Freedom in Africa report provides an evidence-based roadmap to ensure that Africa’s digital future remains open, inclusive, and rights-respecting. Find the report here.

Africa’s Digital Dilemma: Platform Regulation vs. Internet Freedom

By Brian Byaruhanga

Imagine waking up to find Facebook and Instagram inaccessible on your phone – not due to a network disruption, but because the platforms pulled their services out of your country. This scenario now looms over Nigeria, where Meta, the parent company of Facebook and Instagram, may shut down its services over nearly USD 290 million in regulatory fines. The fines stem from allegations of anti-competitive practices, data privacy violations, and unregulated advertising content contrary to national laws. Nigerian authorities insist the company must comply with those laws, especially provisions governing user data and competition.

While this standoff centres on Nigeria, it signals a deeper struggle across Africa as governments assert digital sovereignty over global tech platforms. At the same time, millions of citizens rely on these platforms for communication, activism, access to health and education, economic livelihood, and self-expression. Striking a balance between regulation and rights in Africa’s evolving digital landscape has never been more urgent.

Meta versus Nigeria: Not Just One Country’s Battle

The tension between Meta and Nigeria is not new, nor is it unique. Similar dynamics have played out elsewhere on the continent:

  • Uganda (2021–Present): The Ugandan government blocked Facebook after the platform removed accounts linked to state actors during the 2021 elections. The block remains in place, effectively cutting off millions from a critical social media service unless they use Virtual Private Networks (VPNs) to circumvent it.
  • Senegal (2023): TikTok was suspended amid political unrest, with authorities citing the app’s use for spreading misinformation and hate speech.
  • Ethiopia (2022): Facebook and Twitter were accused of amplifying hate speech during internal conflicts, prompting pressure for tighter oversight.
  • South Africa (2025): In a February 2025 report, the Competition Commission found that freedom of expression, plurality and diversity of media in South Africa had been severely infringed upon by platforms including Google and Facebook. 

The Double-Edged Sword of Regulation

Governments have legitimate reasons to demand transparency, data protection, and content moderation. Today, over two-thirds of African countries have legislation to protect personal data, and regulators are becoming more assertive. The Nigeria Data Protection Commission (NDPC), created by a 2023 law, wasted little time in taking on a behemoth like Meta. Kenya also has an active Office of the Data Protection Commissioner, which has investigated and fined companies for data breaches.

South Africa’s Information Regulator has been especially bold, issuing an enforcement notice to WhatsApp to comply with privacy standards after finding that the messaging service’s privacy policy in South Africa differed from that in the European Union. These actions send a clear message that privacy is a universal right and that Africans should not have weaker safeguards.

These regulatory institutions aim to ensure that citizens’ data is not exploited and that tech companies operate responsibly. Yet, in practice, digital regulation in Africa often walks a thin line between protecting rights and suppressing them.

While governments deserve scrutiny, platforms like Meta, TikTok, and X are not blameless. They are often slow to respond to harmful content that fuels violence or division. Their algorithms can amplify hate, misinformation, and sensationalism, while opaque data harvesting practices continue to exploit users. For instance, Branch, a San Francisco-based microlending app operating in Kenya and Nigeria, collects extensive personal data such as handset details, SMS logs, GPS data, call records, and contact lists in exchange for small loans, sometimes for as little as USD 2. This exploitative business model capitalises on vulnerable socio-economic conditions, effectively forcing users to trade sensitive personal data for minimal financial relief.

Many African regulators are pushing back by demanding localisation of data, adherence to national laws, and greater responsiveness, but platform threats to exit rather than comply raise concerns of digital neo-colonialism, where African countries are expected to accept second-tier treatment or risk exclusion.

Beyond privacy, African regulators are increasingly addressing monopolistic behaviour and unfair practices by Big Tech as part of a broader push for digital sovereignty. Nigeria’s USD 290 million fine against Meta is not just about data protection and privacy, but also fair competition, consumer rights, and the country’s authority to govern its digital space. Countries like Nigeria, South Africa and Kenya are asserting their right to regulate digital platforms within their borders, challenging the long-standing dominance of global tech firms. The actions taken against Meta highlight the growing complexity of balancing national interests with the transnational influence of tech giants. 

While Meta’s threat to exit may signal its discomfort with what it views as restrictive regulation, it also exposes the real struggle governments face in asserting control over digital infrastructure that often operates beyond state jurisdiction. Similarly, in other parts of Africa, there are inquiries and new policies targeting the market power of tech giants. For instance, South Africa’s competition authorities have considered requiring Google and Facebook to compensate news publishers (similar to Australia’s News Media and Digital Platforms Mandatory Bargaining Code). These moves reflect a broader global concern that a few platforms have too much control over markets and need checks to ensure fairness.

The Cost of Disruption: Economic and Social Impacts

When platforms go dark, the consequences are swift:

  • Businesses and entrepreneurs lose access to vital marketing and sales tools.
  • Creators and influencers face income loss and audience disconnection.
  • Activists and journalists find their voices limited, especially during politically charged periods.
  • Citizens are excluded from public conversations and from information that could inform critical decisions affecting their livelihoods.
  • Students and educators experience setbacks in remote learning, particularly in under-resourced communities that rely on social media or messaging apps to coordinate learning.
  • Access to public services is disrupted, from health services to government updates and emergency communications.

A 2023 GSMA report showed that more than 50% of small businesses in Sub-Saharan Africa use social media for customer engagement. In countries such as Nigeria, Uganda, Kenya and South Africa, Facebook and Instagram are lifelines. Losing access, even temporarily, sets back innovation, erodes trust, and harms livelihoods.

A Call for Continental Solutions

Africa’s digital future must not hinge on the whims of a single government or a foreign tech giant. Both states and companies should be accountable for protecting rights in digital contexts, ensuring that development and digitisation do not trample on dignity and equity. This requires:

  • Harmonised continental policies on data protection, content regulation, and digital trade.
  • Regional norm-setting mechanisms (like the African Union) to enforce accountability for both governments and corporations.
  • Investments in African tech platforms to offer resilient alternatives.
  • Public education on digital rights to empower users against abuse from both state and corporate actors.
  • Pan-African contextualised business and human rights frameworks to ensure that digital governance aligns with both local realities and global human rights standards. This includes the operationalisation of the UN Guiding Principles on Business and Human Rights, following the examples of countries like Kenya, South Africa and Uganda, which have developed national action plans to embed human rights in corporate practice.

The stakes are high in the confrontation between Nigeria and Meta. If mismanaged, this tension could lead to fragmentation, exclusion, and setbacks for internet freedom, with ordinary users across the continent paying the price. To avoid this, the way forward must be grounded in the multistakeholder model of internet governance, in which governments regulate wisely and transparently, tech companies respect local laws and communities, and civil society remains actively engaged and vigilant. This would contribute to a future where the internet is open, secure, and inclusive, and where innovation and justice thrive.