The G20 Should Challenge the Power Dynamics in Digital Public Infrastructure

Juliet Nanfuka |

Data plays a crucial role in T20 discussions at the G20, influencing online interaction and civic engagement. The G20 should use its influence to create a multi-stakeholder agenda for Digital Public Infrastructure design.

Data is at the heart of T20 discussions around the G20, as it informs the architecture of online interaction, civic participation (and exclusion) and the governance of digital society. As such, it is also central to digital public infrastructure (DPI), serving as a foundational requirement and an enabler of new data generation and data mobility. Data drives the three key pillars of DPI – digital identification, digital payments and data exchange – in addition to other emerging features such as geospatial data and data aggregation. However, the expanding role of DPI raises questions about its alignment with constitutional guarantees, data protection frameworks and the lived realities of end users across Africa.

In 2023, India’s G20 presidency laid the foundation for discourse on DPI with great precision. A year later, the 2024 G20 Rio de Janeiro Leaders’ Declaration acknowledged ‘the contribution of digital public infrastructure to an equitable digital transformation’. It went on to note ‘the transformative power of digital technologies to bridge existing divides and empower societies and individuals including all women and girls and people in vulnerable situations’.

Consequently, DPI has been positioned as a necessary tool for international trade facilitation and industrialisation in developing countries. In Africa, this momentum has been supported by strategies such as the AU’s Digital Transformation Strategy for Africa (2020–2030), the African Continental Free Trade Area (AfCFTA) and the 2024 adoption of the Continental AI Strategy. Various countries across the continent have integrated DPI into their national strategies.

The pace of DPI integration is mirrored by growing financial investment. Examples include the World Bank’s $200 million Ghana Digital Acceleration Project in 2022 to expand broadband access and strengthen digital innovation ecosystems. In June 2025, the AfCFTA Adjustment Fund Credit Facility committed $10 million to support private sector adaptation to AfCFTA frameworks, with initial commitments to Telecel Global Services to enhance connectivity and regional integration. The company provides wholesale voice and SMS services and enterprise connectivity solutions to more than 250 telecom operators across Africa and globally.

While the expansion of DPI is often framed as a progressive step, it also carries significant governance trade-offs. In countries with weak democratic safeguards, DPI expansion heightens the risk of state overreach, mass surveillance and reduced civic freedoms, making it essential to set clear limits on state access to citizens’ data to safeguard participation and accountability. Concerns over data sovereignty also loom.

Other T20 commentaries have stressed the urgent need for multi-stakeholder engagement to align DPI with the realities of developing countries. Without this alignment, DPI could widen existing regulatory gaps, compromising civic rights, consumer protection, fraud prevention and privacy. Meanwhile, the current wave of DPI design could exclude smaller economies that lack the capacity to engage in complex cross-border arrangements, such as those established between India’s Unified Payments Interface and Singapore’s PayNow. However, efforts such as the East African Community’s Cross-Border Payment System Masterplan, aimed at inclusive, secure, efficient and interoperable cross-border payments in the region, are underway.

If DPI is deployed without further interrogation, especially within the contexts of lower-income and developing countries that are often still navigating authoritarian systems, there is a risk of introducing yet another form or layer of digital exclusion from the global ecosystem. This could amplify existing national exclusions emerging from lack of access to the basics promised by DPI, such as national identity documents as keys to financial inclusion or access to basic services and civic rights.

When governments replace human interaction with automated systems, they risk ignoring the real-life experiences and needs of the people who use – or could use – DPI. While DPI is being positioned as a solution to the challenges many developing countries face, it is important to keep in mind that infrastructure is not neutral: its built-in biases, risks and design choices will ultimately impact citizens. For the real impact of DPI to be realised, the G20 must address concerns on:

  • The power affordances embedded in DPI design. The architecture of DPI prioritises the interests of those who design and fund it. The G20 should require that DPI initiatives undergo power mapping to identify who holds decision-making authority, how data flows are controlled and which actors stand to benefit or be marginalised by the design and deployment of DPI.
  • The institutionalisation of regulatory sandboxing. Regulatory sandboxes offer a controlled, transparent environment where DPI tools and policies can be tested for fairness, legality, inclusivity and public interest alignment before full-scale implementation. The G20 should promote the use of regulatory sandboxes as a mechanism to scrutinise DPI systems and their governance frameworks.
  • The strengthening of multi-stakeholder inclusion. DPI needs to be built with the participation of more stakeholders – including civil society, private sector actors, academia and marginalised communities – in decision-making. The G20 should use its convening power to set the multi-stakeholder agenda in the design of DPI interventions.
  • The safeguarding of data sovereignty. African countries developing data governance frameworks need to balance sovereignty with interoperability and prevent dependency on foreign-controlled systems.
  • The enhancement of public awareness interventions. Despite significant DPI developments, many citizens remain unaware of their implications. The media plays a critical role in bridging this gap, and the G20 should not overlook its role: there should be more integration with media partners in furthering public awareness of DPI, its functions and consequences.

This commentary was first published on the T20 website on October 06, 2025.

FIFAfrica25 Invites YOU to “Be The Experience”!

FIFAfrica25 |

This year, we invite participants of the Forum on Internet Freedom in Africa (FIFAfrica25) to “Be the experience!” The Forum will encourage attendees, onsite or remote, to engage in various interactions that bring digital rights issues to life. These experiences aim to break down barriers between complex digital rights policy concepts and real-world lived experiences. Ultimately, whether you are a policymaker, activist, journalist, academic, technologist, or artist, FIFAfrica25 will have a space for you to contribute.

Here is what we have lined up: 

  • An online community of attendees already meeting and engaging with each other on various topics. Be sure to register on the event platform to join in.
  • An immersive exhibition where various organisations and individuals will share their work and artworks.
  • A biker doing a round trip across 10 countries (more details below) to advance the call for the #RoadToDigitalSafety.
  • A Run for #InternetFreedomAfrica that aims to bring together participants to jog or walk in solidarity with the call for a free, fair and open internet. More details below.

“Be The Experience” and Win!

We have some goodies lined up to reward those who live up to FIFAfrica’s “Be The Experience” spirit – whether through vibrant engagement that earns you high scores on the event leaderboard, sharing compelling posts online and tagging us, or active engagement in sessions and with the different exhibitors at the Forum. Use the hashtags #InternetFreedomAfrica and #FIFAfrica25 to join a vibrant community working to shape a more open, inclusive, and rights-respecting digital future for the continent. Be sure to also follow CIPESA (@cipesaug) on X, Facebook and LinkedIn.

The Journey To FIFAfrica25 Has Already Begun

A week ago, Andrew Gole set off on an extraordinary solo motorbike journey that will span over 13,000 km across 10 African countries. His mission is to ride from Uganda all the way to Windhoek, Namibia – arriving just in time for the Forum on Internet Freedom in Africa (FIFAfrica25), where he will also be part of the Digital Security Hub. Here are some pictures of Gole at the Kenya–Uganda border alongside members of the bikers club that accompanied him from Kampala to the border.

Andrew Gole set off from Kampala, Uganda on September 12, 2025 and, as of today, has traversed five of the ten countries he is expected to journey through on his #RoadToDigitalSafety.

The Run for #InternetFreedomAfrica Is Heading to Windhoek


We are taking the movement for digital rights beyond the conference halls and onto the streets. On September 24, 2025, join a community of attendees and everyday internet users for a run and walk that celebrates our collective call for a free, open and secure internet across Africa.

The run is set to coincide with the arrival of Andrew Gole, who is riding from Uganda to Namibia. Be part of the run – and of several other morning runs that will be part of the Forum (look out for updates on the event platform). Whether you’re jogging, walking, or cheering from the sidelines, the Run for Internet Freedom is a moment to be part of a movement that builds digital resilience and digital inclusion and pushes back against digital repression.

More details will be shared about the run soon.

Africa’s Digital Dilemma: Platform Regulation Vs Internet Freedom

By Brian Byaruhanga |

Imagine waking up to find Facebook and Instagram inaccessible on your phone – not due to a network disruption, but because the platforms pulled their services out of your country. This scenario now looms over Nigeria, as Meta, the parent company of Facebook and Instagram, may shut down its services in the country over nearly USD 290 million in regulatory fines. The fines stem from allegations of anti-competitive practices, data privacy violations, and unregulated advertising content contrary to national laws. Nigerian authorities insist the company must comply with national laws, especially those governing user data and competition.

While this standoff centres on Nigeria, it signals a deeper struggle across Africa as governments assert digital sovereignty over global tech platforms. At the same time, millions of citizens rely on these platforms for communication, activism, access to health and education, economic livelihood, and self-expression. Striking a balance between regulation and rights in Africa’s evolving digital landscape has never been more urgent.

Meta versus Nigeria: Not Just One Country’s Battle

The tension between Meta and Nigeria is not new, nor is it unique. Similar dynamics have played out elsewhere on the continent:

  • Uganda (2021–Present): The Ugandan government blocked Facebook after the platform removed accounts linked to state actors during the 2021 elections. The block remains in place, effectively cutting off millions from a critical social media service unless they use Virtual Private Networks (VPNs) to circumvent the blockage.
  • Senegal (2023): TikTok was suspended amid political unrest, with authorities citing the app’s use for spreading misinformation and hate speech.
  • Ethiopia (2022): Facebook and Twitter were accused of amplifying hate speech during internal conflicts, prompting pressure for tighter oversight.
  • South Africa (2025): In a February 2025 report, the Competition Commission found that freedom of expression, plurality and diversity of media in South Africa had been severely infringed upon by platforms including Google and Facebook. 

The Double-Edged Sword of Regulation

Governments have legitimate reasons to demand transparency, data protection, and content moderation. Today, over two-thirds of African countries have legislation to protect personal data, and regulators are becoming more assertive. Nigeria’s Data Protection Commission (NDPC), created by a 2023 law, wasted little time in taking on a behemoth like Meta. Kenya also has an active Office of the Data Protection Commissioner, which has investigated and fined companies for data breaches. 

South Africa’s Information Regulator has been especially bold, issuing an enforcement notice to WhatsApp to comply with privacy standards after finding that the messaging service’s privacy policy in South Africa was different to that in the European Union. These actions send a clear message that privacy is a universal right, and Africans should not have weaker safeguards.

These regulatory institutions aim to ensure that citizens’ data is not exploited and that tech companies operate responsibly. Yet, in practice, digital regulation in Africa often walks a thin line between protecting rights and suppressing them.

While governments deserve scrutiny, platforms like Meta, TikTok, and X are not blameless. They are often slow to respond to harmful content that fuels violence or division. Their algorithms can amplify hate, misinformation, and sensationalism, while opaque data harvesting practices continue to exploit users. For instance, Branch, a San Francisco-based microlending app operating in Kenya and Nigeria, collects extensive personal data such as handset details, SMS logs, GPS data, call records, and contact lists in exchange for small loans, sometimes for as little as USD 2. This exploitative business model capitalises on vulnerable socio-economic conditions, effectively forcing users to trade sensitive personal data for minimal financial relief.

Many African regulators are pushing back by demanding localisation of data, adherence to national laws, and greater responsiveness, but platform threats to exit rather than comply raise concerns of digital neo-colonialism where African countries are expected to accept second-tier treatment or risk exclusion.

Beyond privacy, African regulators are increasingly addressing monopolistic behaviour and unfair practices by Big Tech as part of a broader push for digital sovereignty. Nigeria’s USD 290 million fine against Meta is not just about data protection and privacy, but also fair competition, consumer rights, and the country’s authority to govern its digital space. Countries like Nigeria, South Africa and Kenya are asserting their right to regulate digital platforms within their borders, challenging the long-standing dominance of global tech firms. The actions taken against Meta highlight the growing complexity of balancing national interests with the transnational influence of tech giants. 

While Meta’s threat to exit may signal its discomfort with what it views as restrictive regulation, it also exposes the real struggle governments face in asserting control over digital infrastructure that often operates beyond state jurisdiction. Similarly, in other parts of Africa, there are inquiries and new policies targeting the market power of tech giants. For instance, South Africa’s competition authorities have looked at requiring Google and Facebook to compensate news publishers (similar to the News Media and Digital Platforms Mandatory Bargaining Code in Australia). These moves reflect a broader global concern that a few platforms have too much control over markets and need checks to ensure fairness.

The Cost of Disruption: Economic and Social Impacts

When platforms go dark, the consequences are swift:

  • Businesses and entrepreneurs lose access to vital marketing and sales tools.
  • Creators and influencers face income loss and audience disconnection.
  • Activists and journalists find their voices limited, especially during politically charged periods.
  • Citizens are excluded from conversations and accessing information that could help them make critical decisions that affect their livelihoods.
  • Students and educators experience setbacks in remote learning, particularly in under-resourced communities that rely on social media or messaging apps to coordinate learning.
  • Access to public services is disrupted, from health services to government updates and emergency communications.

A 2023 GSMA report showed that more than 50% of small businesses in Sub-Saharan Africa use social media for customer engagement. In countries such as Nigeria, Uganda, Kenya or South Africa, Facebook and Instagram are lifelines. Losing access even temporarily sets back innovation, erodes trust, and impacts livelihoods.

A Call for Continental Solutions

Africa’s digital future must not hinge on the whims of a single government or a foreign tech giant. Both states and companies should be accountable for protecting rights in digital contexts, ensuring that development and digitisation do not trample on dignity and equity. This requires:

  • Harmonised continental policies on data protection, content regulation, and digital trade.
  • Regional norm-setting mechanisms (like the African Union) to enforce accountability for both governments and corporations.
  • Investments in African tech platforms to offer resilient alternatives.
  • Public education on digital rights to empower users against abuse from both state and corporate actors.
  • Pan-African contextualised business and human rights frameworks to ensure that digital governance aligns with both local realities and global human rights standards. This includes the operationalisation of the UN Guiding Principles on Business and Human Rights, following the examples of countries like Kenya, South Africa and Uganda, which have developed national action plans to embed human rights in corporate practice.

The stakes are high in the confrontation between Nigeria and Meta. If mismanaged, this tension could lead to fragmentation, exclusion, and setbacks for internet freedom, with ordinary users across the continent paying the price. To avoid this, the way forward must be grounded in the multistakeholder model of internet governance, in which governments regulate wisely and transparently, tech companies respect local laws and communities, and civil society is actively engaged and vigilant. This will contribute to a future where the internet is open, secure, and inclusive, and where innovation and justice thrive.

The Surveillance Footprint in Africa Threatens Privacy and Data Protection

By Edrine Wanyama 

Digital and physical surveillance – by states, by private companies that develop surveillance technology or supply it to governments, and by unscrupulous individuals globally and across Africa – is a major threat to the digital civic space and to the operations of civil society organisations (CSOs), human rights defenders (HRDs), activists, political opposition, government critics and the media. This highly intrusive technology, often facilitated by biometric data collection systems such as those used for processing national identification documents, voter cards and travel documents, mandatory SIM card registration, and the installation of CCTV cameras for “smart cities”, adversely impacts the digital civic space.

Given these developments, the Digital Rights Alliance Africa (DRAA), a network of CSOs, media, lawyers and tech specialists from across Africa that seeks to champion digital civic space and counter threats to digital rights on the continent, recently held a learning session on “Understanding Surveillance Trends, Threats and Challenges for Civil Society.” The Alliance was created by the International Center for Not-for-Profit Law (ICNL) and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) in response to rising digital authoritarianism. It currently has members from more than 12 countries, who collectively conduct research and advocacy and share experiences around navigating digital threats and influencing strategic digital policy reforms in line with the alliance’s outcome declaration.

The virtual learning session built capacity among the Alliance members to better understand digital surveillance and the related threats facing democracy actors. Discussions delved into the nature of surveillance, the regulatory environment, and strategies to navigate and counter surveillance risks and threats. The threats and risks include harassment, arbitrary arrests, persecution and prosecution on trumped up charges. 

While emphasising the need to understand emerging surveillance technologies, ecosystem and deployment tactics, Richard Ngamita, the Team Leader at Thraets, highlighted the huge investment (estimated at USD 1 billion annually) which African governments have made in acquiring surveillance technologies from China, Israel, the United States of America and Europe. Ngamita urged CSOs, HRDs and other actors to build digital security capacity to protect against illegal surveillance.

Victoria Ibezim-Ohaeri, the Executive Director of Spaces for Change, referencing the organisation’s report Proliferation of Dual-Use Surveillance Technologies in Nigeria: Deployment, Risks & Accountability, highlighted weak regulation and unaccountable practices by states that facilitate unlawful surveillance across the continent, and their implications for rights. According to the report,

“The greatest concern around surveillance technologies is their potential misuse for political repression and human rights abuses. Surveillance practices also undermine the citizens’ dignity, autonomy, and security, translating to significant reductions in citizens’ agency. Agency reductions are magnified by the state’s power to punish dissent. This creates a chilling effect as citizens self-censor or avoid public engagement for fear of being surveilled or punished. The citizens have little agency to challenge or resist the state’s surveillance because of low digital literacy, poverty and broader limitations in access to justice.”

Michaela Shapiro, the Global Engagement and Advocacy Officer at Article 19, United Kingdom, discussed the governing norms of surveillance globally while paying particular attention to the common gaps that need policy action at the country level in Africa. Recalling the intensification of digital and physical surveillance as part of state responses to curb the spread of Covid-19 in the absence of clear oversight mechanisms, Michaela emphasised the role of CSOs in advocating for data and privacy protection. 

To date, the leading data protection instrument on the continent, the African Union Convention on Cyber Security and Personal Data Protection, has only 16 ratifications out of 55 states, while only 36 states have enacted specific laws on privacy and data protection rights.

Surveillance in Africa generally poses a major threat to individuals’ data and privacy rights, since governments exercise wide access to data subjects’ information. National security justifications and loopholes in the laws are usually exploited to abuse and violate data rights. While regional and international standards exist, they are often overlooked, with governments taking measures not provided for by law, rendering them unlawful, arbitrary and disproportionate under human rights law.

By way of progressive actions, speakers made the following recommendations to state and non-state actors:

States and Governments 

  • Address surveillance and bolster personal data and privacy protections through adopting robust legal and regulatory frameworks and repealing restrictive digital laws and policies.
  • Promote and enhance transparency and accountability through the establishment of independent surveillance oversight boards.
  • Strictly regulate the use of surveillance technologies by law enforcement and intelligence agencies to ensure accountability.
  • Collaborate with other countries to develop harmonised privacy standards, anchored in established regional and international standards, to reach settled positions on cross-border surveillance controls.

Civil Society Organisations

  • Build and enhance capacities of HRDs and other players in data governance and accountability to equip them with knowledge to counter common data privacy threats by governments and corporate entities.
  • Push for ethical and responsible use of technology to prevent and minimise technology-related violations. 
  • Challenge all forms of unlawful surveillance practices, including through strategic litigation.

Tech Sector

  • Conduct regular audits and impact assessments to address potential privacy breaches and enhance accountability and transparency. 
  • Integrate privacy protections into their products and services, including data collection minimisation, and establish strong security measures to safeguard privacy.
  • Prioritise ethical considerations in the development and deployment of new technologies to guarantee strong protections against potential violations.

What Does Meta’s About-Turn on Content Moderation Bode for Africa?

By CIPESA Writer |

Meta’s recent decision to get rid of its third-party fact-checkers, starting in the United States, has sent shockwaves globally, raising significant concerns about free speech and the fight against disinformation and misinformation. The announcement was part of a raft of major policy changes announced on January 7, 2025 by Meta’s CEO Mark Zuckerberg that will affect its platforms Facebook, Instagram and Threads, used by three billion people worldwide. The changes include the introduction of the user-generated “Community Notes” model, the elimination of third-party fact-checkers, reduced content restrictions and enforcement, and the personalisation of civic or political content.

While the announcement makes no reference to Africa, the changes will trickle down to the continent. Meta’s decision is particularly concerning for Africa, given the continent’s linguistic and cultural diversity, limited digital and media information literacy, growing challenges of hate speech and election-related disinformation, lack of context-specific content moderation policies, and inadequate investment in local fact-checking initiatives.

Africa’s content moderation context and needs are also quite different from those of Europe or North America due to the predominant use of local languages that are often overlooked by automated fact-checking algorithms and content filters.

Notably, the justifications given by Meta are quite weak, as the new changes appear to undermine its own initiatives to promote free speech, particularly the work of its third-party fact-checking program and the Oversight Board, which it set up to help resolve some of the most difficult questions around freedom of expression online and information integrity. The decision also appears to be politically and economically motivated, as the company seeks to re-align itself with and appease the incoming Trump administration, which has been critical of fact-checking, and to enlist its assistance in pushing back against regulation of Meta’s activities outside the U.S.

The company also amended its Hateful Conduct policy on January 7, 2025, replacing the term “hate speech” with “hateful conduct” and eliminating previous thresholds for taking down hateful content, which will allow more hateful speech against specific groups. Further, whereas the company is moving its Trust and Safety and Content Moderation teams to Texas, it is yet to set up such robust teams for Africa.

Importance of Fact-Checking

Fact-checking plays a critical role in combating disinformation and misinformation and fostering informed public discourse. By verifying the accuracy of online content, fact-checkers help to identify unauthentic content and counter the spread of false narratives that can incite violence, undermine trust in institutions, or distort democratic processes.

Additionally, it promotes accountability and reduces the virality of misleading content, particularly during sensitive periods, such as elections, political unrest, public health crises, or conflict situations, where accurate and credible information is crucial for decision-making. Moreover, fact-checking fosters media literacy by encouraging audiences to critically evaluate information sources.

Fact-checking organisations such as PolitiFact have criticised the Meta CEO’s assertions that fact-checkers were “too politically biased” and had “destroyed more trust than they had created, especially in the U.S.”, noting that decisions and the power to take down content have rested squarely with Meta, with fact-checkers only providing independent review of posts. Meta’s assertions also undermine the work of independent media outlets and civil society, who have been accused by authoritarian regimes of being corrupt political actors.

However, fact-checking is not without its challenges and downsides. The process can inadvertently suppress free expression, especially in contexts where the line between disinformation and legitimate dissent is blurred. In Africa, where cultural and linguistic diversity is vast, and resources for local-language moderation are limited, fact-checking algorithms or teams may misinterpret context, leading to unjust content removal or amplification of bias. Furthermore, fact-checking initiatives can become tools for censorship if not governed transparently, particularly in authoritarian settings.

Despite these challenges, the benefits of fact-checking far outweigh its downsides. Instead of getting rid of fact-checking, Meta and other big tech companies should strengthen its implementation by providing enough resources to recruit and train fact-checkers and to provide them with psycho-social support.

Impact of the Decision for Africa
  1. Increase of Disinformation

Africa faces a distinct set of challenges that make effective content moderation and fact-checking particularly crucial. Disinformation and misinformation in Africa have had far-reaching consequences, from disrupting electoral processes and influencing the choice of candidates by unsuspecting voters to jeopardising public health. Disinformation during elections has fueled violence, while health-related misinformation during health crises, such as during the Covid-19 pandemic, endangered lives by undermining public health efforts. False claims about the virus, vaccines, or cures led to vaccine hesitancy, resistance to public health measures like mask mandates, and the proliferation of harmful treatments. This eroded trust in health institutions, slowed down pandemic response efforts, and contributed to preventable illnesses and deaths, disproportionately affecting vulnerable populations.

The absence of fact-checking exacerbates the existing challenges of context insensitivity, as automated systems and under-resourced moderation teams fail to address the nuances of African content. The user-driven Community Notes model, similar to that used on X, will still require experts’ input, especially in a region where many governments are authoritarian, media and information literacy and access to credible and reliable information are limited, and Meta’s platforms are primary channels for accessing independent news and information.

Research on the use of Community Notes on X has shown that the model has limited effectiveness in reducing the spread of disinformation, as it “might be too slow to intervene in the early (and most viral) stages of the diffusion”, which are the most critical. The move also undermines efforts by civil society and fact-checking organisations in the region, who have been working tirelessly to combat the spread of harmful content online.

  2. Political Manipulation and Increased Malign Influence

Dialing down moderation and oversight may empower political actors who seek to manipulate public opinion through disinformation campaigns, resulting in a surge of such activity. Given that social media has been instrumental in mobilising political movements across Africa, the lack of robust content moderation and fact-checking could hinder democratic processes and amplify extremist views and propaganda. Research has shown an apparent link between disinformation and political instability in Africa.

Unchecked false narratives not only mislead voters but also distort public discourse and diminish public trust in key governance and electoral institutions. Authoritarian regimes may also exploit them to undermine dissent. Moreover, the relaxation of content restrictions on sensitive and politically divisive topics such as immigration and gender could open the floodgates for targeted hate speech, incitement and discrimination, exacerbating gendered disinformation and ethnic and political tensions. Likewise, weak oversight may enable foreign and external actors to manipulate elections.

  3. Regulatory and Enforcement Gaps

Meta easing restrictions on the moderation of sensitive topics and reducing oversight of content could lead to an increase in harmful content on its platforms. Many African countries already have weak regulatory frameworks for harmful content and thus rely on companies like Meta to self-regulate effectively. Meta’s decision could spur some African governments to introduce new and more repressive laws to restrict certain types of content and hold platforms accountable for their actions. As our research has shown, such laws could be abused to suppress dissent and curtail online freedoms of expression, assembly and association, as well as access to information, creating an even more precarious environment.

  4. Limited Engagement with Local Actors

Meta’s decision to abandon fact-checking raises critical concerns for Africa, coming after the tech giant’s January 2023 decision to sever ties with Sama, its Nairobi-based contractor responsible for content moderation in East Africa. The Sama-operated hub announced its exit from content moderation services to focus on data annotation, citing the prevailing economic climate as a reason for streamlining operations. The Nairobi hub also faced legal and ethical challenges, including allegations of poor working conditions, inadequate mental health support for moderators exposed to graphic content, and unfair labour practices. These issues led to lawsuits against both Sama and Meta, intensifying scrutiny of their practices.

Meanwhile, fact-checking partnerships with local organisations have played a crucial role in addressing disinformation, and their elimination erodes trust in Meta’s commitment to advancing information integrity in the region. Meta has fact-checking arrangements with various organisations across 119 countries, including 26 in Africa, among them AFP, AFP – Coverage, AFP – Hub, Africa Check, Congo Check, Dubawa, Fatabyyano فتبين, Les Observateurs de France 24 and PesaCheck. In the aftermath of Meta’s decision to sever ties with its East African third-party content moderators, Sama let go of about 200 employees.

Opportunities Amidst Challenges

While Meta’s decision to abandon fact-checking is a concerning development, it also presents an opportunity for African stakeholders to utilise regional instruments, such as the African Charter on Human and Peoples’ Rights and the Declaration of Principles on Freedom of Expression and Access to Information in Africa, to assert thought leadership and demand better practices from platforms. Engaging with Meta’s regional leadership and building coalitions with other civil society actors can amplify advocacy about the continent’s longstanding digital rights and disinformation concerns and demands for more transparency and accountability.

Given the ongoing pushback against the recently announced changes, Meta should be receptive to dialogue and to recommendations to review and contextualise the new proposals. For Africa, Meta must address its shortcomings by urgently investing in and strengthening localised content moderation. It must reinvest in fact-checking partnerships, particularly with African organisations that understand local contexts. These partnerships are essential for addressing misinformation in local languages and underserved regions.

The company must also improve its automated content moderation tools, including by developing tools that can handle African cultures, languages and dialects; hire more qualified moderators with contextual knowledge; provide comprehensive training for them; and expand its partnerships with local stakeholders. Moreover, the company must ensure meaningful transparency and accountability, as many of its transparency and content enforcement reports lack critical information and disaggregated data about its actions in most African countries.

Lastly, both governments and civil society in Africa must invest in digital, media and information literacy, which is essential to empower users to think critically about and evaluate online content. Meta should partner with local organisations to promote digital literacy initiatives and develop educational campaigns tailored to different regions and languages. This will help build resilience against misinformation and foster a more informed digital citizenry.

In conclusion, it remains to be seen how Meta will implement the new changes in the U.S. and subsequently in Africa, and how the company will address the gaps left by fact-checkers and mitigate the risks and negative consequences of its decision. Notably, while there is widespread acknowledgement that content moderation systems on social media platforms are broken, efforts to promote and protect the rights to free expression and access to information online should be encouraged. However, these efforts should not come at the expense of user trust and safety, or of information integrity.