Social Media’s Role in Hate Speech: A Double-Edged Sword for South Sudan

By Ochaya Jackson & George Gumikiriza

The lead-up to and aftermath of the now-stalled December 2024 elections in South Sudan have highlighted the role of social media as a powerful tool for communication, civic engagement, and information sharing. Platforms such as Facebook, WhatsApp, X (formerly Twitter), and TikTok have connected people across the world’s youngest nation, enabling dialogue, amplifying marginalised voices, and spreading crucial information. However, alongside these benefits, social media has also become a breeding ground for hate speech, misinformation, and incitement to violence.

The Rise of Hate Speech on Social Media

From June to November 2024, DefyHateNow (recently renamed Digital Rights Frontlines – DRF) monitored incidents of hate speech in South Sudan, focusing on content created and shared via social media platforms. Of the 255 incidents recorded, Facebook accounted for 89.4%, with WhatsApp, X, and TikTok accounting for the remainder. The monitoring findings further indicate that 50.5% of online content contained misinformation or disinformation, while 39.9% was classified as hate speech.

Facebook is the most widely used social media platform in South Sudan, which explains why it hosts most of the illegal and harmful content. The platform’s popularity partly arises from its “free mode” feature, which allows MTN mobile subscribers in South Sudan to access Facebook and create and share content even when they do not have an internet data bundle; only viewing or uploading photos and videos requires data.

Social media’s accessibility and rapid reach make it easy for harmful content to spread, fueling ethnic and political tensions. Given South Sudan’s history of conflict, inflammatory online rhetoric can have real-world consequences, inciting violence, deepening divisions, and undermining peacebuilding efforts.

Why Does Hate Speech Spread So Easily?

As part of the project, DefyHateNow convened the country’s first symposium in commemoration of the International Day for Countering Hate Speech, as a platform for collective action to combat hate speech. The engagements identified several factors that contribute to the proliferation of hate speech and disinformation in South Sudan:

Ethnic and Political Divisions – Long-standing ethnic rivalries and political conflicts provide fertile ground for harmful narratives that further divide communities.

Lack of Digital Literacy – Many social media users lack the skills to critically assess the credibility of online content, making them more susceptible to misinformation.

Anonymity and Lack of Accountability – Many harmful posts are made under fake names or anonymous accounts, reducing the fear of repercussions.

Weak Regulatory Frameworks – South Sudan lacks robust policies to hold social media platforms accountable for harmful content.

Algorithmic Amplification – Social media algorithms prioritise engagement, often promoting divisive and inflammatory content because it generates more reactions and shares; a simplified sketch of this ranking logic follows this list.
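To make the amplification mechanism concrete, below is a minimal, hypothetical sketch in Python of an engagement-optimised ranking score. The weights and example posts are illustrative assumptions, not any platform’s actual formula:

```python
# Minimal sketch of engagement-optimised feed ranking.
# The 1/2/4 weights and the example posts are illustrative assumptions;
# no platform's real ranking formula is reproduced here.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int  # likes, angry reacts, etc.
    comments: int
    reshares: int

def engagement_score(post: Post) -> float:
    # Reshares weighted highest because they spread content furthest.
    return post.reactions + 2 * post.comments + 4 * post.reshares

feed = [
    Post("Community clean-up this Saturday", reactions=40, comments=5, reshares=2),
    Post("Inflammatory rumour about a rival group", reactions=90, comments=60, reshares=45),
]

# Ranking purely by engagement pushes the inflammatory post to the top,
# regardless of accuracy or its effect on social tensions.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):5.0f}  {post.text}")
```

Under this toy scoring, the inflammatory post (score 390) outranks the benign one (score 58), which is exactly the dynamic the monitoring findings describe.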

The Positive Side: Social Media for Good

Despite these challenges, social media remains a vital tool for positive change. Platforms have been used for:

Peacebuilding and Dialogue – Initiatives like #defyhatenow and DRF’s online campaigns promote counter-speech and encourage respectful conversations.

Fact-checking and Misinformation Prevention – Programmes like 211Check work to verify online information and educate communities about identifying false narratives.

Civic Engagement – Social media allows citizens to engage with governance, report human rights abuses, and access critical updates on national issues.

Curiosity – Disinformation awareness campaigns raise literacy and critical thinking among online audiences, enabling them to detect and counter disinformation.

To maximise the benefits of social media, DefyHateNow also conducted awareness campaigns through the publication of animations in print media, radio talk shows, and the dissemination of posters across South Sudan’s capital, Juba. The campaign messages reinforced the call for action against hate speech, misinformation and disinformation, and raised awareness about their dangers and how to identify them.

Ahead of the rescheduled elections slated for December 2026, collective effort from tech companies, policymakers, civil society, the media and individual users is required to address the challenges of hate speech and disinformation. By promoting digital literacy, implementing stronger regulations, and encouraging responsible social media use, South Sudan can harness the power of social media platforms for peace and progress.

DefyHateNow’s work was supported by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) in the context of the Africa Digital Rights Fund (ADRF).

Do you want to be part of the solution? Join Digital Rights Frontlines (DRF) in advocating for safer digital spaces. Stay informed, report harmful content, and contribute to a more inclusive and responsible online community.

For more information, visit www.digitalrights.ngo or contact us at comms@digitalrights.ngo

Digital Shadows: Deepfakes Used As Violence Against Women in Journalism and Politics During African Elections

By Risper Arose

In the ever-evolving digital landscape, technology has transformed how we communicate, access information, and engage with the world. But with these advances comes a darker side – one that disproportionately affects women in journalism and politics, particularly during pivotal moments such as elections. One such dark side is deepfakes – synthetic media generated by Artificial Intelligence (AI) that manipulates voices, faces, and actions – which have become a powerful tool for deception.

Media manipulation in itself is not new and has long influenced politics and relations, helping create and spread narratives. For instance, in the 20th century, photos were manually edited to manipulate public opinion during political repression campaigns. At the time, this was a slow and meticulous process requiring skilled labour. With the rise of digital technology, manipulating media has become much easier and cheaper. This proliferation of deepfakes is further fuelled by the unprecedented power of the internet and social media platforms to rapidly and virally disseminate digital content.

While deepfakes can be used for creative and educational purposes, in the age of information warfare they have increasingly been weaponised, causing significant disruption to the stability, integrity, and trust in institutions, the media, and society as a whole. They have the potential to further undermine norms of truth and trust on individual, organisational, and societal levels.

Additionally, the fact that deepfakes initially appeared as non-consensual pornography highlights their malicious potential, particularly as a gendered tool that disproportionately targets women. It also demonstrates the unique and troubling ability of AI to mimic real humans without consent, undermining victims’ credibility and the agendas they champion, tarnishing their reputations, and inciting harassment. The effects ripple far beyond the screen, manifesting in offline violence, professional fallout, and psychological scars.

This month, Tanda Community Network launched a new research report that provides a comprehensive overview of this critical issue, shedding light on the interplay between deepfake technology, Technology-Facilitated Gender-Based Violence (TFGBV), and democratic processes.

The report highlights case studies from three African countries – Ghana, Senegal and Namibia – revealing how deepfakes are weaponised during elections. With support from the Africa Digital Rights Fund (ADRF), Tanda Community Network carried out focus group discussions; interviewed policymakers, technologists, women journalists, women politicians and civil society actors in the digital sector; and surveyed over 100 women in the three countries.

The findings indicate that deepfake attacks inflict lasting socio-cultural, professional and psychological harm. For female journalists and politicians, the stakes are even higher, with the violence often spilling over into offline spaces.

Many victims of TFGBV, including deepfake attacks, fear the stigma associated with speaking out and so suffer in silence. The result is severe underreporting, fueled by a lack of trust, inadequate support systems, and the absence of effective tools or expertise to detect and combat these threats.

Compounding this is widespread media illiteracy: media literacy is alarmingly low, leaving the public vulnerable to manipulation and unable to differentiate between authentic and fake content.

Perhaps the most concerning finding is the lack of robust legal frameworks to address deepfakes and online harassment. Across the study countries, there is no specific, enforceable legislation to hold perpetrators accountable for TFGBV involving deepfakes.

There is an urgent need for safeguards against such attacks, which often manifest as targeted online harassment and have severe implications – not just for the individuals involved but also for public trust in information.

To address these challenges, a multi-pronged approach is essential.

  • Education must play a central role in combating the threat of deepfakes. Awareness campaigns and media literacy programs should aim to teach users to critically analyse the content they consume, recognise digital manipulations, and navigate online spaces safely. Toolkits and training programs must be tailored for different stakeholder groups, including journalists, policymakers, and grassroots communities, to equip them with the skills needed to identify and respond to these threats effectively.
  • Policymakers must prioritise creating enforceable legal frameworks that clearly define and punish perpetrators of digital violence. These laws should also address the broader spectrum of TFGBV, acknowledging its various manifestations and far-reaching impacts.
  • Social media platforms also have a critical responsibility to ensure accountability. They must develop and enforce robust policies to address the spread of deepfakes and harassment, investing in technologies that can detect manipulated content before it causes harm. Transparency and collaboration with civil society organisations would enhance these platforms’ ability to mitigate risks.
  • Institutions, including research organisations and academia, need to focus on developing actionable, evidence-based solutions. Research should be targeted toward understanding the nuances of deepfake attacks on different stakeholder groups and providing implementable recommendations that can influence policy and practice.

The key to combating deepfakes lies in people-centred solutions that can empower everyone. This requires conscious efforts that put community consultation and community leadership front and centre in the decision-making process, moving away from siloed interventions that are often top-down and driven by external motivations.

By addressing these challenges, women can participate fully and fearlessly in shaping our democracy and society. This report is a call to action for lawmakers, civil society and tech platforms to create safer, more inclusive spaces for women politicians and journalists and, in a broader sense, the general public.

As deepfakes grow smarter and harder to detect, the challenges they pose will only intensify. Yet, we are not starting from scratch. Existing knowledge and past experiences in combating digital threats provide a foundation we can build upon. The fight against deepfakes and digital violence is not just about protecting women in journalism and politics; it is about safeguarding democracy, fostering inclusion, and ensuring that technology serves humanity rather than undermining it.

Digital Shadows is more than a research report; it is a call to action for governments, tech companies, and civil society, among other stakeholders, to advocate for meaningful policies and equip the general public with the tools they need to navigate this evolving landscape.

Let’s act now – because the longer we wait, the more entrenched these digital shadows will become.

Download the full Digital Shadows report
Risper Arose is the Partnership Lead at Tanda Community Network.

African Women’s Digital Safety: From Resolution to Reality

By Edrine Wanyama

Amplifying the Resolution on the Protection of Women Against Digital Violence in Africa: Towards Meaningful Actions by States

Two and a half years after the African Commission on Human and Peoples’ Rights (ACHPR) adopted the Resolution on the Protection of Women Against Digital Violence in Africa, its implementation remains a pipe dream. With Technology-Facilitated Gender-Based Violence (TFGBV) continuing to proliferate across the continent, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is challenging African governments to use the occasion of International Women’s Day to commit to taking legislative and practical measures to implement this pivotal resolution.

ACHPR/Res. 522 (LXXII) 2022 is important as it offers groundbreaking approaches to addressing digital violence against women on the African continent. While the digital realm should be a space of innovation and empowerment, it has become a battleground where women face harassment, intimidation, and violence. The non-consensual sharing of intimate images, sexist hate speech, misogynistic disinformation campaigns, cyberstalking, cyberbullying, cyber flashing, unsolicited sexually explicit content, doxing, deepfakes, trolling and mansplaining have steadily increased and contributed to a growing digital gender divide in Africa.

This digital gender divide further exacerbates existing inequalities and hinders progress towards achieving gender equality in the region, stripping girls and women of their voices and hindering meaningful participation in online discourse. The inequalities also hinder the attainment of the Sustainable Development Goals, including Goal 5, which aims, among other things, to advance gender equality and the empowerment of all women and girls as a prerequisite for development.

Given the snail-paced implementation of the 2022 Resolution, in 2024 the Commission adopted another resolution, ACHPR/Res.591 (LXXX) 2024, which mandates the Special Rapporteur on the Rights of Women in Africa and the Special Rapporteur on Freedom of Expression and Access to Information in Africa to undertake a study on the causes, manifestations, and impacts of digital violence against women in Africa. It also aims to further the development of comprehensive norms and standards to assist countries in addressing TFGBV.

This Resolution underscores the need to fulfill Article 9 of the African Charter on freedom of expression and access to information, and Article 4 of the Protocol to the African Charter on Human and Peoples’ Rights on the Rights of Women in Africa, which guarantees women’s rights to life, integrity and security of the person.

According to CIPESA’s Programme Manager, Ashnah Kalemera, governments should not need reminding of their obligations with regard to gender equity online but should take all necessary measures and “accelerate actions including adopting appropriate laws to address TFGBV.”

While TFGBV has become a major global challenge, many approaches adopted to tackle it on the continent either fall short of holding those responsible for rights violations accountable, or focus on curtailing digital civic spaces. For example, electoral periods in countries such as Uganda have witnessed multiple reports of targeted online violence against women, with some existing cybercrime laws often targeting the female victims rather than the perpetrators of gender-based violence online.

According to a recent report on Kenya, almost 90% of young adults enrolled in tertiary institutions in the country’s capital Nairobi have reportedly encountered gender-based violence in online spaces, with 39% having experienced the harms personally. These harms, according to the study, are more pronounced amongst females (64.4%) than males (35.5%). Meanwhile, sexism and the sexualisation of content in Zimbabwe and Uganda, attacks on female journalists in Ghana, Namibia and Tanzania, and the harassment of female journalists in South Africa and of women in politics in Kenya continue to undermine women’s participation in political and public affairs.

Guided by this year’s International Women’s Day theme, Accelerate Action, CIPESA calls on African governments to undertake the following actions to implement Resolution ACHPR/Res. 522 (LXXII) 2022.

Adopt Gender-Sensitive Legal and Policy Frameworks

Adoption of gender-sensitive legal and policy frameworks is critical to provide the legal basis for addressing TFGBV. States, technology companies including social media platforms, media and news organisations, and other stakeholders should view online violence through a gender lens, enact laws and policies that employ gender-balanced language, criminalise all forms of online violence, and prioritise the digital safety of women and girls.

Evidence-Based Research for Gendered Actions

Evidence-based research is crucial for innovation and development of effective gendered actions to inform targeted interventions, policies, and programs that aim to combat online violence. Data that establishes the nature, prevalence, extent and the risk factors of TFGBV and the impact it poses should be collected and analysed by states alongside other stakeholders like CSOs. Such studies can be the foundational basis for identifying and addressing the root causes of the violence for more effective gendered actions against the vice.

Capacity Building and Awareness Raising

In line with the resolution, there is a need for capacity building and awareness raising to address TFGBV. Capacity building and awareness raising through various fora, such as the media, can empower governments, individuals, communities, and institutions to understand, prevent, and respond to violence against women. Programmes such as digital literacy, advocacy interventions, community and network-led education, and capacitating law enforcement officers, the judiciary and other institutions will contribute to the wider goal of addressing online violence targeting women. Specific efforts in privacy awareness, online safety and digital hygiene will contribute to the creation of safer spaces for women, who are disproportionately targeted by online violence.

Cooperation with Stakeholders including CSOs and Service Providers

The Resolution calls for cooperation between states and stakeholders, including CSOs and service providers, to end TFGBV. Collaboration amongst these players can help combat TFGBV. CSOs can continue to play a watchdog role through outreach and by monitoring state efforts and activities. Service providers should promote responsibility over content and enhance accountability over the use of online spaces and platforms. Similarly, there should be joint efforts to end violence against women, such as information sharing, capacity building, joint campaigns, policy advocacy, and tech solutions such as tools to track and investigate suspected cases of violence against women.

Protection and Support for Victims

The effects of violence in any form can be devastating, and they call for mitigating the harm caused and empowering survivors to heal and seek justice. States need to adopt comprehensive approaches that mitigate harms, including taking appropriate action for immediate support, providing safe spaces for survivors, safety planning, and documentation of evidence. Similarly, clear mechanisms for reporting and redress, including law enforcement and legal assistance for survivors, can go a long way in victim support. Psychological and emotional support and the provision of self-care resources are also key. Additionally, digital security and privacy support, community support and advocacy such as awareness raising, provision of specialised services such as trauma-informed care and culturally sensitive services, and education, including digital literacy programmes and public awareness aimed at enhancing preventive measures, are important strategies for combating TFGBV.

Buttressing Prevention Measures

The ACHPR/Res. 522 (LXXII) 2022 lists a number of actions which African Union Member States should undertake. If undertaken, these actions could check tech-enabled violence against women. They could also be the basis upon which equality in the enjoyment of fundamental rights and freedoms in the online space is achieved. By strengthening prevention measures, a society that is pro-rights and freedoms and that ensures a safe and inclusive space for empowering women and girls can be attained. Through such buttressed approaches, individuals, groups, and communities will be equipped with the knowledge, tools and skills to prevent, respond to and combat online violence.


In conclusion, ACHPR/Res. 522 (LXXII) 2022 is a step forward in the fight against gender discrimination and violence targeting women in online spaces. It sets a powerful benchmark for addressing TFGBV. Its multi-faceted approach – bringing together stakeholders including governments, civil society, and the private sector, and requiring states in particular to deal with the issues comprehensively – provides a progressive roadmap for creating a safer and more inclusive online environment for women across Africa.

Digital Rights Alliance Africa Condemns Social Media Shutdown in South Sudan

Statement

Social Media Shutdown in South Sudan Will Aggravate Human Rights Violations

The Digital Rights Alliance Africa (DRAA) – a network of non-government organisations that champions the digital civic space and counters threats to digital rights on the continent – is deeply concerned by the recent shutdown of social media platforms by the South Sudan government. The government claims the disruption is aimed to curb the dissemination of graphic content that portrays violence against South Sudanese nationals in neighbouring Sudan, and will last three months. 

The measure is a response to escalating violence and protests across the country arising from the killing of South Sudanese nationals by the Sudanese armed forces in Sudan’s El Gezira state. In response, nationals of South Sudan staged riots during which at least 16 Sudanese citizens were killed. 

The shutdown will aggravate an already precarious human rights situation, undermine the ability of citizens to document the crimes being committed, and deny the public access to information that is vital to making decisions in life-and-death situations – such as how to access essential services like healthcare, or routes to safety away from the conflict zones.

Moreover, fundamental rights and freedoms, including freedom of expression, access to information, and peaceful assembly and association, will be undermined. Social media platforms and digital spaces are critical to fostering transparency, dialogue, and trust in times of crisis. Shutting down these spaces creates an information vacuum that breeds disinformation, which not only deepens societal divisions but also undermines efforts toward restoring peace and the rule of law.

According to article 24 of the Transitional Constitution of South Sudan,

  1. Every citizen shall have the right to the freedom of expression, reception and dissemination of information, publication, and access to the press without prejudice to public order, safety or morals as prescribed by law.
  2. All levels of government shall guarantee the freedom of the press and other media as shall be regulated by law in a democratic society.

The constitution further guarantees freedom of assembly and association in article 25 and access to information under article 32.

Having acceded to the International Covenant on Civil and Political Rights (ICCPR) and the African Charter on Human and Peoples’ Rights, South Sudan has an obligation to respect and promote fundamental human rights and freedoms including expression and access to information, assembly and association.

Shutting down social media restricts vital communication, suppresses civic engagement, and hinders citizens’ ability to participate in democratic processes. The shutdown is contrary to established international human rights standards, which require that restrictions on citizens’ rights must only be implemented where they meet the three-part test of (i) being provided for by law; (ii) serving a legitimate aim; and (iii) being necessary and proportionate in a free and democratic society. Imposing a shutdown on social media constitutes a disproportionate measure that instead restricts free access to information online – a critical mode of communication in periods of instability.

The decision to curtail access to social media platforms dents South Sudan’s commitment to regional and international laws and undermines the realisation of civil liberties in online spaces for the people of South Sudan. Specifically, it violates the Declaration of Principles on Freedom of Expression and Access to Information in Africa 2019, which, among others, recognises the importance of internet access. It also goes against the recent 2024 Resolution on Internet Shutdowns and Elections in Africa, which emphasises that states should not interfere with the right of individuals to seek, receive and impart information through any means of communication and digital technologies, and should avoid interrupting access to the internet and other digital technologies.

DRAA calls upon the South Sudan government to:

  1. Immediately lift the social media ban and restore access to social media platforms to ensure free expression and access to critical information by the citizens.
  2. Respect human and peoples’ rights, including digital rights, in accordance with regional and international instruments, which protect the rights of the people of South Sudan to communicate, assemble and associate.
  3. Address the root causes of the current unrest and engage in meaningful and transparent dialogue with community leaders, civil society organisations, and affected communities to address underlying grievances and promote reconciliation to build an accountable, peaceful and inclusive society.
  4. Protect all affected communities and take urgent and necessary steps to safeguard vulnerable groups, including Sudanese traders and other minorities, ensuring their safety and dignity are preserved.
  5. Refrain from ordering internet service providers to shut down the internet or disrupt internet connections, so as to ensure free expression, the open flow of information, and the holding of perpetrators of human rights violations accountable.

DRAA urges the African Union, regional bodies, and the international community to hold South Sudan accountable for these repressive measures. We also continue to stand in solidarity with the people of South Sudan and reaffirm our commitment to advocating for digital rights and freedoms across Africa. 

Signed by CIPESA in collaboration with the Digital Rights Alliance Africa (DRAA)

About Digital Rights Alliance Africa (DRAA) 

The Digital Rights Alliance Africa is a network of traditional NGOs, media, lawyers and tech specialists from across Africa that seeks to champion digital civic space and counter threats to digital rights on the continent. The Alliance was created by the International Center for Not-for-Profit Law (ICNL) and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) in response to rising digital authoritarianism in the region. It currently has members from 11 countries, who collectively monitor digital threats, engage in research and advocacy, share strategies for navigating digital threats, and promote digital policy reforms in line with the shared vision outlined in the outcome declaration endorsed in 2023.

For more information about DRAA’s work and digital rights advocacy in Africa, visit their website or read the full statement.

What Does Meta’s About-Turn on Content Moderation Bode for Africa?

By CIPESA Writer

Meta’s recent decision to get rid of its third-party fact-checkers, starting in the United States, has sent shockwaves globally, raising significant concerns about free speech and the fight against disinformation and misinformation. The announcement was part of a raft of major policy changes announced on January 7, 2025 by Meta’s CEO Mark Zuckerberg that will affect its platforms Facebook, Instagram and Threads, which are used by three billion people worldwide. They include the introduction of the user-generated “Community Notes” model, the elimination of third-party fact-checkers, reduced content restrictions and enforcement, and the personalisation of civic or political content.

While the announcement makes no reference to Africa, the changes will trickle down to the continent. Meta’s decision is particularly concerning for Africa, which is unique in its linguistic and cultural diversity and limited digital and media information literacy, and which faces growing challenges of hate speech and election-related disinformation, a lack of context-specific content moderation policies, and inadequate investment in local fact-checking initiatives.

Africa’s content moderation context and needs are also quite different from those of Europe or North America due to the predominant use of local languages that are often overlooked by automated fact-checking algorithms and content filters.

Notably, the justifications given by Meta are quite weak, as the new changes appear to undermine its own initiatives to promote free speech, particularly the work of its third-party fact-checking program and the Oversight Board, which it set up to help resolve some of the most difficult questions around freedom of expression online and information integrity. The decision also appears to be politically and economically motivated, as the company seeks to re-align itself with and appease the incoming Trump administration, which has been critical of fact-checking, and to get assistance in pushing back against regulation of its activities outside the U.S.

On January 7, 2025, the company also amended its Hateful Conduct policy, replacing the term “hate speech” with “hateful conduct”, eliminating previous thresholds for taking down hateful content, and allowing more hateful speech against specific groups. Further, whereas the company is moving its Trust and Safety and Content Moderation teams to Texas, it is yet to set up such robust teams for Africa.

Importance of Fact-Checking

Fact-checking plays a critical role in combating disinformation and misinformation and fostering informed public discourse. By verifying the accuracy of online content, fact-checkers help to identify unauthentic content and counter the spread of false narratives that can incite violence, undermine trust in institutions, or distort democratic processes.

Additionally, it promotes accountability and reduces the virality of misleading content, particularly during sensitive periods, such as elections, political unrest, public health crises, or conflict situations, where accurate and credible information is crucial for decision-making. Moreover, fact-checking fosters media literacy by encouraging audiences to critically evaluate information sources.

Fact-checking organisations such as PolitiFact have criticised the Meta CEO’s assertions that fact-checkers were “too politically biased” and had “destroyed more trust than they had created, especially in the U.S.” In reality, the decision and power to take down content has rested squarely with Meta, with fact-checkers only providing independent review of posts. The assertions also undermine the work of independent media outlets and civil society, who have been accused by authoritarian regimes of being corrupt political actors.

However, fact-checking is not without its challenges and downsides. The process can inadvertently suppress free expression, especially in contexts where the line between disinformation and legitimate dissent is blurred. In Africa, where cultural and linguistic diversity is vast and resources for local-language moderation are limited, fact-checking algorithms or teams may misinterpret context, leading to unjust content removal or amplification of bias. Furthermore, fact-checking initiatives can become tools for censorship if not governed transparently, particularly in authoritarian settings.

Despite these challenges, the benefits of fact-checking far outweigh its downsides. Instead of getting rid of fact-checking, Meta and other big tech companies should strengthen its implementation by providing enough resources to recruit and train fact-checkers and to provide them with psycho-social support.

Impact of the Decision for Africa
  1. Increased Disinformation

Africa faces a distinct set of challenges that make effective content moderation and fact-checking particularly crucial. Disinformation and misinformation in Africa have had far-reaching consequences, from disrupting electoral processes and swaying unsuspecting voters to jeopardising public health. Disinformation during elections has fueled violence, while health-related misinformation during crises such as the Covid-19 pandemic endangered lives by undermining public health efforts. False claims about the virus, vaccines, or cures led to vaccine hesitancy, resistance to public health measures like mask mandates, and the proliferation of harmful treatments. This eroded trust in health institutions, slowed pandemic response efforts, and contributed to preventable illnesses and deaths, disproportionately affecting vulnerable populations.

The absence of fact-checking exacerbates the existing challenges of context insensitivity, as automated systems and under-resourced moderation teams fail to address the nuances of African content. The introduction of the user-driven Community Notes, which is similar to the model used on X, will still require experts’ input, especially in a region where many governments are authoritarian. Yet media and information literacy and access to credible and reliable information are limited, and Meta’s platforms are primary ways of accessing independent news and information.

Research on the use of Community Notes on X has shown that the model has limited effectiveness in reducing the spread of disinformation, as it “might be too slow to intervene in the early (and most viral) stages of the diffusion”, which are the most critical, as the sketch below illustrates. The move also undermines efforts by civil society and fact-checking organisations in the region, who have been working tirelessly to combat the spread of harmful content online.
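As a rough illustration of why timing matters, the toy model below simulates a post whose sharers recruit new sharers each hour until a note is attached; all parameters are assumptions for illustration, not figures from the cited research:

```python
# Toy diffusion model: each hour, current sharers recruit `r` new sharers;
# once a community note is attached, resharing is damped sharply.
# All numbers are illustrative assumptions, not measured values.

def total_shares(hours: int = 48, r: float = 1.5,
                 label_hour: int = 12, label_damping: float = 0.2) -> float:
    shares, new = 0.0, 1.0
    for hour in range(hours):
        shares += new
        rate = r * (label_damping if hour >= label_hour else 1.0)
        new *= rate
        if new < 1e-3:  # the cascade has effectively died out
            break
    return shares

early = total_shares(label_hour=2)    # note attached within two hours
late = total_shares(label_hour=12)    # note attached after twelve hours
print(f"early label: ~{early:.0f} total shares; late label: ~{late:.0f}")
```

In this sketch the late label arrives after most of the exponential growth has already happened, so total exposure ends up dozens of times higher than with an early label – the “too slow to intervene” problem the research describes.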

  2. Political Manipulation and Increased Malign Influence

Dialling down moderation and oversight may empower political actors who wish to manipulate public opinion through disinformation campaigns, resulting in a surge of such activities. Given that social media has been instrumental in mobilising political movements across Africa, the lack of robust content moderation and fact-checking could hinder democratic processes and amplify extremist views and propaganda. Research has shown an apparent link between disinformation and political instability in Africa.

Unchecked false narratives not only mislead voters, but also distort public discourse and diminish public trust in key governance and electoral institutions. Authoritarian regimes may also exploit such narratives to undermine dissent. Moreover, the relaxation of content restrictions on sensitive and politically divisive topics like immigration and gender could open the floodgates for targeted hate speech, incitement and discrimination, which could exacerbate gendered disinformation and ethnic and political tensions. Likewise, weak oversight may enable foreign or external actors to manipulate elections.

  3. Regulatory and Enforcement Gaps

Meta’s easing of restrictions on the moderation of sensitive topics and its reduced oversight of content could lead to an increase in harmful content on its platforms. Already, various African countries have weak regulatory frameworks for harmful content and thus rely on companies like Meta to self-regulate effectively. Meta’s decision could spur efforts by some African governments to introduce new and more repressive laws to restrict certain types of content and hold platforms accountable for their actions. As our research has shown, such laws could be abused and employed to suppress dissent and curtail online freedoms such as expression, assembly, and association, as well as access to information, creating an even more precarious environment.

  4. Limited Engagement with Local Actors

Meta’s decision to abandon fact-checking raises critical concerns for Africa, coming after the tech giant’s January 2023 decision to sever ties with its East African content moderation contractor, Sama, based in Nairobi, Kenya, which was responsible for content moderation in the region. The Sama-operated hub announced its exit from content moderation services to focus on data annotation tasks, citing the prevailing economic climate as a reason for streamlining operations. Additionally, the Nairobi hub faced legal and ethical challenges, including allegations of poor working conditions, inadequate mental health support for moderators exposed to graphic content, and unfair labour practices. These issues led to lawsuits against both Sama and Meta, intensifying scrutiny of their practices.

Meanwhile, fact-checking partnerships with local organisations have played a crucial role in addressing disinformation, and their elimination erodes trust in Meta’s commitment to advancing information integrity in the region. Meta has fact-checking arrangements with various companies across 119 countries, including 26 in Africa. Some of the partners in Africa include AFP, AFP – Coverage, AFP – Hub, Africa Check, Congo Check, Dubawa, Fatabyyano فتبين, Les Observateurs de France 24 and PesaCheck. In the aftermath of Meta’s decision to sever ties with its East African third-party content moderators, Sama let go of about 200 employees.

Opportunities Amidst Challenges

While Meta’s decision to abandon fact-checking is a concerning development, it also presents an opportunity for African stakeholders to utilise regional instruments, such as the African Charter on Human and Peoples’ Rights and the Declaration of Principles on Freedom of Expression and Access to Information in Africa, to assert thought leadership and demand better practices from platforms. Engaging with Meta’s regional leadership and building coalitions with other civil society actors can amplify advocacy about the continent’s longstanding digital rights and disinformation concerns and demands for more transparency and accountability.

Given the ongoing pushback against the recently announced changes, Meta should be more receptive to dialogue and recommendations to review and contextualise the new proposals. For Africa, Meta must address its shortcomings by urgently investing in and strengthening localised content moderation. It must reinvest in fact-checking partnerships, particularly with African organisations that understand local contexts. These partnerships are essential for addressing misinformation in local languages and underserved regions.

The company must also improve its automated content moderation tools, including by developing tools that can handle African cultures, languages and dialects; hire more qualified moderators with contextual knowledge and provide them with comprehensive training; and expand its partnerships with local stakeholders. Moreover, the company must ensure meaningful transparency and accountability, as many of its transparency and content enforcement reports lack critical information and disaggregated data about its actions in most African countries.

Lastly, both governments and civil society in Africa must invest in digital, media and information literacy which is essential to empower users to critically think about and evaluate online content. Meta should partner with local organisations to promote digital literacy initiatives and develop educational campaigns tailored to different regions and languages. This will help build resilience against misinformation and foster a more informed digital citizenry.

In conclusion, it remains to be seen how the new changes by Meta will be implemented in the U.S., and subsequently in Africa, and how the company will address the gaps left by fact-checkers and mitigate the risks and negative consequences stemming from its decision. Notably, while there is widespread acknowledgement that content moderation systems on social media platforms are broken, efforts to promote and protect rights to free expression and access to information online should be encouraged. However, these efforts should not come at the expense of user trust and safety, and information integrity.