Policy brief: Human Rights Implications of Health Care Digitalisation in Kenya

News Update |

This policy brief draws on the key findings of a human rights impact assessment of Digital Health Services to make concrete recommendations for a human rights-based digitalisation of health care services in Kenya.

Drawing on a human rights impact assessment conducted in October-November 2024, the brief shows how the transition from the National Health Insurance Fund (NHIF) to the Social Health Insurance Fund (SHIF) has faced significant challenges that affect the right to health, particularly for vulnerable and marginalised groups. It also addresses broader concerns about the role of digitalisation in health care management and its implications for service delivery.

Notably, Kenya’s journey towards a rights-based digital health system requires a coordinated approach that addresses infrastructure, regulatory enforcement, gender equality, and resource allocation and management. By adopting the recommendations found in this brief, Kenya can create a digital health environment that not only advances healthcare service delivery but also protects, promotes and respects the rights of all its citizens, particularly those most at risk of exclusion.

Read the full policy brief here.

This article was initially published by the Danish Institute for Human Rights on April 02, 2025.

Call for Applications: DPI Journalism Fellowship for Eastern Africa

Call for Applications |

Date of Publication: 1 April 2025.

Application Deadline: 21 April 2025, 18:00 East African Time.

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with Co-Develop, invites applications for the Digital Public Infrastructure (DPI) Journalism Fellowship for Eastern Africa.

This regional fellowship aims to build a new generation of journalists with the knowledge and skills to investigate and report on Digital Public Infrastructure and Digital Public Goods (DPGs). The fellowship is inspired by a similar Co-Develop-funded initiative implemented by the Media Foundation for West Africa (MFWA), which supported fellows to produce over 100 impactful stories that spurred public debate and influenced policy.

Through rigorous training, mentorship, and financial support, selected journalists will explore the promises, challenges, and lived experiences related to DPI across Eastern Africa.

What is Digital Public Infrastructure?

DPI refers to foundational digital systems and services that enable secure, inclusive, and efficient delivery of both public and private services. These include, among others:

  • Digital ID systems.
  • Instant and interoperable payment platforms.
  • Open data platforms.
  • Data exchange frameworks.
  • e-Government systems.

Well-designed DPI holds transformative potential, but without public understanding and critical engagement it can also deepen exclusion, enable surveillance, and limit adoption.

Fellowship Details

Duration: 6 months (June–December 2025).

Structure:

  • June 2025: Virtual training workshops and editorial guidance.
  • July 2025: Story development and mentoring.
  • August 2025: In-person workshop in Nairobi, Kenya, with peer learning and advanced training.

Outputs: Each Fellow is expected to produce at least three high-quality, published stories on DPI or DPGs during the fellowship.

Benefits

  • A grant of up to USD 1,500 to support story production.
  • Access to reporting grants post-fellowship.
  • Mentorship from senior journalists and digital policy experts.
  • Certificate of Completion.
  • Travel, accommodation, and incidental expenses for the in-person workshop.

Eligibility Criteria

The fellowship is open to journalists based in the following Eastern African countries:

Burundi, Democratic Republic of the Congo, Ethiopia, Kenya, Rwanda, Somalia, South Sudan, Tanzania, and Uganda.

Applicants must:

  • Be a practicing journalist with at least three years of professional experience.
  • Demonstrate strong interest or experience in reporting on digital technologies, governance, human rights, or development.
  • Be proficient in English or French. 
  • Be available to fully participate in the fellowship and in post-fellowship activities.
  • Be affiliated with a credible media outlet willing to support their reporting.

Selection Process

The selection will be based on merit and demonstrated interest in DPI-related reporting. The process includes:

  • Initial application screening.
  • Interviews with shortlisted candidates.
  • Final selection by a panel of media and policy experts.

Women and early-career journalists are strongly encouraged to apply.

How to Apply

Applicants should complete this form by 21 April 2025.

For more information, please visit: https://cipesa.org or contact programmes@cipesa.org

Protecting Global Democracy in the Digital Age: Insights from PAI’s Community of Practice

By Christian Cardona |

2024 was a historic year for global elections, with approximately four billion eligible voters casting a vote in 72 countries. It was also a historic year for AI-generated content, with a significant presence in elections all around the world. The use of synthetic media, or AI-generated media (visual, auditory, or multimodal content that has been generated or modified via artificial intelligence), can affect elections by impacting voting procedures and candidate narratives, and enabling the spread of harmful content. Widespread access to improved AI applications has increased the quality and quantity of the synthetic content being distributed, accelerating harm and distrust.

As we look toward global elections in 2025 and beyond, it is vital to recognize that one of the primary harms of generative AI in 2024 elections was the creation of deepnudes of women candidates. Not only is this type of content harmful to the individuals depicted; it also likely creates a chilling effect on women's political participation in future elections. The AI and Elections Community of Practice (COP) has provided key insights such as these, along with actionable data that can help inform policymakers and platforms as they seek to safeguard future elections in the AI age.

To understand how various stakeholders and actors anticipated and addressed the use of generative AI during elections and are responding to potential risks, the COP provided an avenue for Partnership on AI (PAI) stakeholders to present their ongoing efforts, receive feedback from peers, and discuss difficult questions and tradeoffs when it comes to deploying this technology. In the last three meetings of the eight-part series, PAI was joined by the Center for Democracy & Technology (CDT), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), and Digital Action to discuss AI’s use in election information and AI regulations in the West and beyond.

Investigating the Spread of Election Information with Center for Democracy & Technology (CDT)

The Center for Democracy & Technology has worked for thirty years to improve civil rights and civil liberties in the digital age, including through almost a decade of research and policy work on trust, security, and accessibility in American elections. In the sixth meeting of the series, CDT provided an inside look into two recent research reports published on the confluence of democracy, AI, and elections.

The first report investigates how chatbots from companies such as OpenAI, Anthropic, MistralAI, and Meta handle election-related queries, specifically for voters with disabilities. The report found that 61% of the chatbot responses tested were insufficient in at least one of the four ways assessed by the study (a response was deemed insufficient if it included incorrect information, omitted key information, had structural issues, or was evasive). Among them, 41% of responses contained factual errors, such as incorrect voter registration deadlines, and in one case a chatbot cited a non-existent law. A quarter of the responses were likely to prevent or dissuade voters with disabilities from voting, raising concerns about the reliability of chatbots in providing important election information.

The second report explored political advertising across social media platforms and how changes in policies at seven major tech companies over the last four years have impacted US elections. As organizations seek more opportunities to leverage generative AI tools in an election context, whether for chatbots or political ads, they must continue investing in research on user safety, implement evaluation thresholds for deployment, and ensure full transparency on product limitations once deployed.

AI Regulations and Trends in African Democracy with CIPESA

A “think and do tank,” the Collaboration on International ICT Policy for East and Southern Africa focuses on technology policy and practice as it intersects with society, human rights, and livelihoods. In the seventh meeting of the series, CIPESA provided an overview of its work on AI regulations and trends in Africa, touching on topics such as national and regional AI strategies, elections, and harmful content.

As the use of AI continues to grow in Africa, most AI regulation across the continent focuses on the ethical use of AI and human rights impacts, while lacking specific guidance on the impact of AI on elections. Case studies show that AI is undermining electoral integrity on the continent, distorting public perception, given that many lack the skills to discern and fact-check misleading content. A June 2024 report by Clemson University’s Media Forensics Hub found that the Rwandan government used large language models (LLMs) to generate pro-government propaganda during elections in early 2024. Over 650,000 messages attacking government critics, designed to look like authentic support for the government, were sent from 464 accounts.

The 2024 general elections in South Africa saw similar misuse of AI, with AI-generated content targeting politicians and leveraging racial and xenophobic undertones to sway voter sentiment. Examples include a deepfake depicting Donald Trump supporting the uMkhonto weSizwe (MK) party and a manipulated 2009 video of rapper Eminem supporting the Economic Freedom Fighters Party (EFF). The discussion emphasized the need to maintain a focus on AI as it advances in the region with particular attention given to mitigating the challenges AI poses in electoral contexts.

AI tools are lowering the barrier to entry for those seeking to sway elections, whether individuals, political parties, or ruling governments. As the use of AI tools grows in Africa, countries must take steps to implement stronger regulation around the use of AI and elections (without stifling expression) and ensure country-specific efforts are part of a broader regional strategy.

Catalyzing Global AI Change for Democracy with Digital Action

Digital Action is a nonprofit organization that mobilizes civil society organizations, activists, and funders across the world to call out digital threats and take joint action. In the eighth and final meeting in the PAI AI and Elections series, Digital Action shared an overview of the organization’s Year of Democracy campaign. The discussions centered on protecting elections and citizens’ rights and freedoms across the world, as well as exploring how social media content has had an impact on elections.

The main focus of Digital Action’s work in 2024 was supporting the Global Coalition For Tech Justice, which called on Big Tech companies to fully and equitably resource efforts to protect 2024 elections through a set of specific, measurable demands. While the media expected to see high-profile examples of generative AI swaying election results around the world, what emerged instead were corrosive effects on political campaigning, harms to individual candidates and communities, and likely broader harms to trust and future political participation.

Many elections around the world were impacted by AI-generated content being shared on social media, including Pakistan, Indonesia, India, South Africa and Brazil, with minorities and female political candidates being particularly vilified. In Brazil, deepnudes appeared on a social media platform and adult content websites depicting two female politicians in the leadup to the 2024 municipal elections. While one of the politicians took legal action, the slow pace of court processes and lack of proactive steps by social media platforms prevented a timely fix.

To mitigate future harms, Digital Action called for each Big Tech company to establish and publish fully and equitably resourced Action Plans, both globally and for each country holding elections. By doing so, tech companies can provide greater protection to groups, such as female politicians, that are often at risk during election periods.

What’s To Come

PAI’s AI and Elections COP series has concluded after eight convenings with presentations from industry, media, and civil society. Over the course of the year, presenters provided attendees with different perspectives and real-world examples on how generative AI has impacted global elections, as well as how platforms are working to combat harm from synthetic content.

Some of the key takeaways from the series include:

  1. Down-ballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections. While there were some attempts to use generative AI to influence national elections (you can read more about this in PAI’s case study), down-ballot candidates were often more susceptible to harm than nationally recognized ones. Often, local candidates with fewer resources were unable to effectively combat harmful content. Deepfakes were also shown to deter greater participation by female politicians in some general elections.
  2. Platforms should dedicate more resources to localizing generative AI policy enforcement. Platforms are attempting to protect users from harmful synthetic content by being transparent about the use of generative AI in election ads, providing resources to elected officials to tackle election-related security challenges, and adopting many of the disclosure mechanisms recommended in PAI’s Synthetic Media Framework. However, they have fallen short in localizing enforcement policies with a lack of language support and in-country collaboration with local governments, civil society organizations, and community organizations that represent minority and marginalized groups such as persons with disabilities and women. As a result, generative AI has been used to cause real-world harm before being addressed.
  3. Globally, countries need to adopt more coherent regional strategies to regulate the use of generative AI in elections, balancing free expression and safety. In the U.S., a lack of federal legislation on the use of generative AI in elections has led to various individual efforts from states and industry organizations. As a result, there is a fractured approach to keeping users safe without a cohesive overall strategy. In Africa, attempts by countries to regulate AI are very disparate. Some countries such as Rwanda, Kenya, and Senegal have adopted AI strategies that emphasize infrastructure and economic development but fail to address ways to mitigate risks that generative AI presents in free and fair elections. While governments around the world have shown some initiative to catch up, they must work with organizations, both at the industry and state level, to implement best practices and lessons learned. These government efforts cannot exist in a vacuum. Regulations must cohere and contribute to broader global governance efforts to regulate the use of generative AI in elections while ensuring safety and free speech protections.

While the AI and Elections Community of Practice has come to an end, we continue to push forward in our work to responsibly develop, create, and share synthetic media.

This article was initially published by Partnership on AI on March 11, 2025

CIPESA and Partners Advocate for Inclusion of Technology-Facilitated Gender-Based Violence in Uganda’s Sexual Offences Bill

By Ainembabazi Patricia |

On February 18, 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) alongside Pollicy and the Gender Tech Initiative appeared before Members of Uganda’s Parliament to advocate for the inclusion of Technology-Facilitated Gender-Based Violence (TFGBV) in the Sexual Offences Bill 2024.

The rapid evolution of digital technologies has reshaped societal interactions, leading to increased perpetration of online violence. In Uganda, online users increasingly face digital forms of abuse that often mirror or escalate offline sexual offences, yet efforts to combat gender-based violence are met with both legal and practical challenges.

The Sexual Offences Bill aims to address sexual offences by providing for the effectual prevention of sexual violence, enhanced punishment of sexual offenders, protection of victims during the trial of sexual offences, and extra-territorial application of the law.

In the presentation to the Committee on Legal and Parliamentary Affairs and the Committee on Gender, Labour, and Social Development, CIPESA and partners emphasised the necessity of closing the policy gap between digital and physical sexual offences in the Bill, to ensure that Uganda’s legal system is responsive to the realities of technological advancement and related violence. We argued that while the Bill is timely and addresses real issues of sexual violence, especially against women, some pertinent aspects have been left out and should be included.

According to the United Nations Population Fund (UNFPA), TFGBV is “an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated, and amplified in part or fully by the use of information and communication technologies or digital media, against a person on the basis of their gender.” It includes cyberstalking, doxing, non-consensual sharing of intimate images, cyberbullying, and other forms of online harassment.

In Uganda, TFGBV is not addressed by existing laws, including the Penal Code Act and the Computer Misuse Act. Adding TFGBV to the Bill will provide an opportunity to bridge this legal gap by explicitly incorporating TFGBV as a prosecutable offence.

CIPESA and partners’ recommendations to the Committees were to:

1. Include and Explicitly Define TFGBV

Under Part I (Preliminary), the Bill provides definitions for various terms related to sexual offences, including references to digital and online platforms. However, it does not explicitly define TFGBV or recognise its various manifestations. This omission limits the Bill’s effectiveness in addressing emerging forms of online sexual offences.

We propose an introduction of a new clause under Part I defining TFGBV, to ensure the Bill adequately addresses offences committed via digital means. The definition should align with international standards, such as the UNFPA’s definition of TFGBV, and should ensure consistency with Uganda’s digital policy frameworks, including the Constitution of the Republic of Uganda 1995, the Data Protection and Privacy Act, 2019, the Computer Misuse (Amendment) Act 2022, Penal Code Act Cap 120, and the Uganda Communications Act 2013.

2. Recognising Various Forms of TFGBV

Clause 7 of the Bill provides for the penalisation of indecent communication or the transmission of sexual content without consent. It criminalises the sharing of unsolicited material of a sexual nature, including the unauthorised distribution of nude images or videos. However, the provision does not explicitly mention cyber harassment, online grooming, sextortion, or non-consensual intimate image sharing (commonly known as “revenge porn”). As such, we recommended the expansion of Clause 7 to explicitly recognise and define offences such as cyber harassment, non-consensual intimate image sharing, online grooming, and sextortion. This addition will clarify legal pathways for victims and broaden the scope of protection against digital sexual exploitation.

3. Replacing “Online Platform” with “Technology-Facilitated Gender-Based Violence”

In Clause 1, the Bill defines “on-line platform” as any computer-based technology that facilitates the sharing of information, ideas, or other forms of expression, encompassing social media sites, websites, and other digital communication tools. Clause 6 addresses the offence of indecent exposure, criminalising the intentional display of one’s sexual organs in public or through electronic means, including online platforms, and Clause 7 pertains to the non-consensual sharing of intimate content. However, these provisions do not comprehensively categorise TFGBV as a distinct form of sexual offence. Accordingly, “Online Platform” should be replaced with “Technology-Facilitated Gender-Based Violence” to ensure the Bill adequately captures all digital gender-based offences, including deepfake pornography, cyberstalking, and sexual exploitation through content generated by artificial intelligence.

4. Criminalising Voyeurism

The Bill does not explicitly criminalise voyeurism, which refers to the act of secretly observing, recording, or distributing images or videos of individuals in private settings without their consent, often for sexual gratification. There is an increasing prevalence of voyeurism through hidden cameras, non-consensual recordings, and live-streamed sexual abuse. Voyeurism should be criminalised, with a clear definition provided under Clause 1 and the scope and penalty defined under Part II of the Bill.

5. Strengthening Accountability for Technology Platforms

The Bill does not impose specific responsibilities on digital platforms and service providers in cases of TFGBV. We argued for the addition of a new clause under Part III (Procedures and Evidential Provisions) mandating digital platforms and service providers to cooperate in investigations related to TFGBV and to provide relevant data and evidence upon request by law enforcement. The provision should also extend to obligations to ensure data protection compliance and to implement proactive measures to detect, remove, and report sexual exploitation content. This will enhance accountability and facilitate the prosecution of perpetrators.

6. Aligning Uganda’s Legislation with Regional and International Frameworks

The Bill does not explicitly state its alignment with regional and international human rights instruments addressing sexual violence and digital rights. We recommend the addition of a new clause under Part I (Preliminary) stating that the Bill shall be interpreted in a manner that aligns with the African Commission on Human and Peoples’ Rights (ACHPR) Resolution 522 (2022) and the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). This will reinforce Uganda’s commitment to and application of international best practices in combating sexual offences.

7. Enhancing Legal Remedies for Survivors

Clause 42 (Settlement in Capital Sexual Offences) prohibits compromise settlements in cases of rape, aggravated rape, and defilement, prescribing a 10-year prison sentence for offenders who attempt to settle such cases outside court. However, the Bill does not provide civil remedies for victims of TFGBV-related crimes, nor does it ensure access to psycho-social support. We recommend an expansion of Clause 42 to include civil remedies, such as compensation for victims of TFGBV; psychosocial and legal support, ensuring survivors receive necessary rehabilitation; and mandatory reporting obligations for online platforms hosting TFGBV-related content.

The inclusion of TFGBV in the Sexual Offences Bill 2024 will not only strengthen the fight against gender-based violence but also ensure that survivors access justice. The proposed legislative changes will reinforce Uganda’s commitment to upholding digital rights and gender equality in the digital age. The country will also join the ranks of pioneers such as South Africa, which has taken legislative steps to criminalise online gender-based violence.

By incorporating the proposed provisions and amendments, the Sexual Offences Bill, 2024 will clearly define online sexual offences, bring perpetrators of online violence to book, and provide protection for survivors of digital sexual offences. It will also contribute to building and strengthening accountability for technology platforms. Once enacted, the law will go a long way in ensuring that Uganda’s legal framework aligns with regional and international human rights standards on the protection of survivors while guaranteeing effective prosecution of technology-facilitated sexual offences.

Download the full report here.

What Does Meta’s About-Turn on Content Moderation Bode for Africa?

By CIPESA Writer |

Meta’s recent decision to discontinue its third-party fact-checking program, starting in the United States, has sent shockwaves globally, raising significant concerns about free speech and the fight against disinformation and misinformation. The announcement was part of a raft of major policy changes announced on January 7, 2025 by Meta’s CEO Mark Zuckerberg that will affect its platforms Facebook, Instagram and Threads, used by three billion people worldwide. The changes include the introduction of the user-generated “Community Notes” model, elimination of third-party fact-checkers, reduced content restrictions and enforcement, and the personalisation of civic or political content.

While the announcement makes no reference to Africa, the changes will trickle down to the continent. Meta’s decision is particularly concerning for Africa, which is unique in terms of linguistic and cultural diversity and limited digital and media information literacy, coupled with the growing challenges of hate speech and election-related disinformation, a lack of context-specific content moderation policies, and inadequate investment in local fact-checking initiatives.

Africa’s content moderation context and needs are also quite different from those of Europe or North America due to the predominant use of local languages that are often overlooked by automated fact-checking algorithms and content filters.

Notably, the justifications given by Meta are quite weak, as the new changes appear to undermine its own initiatives to promote free speech, particularly the work of its third-party fact-checking program and the Oversight Board, which it set up to help resolve some of the most difficult questions around freedom of expression online and information integrity. The decision also appears to be politically and economically motivated, as the company seeks to re-align itself with and appease the incoming Trump administration, which has been critical of fact-checking, and to secure support in pushing back against regulation of its activities outside the U.S.

The company also amended its policy on Hateful Conduct on January 7, 2025, replacing the term “hate speech” with “hateful conduct” and eliminating previous thresholds for taking down hateful content, which will allow more hateful speech against specific groups. Further, while the company is moving its Trust and Safety and Content Moderation teams to Texas, it is yet to set up such robust teams for Africa.

Importance of Fact-Checking

Fact-checking plays a critical role in combating disinformation and misinformation and fostering informed public discourse. By verifying the accuracy of online content, fact-checkers help to identify inauthentic content and counter the spread of false narratives that can incite violence, undermine trust in institutions, or distort democratic processes.

Additionally, it promotes accountability and reduces the virality of misleading content, particularly during sensitive periods, such as elections, political unrest, public health crises, or conflict situations, where accurate and credible information is crucial for decision-making. Moreover, fact-checking fosters media literacy by encouraging audiences to critically evaluate information sources.

Fact-checking organisations such as PolitiFact have criticised the assertions by the Meta CEO that fact-checkers were “too politically biased” and had “destroyed more trust than they had created, especially in the U.S.”, noting that decisions and the power to take down content have rested squarely with Meta, with fact-checkers only providing independent review of posts. The assertions also undermine the work of independent media outlets and civil society, who have been accused by authoritarian regimes of being corrupt political actors.

However, fact-checking is not without its challenges and downsides. The process can inadvertently suppress free expression, especially in contexts where the line between disinformation and legitimate dissent is blurred. In Africa, where cultural and linguistic diversity is vast and resources for local-language moderation are limited, fact-checking algorithms or teams may misinterpret context, leading to unjust content removal or amplification of bias. Furthermore, fact-checking initiatives can become tools for censorship if not governed transparently, particularly in authoritarian settings.

Despite these challenges, the benefits of fact-checking far outweigh its downsides. Instead of getting rid of fact-checking, Meta and other big tech companies should strengthen its implementation by providing sufficient resources to recruit and train fact-checkers and to provide them with psycho-social support.

Impact of the Decision for Africa
  1. Increase of Disinformation

Africa faces a distinct set of challenges that make effective content moderation and fact-checking particularly crucial. Disinformation and misinformation in Africa have had far-reaching consequences, from disrupting electoral processes and influencing the choice of candidates by unsuspecting voters to jeopardising public health. Disinformation during elections has fueled violence, while health-related misinformation during health crises, such as during the Covid-19 pandemic, endangered lives by undermining public health efforts. False claims about the virus, vaccines, or cures led to vaccine hesitancy, resistance to public health measures like mask mandates, and the proliferation of harmful treatments. This eroded trust in health institutions, slowed down pandemic response efforts, and contributed to preventable illnesses and deaths, disproportionately affecting vulnerable populations.

The absence of fact-checking exacerbates the existing challenge of context insensitivity, as automated systems and under-resourced moderation teams fail to address the nuances of African content. The introduction of user-driven Community Notes, similar to the model used on X, will still require expert input, especially in a region where many governments are authoritarian. Yet media and information literacy and access to credible and reliable information are limited, and Meta’s platforms are primary channels for accessing independent news and information.

Research on the use of Community Notes on X has shown that the model has limited effectiveness in reducing the spread of disinformation, as it “might be too slow to intervene in the early (and most viral) stages of the diffusion”, which are the most critical. The move also undermines efforts by civil society and fact-checking organisations in the region, who have been working tirelessly to combat the spread of harmful content online.

  2. Political Manipulation and Increased Malign Influence

Dialling down moderation and oversight may embolden political actors who wish to manipulate public opinion through disinformation campaigns, leading to a surge in such activities. Given that social media has been instrumental in mobilising political movements across Africa, the lack of robust content moderation and fact-checking could hinder democratic processes and amplify extremist views and propaganda. Research has shown an apparent link between disinformation and political instability in Africa.

Unchecked false narratives not only mislead voters but also distort public discourse and diminish public trust in key governance and electoral institutions. Authoritarian regimes may also exploit disinformation to undermine dissent. Moreover, the relaxation of content restrictions on sensitive and politically divisive topics like immigration and gender could open the floodgates for targeted hate speech, incitement, and discrimination, which could exacerbate gendered disinformation as well as ethnic and political tensions. Likewise, weak oversight may enable foreign and other external actors to manipulate elections.

  3. Regulatory and Enforcement Gaps

Meta’s easing of restrictions on the moderation of sensitive topics and its reduced oversight of content could lead to an increase in harmful content on its platforms. Already, various African countries have weak regulatory frameworks for harmful content and thus rely on companies like Meta to self-regulate effectively. Meta’s decision could spur efforts by some African governments to introduce new and more repressive laws to restrict certain types of content and hold platforms accountable for their actions. As our research has shown, such laws could be abused and employed to suppress dissent and curtail online freedoms such as expression, assembly, and association, as well as access to information, creating an even more precarious environment.

  4. Limited Engagement with Local Actors

Meta’s decision to abandon fact-checking raises critical concerns for Africa, coming after the tech giant’s January 2023 decision to sever ties with its East African content moderation contractor, Sama, based in Nairobi, Kenya. The Sama-operated hub announced its exit from content moderation services to focus on data annotation tasks, citing the prevailing economic climate as a reason for streamlining operations. Additionally, the Nairobi hub faced legal and ethical challenges, including allegations of poor working conditions, inadequate mental health support for moderators exposed to graphic content, and unfair labour practices. These issues led to lawsuits against both Sama and Meta, intensifying scrutiny of their practices.

Meanwhile, fact-checking partnerships with local organisations have played a crucial role in addressing disinformation, and their elimination erodes trust in Meta’s commitment to advancing information integrity in the region. Meta has fact-checking arrangements with various companies across 119 countries, including 26 in Africa. The partners in Africa include AFP, AFP – Coverage, AFP – Hub, Africa Check, Congo Check, Dubawa, Fatabyyano فتبين, Les Observateurs de France 24 and PesaCheck. In the aftermath of Meta’s decision to sever ties with its East African third-party content moderators, Sama let go of about 200 employees.

Opportunities Amidst Challenges

While Meta’s decision to abandon fact-checking is a concerning development, it also presents an opportunity for African stakeholders to utilise regional instruments, such as the African Charter on Human and Peoples’ Rights and the Declaration of Principles on Freedom of Expression and Access to Information in Africa, to assert thought leadership and demand better practices from platforms. Engaging with Meta’s regional leadership and building coalitions with other civil society actors can amplify advocacy about the continent’s longstanding digital rights and disinformation concerns and demands for more transparency and accountability.

Given the ongoing pushback against the recently announced changes, Meta should be receptive to dialogue and to recommendations to review and contextualise the new proposals. For Africa, Meta must address its shortcomings by urgently investing in and strengthening localised content moderation. It must reinvest in fact-checking partnerships, particularly with African organisations that understand local contexts. These partnerships are essential for addressing misinformation in local languages and underserved regions.

The company must also improve its automated content moderation tools, including by developing tools that can handle African cultures, languages and dialects; hiring more qualified moderators with contextual knowledge and providing them with comprehensive training; and expanding its partnerships with local stakeholders. Moreover, the company must ensure meaningful transparency and accountability, as many of its transparency and content enforcement reports lack critical information and disaggregated data about its actions in most African countries.

Lastly, both governments and civil society in Africa must invest in digital, media and information literacy, which is essential to empower users to think critically about and evaluate online content. Meta should partner with local organisations to promote digital literacy initiatives and develop educational campaigns tailored to different regions and languages. This will help build resilience against misinformation and foster a more informed digital citizenry.

In conclusion, it remains to be seen how Meta’s new changes will be implemented in the U.S., and subsequently in Africa, and how the company will address the gaps left by fact-checkers and mitigate the risks and negative consequences stemming from its decision. Notably, while there is widespread acknowledgement that content moderation systems on social media platforms are broken, efforts to promote and protect the rights to free expression and access to information online should be encouraged. However, these efforts should not come at the expense of user trust and safety, or of information integrity.