By Simone Toussi

Disinformation is on the rise in Africa, spurred by increased internet connectivity and social media usage. In 2021, the International Telecommunication Union (ITU) estimated that 33% of the continent’s population of 1.37 billion had access to the internet, with about 255 million individuals active on Facebook. YouTube, Twitter, Pinterest, Instagram, and LinkedIn are the other platforms with large numbers of users. 

This rapid adoption of online platforms has led to shifts in political discourse, enabling strong public participation, organising, and online protests that have, in some cases such as Sudan and Algeria, contributed to the overthrow of autocratic leaders. Consequently, many governments in the region consider the internet and social media a threat and have unleashed repressive strategies to curtail their use, including retrogressive legislation, internet shutdowns, and disinformation campaigns.

A new report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) explores the nature, perpetrators, pathways and effects of disinformation in Cameroon, Ethiopia, Kenya, Nigeria and Uganda and shows how contextual similarities have underpinned the proliferation of disinformation. These countries are classified as ‘Not Free’ or ‘Partly Free’ in terms of speech and internet freedom and are largely authoritarian, with a penchant for constraining the digital space.

According to the report, elections and armed conflicts are key drivers of disinformation. Yet authoritarianism has played a big part too, as governments have used both disinformation and the response to it to entrench themselves in power, shrink civic space, and target opponents and critics.

Disinformation is fuelled by the increased use of digital technologies, low media literacy levels, the lucrative nature of disinformation, fractious politics (Kenya and Uganda), conflict situations (Ethiopia, Cameroon and Nigeria), and the closure of civic space that makes offline speech dangerous (Uganda, Ethiopia, Cameroon).

Common tactics used by disinformation actors include mass sharing, which leverages the viral power of social media and the lucrative nature of disinformation for “influencers for hire”. There is also a significant rise in political astroturfing, mass brigading and the use of fake and pseudonymous social media accounts. Coordinated Inauthentic Behaviour (CIB) on Facebook and Twitter is prevalent too, and between 2019 and 2021, Facebook dismantled several such schemes, some of which perpetuated disinformation, with many linked to French and Russian actors. 

The main disinformation instigators are political actors including governments, ruling parties and opposition parties, while key spreaders are social media “gurus” or digital “influencers” who are often paid to create or spread disinformation.

Weaponising Disinformation Laws

In the countries studied, governments have weaponised disinformation laws to silence critical voices. Rather than serving to counter the ills of disinformation, related laws have in most cases been used to target political critics while government officials complicit in promoting disinformation are protected. 

Moreover, the retrogressive laws enacted to combat disinformation have been used to further stifle legitimate expression while hampering access to critical and pluralistic information. Instructively, some of these laws are vague and ambiguous and fail to distinguish between disinformation and falsified information, often leaving their enforcement open to the subjective interpretation of law enforcement agencies, who become the arbiters of truth. The laws have been used to arrest, charge and prosecute individuals, thereby promoting censorship and undermining legitimate speech.

Cameroon’s Law on Cybersecurity and Cybercrime and the law governing electronic communications are often cited in actions against spreaders of “false news”, while Nigeria has employed the Criminal Code Act and the Cybercrimes Act 2015. Ethiopia enacted the problematic Hate Speech and Disinformation Prevention and Suppression Proclamation in 2020. Without a specific disinformation law, Uganda relies on the Penal Code Act, the Computer Misuse Act of 2011 and the Communications Act of 2013 to target “false news”. Kenya relies on the Computer Misuse and Cybercrimes Act 2018, the Kenya Information and Communications Act, 2013 (KICA) and the National Cohesion and Integration Act, 2008. 

Impact of Disinformation on Democracy and Human Rights

Disinformation erodes trust in democratic institutions, hampers citizens’ ability to make informed decisions, and affects the right of citizens to hold individual opinions without interference. Disinformation can therefore hijack political discourse and undermine elections by limiting access to the credible, factual and pluralistic information about candidates, parties, and issues that citizens need to make informed choices.

When it occurs in an election context, disinformation affects electoral processes by fuelling politically motivated violence and preventing citizens and democracy actors from accessing credible, timely, and reliable information. In times of socio-economic or political crises, disinformation exploits existing ethnic divisions to deepen them and perpetuates tribal antagonisms through hate speech.

Countermeasures based on unclear legal provisions further create a climate of fear that leads to self-censorship, just as internet shutdowns and content takedowns ordered by governments to limit the spread of false information instead limit access to pluralistic information. In turn, these countermeasures further restrict the participation of many citizens in online political discourse and limit their ability to express themselves without fear of retaliatory attacks.

Across all the five study countries, platforms’ remedial measures to tackle misinformation and disinformation remain ineffective and inadequate.

Recommendations

Governments:

  • Desist from selectively applying laws on countering disinformation to target critics, the media, the political opposition and human rights groups. 
  • Repeal repressive laws and amend existing ones such as Kenya’s Computer Misuse and Cybercrimes Act, Uganda’s Communications Act, Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation, and Cameroon’s Law on Cybersecurity and Cybercrime, to provide clear definitions of disinformation and ensure they conform to international human rights standards. 
  • Train law enforcement agencies on what constitutes disinformation and how to combat it without stifling citizens’ rights.

Intermediaries:

  • Deepen collaboration with local media and civil society groups in African countries to identify, debunk and moderate disinformation. 
  • Reduce processing and response times for complaints about reported disinformation content, in order to encourage reporting and to minimise the circulation of disinformation.
  • Increase transparency in content moderation measures and conduct periodic reviews of policies with broad public consultations.

Media:

  • Build the capacity of journalists and editors on fact-checking and countering disinformation online. 
  • Work closely with fact-checkers to identify and expose disinformation.
  • Institute in-house systems to enhance fact-checking and information verification.

Civil Society:

  • Undertake strategic litigation to challenge retrogressive laws and practices that undermine access to the internet and digital rights under the guise of fighting disinformation. 
  • Advocate against laws and practices that hamper the ability of journalists to provide accurate information and that undermine citizens’ rights to information and free expression. 
  • Monitor, report and hold states accountable for their violations of international human rights principles including restrictions on the enjoyment of digital rights.

Read the full report: Disinformation Pathways and Effects: Case Studies from Five African Countries