Human Rights Day: Here’s How African Countries Should Advance Digital Rights

By Edrine Wanyama and Patricia Ainembabazi |

As the world marks Human Rights Day 2024, themed "Our Rights, Our Future, Right Now", we are reminded of the urgent need to advance and protect human rights in an increasingly digital world. Today, CIPESA joins the world in commemorating Human Rights Day and reflecting on the immense opportunities that the digital age brings for the realisation of human rights. Indeed, this year's theme emphasises the need for immediate action to safeguard rights in the digital sphere for a just and equitable future.

Whereas human rights have traditionally been enjoyed in offline spaces, the digital landscape presents unprecedented opportunities for the enjoyment of a broad range of rights, including access to information, civic participation, and freedom of expression, assembly, and association. However, the potential of digital technology to catalyse the enjoyment of these rights has steadily been threatened by challenges such as internet shutdowns, regressive laws that enable governments to clamp down on the digital civic space, and the digital divide.

The threats to digital rights, democracy, and the rule of law in Africa are numerous. They are often the result of growing authoritarianism and repression, political instability, corruption, the breakdown of public institutions, gender disparities, and growing socio-economic inequalities. Below are key intervention areas to advance digital rights on the continent.

Combat Internet Shutdowns and Censorship

Internet shutdowns are increasingly used as a tool to suppress dissent, stifle freedom of expression, and restrict access to information and the freedoms of assembly and association. The #KeepItOn coalition documented at least 146 shutdown incidents in 37 African countries between January 2016 and June 2023. These disruptions continue despite evidence that they harm individuals' rights, are counterproductive for democracy, and have long-lasting impacts on national economies and individuals' livelihoods.

A separate survey of 53 African countries shows that, as of 2023, the majority (44) had restrictions on political media, 34 had implemented social media restrictions, two restricted VPN use, and seven restricted the use of messaging and Voice over IP (VoIP) applications.
Governments must commit to keeping the internet open and accessible, while telecom companies must uphold transparency and resist arbitrary shutdown orders. The recent Resolution 580 of the African Commission on Human and Peoples' Rights (ACHPR) should guide governments in keeping the internet on, including during electoral periods.

Curb Unmitigated Surveillance  

The privacy of individuals using digital technologies is critical to protecting the rights to privacy, freedom of expression, assembly, and association. Unregulated surveillance practices threaten privacy and freedom of expression across Africa, often targeting journalists, activists, and political opponents. In many countries, laws governing state surveillance have gaps that allow state institutions to target government critics or political opposition members by conducting surveillance without sufficient judicial, parliamentary, or other independent, transparent and accountable oversight. Governments must adopt robust data protection laws, ensure judicial oversight of surveillance, and implement transparency mechanisms to prevent abuse.

Through research and training, CIPESA has highlighted the dangers of mass surveillance and supported the development of data protection frameworks. Our work with National Human Rights Institutions in countries like Ethiopia has strengthened their capacity to monitor and address surveillance abuses. 

Combat Disinformation  

The proliferation of disinformation is detrimental to citizens' fundamental rights, including freedom of expression, access to information, freedom of assembly and association, and participation in democratic processes, especially elections. It also means that many citizens lack access to impartial and diverse information. Disinformation undermines trust, polarises societies, and disrupts democratic processes. Combating disinformation requires collaboration between governments, civil society, and the private sector on fact-checking, media literacy campaigns, and rights-respecting regulations.

Our extensive research on countering disinformation in Africa provides actionable recommendations for addressing this challenge. By partnering with media organisations, platforms, and fact-checking initiatives, CIPESA has promoted factual reporting and fought misinformation, particularly during elections.

Fight Technology-Facilitated Gender-Based Violence (TFGBV)

Online harassment and abuse disproportionately target women and marginalised groups, limiting their ability to engage freely in digital spaces. Governments, intermediaries, and civil society must collaborate to ensure safer online environments and provide support systems for victims. Also, African countries need clear laws against TFGBV, with attendant capacity development for the judiciary and law enforcers to implement those laws.

CIPESA continues to conduct workshops on addressing gender-based violence in digital spaces and supporting organisations working on these issues, equipping key actors with tools to report and counter this vice. Our advocacy efforts have also emphasised platform accountability and comprehensive anti-TFGBV policies. 

De-weaponise the Law

Emerging issues in the digital civic space, such as disinformation, misinformation, false news, national security and public order, have given authoritarian governments opportunities to weaponise laws in the name of curbing "abuse" by citizens. Unfortunately, these laws are deployed as repressive tools to curtail freedom of expression, access to information, and assembly and association online. Indeed, they have been used to gag the spaces within which freedoms were enjoyed, and to silence critics and dissenters. Governments should embark on a clear reform agenda to repeal all draconian legislation and enact progressive laws that align with established regional and international human rights standards.

As part of its efforts to expose civic space violations, CIPESA publishes policy briefs and legal analyses. We call on partners, collaborators and other tech sector players to amplify voices exposing the misuse of laws on information disorder, anti-cybercrime laws and other repressive legislation. Evidence-based advocacy can support successful challenges to unjust laws in courts, regional forums and human rights enforcement mechanisms, galvanising success across the continent.

Arrest the Digital Divide  

The digital divide remains a significant barrier to the enjoyment of rights and to inclusive citizen participation, with rural, underserved communities, and marginalised groups disproportionately affected. This divide excludes millions from accessing opportunities in education, healthcare, and economic participation. Common contributing factors include high internet usage costs, expensive digital devices, inadequate digital infrastructure and low digital literacy. Addressing this gap requires affordable internet, investment in rural connectivity, and digital literacy programmes.

CIPESA's research sheds light on the main barriers to connectivity and affordability, including the effective use of Universal Service Funds. Promoting inclusive digital access, particularly for marginalised communities, requires collective action from governments and other tech sector players aimed at enabling equitable access to, and utilisation of, digital tools.

Promote Multistakeholder Engagements  

The complexity of digital rights challenges necessitates continuous collaboration and partnership-building among governments, civil society, and private sector actors. CIPESA has facilitated multistakeholder dialogues that bring together diverse actors to address digital rights concerns, including national dialogues and the annual Forum on Internet Freedom in Africa (FIFAfrica). These engagements have led to actionable commitments from governments, civil society and other tech sector players, and strengthened partnerships for progressive reforms.


Last Word

CIPESA reaffirms its commitment to advancing digital rights for all across Africa. However, the challenges to meaningful enjoyment of digital rights and the advancement of digital democracy are myriad. The solutions lie in concerted efforts by various actors, including governments, the private sector, and civil society, all of whom must act now to protect digital rights for a better human rights future.

CIPESA Conducts Digital Rights Training for Ethiopian Human Rights Commission Staff

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has conducted a digital rights training for staff of the Ethiopian Human Rights Commission (EHRC) in a programme that benefitted 22 experts from various departments of the statutory entity. 

The training was a response to the desire by the commission to build its organisational capacity in understanding and defending digital rights and CIPESA’s vision to grow the ability of African national human rights institutions (NHRIs) to monitor, protect and promote digital freedoms.

Conducted in the Ethiopian capital Addis Ababa on October 11-12, 2023, the programme aimed to build the EHRC staff’s understanding of digital rights issues and the link with traditional rights. Participants went on to brainstorm how the EHRC should strengthen human rights protection in the digital space and through the use of technology.

Dr. Abdi Jibril, the Commissioner for Civil and Political and Socio-Economic Rights at the EHRC, noted that the proliferation of digital technology has contributed positively to human rights protection. It was therefore necessary to maximise the benefits of digital technology and to expand its usage for the promotion and enforcement of human rights.

The importance of growing the capacity of NHRIs was underscored by Line Gamrath Rasmussen, Senior Adviser on Human Rights, Tech and Business at the Danish Institute for Human Rights, and by CIPESA Executive Director Dr. Wairagala Wakabi. African NHRIs are not always well versed in the opportunities and challenges which technology presents, creating a need for capacity development and for partnerships with stakeholders such as civil society.

As legislation governing the technology domain is fast-evolving, NHRIs in many countries are playing catch-up. As such, these institutions need to constantly update themselves on new legislation and its implications for human rights in the digital domain. The NHRIs need to enhance their capacity to document, investigate and report on digital rights.

The NHRIs also need to pay specific attention to vices such as hate speech, disinformation, and technology-facilitated gender-based violence (TFGBV) that are being perpetuated with the aid of technology.

Dr. Daniel Bekele, the Chief Commissioner at EHRC, stated that social media companies and messaging platforms are not doing enough to moderate harmful content in Africa, yet in other regions they have invested more resources and effort in content moderation. He said African countries need to work together in order to build a strong force against the powerful platforms. The official proposed that the African Union (AU), working with relevant governments and other stakeholders, should spearhead the development of regulations which African countries can jointly use in their engagements with the tech giants on issues such as content moderation.

The two-day training discussed the positive and negative effects of digital technology on human rights and how the commission's work to enforce human rights can be strengthened through the use of digital technology.

Among other topics, the training also addressed the human rights, governance and technology landscape in Sub-Saharan Africa; public and private sector digitalisation and challenges for human rights; the link between online and offline rights; transparency and accountability of the private sector in upholding human rights; and opportunities for NHRIs to advance online rights at national, regional and international levels. It also featured deep dives into key digital rights concerns such as surveillance, online violence against women, disinformation, and network disruptions. 

At the end of the training, the EHRC staff identified key actions the commission could integrate in its annual work plans, such as digital rights monitoring, advocacy for enabling laws to be enacted, and developing tools for follow-up on implementation of recommendations on digital rights by treaty bodies and the Human Rights Council. Others were collaborations with local and regional actors including media, fact-checkers, civil society organisations, and platforms; working with the police and other national mechanisms to tackle hate speech and disinformation while protecting human rights; and conducting digital literacy programmes.

Trainers in the programme were drawn from CIPESA, the Centre for the Advancement of Rights and Democracy (CARD), the Danish Institute for Human Rights, the Centre for International Private Enterprise (CIPE), the African Centre for Media Excellence (ACME), Inform Africa, and the Kenya National Cohesion and Integration Commission (NCIC).

Meanwhile, after the aforementioned training, CIPESA teamed up with Ethiopian civil society partners to conduct a training on disinformation and hate speech for journalists, bloggers and digital rights activists. Like many African countries, Ethiopia is grappling with a significant and alarming rise in hate speech and disinformation, particularly on social media platforms. This surge in disinformation is undermining social cohesion, promoting conflict, and leading to a concerning number of threats against journalists and human rights defenders.

The proliferation of disinformation is detrimental to citizens' fundamental rights, as studies have shown that many Ethiopians feel their right to freedom of expression is compromised. The prevalence of disinformation also means that many Ethiopians lack access to impartial and diverse information.

Disinformation has been directly fuelling conflict in several regions of Ethiopia. According to workshop participants and reports, both pro-government and anti-government actors have perpetuated this vice, whose real-world consequences are severe, including the loss of life and large-scale violent events.

Whereas Ethiopia in 2020 enacted legislation to curb hate speech and disinformation, the effectiveness of this law has been called into question. Some critics argue that it has not been effectively implemented and could be used to undermine citizens’ rights.

The training equipped 21 journalists, bloggers and activists with knowledge to navigate this law and with skills to call out and fight disinformation and hate speech. The efforts of the trained journalists, and those which the human rights commission could implement, are expected to boost the fight against online harms and contribute to the advancement of digital rights in Ethiopia.

Smell The Coffee Kenya, Disinformation Is Brewing!

By Juliet Nanfuka |

Just over a year ago, Kenya was in the midst of a bitterly contested general election held in August 2022. The electoral period was characterised by hate speech and disinformation, which remain prevalent today. Indeed, recent studies have highlighted a booming disinformation industry in the country, fuelled by political, economic and personal interests with many actors including politicians, content creators, and citizens churning out hate speech and disinformation on social media platforms. 

During the election period, disinformation and hate speech circulated widely as social media personalities and ordinary citizens on various sides of the political divide coordinated and shared inciteful and hateful content. Influencers with a large following on the platforms are often bankrolled by politicians to recruit and coordinate micro-influencers to develop common disinformation and hate narratives and push hashtags which often trend on social media. Further, social media trolls engage through Facebook posts, tweets and WhatsApp groups with targeted hate against ethnic communities such as the Kalenjin, Kikuyu and Luo, the ethnic communities of current president William Ruto, former president Uhuru Kenyatta, and former Prime Minister Raila Odinga, respectively. 

Amidst the election-related disinformation blitz, social media platforms seemed to do either too little or nothing to stop the spread of harmful and illegal content. An investigation by Global Witness and Foxglove in June 2022 showed that Facebook failed to detect inflammatory and violent hate speech ads posted on its platforms in Swahili and English. Further, the investigation found that even after putting out a statement in July 2022 on its efforts to combat harmful content, including 37,000 pieces of Kenyan content removed from Facebook and Instagram for violating its hate speech policies, similar hate speech ads were later approved on their platforms. 

Likewise, in July 2022 Twitter was blamed by local actors for profiting from its negligence by allowing its trending topics section to be exploited by paid influencers to amplify malicious, coordinated, inauthentic attacks intended to silence Kenya's civil society, muddy its reputation and stifle the reach of its messaging. In September 2021, Twitter suspended 100 accounts from Kenya for violating the platform's manipulation and spam policy, after they were found to have been tweeting pre-determined hashtags meant to misinform the public and attack certain personalities. In June 2022, the company suspended 41 accounts promoting the hashtag #ChebukatiCannotBeTrusted, which suggested that the then Chairperson of the Independent Electoral and Boundaries Commission (IEBC) was supporting one of the presidential candidates.

TikTok, which has gained popularity among younger audiences, has also come under scrutiny after disinformation and hate content was found on its platform ahead of the August 2022 election. A study by Mozilla found 132 videos, viewed collectively over four million times, that were spreading hate speech and inciting violence against some ethnic communities. Some also featured synthetic and manipulated content couched as Netflix documentaries, news stories, fake opinion polls or fake tweets aimed at triggering fear and violence as was witnessed during the 2007 post-election period. According to the report, TikTok suffered context bias and its content moderation practices were not robust enough to tackle the spread of such content on its platform. TikTok has since removed the videos highlighted in the report.

According to Kenya’s hate speech watchdog, the National Cohesion and Integration Commission (NCIC), hate speech content is most prevalent on Facebook and Twitter. In July 2022, the NCIC ordered Meta to address hate speech and incitement on its Facebook platform within a week or face a suspension of its operations in the country. In August 2022, the Commission also found an increase in hate content on TikTok. Some of the hate and disinformation hashtags it identified on the various platforms included #RejectRailaOdinga, #Riggagy and #RutoMalizaUfungwe, which propagated falsehoods against candidates in the presidential election.

Some critics have argued that social media platforms have shown a consistent failure to adequately moderate content in Kenya. Furthermore, the platforms’ attempts at content moderation are implemented in a lacklustre, under-funded and opaque system that is neither participatory nor rights-respecting. Other studies have also shown that platforms continue to inconsistently enforce their own rules through flawed content moderation practices and in essence permit the spread of extreme, divisive and polarising content partly due to their lack of understanding of Kenya’s cultural context and local languages and slang.

The government’s attempts at legislating on disinformation and hate speech have not been without setbacks. In 2018, the Computer Misuse and Cybercrimes Act, 2018 was adopted, imposing punitive fines and prison terms on the publication of false information and false, misleading and fictitious data. Unfortunately, these provisions have been unjustly used to target bloggers for exposing corruption or seeking state accountability. 

A case by the Bloggers Association of Kenya challenging the constitutionality of the law remains pending on appeal, following a February 2020 High Court decision that allowed enforcement of the law. Meanwhile, Section 13 of the National Cohesion and Integration Act constricts the definition of hate speech to "ethnic hatred" and fails to capture the constitutional limitations under Article 33, which include propaganda for war, incitement to violence, hate speech, and advocacy of hatred. This means various hate speech content remains lawful in the absence of a clear criminal prohibition.

Moreover, the NCIC, which was formed following the 2007 post-election violence, has been plagued by numerous challenges in its attempt to fight hate and promote peace and national cohesion. For most of its active life, the commission has been underfunded, hindering its ability to monitor hate speech online, investigate incidents and conduct public awareness and engagements. Further, political interference with its work means it has been unable to enforce the law and secure convictions of offenders, who are mostly the political elite.

More importantly, successive government administrations have failed to implement the recommendations of the Truth, Justice and Reconciliation Commission (TJRC) report to address the drivers of hate and disinformation. The report identified those drivers as Kenya's historical inter-ethnic tensions, which are systemic and deep-rooted in its social, cultural, religious, economic and political fabric. Disinformation and hate speech in Kenya thrive on these unresolved historical tensions around political ideology, ethnicity, economics, and demography.

Today, a majority of social media users in Kenya are aware of and fuel hate speech and disinformation on social media. To some, it is all fun and games, as they assume no feelings get hurt. To many, however, disinformation triggers pain, fear, tension and hate. Last year, a local politician advised Kenyans to put matters of politics in their lungs, not their hearts. This attitude is also a problem, as such views may breed a level of acceptance and normalisation of disinformation and hate speech in the country by encouraging people to grow a ‘thick skin’ instead of objectively addressing the root causes of the vice. People, including Kenyans, are known to act on their feelings. As we have seen in neighbouring countries such as the Democratic Republic of Congo, Ethiopia and Sudan, hate speech and disinformation can drive violence with devastating consequences. 

The failure to resolve Kenya’s underlying tensions means the country risks further social division and fragmentation of society as well as diminished progress due to a continuation of governance policies and practices that further entrench discrimination and exclusion in accessing opportunities, resources and essential services. The hate that arises from the effects of such policies and practices, and the disinformation deployed to justify and perpetuate them, affects people’s mental health and emotional well-being. Moreover, they cement long-held historical fears, suspicions and animosity that continue to undermine the ability of Kenyans to trust each other or the government and could inhibit the willingness of sections of the public to cooperate in nation-building for the common good. 

Be that as it may, there are some promising efforts, such as the recently launched local coalition on freedom of expression and content moderation and the draft guidelines for regulating digital platforms spearheaded by UNESCO that seek to promote engagement and tackle content that potentially damages human rights and democracy. The multistakeholder coalition is an initiative of UNESCO and ARTICLE 19 that aims to bridge the gap between local stakeholders and social media companies and to improve content moderation practices, including supporting regulatory reform, building the capacity of state and non-state actors, and raising awareness on the ethical use of digital platforms. While these twin initiatives are new and largely untested, they present an opportunity to ensure more rights-respecting content moderation practices, the application of common norms based on human rights standards and stronger multistakeholder engagement in the content moderation process.

Finally, it may be easy to blame social media companies for the weaknesses in their content moderation systems, and by all means they need to be held to account. However, better algorithms alone cannot fix our society or our social norms. Kenyans must wake up and smell the coffee. Leaders need to drop the divisive acts and work together with stakeholders and citizens to address historical tensions and foster a culture of inclusion, tolerance, respect and understanding. While at it, they should promote responsible social media use, fact-checking, and media literacy in order to counter the negative impact of hate speech and disinformation and ultimately build a more just, harmonious, democratic and equitable society.

Disinformation and Hate Speech Continue to Fuel the Conflict in Eastern DR Congo 

By Nadine Kampire Temba and CIPESA Writer |

The Democratic Republic of Congo (DR Congo) continues to witness an information war, characterised by spiralling incitement, misinformation, disinformation and hate speech on social media. The state of affairs has undermined cohesion between communities and continues unabated due to various factors. 

Firstly, the country’s long history of political instability has created an environment where misinformation, disinformation and hate speech thrive. Over the past three decades, the DR Congo has witnessed cyclic and indiscriminate violence, and internationalised conflict. These have been fuelled by impunity for atrocities, endemic corruption, poor governance, leadership wrangles and differences over the control of an estimated USD 24 trillion worth of mineral resources, pitting the government against neighbouring countries, at least 120 armed groups and other parties. 

The instability has left at least 24.6 million people (one in four Congolese) at risk of food insecurity and a further six million citizens internally displaced, the highest on the continent. Human rights violations have remained commonplace despite years of humanitarian interventions. More recently, the conflict has escalated in Ituri and Kivu provinces in the eastern part of the country. Violence between government forces and armed groups has led to the death of at least 1,300 people since October 2022 and forced about 300,000 civilians to flee their homes and villages. 

Secondly, divisions among the country’s diverse ethnic groups have contributed to the escalation of tensions and hostility, while disputes with neighbouring Rwanda have led to the deterioration of diplomatic relations between the two states. Congo, United Nations experts and western governments accuse Rwanda of backing the March 23 Movement (M23) rebel group which continues to extend its control in North Kivu – accusations Rwanda and the rebel group deny. 

The Congolese government has labelled the M23 a “terrorist movement” and blamed it for committing atrocities, including summary executions and forced enlistment of civilians. In January 2023, Congo accused Rwanda of shooting down its fighter jet and described this as a “deliberate act of aggression that amounts to an act of war”. Rwanda claimed the jet had violated its airspace on three occasions. This came eight months after Kinshasa banned Rwanda Air from its airspace.

For its part, the Rwanda government accuses the Congolese army of utilising proxy armed groups, such as the Democratic Forces for the Liberation of Rwanda (Les Forces démocratiques de libération du Rwanda, FDLR), to contain the M23 offensive and to destabilise Rwanda.

The strained relations between the two countries, coupled with social divisions based on ethnicity, religion, nationality and political affiliation, continue to be exploited by politicians and groups affiliated to both countries to create tension and fuel hate speech and disinformation online and offline. On October 30, 2022, the Congolese government ordered the Rwandan ambassador, Vincent Karega, to leave the country within 48 hours in retaliation for Kigali’s alleged support to the M23. The DR Congo also recalled its top diplomat from Rwanda in a further souring of relations. A day later, on October 31, 2022, thousands of Congolese citizens, mostly in Goma city, attended anti-Rwanda protests to denounce Kigali’s alleged support of the M23, a mostly Congolese Tutsi group and what they called the “hypocrisy of the international community in the face of Rwanda’s aggression.” 

During the protests, names of individuals identified as Rwandans were read out, resulting in attacks and lynching of some individuals. In response, online trolls affiliated to Rwanda have targeted Congolese political leaders, journalists and civil society leaders. The targets of attacks in Congo have included the Banyarwanda (Tutsi) and the Banyamulenge whose citizenship, equal rights and belonging in DR Congo have been constantly questioned. They often face threats, attacks, and dehumanising stereotypes as they are perceived as foreigners or Rwandan implants supporting the M23 rebellion. 

Amidst these tensions is a weak and underdeveloped media environment in both DR Congo and Rwanda, coupled with low media literacy among the population, which enables the spread of false information without it being challenged or fact-checked. The situation has been further complicated by the lack of content moderation skills and tools, and of editorial guidance, among local media outlets and journalists working on both sides of the border.

Media houses have to compete with social media platforms, where users have found a sense of "community" by connecting with a variety of actors at the national level and in the diaspora. These actors anonymously disseminate and amplify well-scripted radical messages, conspiracy theories and polarising narratives to wider audiences, appealing to ethnic loyalties and sowing discord among communities. Some of the rival groups have deployed bots and trolls in order to manipulate public opinion on social media.

According to Congolese journalist Desanges Kihuha, the media that are committed to providing truthful information are struggling to match the speed at which conflict-related disinformation and misinformation are spreading on social media platforms due to limited skills and funding. Thus, any actor intent on spreading false information can publish information online where it can easily gain virality without being fact-checked. “In the current context of war and insecurity in North Kivu province, misinformation continues to spread at a fast rate due to the use of digital and social media. Unfortunately, there is little press coverage of this phenomenon of hate speech and fake news,” says Kihuha.

Related to the above is that a significant part of the population, especially those in rural areas, lack access to accurate, verifiable and reliable information, while at the same time, the youth rely on social media for information. In addition, the social and economic challenges affecting the public, such as high poverty levels and limited access to basic services and infrastructure, create frustration, resentment, anger and distrust of the state, making the public vulnerable to exploitation. 

As a result, politicians, armed groups and their allies exploit these vulnerabilities: they manipulate public opinion to generate support for their extremist political views or groups, and channel public anger into hate speech and disinformation that further escalates the ethnic and regional conflicts. Theogene Magarambe, a Rwandan journalist, describes this as the “instrumentalisation of the M23 insurgency” to distract the public from governance shortcomings and the failure to restore peace and the rule of law in Kivu.

The failure by governments on both sides of the border to create an environment that pushes back against political polarisation and disinformation online is widely acknowledged. “At the practical level, policies related to content moderation and regulation are currently inexistent, though we are engaging cross-border communities in order to create space for dialogue, hosting workshops and platforms where we exchange knowledge,” says Marion Ngavho Kambale, head of the civil society of North Kivu province. Magarambe adds: “Today the true legitimacy test for any credible government is whether it can implement legal safeguards on privately developed technologies and hold platform operators accountable for failure to moderate content.”

Critics point to the structural design of social media platforms, whose business models and algorithms mostly prioritise content based on its engagement value rather than its accuracy or truth. Platforms such as Facebook have been criticised for inaction in the face of online ethnic incitement and massacres in Ethiopia – a potential risk in the DR Congo-Rwanda conflict. As Arsene Tungali, a digital rights and internet governance expert, observed, inadequate action by the likes of Facebook, Twitter, and WhatsApp means “the devastating effects of political polarisation, hate speech and disinformation peddled on social media remain a problem.”

Louis Gitinywa, a digital rights and technology lawyer, says although the internet has offered citizens and private actors in the two countries a robust civic space for organising and engaging with key societal priorities, the “lawlessness and disinformation online … continue to contribute to fighting and killings”. 

Overall, addressing hate and disinformation in the DR Congo-Rwanda conflict will require a sustained and coordinated effort from multiple actors. Looking ahead, there is a critical need to build capacity and expertise amongst all stakeholders in order to formulate effective content moderation strategies. This includes building the legal expertise and strategic litigation needed to hold social media platforms such as Facebook and Twitter liable for failing to put in place adequate measures to moderate content in native and indigenous languages.

Further, since media literacy is limited, it is important to build the capacity of journalists, media practitioners and civil society reporting on the conflict. They need awareness of the complex information environment, skills in fact-checking, professional ethics and content moderation, as well as professional networks for sharing credible information with counterparts across borders, and they must avoid sensationalism in their reporting. They can also use available platforms to promote responsible social media use, tolerance and dialogue between different groups in order to build trust. Moreover, all actors should desist from propagating hate speech and disinformation.

Nadine Kampire Temba is a journalist and digital rights lawyer based in Goma city, DR Congo, and a fellow with CIPESA. She coordinates Afia Amani Grands Lacs, an online media outlet that undertakes fact-checking and defends press freedom in the Great Lakes Region. You can follow her on Twitter @nadineKampire

Countering Digital Authoritarianism in Africa

By Apolo Kakaire |

The internet, widely viewed as a panacea for democracy, participation and inclusion, is increasingly becoming a tool of repression deployed by regimes across the world to stifle rights and voice. Africa, a continent already replete with poor democratic credentials and practices, seems to be rapidly catching up on the new ‘epidemic’: digital authoritarianism.

The use of technology to advance repressive political interests has come to be referred to as digital authoritarianism. However, the tactics employed by authoritarian regimes have also been deployed by democratic states for surveillance, the spread of misinformation and disinformation, and the disruption of civic and political participation, under the pretext of fighting cybercrime, protecting national security, and maintaining public order.

Big technology companies are key drivers of digital authoritarianism through the creation, innovation and supply of repressive technology and related support. Moreover, political parties, interest groups, and smaller private companies have lapped it up too, developing and using tools and strategies of digital authoritarianism.

Digital authoritarianism is a revealing case study for understanding the impact of technology on human rights. While laws legalising surveillance, the interception of communications, and widespread data collection and processing may not be a problem in themselves, the ambiguity often present in those laws gives governments wide latitude of interpretation, facilitating rights abuses – and that is the growing challenge.

At the Forum on Internet Freedom in Africa 2022 (FIFAfrica22), Global Voices Advox shared findings from the Unfreedom Monitor – a project exploring the political and social context that fuels the emergence of digital authoritarianism in 17 countries. It hosted a panel discussion in which project researchers from India, Nigeria, Sudan and Zimbabwe presented findings on the connections between political contexts, analogue rights, and the growing use of digital communications technology to advance authoritarian governance.

The findings paint a grim picture for freedom of the media, freedom of expression, and democracy in general. In Zimbabwe, for instance, the Unfreedom Monitor report notes that “the press walks a precarious line between national security and the professional obligation to report truthfully” on issues in the country. The observation is replicated in the mapping conducted in Morocco, Egypt, and Tanzania.

In Sudan, where internet censorship, repressive laws, curtailed liberties and network disruptions are commonplace, Khattab Hamad noted that the motives of digital authoritarianism include fear of losing power, protecting regional or international alliances, geopolitical considerations, and protecting private and family interests. He added that terrorism and support for terrorist groups was another motive for authoritarianism in the country.

In Tanzania, researchers found that laws are often enacted as precursors that enable various methods of digital authoritarianism. One example is the Cybercrime Act, which was hurriedly enacted just months before the October 2015 elections. “There were many other such laws, including the amendments to the Non-Governmental Organisations (NGO) Act, that saw NGOs being deregistered and control over them tightened in the lead-up to the 2020 elections,” they revealed.

In Uganda, network disruptions in the run-up to and during recent elections are another example of digital authoritarianism. “Sometimes the internet is restored after elections. So, the question is: what exactly is the purpose? What are you hiding? Why do you deny your people access to information? Internet shutdowns also question the credibility of elections,” said Felicia Anthonio of Access Now. She added that network disruptions affect engagement between voters and political candidates, in addition to limiting electoral oversight and monitoring by human rights activists and election observers.

As part of the Unfreedom Monitor project, Global Voices Advox has established a publicly available database on digital authoritarianism to support advocacy in light of the “urgency of a fast deteriorating situation”, said Sindhuri Nandhakumar, a researcher with the project.

While applauding the research and database for supporting evidence-based advocacy, digital rights activists at FIFAfrica22 noted that, given the behaviour of authoritarian regimes, advocacy at the national level may be met with considerable resistance. As such, they called for more engagement through special mandates and periodic human rights review mechanisms at the African Union (AU) and the United Nations Human Rights Council.

“Advocacy [against digital authoritarianism] at national level will be difficult. Positive results could be registered through Special rapporteurs at the AU and states through the Universal Periodic Review (UPR), towards securing accountability”, said Arsene Tungali from the Democratic Republic of Congo.

For African digital rights activists, the Global Voices Advox research and database open up new avenues for collaborative advocacy and transnational opportunities for interventions to stem the spread of digital authoritarianism. The findings, however, also point to the need for a concerted and robust response to its growing traction.

Elections in Africa remain a major flashpoint for digital authoritarianism, rife with all manner of manipulation of voters, narratives and even results, which makes them a key area for transnational cooperation. Ahead of Zimbabwe’s elections, slated for July-August 2023, Advox will develop tips for raising awareness about voter rights and the role of technology in elections. Zimbabwe provides a good opportunity to pilot, learn from, and perhaps adopt interventions to counter this behemoth.