CIPESA Conducts Digital Rights Training for Ethiopian Human Rights Commission Staff

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has conducted a digital rights training for staff of the Ethiopian Human Rights Commission (EHRC) in a programme that benefitted 22 experts from various departments of the statutory entity. 

The training was a response to the desire by the commission to build its organisational capacity in understanding and defending digital rights and CIPESA’s vision to grow the ability of African national human rights institutions (NHRIs) to monitor, protect and promote digital freedoms.

Conducted in the Ethiopian capital Addis Ababa on October 11-12, 2023, the programme aimed to build the EHRC staff’s understanding of digital rights issues and the link with traditional rights. Participants went on to brainstorm how the EHRC should strengthen human rights protection in the digital space and through the use of technology.

Dr. Abdi Jibril, the Commissioner for Civil and Political and Socio-Economic Rights at the EHRC, noted that the proliferation of digital technology has contributed positively to human rights protection. It was therefore necessary to maximise the benefits of digital technology and to expand its usage for the promotion and enforcement of human rights.

The importance of growing the capacity of NHRIs was underscored by Line Gamrath Rasmussen, Senior Adviser, Human Rights, Tech and Business at the Danish Institute for Human Rights and CIPESA Executive Director Dr. Wairagala Wakabi. African NHRIs are not always well versed with the opportunities and challenges which technology presents, which creates a need for capacity development and developing partnerships with stakeholders such as civil society. 

As legislation governing the technology domain is fast-evolving, NHRIs in many countries are playing catch-up. As such, these institutions need to constantly update themselves on new legislation and the implications of these laws for human rights in the digital domain. The NHRIs also need to enhance their capacity to document, investigate and report on digital rights.

The NHRIs also need to pay specific attention to vices such as hate speech, disinformation, and technology-facilitated gender-based violence (TF GBV) that are being perpetuated with the aid of technology.

Dr. Daniel Bekele, the Chief Commissioner at EHRC, stated that social media companies and messaging platforms are not doing enough to moderate harmful content in Africa yet in other geographical regions they have invested more resources and efforts in content moderation. He said African countries need to work together in order to build a strong force against the powerful platforms. The official proposed that the African Union (AU), working with relevant governments and other stakeholders, should spearhead the development of regulations which African countries can jointly use in their engagements with the tech giants on issues such as content moderation. 

The two-day training discussed the positive and negative effects of digital technology on human rights and how the commission’s work to enforce human rights can be strengthened through the use of digital technology.

Among other topics, the training also addressed the human rights, governance and technology landscape in Sub-Saharan Africa; public and private sector digitalisation and challenges for human rights; the link between online and offline rights; transparency and accountability of the private sector in upholding human rights; and opportunities for NHRIs to advance online rights at national, regional and international levels. It also featured deep dives into key digital rights concerns such as surveillance, online violence against women, disinformation, and network disruptions. 

At the end of the training, the EHRC staff identified key actions the commission could integrate in its annual work plans, such as digital rights monitoring, advocacy for enabling laws to be enacted, and developing tools for follow up on implementation of recommendations on digital rights by treaty bodies and the Human Rights Council. Others were collaborations with local and regional actors including media, fact-checkers, civil society organisations, and platforms; working with the police and other national mechanisms to tackle hate speech and disinformation while protecting human rights; and conducting digital literacy.

Trainers in the programme were drawn from CIPESA, the Centre for the Advancement of Rights and Democracy (CARD), the Danish Institute for Human Rights, the Centre for International Private Enterprise (CIPE), the African Centre for Media Excellence (ACME), Inform Africa, and the Kenya National Cohesion and Integration Commission (NCIC).

Meanwhile, following the training, CIPESA teamed up with Ethiopian civil society partners to conduct a training on disinformation and hate speech for journalists, bloggers and digital rights activists. Like many African countries, Ethiopia is grappling with an alarming rise in hate speech and disinformation, particularly on social media platforms. This surge in disinformation is undermining social cohesion, promoting conflict, and leading to a concerning number of threats against journalists and human rights defenders.

The proliferation of disinformation is a threat to citizens’ fundamental rights, as studies have shown that many Ethiopians feel their right to freedom of expression is compromised. The prevalence of disinformation also means that many Ethiopians lack access to impartial and diverse information.

Disinformation has been directly fueling conflict in several regions of Ethiopia. According to workshop participants and reports, both pro-government and anti-government actors have perpetuated this vice, whose real-world consequences are severe, including the loss of life and large-scale violent events.

Whereas Ethiopia in 2020 enacted legislation to curb hate speech and disinformation, the effectiveness of this law has been called into question. Some critics argue that it has not been effectively implemented and could be used to undermine citizens’ rights.

The training equipped 21 journalists, bloggers and activists with knowledge to navigate this law and with skills to call out and fight disinformation and hate speech. The efforts of the trained journalists, and those which the human rights commission could implement, are expected to boost the fight against online harms and contribute to the advancement of digital rights in Ethiopia.

Smell The Coffee Kenya, Disinformation Is Brewing!

By Juliet Nanfuka |

Just over a year ago, Kenya was in the midst of a bitterly contested general election held in August 2022. The electoral period was characterised by hate speech and disinformation, which remain prevalent today. Indeed, recent studies have highlighted a booming disinformation industry in the country, fuelled by political, economic and personal interests with many actors including politicians, content creators, and citizens churning out hate speech and disinformation on social media platforms. 

During the election period, disinformation and hate speech circulated widely as social media personalities and ordinary citizens on various sides of the political divide coordinated and shared inciteful and hateful content. Influencers with a large following on the platforms are often bankrolled by politicians to recruit and coordinate micro-influencers to develop common disinformation and hate narratives and push hashtags which often trend on social media. Further, social media trolls engage through Facebook posts, tweets and WhatsApp groups with targeted hate against ethnic communities such as the Kalenjin, Kikuyu and Luo, the ethnic communities of current president William Ruto, former president Uhuru Kenyatta, and former Prime Minister Raila Odinga, respectively. 

Amidst the election-related disinformation blitz, social media platforms seemed to do either too little or nothing to stop the spread of harmful and illegal content. An investigation by Global Witness and Foxglove in June 2022 showed that Facebook failed to detect inflammatory and violent hate speech ads posted on its platforms in Swahili and English. Further, the investigation found that even after putting out a statement in July 2022 on its efforts to combat harmful content, including 37,000 pieces of Kenyan content removed from Facebook and Instagram for violating its hate speech policies, similar hate speech ads were later approved on their platforms. 

Likewise, in July 2022 Twitter was blamed by local actors for profiting from its negligence by allowing its trending topic section to be exploited through paid influencers to amplify malicious, coordinated, inauthentic attacks to silence Kenya’s civil society, muddy their reputations and stifle the reach of their messaging. In September 2021, Twitter suspended 100 accounts from Kenya for violating the platform’s manipulation and spam policy after they were found to have been tweeting pre-determined hashtags meant to misinform the public and attack certain personalities. In June 2022, the company suspended 41 accounts promoting the hashtag #ChebukatiCannotBeTrusted, which suggested that the then Chairperson of the Independent Electoral and Boundaries Commission (IEBC) was supporting one of the presidential candidates.

TikTok, which has gained popularity among younger audiences, has also come under scrutiny after disinformation and hate content was found on its platform ahead of the August 2022 election. A study by Mozilla found 132 videos, viewed collectively over four million times, that were spreading hate speech and inciting violence against some ethnic communities. Some also featured synthetic and manipulated content couched as Netflix documentaries, news stories, fake opinion polls or fake tweets aimed at triggering fear and violence, as was witnessed during the 2007 post-election period. According to the report, TikTok suffered context bias and its content moderation practices were not robust enough to tackle the spread of such content on its platform. TikTok has since removed the videos highlighted in the report.

According to Kenya’s hate speech watchdog, the National Cohesion and Integration Commission (NCIC), hate speech content is most prevalent on Facebook and Twitter. In July 2022, the NCIC ordered Meta to address hate speech and incitement on its Facebook platform within a week or face a suspension of its operations in the country. In August 2022, the Commission also found an increase in hate content on TikTok. Some of the hate and disinformation hashtags it identified on the various platforms included #RejectRailaOdinga, #Riggagy and #RutoMalizaUfungwe, which propagated falsehoods against candidates in the presidential election.

Some critics have argued that social media platforms have shown a consistent failure to adequately moderate content in Kenya. Furthermore, the platforms’ attempts at content moderation are implemented in a lacklustre, under-funded and opaque system that is neither participatory nor rights-respecting. Other studies have also shown that platforms continue to inconsistently enforce their own rules through flawed content moderation practices and in essence permit the spread of extreme, divisive and polarising content partly due to their lack of understanding of Kenya’s cultural context and local languages and slang.

The government’s attempts at legislating on disinformation and hate speech have not been without setbacks. In 2018, the Computer Misuse and Cybercrimes Act, 2018 was adopted, imposing punitive fines and prison terms on the publication of false information and false, misleading and fictitious data. Unfortunately, these provisions have been unjustly used to target bloggers for exposing corruption or seeking state accountability. 

A case by the Bloggers Association of Kenya challenging the constitutionality of the law remains pending on appeal, following a February 2020 High Court decision that allowed enforcement of the law. Section 13 of the National Cohesion and Integration Act constricts the definition of hate speech to “ethnic hatred” and fails to capture the constitutional limitations under Article 33, which include propaganda for war, incitement to violence, hate speech, and advocacy of hatred. This means various hate speech content remains lawful in the absence of a clear criminal prohibition.

Moreover, the NCIC, which was formed following the 2007 post-election violence, has been plagued by numerous challenges in its attempt to fight hate and promote peace and national cohesion. The commission for most of its active life has been underfunded, thus hindering its ability and capacity to monitor hate speech online, investigate incidents and conduct public awareness and engagements. Further, political interference with its work means that it has been incapable of enforcing the law to secure successful convictions of offenders, who are mostly the political elite.

More importantly, successive government administrations have failed to implement the recommendations of the Truth Justice and Reconciliation Commission (TJRC) report to address the drivers of hate and disinformation. The report identified those drivers as Kenya’s historical inter-ethnic tensions that are systemic and deep-rooted in its social, cultural, religious, economic and political fabric. Disinformation and hate speech in Kenya thrive on these unresolved historical tensions around political ideology, ethnicity, economics, and demography.  

Today, a majority of social media users in Kenya are aware of and fuel hate speech and disinformation on social media. To some, it is all fun and games, as they assume no feelings get hurt. To many, however, disinformation triggers pain, fear, tension and hate. Last year, a local politician advised Kenyans to put matters of politics in their lungs, not their hearts. This attitude is also a problem, as such views may breed a level of acceptance and normalisation of disinformation and hate speech in the country by encouraging people to grow a ‘thick skin’ instead of objectively addressing the root causes of the vice. People, including Kenyans, are known to act on their feelings. As we have seen in neighbouring countries such as the Democratic Republic of Congo, Ethiopia and Sudan, hate speech and disinformation can drive violence with devastating consequences. 

The failure to resolve Kenya’s underlying tensions means the country risks further social division and fragmentation of society as well as diminished progress due to a continuation of governance policies and practices that further entrench discrimination and exclusion in accessing opportunities, resources and essential services. The hate that arises from the effects of such policies and practices, and the disinformation deployed to justify and perpetuate them, affects people’s mental health and emotional well-being. Moreover, they cement long-held historical fears, suspicions and animosity that continue to undermine the ability of Kenyans to trust each other or the government and could inhibit the willingness of sections of the public to cooperate in nation-building for the common good. 

Be that as it may, there are some promising efforts, such as the recently launched local coalition on freedom of expression and content moderation and the draft guidelines for regulating digital platforms spearheaded by UNESCO that seek to promote engagement and tackle content that potentially damages human rights and democracy. The multistakeholder coalition is an initiative of UNESCO and ARTICLE 19 that aims to bridge the gap between local stakeholders and social media companies and to improve content moderation practices, including supporting regulatory reform, building the capacity of state and non-state actors, and raising awareness on the ethical use of digital platforms. While these twin initiatives are new and largely untested, they present an opportunity to ensure more rights-respecting content moderation practices, the application of common norms based on human rights standards and stronger multistakeholder engagement in the content moderation process.

Finally, it may be easy to blame social media companies for the weaknesses in their content moderation systems, and by all means they need to be held to account. However, better algorithms alone cannot fix our society or our social norms. Kenyans must wake up and smell the coffee. Leaders need to drop the divisive acts and work together with stakeholders and citizens to address historical tensions and foster a culture of inclusion, tolerance, respect and understanding. While at it, they should promote responsible social media use, fact-checking, and media literacy in order to counter the negative impact of hate speech and disinformation and ultimately build a more just, harmonious, democratic and equitable society.

Disinformation and Hate Speech Continue to Fuel the Conflict in Eastern DR Congo 

By Nadine Kampire Temba and CIPESA Writer |

The Democratic Republic of Congo (DR Congo) continues to witness an information war, characterised by spiralling incitement, misinformation, disinformation and hate speech on social media. The state of affairs has undermined cohesion between communities and continues unabated due to various factors. 

Firstly, the country’s long history of political instability has created an environment where misinformation, disinformation and hate speech thrive. Over the past three decades, the DR Congo has witnessed cyclic and indiscriminate violence, and internationalised conflict. These have been fuelled by impunity for atrocities, endemic corruption, poor governance, leadership wrangles and differences over the control of an estimated USD 24 trillion worth of mineral resources, pitting the government against neighbouring countries, at least 120 armed groups and other parties. 

The instability has left at least 24.6 million people (one in four Congolese) at risk of food insecurity and a further six million citizens internally displaced, the highest on the continent. Human rights violations have remained commonplace despite years of humanitarian interventions. More recently, the conflict has escalated in Ituri and Kivu provinces in the eastern part of the country. Violence between government forces and armed groups has led to the death of at least 1,300 people since October 2022 and forced about 300,000 civilians to flee their homes and villages. 

Secondly, divisions among the country’s diverse ethnic groups have contributed to the escalation of tensions and hostility, while disputes with neighbouring Rwanda have led to the deterioration of diplomatic relations between the two states. Congo, United Nations experts and western governments accuse Rwanda of backing the March 23 Movement (M23) rebel group which continues to extend its control in North Kivu – accusations Rwanda and the rebel group deny. 

The Congolese government has labelled the M23 a “terrorist movement” and blamed it for committing atrocities, including summary executions and forced enlistment of civilians. In January 2023, Congo accused Rwanda of shooting down its fighter jet and described this as a “deliberate act of aggression that amounts to an act of war”. Rwanda claimed the jet had violated its airspace on three occasions. This came eight months after Kinshasa banned Rwanda Air from its airspace.

For its part, the Rwanda government accuses the Congolese army of utilising proxy armed groups, such as the Democratic Forces for the Liberation of Rwanda (Les Forces démocratiques de libération du Rwanda, FDLR), to contain the M23 offensive and to destabilise Rwanda.

The strained relations between the two countries, coupled with social divisions based on ethnicity, religion, nationality and political affiliation, continue to be exploited by politicians and groups affiliated to both countries to create tension and fuel hate speech and disinformation online and offline. On October 30, 2022, the Congolese government ordered the Rwandan ambassador, Vincent Karega, to leave the country within 48 hours in retaliation for Kigali’s alleged support to the M23. The DR Congo also recalled its top diplomat from Rwanda in a further souring of relations. A day later, on October 31, 2022, thousands of Congolese citizens, mostly in Goma city, attended anti-Rwanda protests to denounce Kigali’s alleged support of the M23, a mostly Congolese Tutsi group and what they called the “hypocrisy of the international community in the face of Rwanda’s aggression.” 

During the protests, names of individuals identified as Rwandans were read out, resulting in attacks and lynching of some individuals. In response, online trolls affiliated to Rwanda have targeted Congolese political leaders, journalists and civil society leaders. The targets of attacks in Congo have included the Banyarwanda (Tutsi) and the Banyamulenge whose citizenship, equal rights and belonging in DR Congo have been constantly questioned. They often face threats, attacks, and dehumanising stereotypes as they are perceived as foreigners or Rwandan implants supporting the M23 rebellion. 

Amidst these tensions is a weak and underdeveloped media environment in both DR Congo and Rwanda, coupled with low media literacy among the population, which enables the spread of false information without it being challenged or fact-checked. The situation has been further complicated by the lack of both skills and tools in content moderation and editorial guidance on the part of local media outlets and journalists working from both sides of the border.

Media houses have to compete with social media platforms, where users have found a sense of “community” by connecting with a variety of actors at the national level and in the diaspora who anonymously disseminate and amplify well-scripted radical messages, conspiracy theories and polarising narratives to wider audiences, appealing to ethnic loyalties and sowing discord among communities. Some of the rival groups have deployed bots and trolls in order to manipulate public opinion on social media.

According to Congolese journalist Desanges Kihuha, the media that are committed to providing truthful information are struggling to match the speed at which conflict-related disinformation and misinformation are spreading on social media platforms due to limited skills and funding. Thus, any actor intent on spreading false information can publish information online where it can easily gain virality without being fact-checked. “In the current context of war and insecurity in North Kivu province, misinformation continues to spread at a fast rate due to the use of digital and social media. Unfortunately, there is little press coverage of this phenomenon of hate speech and fake news,” says Kihuha.

Relatedly, a significant part of the population, especially in rural areas, lacks access to accurate, verifiable and reliable information, while the youth rely on social media for information. In addition, the social and economic challenges affecting the public, such as high poverty levels and limited access to basic services and infrastructure, create frustration, resentment, anger and distrust of the state, making the public vulnerable to exploitation.

As a result, politicians, armed groups and their allies exploit these vulnerabilities to create tension, manipulating public opinion to generate support for their extremist political views or groups and channelling public anger into hate speech and disinformation that further escalates ethnic and regional conflicts. Theogene Magarambe, a Rwandan journalist, describes this as the “instrumentalisation of the M23 insurgency” to distract the public from governance shortcomings and the failure to restore peace and rule of law in Kivu.

The failure by governments on both sides of the border to create an environment to push back against the political polarisation and disinformation online is widely acknowledged. “At the practical level, policies related to content moderation and regulation are currently non-existent, though we are engaging cross-border communities in order to create space for dialogue, hosting workshops and platforms where we exchange knowledge,” says Marion Ngavho Kambale, who is the head of the civil society of the North Kivu province. Magarambe adds: “Today the true legitimacy test for any credible government is whether it can implement legal safeguards on privately developed technologies and hold platform operators accountable for failure to moderate content.”

Critics point to the challenge of the structural conception of social media platforms, whose business models and algorithms mostly prioritise content based on its engagement value rather than its accuracy or truth. Platforms such as Facebook have been criticised for inaction in the face of online ethnic incitement and massacres in Ethiopia – a potential risk in the DR Congo-Rwanda conflict. As Arsene Tungali, a digital rights and internet governance expert, observed, inadequate actions by the likes of Facebook, Twitter, and WhatsApp mean “the devastating effects of political polarisation, hate speech and disinformation peddled on social media remain a problem.”

Louis Gitinywa, a digital rights and technology lawyer, says although the internet has offered citizens and private actors in the two countries a robust civic space for organising and engaging with key societal priorities, the “lawlessness and disinformation online … continue to contribute to fighting and killings”. 

Overall, addressing hate and disinformation in the DR Congo-Rwanda conflict will require a sustained and coordinated effort from multiple actors. Looking ahead, there is a critical need to build capacity and expertise amongst all the stakeholders in order to formulate effective strategies for content moderation. This includes building the legal expertise and strategic litigation capacity to hold social media platforms such as Facebook and Twitter liable for failing to put in place adequate measures to moderate content in native and indigenous languages.

Further, since media literacy is limited, it is important to equip journalists, media practitioners and civil society reporting on the conflict with an awareness of the complex information environment, along with skills in fact-checking, professional ethics and content moderation, and to help them build professional networks for sharing credible information with counterparts across borders while avoiding sensationalism in reporting. They can also use the available platforms to promote responsible social media use, tolerance and dialogue between different groups in order to build trust. Moreover, the different actors should desist from propagating hate speech and disinformation.

Nadine Kampire Temba is a journalist and digital rights lawyer based in Goma city, DR Congo, and a fellow with CIPESA. She coordinates Afia Amani Grands Lacs, an online media outlet that undertakes fact-checking and defends press freedom in the Great Lakes Region. You can follow her on Twitter @nadineKampire

Countering Digital Authoritarianism in Africa

By Apolo Kakaire |

The internet, which is viewed as a panacea for democracy, participation and inclusion, is increasingly becoming a tool of repression deployed by regimes across the world to stifle rights and voice. Africa, a continent already replete with poor democratic credentials and practices, seems to be rapidly catching up on the new ‘epidemic’: digital authoritarianism.

The use of technology tactics to advance repressive political interests has come to be referred to as digital authoritarianism. However, the tactics employed by authoritarian regimes have also been deployed by democratic states for purposes of surveillance, spread of misinformation, disinformation, and the disruption of civic and political participation under the pretext of fighting cybercrime, and in the interest of protecting national security, and maintaining public order.

Big technology companies are key drivers of digital authoritarianism through the creation, innovation and supply of repressive technology and related support. Moreover, political parties, interest groups, and smaller private companies have lapped it up too, developing and using tools and strategies of digital authoritarianism.

Digital authoritarianism is a great case study for understanding and appreciating the impact of technology on human rights. While laws legalising surveillance and interception of communications, and widespread data collection and processing, may not be a problem in themselves, it is the ambiguity often present within those laws, which gives governments wide latitude of interpretation and facilitates rights abuses, that is the growing challenge.

At the Forum on Internet Freedom in Africa 2022 (FIFAfrica22), Global Voices Advox shared findings from the Unfreedom Monitor, a project exploring the political and social context that fuels the emergence of digital authoritarianism in 17 countries. It hosted a panel discussion in which project researchers from India, Nigeria, Sudan and Zimbabwe presented the project findings on the connections between political contexts, analogue rights, and the growing use of digital communications technology to advance authoritarian governance.

The findings paint a grim picture for freedom of the media, expression, and democracy in general. In Zimbabwe, for instance, the Unfreedom Monitor report notes that “the press walks a precarious line between national security and the professional obligation to report truthfully” on issues that happen in the country. It is an observation that is replicated in the mapping conducted in Morocco, Egypt, and Tanzania.

In Sudan, where internet censorship, bad laws, repressed liberties and network disruptions are commonplace, Khattab Hamad noted that the contours and motives of digital authoritarianism include fear of losing power, protecting regional or international alliances, geopolitical motives, and protecting private and family interests. He added that terrorism and support for terrorist groups was another motive for authoritarianism in the country.

In Tanzania, researchers found that laws are often enacted as precursors to enable various methods of digital authoritarianism. One example is the Cybercrime Act, which was hurriedly enacted just months before the October 2015 elections. “There were many other such laws, including the amendments to the Non-Governmental Organisations (NGO) Act, that saw NGOs being deregistered and control on them tightened in the lead up to the 2020 elections”, they revealed.

In Uganda, network disruptions in the run-up to and during recent elections are another example of digital authoritarianism. “Sometimes the internet is restored after elections. So, the question is what exactly is the purpose? What are you hiding? Why do you deny your people access to information? Internet shutdowns also question the credibility of elections”, said Felicia Anthonio of Access Now. She added that network disruptions affect engagement between voters and political candidates, in addition to limiting electoral oversight and monitoring by human rights activists and election observers.

As part of the Unfreedom Monitor project, Global Voices Advox has established a publicly available database on digital authoritarianism to support advocacy in light of the “urgency of a fast deteriorating situation”, said Sindhuri Nandhakumar, a researcher with the project.

While applauding the research and database in supporting evidence-based advocacy, digital rights activists at FIFAfrica22 noted that given the behaviour of authoritarian regimes, advocacy at the national level may be met with a lot of resistance. As such, more engagement was called for through special mandates and periodic human rights review mechanisms at the African Union (AU) and the United Nations Human Rights Council.

“Advocacy [against digital authoritarianism] at the national level will be difficult. Positive results could be registered through Special Rapporteurs at the AU and through states via the Universal Periodic Review (UPR), towards securing accountability,” said Arsene Tungali from the Democratic Republic of Congo.

For African digital rights activists, the Global Voices Advox research and database open up new avenues for collaborative advocacy and transnational opportunities for interventions to stem the spread of digital authoritarianism. The findings, however, also point to the need for a concerted and robust response to its growing traction.

Elections in Africa remain a major flashpoint for digital authoritarianism, with all manner of manipulation of voters, narratives and even results, making them a key area for transnational cooperation. Ahead of Zimbabwe's elections, slated for July-August 2023, Advox will develop tips for raising awareness of voter rights and the role of technology in elections. Zimbabwe provides a good opportunity to pilot, learn from and perhaps adopt interventions to counter this growing threat.

Digital Rights Prioritised at The 73rd Session of The ACHPR

By CIPESA Writer |

Digital rights as key to the realisation and enforcement of human rights on the African continent was among the thematic focus areas of the Forum on the Participation of NGOs in the 73rd Ordinary Session of the African Commission on Human and Peoples’ Rights (ACHPR), held on October 17-18, 2022 in Banjul, The Gambia. Under the theme “Human Rights and Governance in Africa: A Multi-Dimensional Approach in Addressing Conflict, Crisis and Inequality”, the Forum also featured thematic discussions on conflict, the African Continental Free Trade Area, the environment, climate change, gender-based violence, post-Covid-19 strategies, and civic space for human rights and good governance.

The Forum on the Participation of NGOs in the Ordinary Sessions of the ACHPR is an advocacy platform coordinated by the African Centre for Democracy and Human Rights Studies. It aims to promote advocacy, lobbying and networking among non-governmental organisations (NGOs) for the promotion and protection of human rights in Africa. The Forum allows African and international NGOs to share updates on the human rights situation on the continent, with a view to identifying responses and adopting strategies for promoting and protecting human rights.

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session alongside Paradigm Initiative (PIN), the International Center for Not-for-Profit Law (ICNL) and the Centre for Human Rights at the University of Pretoria, which discussed the relationship between human rights and technology.

Thobekile Matimbe from PIN observed that internet shutdowns in the region are worrying and constitute a major threat to freedom of expression, access to information, and freedom of association and peaceful assembly, contrary to article 9 of the African Charter on Human and Peoples’ Rights and the ACHPR Declaration of Principles on Freedom of Expression and Access to Information in Africa. She expounded on the profound adverse impacts of internet shutdowns and disruptions on socio-economic rights, including the rights to education, housing, health and even social security. Matimbe specifically called for an end to the now two-year internet and phone shutdown in Ethiopia’s Tigray region, while also regretting the continued violation of international human rights standards by states in other parts of the continent. 

Introducing digital rights as human rights and situating the different human rights groups within the digital rights discourse, Irene Petras from ICNL highlighted the technological evolution on the continent and the interrelatedness and interdependence of the internet with various rights and freedoms. According to her, internet shutdowns are an emerging concern that is adversely impacting the digital civic space. 

According to Access Now, in 2021 at least 182 internet shutdowns were experienced in 34 countries across the globe. In Africa, shutdowns were recorded in 12 countries on up to 19 occasions. Chad, the Democratic Republic of the Congo, Ethiopia, Gabon, Niger, Uganda and Zambia experienced internet restrictions during elections, while Eswatini, Ethiopia, Gabon, Senegal and South Sudan experienced internet shutdowns due to protests and civil unrest. 

According to CIPESA’s legal officer Edrine Wanyama, given the long-standing authoritarianism and democracy deficits in most parts of the continent, elections, protests and demonstrations, and examination periods are the key triggers of internet shutdowns in Africa. Wanyama also noted that the consequences of internet shutdowns are wide-ranging: they cause economic and financial losses, undermine freedom of expression, access to information and access to the internet, aggravate the digital exclusion gap, cast doubt on the credibility of elections, erode trust in governments, and often fuel disinformation and hate speech.

Given the social, economic and political benefits of the internet, Hlengiwe Dube of the Centre for Human Rights at the University of Pretoria urged states to ensure its availability and accessibility at all times, as opposed to imposing information blackouts and creating grounds for litigation. She noted that meaningful access and the creation of an enabling environment for internet access have widely been advanced as part of the Sustainable Development Goals (SDGs).

The session called for active monitoring and documentation of internet shutdowns by NGOs, including through collaboration and partnership building, utilising investigative tools such as the Open Observatory of Network Interference (OONI) and NetBlocks, which help to detect disruptions, and engaging in strategic litigation. 

The thematic cluster on digital rights and security put forward the following joint recommendations for inclusion in the NGOs’ Statement to the 73rd Ordinary Session of the African Commission on Human and Peoples’ Rights (ACHPR):

African Commission on Human and Peoples’ Rights (ACHPR) 

  1. In the event of an internet shutdown or any state-perpetrated network disruption, the ACHPR should condemn such practices in the strongest terms and reiterate states’ obligations under international human rights law and standards. 
  2. In its assessment of state periodic reports, the ACHPR should engage the states under assessment on issues of internet access, including the occurrence of interferences through measures such as the removal, blocking or filtering of content, and assess compliance with international human rights law and standards.
  3. The ACHPR should engage with stakeholders, including States Parties, national human rights institutions and NGOs, to develop guidance on internet freedom in Africa aimed at realising an open and secure internet that promotes freedom of expression and access to information online.

States Parties

  1. States should recognise and respect that universal, equitable, affordable and meaningful access to the internet is necessary for the realisation of human rights by adopting legal, policy and other measures to promote access to the internet and amend laws that unjustifiably restrict access to the internet.
  2. States Parties should desist from implementing internet shutdowns and any other arbitrary actions that limit access to, and use of, the internet, and should restore all digital networks where disruptions have been ordered. Where limitation measures that disrupt access to the internet and social media are inevitable, they should be narrowly applied, prescribed by law, serve a legitimate aim, and be necessary and proportionate means of achieving that aim in a democratic society. 
  3. The State, as the duty bearer, should create a conducive environment for business entities to operate in a manner that respects human rights. 

Non-Governmental Organisations 

  • NGOs and other stakeholders should monitor and document the occurrence of internet shutdowns including their impact on human rights and development; raise awareness of the shutdowns and continuously advocate for an open and secure internet.

The Private Sector

  • Telecommunications companies and internet service providers should take the relevant legal measures to avoid internet shutdowns. Whenever they receive shutdown requests from states, companies should insist on human rights due diligence before such measures are taken, mitigate the measures’ impact on human rights, and ensure transparency.