Countering Digital Authoritarianism in Africa

By Apolo Kakaire |

The internet, often viewed as a panacea for democracy, participation, and inclusion, is increasingly becoming a tool of repression, deployed by regimes across the world to stifle rights and voice. Africa, a continent already replete with poor democratic credentials and practices, appears to be rapidly succumbing to this new ‘epidemic’: digital authoritarianism.

The use of technological tactics to advance repressive political interests has come to be referred to as digital authoritarianism. However, the tactics employed by authoritarian regimes have also been deployed by democratic states for surveillance, the spread of misinformation and disinformation, and the disruption of civic and political participation, under the pretext of fighting cybercrime, protecting national security, and maintaining public order.

Big technology companies are key drivers of digital authoritarianism through the creation, innovation, and supply of repressive technology and related support. Moreover, political parties, interest groups, and smaller private companies have followed suit, developing and using the tools and strategies of digital authoritarianism.

Digital authoritarianism is an instructive case study in understanding the impact of technology on human rights. While laws legalising surveillance, the interception of communications, and widespread data collection and processing may not be a problem in themselves, the ambiguity often present in such laws gives governments wide latitude of interpretation to facilitate rights abuses, and that is the growing challenge.

At the Forum on Internet Freedom in Africa 2022 (FIFAfrica22), Global Voices Advox shared findings from the Unfreedom Monitor, a project exploring the political and social contexts that fuel the emergence of digital authoritarianism in 17 countries. It hosted a panel discussion in which project researchers from India, Nigeria, Sudan, and Zimbabwe presented findings on the connections between political contexts, analogue rights, and the growing use of digital communications technology to advance authoritarian governance.

The findings paint a grim picture for freedom of the media, freedom of expression, and democracy in general. In Zimbabwe, for instance, the Unfreedom Monitor report notes that “the press walks a precarious line between national security and the professional obligation to report truthfully” on events in the country. The observation is replicated in the mapping conducted in Morocco, Egypt, and Tanzania.

In Sudan, where internet censorship, repressive laws, curtailed liberties, and network disruptions are commonplace, Khattab Hamad noted that the motives behind digital authoritarianism include fear of losing power, protection of regional or international alliances, geopolitical interests, and protection of private and family interests. He added that terrorism and support for terrorist groups were another driver of authoritarianism in the country.

In Tanzania, researchers found that laws are often enacted as precursors that enable various methods of digital authoritarianism. One example is the Cybercrime Act, which was hurriedly enacted just months before the October 2015 elections. “There were many other such laws, including the amendments to the Non-Governmental Organisations (NGO) Act, that saw NGOs being deregistered and control over them tightened in the lead-up to the 2020 elections”, they revealed.

In Uganda, network disruptions in the run-up to and during recent elections are another example of digital authoritarianism. “Sometimes the internet is restored after elections. So, the question is what exactly is the purpose? What are you hiding? Why do you deny your people access to information? Internet shutdowns also question the credibility of elections”, said Felicia Anthonio of Access Now. She added that network disruptions affect engagement between voters and political candidates, in addition to limiting electoral oversight and monitoring by human rights activists and election observers.

As part of the Unfreedom Monitor project, Global Voices Advox has established a publicly available database on digital authoritarianism to support advocacy in light of the “urgency of a fast deteriorating situation”, said Sindhuri Nandhakumar, a researcher with the project.

While applauding the research and database for supporting evidence-based advocacy, digital rights activists at FIFAfrica22 noted that, given the behaviour of authoritarian regimes, advocacy at the national level may be met with considerable resistance. As such, they called for more engagement through special mandates and periodic human rights review mechanisms at the African Union (AU) and the United Nations Human Rights Council.

“Advocacy [against digital authoritarianism] at national level will be difficult. Positive results could be registered through Special Rapporteurs at the AU and states through the Universal Periodic Review (UPR), towards securing accountability”, said Arsene Tungali from the Democratic Republic of Congo.

For African digital rights activists, the Global Voices Advox research and database open new avenues for collaborative advocacy and transnational opportunities for interventions to stem the spread of digital authoritarianism. The findings, however, also point to the need for a concerted and robust response to its growing traction.

Elections in Africa remain a major flashpoint for digital authoritarianism, with all manner of manipulation of voters, narratives, and even results, making them a key area for transnational cooperation. Ahead of Zimbabwe’s elections, slated for July-August 2023, Advox will develop tips for raising awareness of voter rights and the role of technology in elections. Zimbabwe provides a good opportunity to pilot, learn from, and perhaps adapt interventions to counter this behemoth.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter |

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order that content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the elucidating section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one. It ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated means, without any human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how, if at all, they used AI for data profiling. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where they responded – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.

Privacy Imperilled: Analysis of Surveillance, Encryption and Data Localisation Laws in Africa

By Evelyn Lirri |

Across Africa, the proliferation of digital technologies is being matched by state measures that negate the right to privacy. The accelerated adoption of digital technologies has come with increased collection and sharing of large quantities of personal data, which is a major concern as several countries lack data privacy laws, and many of those that have them are not implementing them.

As a result, the right to privacy has come under growing siege, which is in turn negatively impacting the enjoyment of other rights, including freedom of expression, association, and access to information online.

In this report, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) analyses country-specific laws that various governments on the continent have enacted and how they impact privacy and data security through surveillance, restrictions on encryption, data localisation, and biometric databases. The report covers 23 countries – Algeria, Angola, Benin, Burkina Faso, Burundi, Cape Verde, the Central African Republic (CAR), Congo Brazzaville, the Democratic Republic of Congo (DRC), Gabon, The Gambia, Guinea Conakry, Ivory Coast, Lesotho, Liberia, Madagascar, Mauritania, Morocco, Niger, Sao Tome and Principe, Sierra Leone, South Sudan, and Togo.

According to the report, governments across the continent continue to collect and process personal data, intercept communications, and permit surveillance without putting in place the requisite oversight mechanisms and adequate remedies. This is despite their being signatories to regional and international instruments that recognise the right to privacy and provide safeguards for data protection, such as the revised Declaration of Principles on Freedom of Expression and Access to Information in Africa, the International Covenant on Civil and Political Rights, and the Universal Declaration of Human Rights.

Weak Oversight of Surveillance Operations 

One of the emerging concerns is the lack of independent judicial oversight over surveillance operations. In some countries, surveillance operations are entirely carried out and overseen by bodies within the executive, with parliaments and courts of law excluded. In Lesotho, interception warrants may be issued by the Minister responsible for the National Security Services, while in Niger, interception is ordered by the President. In South Sudan, this responsibility is vested with the Director General of the National Security Service, while in The Gambia it lies with the Minister of Interior. In Togo, the Prime Minister, and the Ministers responsible for the economy and finance, defence, justice, and security and civil protection can trigger interception of communications.

In countries such as Benin, the Democratic Republic of the Congo (DRC), Morocco, Niger, and Togo, the justification for surveillance is specified under the law. The grounds provided include the preservation of national security or defence, the investigation of crimes, and the prevention of terrorism, organised crime, and activities that undermine public peace or public order. However, these crimes are either undefined or vaguely defined, which gives state authorities wide latitude to interpret the laws in ways that undermine the rights of critics and opponents.

Limitations on Encryption

The use of encryption is critical in helping citizens protect their data and communications while enjoying the rights to privacy and freedom of expression. In several countries, however, these rights are being threatened as governments impose restrictions that require the registration of encryption service providers, ban certain types of encryption, and compel service providers to hand over decrypted data.

In Algeria, individuals and organisations that want to acquire and use encryption services must be granted authorisation by the country’s Regulatory Authority of Post and Electronic Communications. In countries such as the Democratic Republic of Congo, the Central African Republic, Niger, Benin, Guinea Conakry, Ivory Coast, Congo-Brazzaville, Morocco, Togo, and Burkina Faso, authorisation must be sought if the encryption is not used exclusively for authentication or integrity control functions. Failure to seek authorisation, or the use of prohibited encryption, can attract heavy penalties, including jail time, a fine, or both.

Countries like Mali, Tanzania, and Malawi also require service providers to disclose the specific software used for encryption. Such prohibitive provisions undermine the privacy and freedom of expression that access to encryption affords.

Compelled Assistance by Service Providers

Governments are also using compelled assistance, whereby state agencies, including through courts of law and regulators, require service providers to grant access to individuals’ private data. This includes access to the secret codes of encrypted data or to decrypted data, as well as general requirements for service providers to assist state agencies in the interception of communications.

Laws in countries like Benin, Ivory Coast, Congo-Brazzaville, Gabon, Guinea Conakry, and Sierra Leone specify grounds on which the state can access encrypted data of individuals and also facilitate lawful interception of communications. Laws in several countries require intermediaries such as telecom companies and Internet Service Providers (ISPs) to facilitate surveillance.

As the report notes, compelled service provider assistance as stipulated in some countries’ laws is quite worrisome, as it gives governments and their agencies unfettered access to individuals’ private data beyond the limits prescribed by law or permissible under international standards.

Data Localisation

Various countries have enacted laws to control the cross-border transfer of personal data for a multitude of reasons, including national security, personal data protection, and data sovereignty. Algeria, Niger, Morocco, Benin, Cape Verde, Madagascar, Guinea Conakry, Ivory Coast, Congo Brazzaville, Sao Tome & Principe and Togo have laws that prohibit cross-border transfer of personal data unless authorised by data protection authorities.

However, as the report’s findings show, despite having laws in place, enforcement remains weak. Further, data localisation requirements could, in the absence of robust legal and practical safeguards, further facilitate efforts by state and non-state actors to undermine privacy-related rights. Morocco, Algeria, and Ivory Coast are some of the countries where data localisation measures are being implemented.

Biometric Data Collection

Recent years have seen a number of African countries undertake mass collection, processing and storage of personal data through initiatives such as mandatory SIM card registration, electronic biometric passports, IDs, and driving licences. Although many countries have also passed laws on data protection and privacy, weak implementation mechanisms, coupled with the absence of the requisite safeguards, remain a threat to individual privacy. This is particularly so in instances where regulatory authorities have the power to direct telecom operators to hand over information such as that contained in the SIM card databases.  

Furthermore, the existing oversight mechanisms and provisions for remedies in cases of data breach have not been effective enough to protect the personal information and communications of individuals in line with internationally recognised human rights standards. Many countries have enacted data protection laws but maintain additional legislation that gives the state and its agencies the power to access citizens’ biometric information, often under the guise of protecting national security. This is the case in countries such as Kenya, Gabon, Uganda, Lesotho, Mauritius, Morocco, Niger, Sao Tome, Togo, Algeria, Congo Brazzaville, and Ivory Coast.

Recommendations

Government:

  • Enact data protection laws in countries such as Liberia, Sierra Leone and South Sudan to provide for and guarantee protection of personal data.
  • Review existing laws, policies and practices on surveillance, including COVID-19 surveillance, biometric data collection, encryption and data localisation, to ensure they comply with article 9 of the African Charter and with the principles in the African Commission on Human and Peoples’ Rights Declaration of Principles on Freedom of Expression and Access to Information in Africa 2019.
  • Cease blanket compelled service provider assistance and provide for clear, activity-bound and court-mandated assistance.
  • Submit periodic reports to the different international human rights treaty body monitoring mechanisms such as the African Commission on Human and Peoples’ Rights, the Human Rights Committee and the Universal Periodic Review process, on the measures taken to guarantee the right to privacy and data protection.

Civil Society:

  • Work collaboratively with stakeholders such as the private sector and academia, including through litigation to challenge laws and measures that violate privacy rights.
  • Monitor and document privacy rights violations through evidence-based research.
  • Conduct regular analysis of proposed laws to identify the gaps and propose revisions before they are enacted into law.
  • Advocate for the promotion and protection of the right to privacy and data protection through various advocacy engagements.

Private Sector: 

  • Develop, publish and implement internal privacy and data protection policies and best practices in handling customer data so as to guarantee customers’ data protection and privacy.
  • Regularly publish transparency reports that highlight all cases of personal data and information disclosure to government agencies as well as other assistance offered to governments to enable communication interception and monitoring.
  • Develop technologies and solutions and use privacy-enhancing technologies that embed and integrate privacy principles by design and default.
  • Comply with the United Nations Business and Human Rights Principles by conducting human rights impact assessments to ensure that measures undertaken do not harm individual rights to privacy and data protection.

Find the full report here: Privacy Imperilled: Analysis of Surveillance, Encryption And Data Localization Laws in Africa  

See another CIPESA report Mapping and Analysis of Privacy Laws in Africa that maps privacy-related laws in 19 other countries.

Register for The Data Privacy Summit 2021

Online Event |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside Article 19, Facebook, and FGI Benin, is pleased to host the Data Privacy Summit 2021 (#DataPrivacySummit21) in commemoration of Data Privacy Day.

Data Privacy Day was launched by the Committee of Ministers of the Council of Europe on 26 April 2006, to be celebrated each year on 28 January, the anniversary of the signing of Convention 108 – the first legally binding international treaty on privacy and data protection. Since then, the day has come to represent international efforts to empower individuals and businesses to respect privacy, safeguard data, and build trust.

The Data Privacy Summit 2021 thus aims to raise awareness of contemporary privacy and data protection issues in Africa and the Middle East, and to inspire individuals, policymakers, and organisations to take action and adopt best practices that protect privacy while promoting innovation in a manner that mitigates the risks of the increasing use of digital technologies.

To see the lineup of sessions and speakers, register here.

Civil Society Groups Denounce the European Union’s Involvement in Surveillance in Africa

Open Letter |

The increasing involvement of foreign entities in undermining democracy and respect for human rights in the digital sphere in Africa is widely documented. Whereas these schemes have mostly been attributed to spyware vendors and data analytics firms, recent disclosures have implicated the European Union (EU).

Investigations by Privacy International have revealed the use of EU aid and cooperation programmes to train and equip security forces in Africa with surveillance techniques. The disclosures reveal that the European Union Agency for Law Enforcement Training (CEPOL) has trained police and security agencies in Algeria, Morocco and Tunisia in phone and internet surveillance, including social media monitoring, telecommunications metadata analysis, device investigations and data extraction. According to Privacy International, whereas cybersecurity, terrorism and violent extremism are threats in the countries that CEPOL is supporting, “the absence of effective privacy and security safeguards and in contexts where security agencies arbitrarily target activists, journalists and others, surveillance techniques and tools pose a serious threat to people’s rights and their work.”

In Algeria, Egypt, Niger, Libya, Morocco, and Tunisia, EU bodies are reportedly training and equipping border and migration authorities with surveillance tools, including wiretapping systems and other phone surveillance tools, in a bid to “outsource” the EU’s border controls. Further, support for the development of biometric identity systems in Cote d’Ivoire, Mali and Senegal with EU aid funds is raising serious privacy concerns.

In response to the revelations, Privacy International, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), together with 12 civil society organisations from Europe and Africa have submitted a letter to the European Commission calling for urgent reforms to EU aid and cooperation programmes to ensure they promote privacy protections in non-member countries and do not facilitate the use of surveillance which violates fundamental rights.

In the letter, the civil society organisations call on the European Commission to enact strict due diligence and risk assessment procedures, and to agree to transparency, parliamentary scrutiny and public oversight measures aimed at protecting human rights in non-member countries.

A copy of the letter is available here.