Safeguarding Digital Rights in Africa’s Growing Digital Economy

By Loyce Kyogabirwe

Increased digitalisation and adoption of technology in Africa have fuelled the continent’s economy, with commerce and transactions increasingly being conducted online. Innovation and the use of web and mobile applications have also encouraged the growth of micro, small and medium-sized enterprises, which has advanced financial inclusion and employment and made the technology sector a key contributor to African countries’ Gross Domestic Product (GDP). For instance, platforms such as Jumia, which operates in 11 African countries, have transformed the retail, travel and food markets. Other notable online platforms include Appruve and Esoko (Ghana), mFarm (Kenya) and Novus Agro (Nigeria).

African governments have prioritised the integration of technology into more sectors to drive social and economic transformation. However, the rapid adoption of technology tools and platforms has also been met with growing concerns about the impact on digital rights, including data protection and privacy, the digital divide, freedom of expression and surveillance. Other worrying trends include network disruptions, digital taxation, data localisation requirements, and encryption regulations. There is a growing consensus among digital rights advocates that the adoption of technology tools and policies impacting the digital space should not only advance economic inclusion, but also be carefully assessed and implemented in a way that respects human rights in the digital age. 

According to a GSMA report, in 2020 “mobile technologies and services generated more than USD 130 billion of economic value”, and this is projected to reach USD 155 billion by 2025. The report further states that “495 million people subscribed to mobile services in Sub-Saharan Africa” by the end of 2020, representing 46% of the region’s population, and subscriptions are expected to grow to around 615 million by 2025, equivalent to 50% of the region’s population.

In an effort to advance digital rights across Africa’s growing digital economy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), through the Africa Digital Rights Fund (ADRF), has supported advocacy initiatives, skills development, and movement building to effectively influence policy and practice on digital rights and the digital economy. ADRF grantees have engaged state and non-state actors, generating replicable insights into how governments and the private sector in the region can safeguard digital rights while advancing the digital economy.

In Ghana, the Financial Inclusion Forum Africa developed a Data Protection and Privacy Policy to serve as an internal guide on how digital financial service providers in the country should collect, store, and process individuals’ data. The policy outlines principles for the management of personal data in compliance with Ghana’s Data Protection Act 2012 and the ISO/IEC 27001:2013 standard for information security management issued by the International Organization for Standardization and the International Electrotechnical Commission. The policy benefited from reviews and input from leading digital financial service providers such as Appruve, Jumo, Vodafone Cash, and G Money, alongside industry experts and regulators such as the eCrime Bureau, RegTheory, and CUTS (Consumer Unity and Trust Society) Ghana. This provided insights into the policy’s viability and applicability by tapping into the real-life experiences of these service providers.

Similarly, the Centre for International Trade, Economics and Environment (CUTS) and Mzalendo Trust have worked to advance consumer protection, security and inclusion, and public awareness within the digital economy in Kenya. The two Kenya-based grantees engaged with stakeholders such as the Capital Markets Authority, Kenya ICT Action Network (KICTANet), Association of Freelance Journalists, Open Institute, The Centre for Intellectual Property and Information Technology Law (CIPIT) at Strathmore University, Article 19, County Assemblies Forum, Internews, and the Election Observation Group (ELOG).

In Mozambique, efforts by the Mozambican Disabled Persons’ Organisation Forum (FAMOD) under the ADRF focused on accessibility and compliance assessments of online services, including those for employment, telecommunications, and revenue collection. These assessments helped identify the areas where advocacy campaigns for the digital inclusion of persons with disabilities would be most impactful. Meanwhile, in an effort to promote women’s safety and participation online in Namibia, the local chapter of the Internet Society (ISOC) conducted policy engagements on the inclusion of protections for women and girls in the country’s Data Protection Bill.
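By way of illustration only, and not a description of FAMOD’s actual assessment methodology, the short Python sketch below shows one automated check that an accessibility audit of an online service might include: flagging images that lack the alternative text screen readers rely on. The libraries used are the widely available requests and beautifulsoup4 packages, and the URL is a placeholder.

```python
# A minimal, hypothetical accessibility check: list images on a page that are
# missing alt text, one of the basic WCAG requirements for screen-reader users.
# Requires the requests and beautifulsoup4 packages.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    """Return the src of every <img> on the page without a non-empty alt attribute."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    missing = []
    for img in soup.find_all("img"):
        if not (img.get("alt") or "").strip():
            missing.append(img.get("src", "<no src>"))
    return missing

if __name__ == "__main__":
    # Placeholder URL; a real assessment would iterate over the services under review.
    for src in images_missing_alt("https://example.org"):
        print("Missing alt text:", src)
```

A check like this only covers one criterion; full assessments of the kind FAMOD conducted also involve manual testing with assistive technologies and engagement with users with disabilities.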

In Somalia, the work of Digital Shelter made significant breakthroughs in stakeholder dialogue and engagement on aspects of digitalisation that had previously not been prioritised or regularly discussed. Engagements, including in partnership with the Institute of Innovation, Technology & Entrepreneurship (IITE), the ICT and e-Governance Department in the Ministry of Communications and Technology, the private sector and activists, have focused on youth skilling, digital empowerment, data protection and privacy, and an open and inclusive internet.

Finally, ADRF grantee ALT Advisory recently published research on a rights-based assessment of Artificial Intelligence (AI) applications in South Africa. The research drew on inputs from 14 leading companies in the country’s financial services, retail and e-commerce sectors and two government bodies – the Department of Home Affairs and the Department of Health. The findings indicated human rights gaps in AI profiling, and pointed to the need to bolster compliance with rights guarantees under relevant laws and policies, as well as enforcement by the country’s data protection watchdog, the Information Regulator, and other regulatory bodies.

The ADRF grantees’ interventions in Ghana, Kenya, Mozambique, Somalia, and South Africa highlight the value of evidence-based advocacy in informing multi-stakeholder deliberations on the digital economy and digital rights. Together with the work of the broader ADRF cohort, these interventions offer key lessons on digitalisation in Africa: the need to operationalise supporting frameworks such as those for cyber security, data protection and privacy; increased participation of minority and marginalised groups in the design of initiatives; multi-stakeholder collaboration; harmonisation of national and local government plans; and digital literacy skills building. To learn more about the ADRF programme, please visit https://cipesa.org/the-africa-digital-rights-fund-english/

Training Webinar on Internet Universality Indicators Convened for African Countries

By Juliet Nanfuka

On 26 October, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) convened a regional training webinar to raise awareness of the Internet Universality ROAM-X indicators and their potential to promote internet development that advances media freedom and digital rights in Africa. The UNESCO Information for All Programme (IFAP) and the International Programme for the Development of Communication (IPDC) jointly supported the training.

Present at the meeting were PROTEGE QV (Cameroon), Youth Net and Counselling (YONECO) (Malawi), namTshuwe (Namibia), Digital Shelter (Somalia), and CIPESA (Uganda). Each partner presented the state of digital rights in their respective country as a foundation for discussing the ROAM-X indicators, with Malawi and Somalia hosting physical convenings.

In her opening remarks, Dorothy Gordon, Chair of UNESCO’s Information for All Programme (IFAP), stated: “There is a need to take control of the digitally mediated future and understand the impact of policies on our digital environments: the ROAM-X indicators give stakeholders factual tools to discuss and advocate for the future we want to see in Africa.”

Xianhong Hu, UNESCO Programme Specialist representing the IFAP Secretariat, unpacked the 303 Internet Universality ROAM-X indicators and elaborated on the eight-step multi-stakeholder methodology for conducting national assessments. She highlighted that the unique value of applying the ROAM-X indicators lies in improving national digital ecosystems and fostering cross-border and cross-jurisdictional digital collaboration.

UNESCO encouraged more African countries to pursue a ROAM-X assessment as a tool to evaluate fast-changing technological developments, narrow the digital divide, and harness digital transformation. Given the launch of the Namibian national assessment and the follow-up ROAM-X assessment in Kenya, as well as the need to monitor new developments following the Covid-19 pandemic and the 2022 national elections, incorporating ROAM-X assessments is critical.

UNESCO and CIPESA jointly reaffirmed the need for increased mobilisation using the multistakeholder approach to ensure an open and inclusive implementation process, and to scale up Internet development in African countries over the next two years. 

Participants urged UNESCO to continue its support in organising more capacity-building activities to meet the growing demand to assess ROAM-X indicators in African countries.  

All participants were invited to continue their engagement with UNESCO and attend its events at the December 2022 Internet Governance Forum (IGF) in Addis Ababa, Ethiopia, which include sessions on the ROAM-X indicators, a Day-0 pre-event and a Dynamic Coalition session.

International Day of Persons With Disabilities (IDPWD) 2022

The theme this year is “Transformative solutions for inclusive development: the role of innovation in fuelling an accessible and equitable world”.

The annual observance of the International Day of Persons with Disabilities (IDPD) on 3 December was proclaimed in 1992 by United Nations General Assembly resolution 47/3. The observance of the Day aims to promote an understanding of disability issues and mobilize support for the dignity, rights and well-being of persons with disabilities.

The 2022 global observance of the International Day of Persons with Disabilities will centre on the overarching theme of innovation and transformative solutions for inclusive development, explored across three interactive dialogues on different thematic topics.


Digital Rights Prioritised at the 73rd Session of the ACHPR

By CIPESA Writer

Digital rights, as key to the realisation and enforcement of human rights on the African continent, were among the thematic focus areas of the Forum on the Participation of NGOs in the 73rd Ordinary Session of the African Commission on Human and Peoples’ Rights (ACHPR), held on October 17-18, 2022 in Banjul, the Gambia. Under the theme “Human Rights and Governance in Africa: A Multi-Dimensional Approach in Addressing Conflict, Crisis and Inequality”, the Forum also featured thematic discussions on conflict, the Africa Continental Free Trade Agreement, the environment, climate change, gender-based violence, post Covid-19 strategies, and civic space for human rights and good governance.

The Forum on the Participation of NGOs in the Ordinary Sessions of the ACHPR is an advocacy platform coordinated by the African Centre for Democracy and Human Rights Studies. It aims to promote advocacy, lobbying and networking among non-governmental organisations (NGOs) for the promotion and protection of human rights in Africa. The Forum allows African and international NGOs to share updates on the human rights situation on the continent with a view to identifying responses and adopting strategies for promoting and protecting human rights on the continent.

A session in which the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated, alongside Paradigm Initiative (PIN), the International Center for Not-for-Profit Law (ICNL) and the Centre for Human Rights at the University of Pretoria, discussed the relationship between human rights and technology.

Thobekile Matimbe from PIN observed that internet shutdowns in the region are worrying and a major threat to freedom of expression, access to information, and freedom of association and peaceful assembly, contrary to article 9 of the African Charter on Human and Peoples’ Rights and the ACHPR Declaration of Principles on Freedom of Expression and Access to Information in Africa. She expounded on the profound adverse impacts of internet shutdowns and disruptions on socio-economic rights, including the rights to education, housing, health, and even social security. Matimbe specifically called for an end to the internet and phone shutdown in Ethiopia’s Tigray region, now in its second year, while also regretting the continued violation of international human rights standards by states in other parts of the continent.

Introducing digital rights as human rights and situating the different human rights groups within the digital rights discourse, Irene Petras from ICNL highlighted the technological evolution on the continent and the interrelatedness and interdependence of the internet with various rights and freedoms. According to her, internet shutdowns are an emerging concern that is adversely impacting the digital civic space. 

According to Access Now, at least 182 internet shutdowns were recorded in 34 countries across the globe in 2021. In Africa, shutdowns were recorded in 12 countries on up to 19 occasions. Chad, the Democratic Republic of the Congo, Ethiopia, Gabon, Niger, Uganda and Zambia experienced internet restrictions during elections, while Eswatini, Ethiopia, Gabon, Senegal and South Sudan experienced shutdowns due to protests and civil unrest.

According to CIPESA’s legal officer Edrine Wanyama, given the long-standing authoritarianism and democracy deficits in most parts of the continent, elections, protests and demonstrations, and examination periods are the key drivers of internet shutdowns in Africa. Wanyama also noted that the consequences of internet shutdowns are wide-ranging: economic and financial loss; the undermining of freedom of expression, access to information and access to the internet; an aggravated digital exclusion gap; doubt cast on the credibility of elections; loss of trust in governments; and, often, the fuelling of disinformation and hate speech.

Given the social, economic and political benefits of the internet, Hlengiwe Dube of the Centre for Human Rights at the University of Pretoria urged states to ensure its availability and accessibility at all times, rather than imposing information blackouts and creating grounds for litigation. She noted that meaningful access and the creation of a facilitative environment for internet access have been widely advanced as part of the Sustainable Development Goals (SDGs).

The session called for active monitoring and documentation of internet shutdowns by NGOs, including through collaboration and partnership building, the use of investigative tools such as the Open Observatory of Network Interference (OONI) and NetBlocks which help detect disruptions, and strategic litigation.
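As a rough illustration of how such monitoring might be automated, the Python sketch below queries OONI’s public measurements API for recent results flagged as anomalous in a given country. It assumes the endpoint and query parameters shown (probe_cc, test_name, since, until, anomaly), which should be verified against the current OONI API documentation; the country code and dates are placeholders.

```python
# A minimal sketch (not CIPESA's or the session's methodology): query the public
# OONI API for recent anomalous web-connectivity measurements in one country.
# Endpoint and parameters are assumptions; check the OONI API docs before use.
import requests

OONI_API = "https://api.ooni.io/api/v1/measurements"  # public measurements endpoint

def recent_anomalies(country_code: str, since: str, until: str, limit: int = 20):
    """Return recent measurements flagged as anomalous for a given country."""
    params = {
        "probe_cc": country_code,        # two-letter country code, e.g. "ET"
        "test_name": "web_connectivity", # standard OONI censorship test
        "anomaly": "true",               # only results flagged as anomalous
        "since": since,                  # e.g. "2022-10-01"
        "until": until,                  # e.g. "2022-10-31"
        "limit": limit,
    }
    response = requests.get(OONI_API, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    # Placeholder query: anomalies measured in Ethiopia during October 2022.
    for m in recent_anomalies("ET", "2022-10-01", "2022-10-31"):
        print(m.get("measurement_start_time"), m.get("input"))
```

A spike in anomalous measurements does not by itself prove a shutdown, which is why the session emphasised documentation and corroboration alongside such tooling.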

The joint recommendations provided by the thematic cluster on digital rights and security for inclusion in the NGOs’ statement to the 73rd Ordinary Session of the African Commission on Human and Peoples’ Rights (ACHPR) are as follows:

African Commission on Human and Peoples’ Rights (ACHPR) 

  1. In the event of an internet shutdown or any state-perpetrated network disruption, the ACHPR should condemn in the strongest terms such practices and reiterate the state obligations under international human rights law and standards. 
  2. In its assessment of State periodic reports, the ACHPR should engage States under assessment on issues of internet access including the occurrence of interferences through measures such as the removal, blocking or filtering of content and assess compliance with international human rights law and standards.
  3. The ACHPR should engage with stakeholders including State Parties, national human rights institutions and NGOs to develop guidance on internet freedom in Africa aimed at realising an open and secure internet in the promotion of freedom of expression and access to information online.

States Parties

  1. States should recognise that universal, equitable, affordable and meaningful access to the internet is necessary for the realisation of human rights, and should adopt legal, policy and other measures to promote access to the internet and amend laws that unjustifiably restrict it.
  2. States parties should desist from unnecessarily implementing internet shutdowns and any other arbitrary actions that limit access to, and use of, the internet, and should restore all disrupted digital networks where such disruptions have been ordered. Where limitation measures that disrupt access to the internet and social media are inevitable, they should be narrowly applied, prescribed by law, serve a legitimate aim, and be a necessary and proportionate means of achieving that aim in a democratic society.
  3. The State, as the duty bearer, should create a conducive environment for business entities to operate in a manner that respects human rights. 

Non-Governmental Organisations 

  • NGOs and other stakeholders should monitor and document the occurrence of internet shutdowns including their impact on human rights and development; raise awareness of the shutdowns and continuously advocate for an open and secure internet.

The Private Sector

  • Telecommunications companies and internet service providers should take the relevant legal measures to avoid internet shutdowns. Whenever they receive shutdown requests from states, companies should insist on human rights due diligence before such measures are taken, mitigate the impact on human rights, and ensure transparency.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as which products you see when you open a shopping app or the order in which content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one: it ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated means, without any human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how, if at all, they used AI for data profiling. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where companies responded at all – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.
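As a purely hypothetical sketch of what that could look like in practice (not a description of any company’s actual system), the Python snippet below records each automated decision in an append-only audit log, capturing the inputs, the model version and whether a human reviewed the outcome, so that a data subject request could later be answered concretely. All names and fields are illustrative.

```python
# A minimal, illustrative audit record for automated decisions, so that a data
# subject request can be answered with what was decided, by which model version,
# and on what inputs. Purely a sketch; field names are our own invention.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AutomatedDecisionRecord:
    subject_id: str         # pseudonymous identifier for the data subject
    decision: str           # e.g. "credit_limit_reduced"
    model_version: str      # which model/algorithm version produced the decision
    input_features: dict    # the personal information actually used
    solely_automated: bool  # was there any human review? (the POPIA concern)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: AutomatedDecisionRecord, logfile: str = "decisions.jsonl") -> None:
    """Append the decision record to an append-only audit log."""
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a record a compliance team could later retrieve for a data subject request.
log_decision(AutomatedDecisionRecord(
    subject_id="subject-123",
    decision="credit_limit_reduced",
    model_version="risk-model-2022-10",
    input_features={"age_band": "30-39", "repayment_history_score": 0.62},
    solely_automated=True,
))
```

Keeping records of this kind is a modest design choice, but without something like it an organisation cannot answer even the basic question of whether a decision about a person was made solely by automated means.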

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.