Key Takeaways From the 2022 African Union Data Policy Framework

By CIPESA Writer

The African Union Data Policy Framework published in July 2022 is one of the most significant documents on data governance on the continent. The Policy Framework is an extensive blueprint to guide African countries’ efforts to establish effective data governance regimes that leverage the data and digital revolution the continent is currently experiencing.

The Policy Framework draws from 30 years’ worth of experience in attempts to harmonise official statistics in Africa. Compared to previous initiatives by the African Union (AU), the Policy Framework interrogates the key contextual and capacity challenges inherent in most African countries. It also demonstrates that, in spite of some hiccups, countries can still come up with reasonable and enforceable digital data governance policies and legislative frameworks.

Indeed, as a brief by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) shows, the Policy Framework is beneficial to data governance, data rights and privacy in Africa. However, its implementation is likely to come up against notable challenges.

The Policy Framework seeks to become a major reference point and blueprint for governments in Africa on data protection legislation and on cross-border sharing of critical data and information to facilitate trade and development. Further, it seeks to calibrate the often complicated balancing act between enforcing data protection and promoting privacy on the one hand and, on the other, not curtailing data rights, access to information, open data and open government, while promoting cross-border sharing of data free of the limitations of sovereignty and protectionism by Member States.

It also emphasises the importance of data interoperability and recommends harmonised national data systems that aggregate disparate data systems into single systems accessible to all parties.

However, the brief notes that the Policy Framework’s implementation plan will need actionable and practical guidelines that countries can follow to ensure that interests in national security, public order and national economic sovereignty do not unnecessarily stand in the way of the immense benefits of data privacy, open data sharing, intercountry digital collaboration on trade and commerce, and harmonised data systems.

Benefits the Data Policy Framework Can Deliver to Data Governance and Data Rights in Africa

    • The Policy Framework provides a key reference resource for governments that are currently designing or reviewing their data governance policy and legal instruments. It is well researched and was collaboratively developed by key institutions of the AU and associate organisations.
    • Countries that draw on the Policy Framework in their data policy making will receive enormous goodwill from other Member States on collaboration and cross-border data sharing efforts, goodwill that emanates from the authority the African Union enjoys among Member States.
    • The Policy Framework acknowledges the unique and complex contexts of each country. It is not prescriptive, as it gives countries wiggle room to preserve their national and sovereign interests while designing policies that are in tune with continental best practice that the Policy Framework offers.
    • Private institutions, civil society and development partners will also find the Policy Framework to be an important resource to guide efforts to collaborate and harmonise their strategies on supporting data and digital ecosystems in Africa, without unnecessary duplication.  
    • The Policy Framework has the potential to provide the all-important middle ground between protecting data rights and privacy and not compromising easy access to key development data and information. Advocates in these arenas will find the Policy Framework an important guiding tool for their data advocacy strategies.

According to the CIPESA brief, the challenges which implementation of the Policy Framework is likely to face include limited financing at both country and AU levels. Others are the continent’s inherent integration problems, such as the culture of secrecy among African nations, inward-looking tendencies and sovereignty concerns that have already delayed AU initiatives like the continental passport and visa-free travel, and the challenges Regional Economic Communities (RECs) still face in realising free movement of goods and people, common markets and political federation.

Further, the brief notes that many AU Member States have challenges with political democracy, with some routinely shutting down the internet during elections and others weaponising technology, using cybercrime and surveillance laws to crack down on critics and political opponents. “Such nations might find some of the progressive pronouncements and recommendations of the Policy Framework a bit too much of an ask,” notes the brief, arguing that this complex political economy of individual African states could be the biggest challenge for implementing the Policy Framework at country level.

In this brief, CIPESA highlights five key takeaways from the 2022 African Union Data Policy Framework.

Digital Rights Prioritised at The 73rd Session of The ACHPR

By CIPESA Writer

Digital rights as a key enabler of the realisation and enforcement of human rights on the African continent were among the thematic focus areas of the Forum on the Participation of NGOs in the 73rd Ordinary Session of the African Commission on Human and Peoples’ Rights (ACHPR), held on October 17-18, 2022 in Banjul, the Gambia. Under the theme “Human Rights and Governance in Africa: A Multi-Dimensional Approach in Addressing Conflict, Crisis and Inequality”, the Forum also featured thematic discussions on conflict, the African Continental Free Trade Agreement, the environment, climate change, gender-based violence, post-Covid-19 strategies and civic space for human rights and good governance.

The Forum on the Participation of NGOs in the Ordinary Sessions of the ACHPR is an advocacy platform coordinated by the African Centre for Democracy and Human Rights Studies. It aims to promote advocacy, lobbying and networking among non-governmental organisations (NGOs) for the promotion and protection of human rights in Africa. The Forum allows African and international NGOs to share updates on the human rights situation on the continent with a view to identifying responses and adopting strategies for promoting and protecting human rights on the continent.

A session in which the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated alongside Paradigm Initiative (PIN), the International Center for Not-for-Profit Law (ICNL) and the Centre for Human Rights at the University of Pretoria discussed the relationship between human rights and technology.

Thobekile Matimbe from PIN observed that internet shutdowns in the region are worrying and a major threat to freedom of expression, access to information, freedom of association and peaceful assembly, contrary to Article 9 of the African Charter on Human and Peoples’ Rights and the ACHPR Declaration of Principles on Freedom of Expression and Access to Information in Africa. She expounded on the profound adverse impacts of internet shutdowns and disruptions on socio-economic rights, including the rights to education, housing, health and even social security. Matimbe specifically called for an end to the now two-year internet and phone shutdown in Ethiopia’s Tigray region, while also regretting the continued violation of international human rights standards by States in other parts of the continent.

Introducing digital rights as human rights and situating the different human rights groups within the digital rights discourse, Irene Petras from ICNL highlighted the technological evolution on the continent and the interrelatedness and interdependence of the internet with various rights and freedoms. According to her, internet shutdowns are an emerging concern that is adversely impacting the digital civic space. 

According to Access Now, in 2021 at least 182 internet shutdowns were recorded in 34 countries across the globe. In Africa, shutdowns were recorded in 12 countries on up to 19 occasions. Chad, the Democratic Republic of the Congo, Ethiopia, Gabon, Niger, Uganda and Zambia experienced internet restrictions during elections, while Eswatini, Ethiopia, Gabon, Senegal and South Sudan experienced shutdowns due to protests and civil unrest.

According to CIPESA’s legal officer Edrine Wanyama, given the long-standing authoritarianism and democracy deficits in most parts of the continent, elections, protests and demonstrations, and examination periods are the key drivers of internet shutdowns in Africa. Wanyama also noted that the consequences of internet shutdowns are wide-ranging, extending to economic and financial loss, undermining freedom of expression, access to information and access to the internet, aggravating the digital exclusion gap, casting doubt on the credibility of elections, eroding trust in governments, and often fueling disinformation and hate speech.

Given the social, economic and political benefits of the internet, Hlengiwe Dube of the Centre for Human Rights at the University of Pretoria urged states to ensure its availability and accessibility at all times, as opposed to imposing information blackouts and creating grounds for litigation. She noted that meaningful access and the creation of a facilitative environment for internet access have widely been advanced as part of the Sustainable Development Goals (SDGs).

The session called for active monitoring and documentation of internet shutdowns by NGOs, including through collaborative and partnership building efforts, utilising investigative tools like the Open Observatory of Network Interference (OONI) and NetBlocks which help to detect disruptions, and engaging in strategic litigation.
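For organisations taking up this monitoring recommendation, measurement data of the kind OONI publishes can also be queried programmatically. The sketch below illustrates, in Python, how recent anomalous web-connectivity measurements for a given country might be retrieved; the endpoint URL, query parameters, response fields and the helper function shown are assumptions based on OONI’s public measurement API and should be checked against the current documentation.

```python
# Illustrative sketch only: the endpoint, parameters and response fields are
# assumptions about OONI's public measurement API and may have changed.
import requests

OONI_API = "https://api.ooni.io/api/v1/measurements"  # assumed public listing endpoint


def recent_anomalies(country_code, since, limit=50):
    """Fetch recent measurements flagged as anomalous for one country."""
    params = {
        "probe_cc": country_code,       # two-letter country code, e.g. "UG"
        "test_name": "web_connectivity",
        "since": since,                 # start date, e.g. "2022-10-01"
        "anomaly": "true",              # only results flagged as anomalous
        "limit": limit,
    }
    response = requests.get(OONI_API, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])


if __name__ == "__main__":
    # A sustained cluster of anomalies across many sites and probes is a
    # stronger signal of blocking or a disruption than any single result.
    for m in recent_anomalies("UG", "2022-10-01"):
        print(m.get("measurement_start_time"), m.get("input"))
```

Individual anomalous results can be false positives, so documentation efforts would typically corroborate such data with local reports and with measurements from services such as NetBlocks before drawing conclusions.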

The joint recommendations provided for inclusion in the NGOs Statement to the African Commission on Human and Peoples’ Rights (ACHPR) 73rd Ordinary Session by the thematic cluster on digital rights and security are to:

African Commission on Human and Peoples’ Rights (ACHPR) 

  1. In the event of an internet shutdown or any state-perpetrated network disruption, the ACHPR should condemn in the strongest terms such practices and reiterate the state obligations under international human rights law and standards. 
  2. In its assessment of State periodic reports, the ACHPR should engage States under assessment on issues of internet access including the occurrence of interferences through measures such as the removal, blocking or filtering of content and assess compliance with international human rights law and standards.
  3. The ACHPR should engage with stakeholders including State Parties, national human rights institutions and NGOs to develop guidance on internet freedom in Africa aimed at realising an open and secure internet in the promotion of freedom of expression and access to information online.

States Parties

  1. States should recognise and respect that universal, equitable, affordable and meaningful access to the internet is necessary for the realisation of human rights by adopting legal, policy and other measures to promote access to the internet and amend laws that unjustifiably restrict access to the internet.
  2. States parties should desist from unnecessarily implementing internet shutdowns and any other arbitrary actions that limit access to, and use of, the internet, and should restore all disrupted digital networks where such disruptions have been ordered. Where limitation measures that disrupt access to the internet and social media are inevitable, they should be narrowly applied, prescribed by law, serve a legitimate aim, and be a necessary and proportionate means of achieving that aim in a democratic society.
  3. The State, as the duty bearer, should create a conducive environment for business entities to operate in a manner that respects human rights. 

Non-Governmental Organisations 

  • NGOs and other stakeholders should monitor and document the occurrence of internet shutdowns including their impact on human rights and development; raise awareness of the shutdowns and continuously advocate for an open and secure internet.

The Private Sector

  • Telecommunications companies and internet service providers should take the relevant legal measures to avoid internet shutdowns. Whenever they receive shutdown requests from States, companies should insist on human rights due diligence before such measures are taken, mitigate their impact on human rights, and ensure transparency.

Uganda’s Changes To Computer Misuse Law Spark Fears It Will Be Used To Silence Dissidents

By News Writer

Uganda’s controversial Computer Misuse (Amendment) Bill 2022, which rights groups say will likely be used to silence dissenting voices online, has come into force after the country’s President Yoweri Kaguta Museveni signed it into law yesterday.

The country’s legislators had passed amendments to the 2011 Computer Misuse Act in early September, limiting writing or sharing of content on online platforms, and restricting the distribution of children’s details without the consent of their parents or guardians.

The bill was brought before the house to “deter the misuse of online and social media platforms.” A document tabled before the house stated that the move was necessitated by reasoning that “enjoyment of the right to privacy is being affected by the abuse of online and social media platforms through the sharing of unsolicited, false, malicious, hateful and unwarranted information.”

The new law, which also seeks to curb the spread of hate speech online, prescribes several punitive measures, including barring offenders from holding public office for 10 years and imprisonment for individuals who “without authorization, accesses another person’s data or information, voice or video records and shares any information that relates to another person” online.

Rights groups and a section of online communities are worried the law might be abused by regimes, especially the current one, to limit free speech and punish persons that criticize the government. Some have plans to challenge it in court.

Fears expressed by various groups come in the wake of increasing crackdowns on individuals who do not shy away from critiquing online the authoritarian regime of Museveni, Uganda’s longest-serving president, who also blocked social media in the run-up to last year’s general election.

Recently, a Ugandan TikToker, Teddy Nalubowa, was remanded in prison for recording and sharing a video that celebrated the death of a former security minister, who led the troops that killed 50 civilians protesting the arrest of opposition politician Robert Kyagulanyi Ssentamu (Bobi Wine) in 2020. Nalubowa, a member of Ssentamu’s National Unity Platform, was charged with offensive communication in contravention of the Computer Misuse Act 2011 amid public outcry over the harassment and intimidation of dissidents. Ssentamu, a Museveni critic and the country’s opposition leader, recently said the new amendment targets his ilk.

The Committee to Protect Journalists (CPJ) had earlier called on Museveni not to sign the bill into law, saying that it was an added arsenal that authorities could use to target critical commentators, and punish media houses by criminalizing the work of journalists, especially those undertaking investigations.

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) had also made recommendations, including the deletion of Clause 5, which bars people from sending unsolicited information online, saying that the provision could be abused and misused by the government.

“In the alternative, a clear definition and scope of the terms “unsolicited” and “solicited” should be provided,” it said.

It also called for the scrapping of punitive measures, and the deletion of clauses on personal information and data, which duplicated the country’s data protection law.

CIPESA said the law is also likely to infringe on the digital rights of individuals, including freedom of expression and access to information, adding that the provisions did not address issues brought about by emerging technologies, such as trolling and harassment, which the law had set out to tackle in the first place.

This article was first published by the Ghana Business on Oct 15, 2022.

New Law in Uganda Imposes Restrictions on Use of Internet

By Rodney Muhumuza

Ugandan President Yoweri Museveni has signed into law legislation criminalizing some internet activity despite concerns the law could be used to silence legitimate criticism.

The bill, passed by the legislature in September, was brought by a lawmaker who said it was necessary to punish those who hide behind computers to hurt others. That lawmaker argued in his bill that the “enjoyment of the right to privacy is being affected by the abuse of online and social media platforms through the sharing of unsolicited, false, malicious, hateful and unwarranted information.”

The new legislation increases restrictions in a controversial 2011 law on the misuse of a computer. Museveni signed the bill on Thursday, according to a presidential spokesman’s statement.

The legislation proposes jail terms of up to 10 years in some cases, including for offenses related to the transmission of information about a person without their consent as well as the sharing or intercepting of information without authorization.

Opponents of the law say it will stifle freedom of expression in a country where many of Museveni’s opponents, for years unable to stage street protests, often raise their concerns on Twitter and other online sites.

Others say it will kill investigative journalism.

The law is “a blow to online civil liberties in Uganda,” according to an analysis by a watchdog group known as Collaboration on International ICT Policy for East and Southern Africa, or CIPESA.

The Committee to Protect Journalists is among groups that urged Museveni to veto the bill, noting its potential to undermine press freedom.

“Ugandan legislators have taken the wrong turn in attempting to make an already problematic law even worse. If this bill becomes law, it will only add to the arsenal that authorities use to target critical commentators and punish independent media,” the group’s Muthoki Mumo said in a statement after lawmakers passed the bill.

Museveni, 78, has held power in this East African country since 1986 and won his current term last year.

Although Museveni is popular among some Ugandans who praise him for restoring relative peace and economic stability, many of his opponents often describe his rule as authoritarian.

This article was first published by the Washington Post on Oct 13, 2022.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order that content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one. It ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated means, without human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how they used AI for data profiling if at all. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where they responded – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.