The Impact of Artificial Intelligence on Data Protection and Privacy in Africa

By Edrine Wanyama

Artificial Intelligence (AI) is playing a critical role in digitalisation in Africa and has the potential to fundamentally impact various aspects of society. However, countries on the continent lack specific laws on AI, with front-runners such as Egypt, Ghana, Kenya, Mauritius and Rwanda only having policies or strategic plans but no legislation.

Despite its potential, AI poses challenges for data protection, notably in sectors such as transportation, banking, health care, retail, and e-commerce, where data is collected at scale. Yet it is unclear whether African governments are prepared to deal with AI-enabled data and privacy breaches.

Today, at least 36 African countries have enacted data protection and privacy laws that regulate the collection and processing of personal data. Similarly, the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) entered into force in June 2023.

The laws adopted by states and the Malabo Convention stipulate various data rights for individuals. They include the right to access personal information, the right to prevent the processing of personal data, and the right of individuals to be informed of the intended use of their personal data, including in cases of automated data processing where the decision significantly affects the data subject.

Others include the right to access personal data in the custody of data collectors, controllers and processors; the right to object to the processing of all or part of one’s personal data; the right to rectification, blocking, erasure and destruction of personal data; and the right to a remedy in case of data privacy breaches.

In a new brief, CIPESA notes that AI raises concerns about bias and discrimination in the handling of data, the perpetuation of abusive data practices, the spread of misinformation and disinformation, enhanced real-time surveillance, and aggravated cyber-attacks such as phishing. The brief makes recommendations on striking a balance between innovation and privacy protection by reinforcing legal and regulatory frameworks, advocating for transparency and accountability, and cultivating education and awareness.

The right to information requires that data subjects are provided with information including the justification for collecting data, the identity of the data controller, and the categories of data to be collected. According to the brief, AI may not adequately facilitate this right, since automated systems may not adhere to the steps and precautions required to guarantee access to personal information. Most access-related rights require skilled and competent staff, whereas AI systems are usually programmed to handle specific kinds of data, with capacities limited to the built-in competencies of the tasks they can perform.

With AI systems, it may be difficult for individuals to object to the processing of their personal data. As such, AI may not guarantee the accuracy of the data, or the lawfulness and purpose of its processing. These challenges arise because there is no assurance that the technology has been designed to comply with data rights and principles.

In relation to automated decision making, AI may make decisions affecting the data subject solely on the basis of technology, with no human involvement. AI may thus interpret and audit data in an inaccurate or unfair manner, which can entrench discriminatory practices based on personal data relating to tribe, race, gender, religion, and political inclination.
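To make the risk concrete, the kind of fully automated decision described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical credit-scoring rule (all field names and thresholds are invented for illustration), showing how a proxy variable such as region can silently encode discrimination when no human reviews the outcome:

```python
# Hypothetical sketch: a loan application is approved or rejected purely by
# rules encoded in software, with no human involvement. If a feature like
# "region" correlates with tribe or race, the rule becomes indirectly
# discriminatory even though no protected attribute appears in the code.

def automated_loan_decision(applicant: dict) -> str:
    """Decide on a loan with no human review (illustrative only)."""
    score = 0
    score += 2 if applicant["income"] > 30_000 else 0
    score += 1 if applicant["years_employed"] >= 3 else 0
    # Proxy variable: penalising a region acts as indirect discrimination
    # when region correlates with a protected attribute.
    score -= 1 if applicant["region"] == "region_x" else 0
    return "approved" if score >= 3 else "rejected"

a = {"income": 40_000, "years_employed": 5, "region": "region_x"}
b = {"income": 40_000, "years_employed": 5, "region": "region_y"}
# Identical finances, different region: the automated rule diverges.
print(automated_loan_decision(a))
print(automated_loan_decision(b))
```

Two applicants with identical financial profiles receive different outcomes, and without transparency obligations the affected person has no way to learn why.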

The right to rectification concerns inaccurate, outdated, misleading, incomplete or erroneous data, and allows an individual to request that the data controller stop any further processing or erase the personal data. However, AI systems may not fully accommodate this right: rectification and erasure require human intervention to correctly diagnose problematic data if the accuracy of the data is to be guaranteed.

On the other hand, the right to data portability requires that data held by a controller be provided in a simple, machine-readable format. Such data transfers can be used by governments to inform development and to promote healthy competition across sectors. However, AI presents privacy challenges to portability, such as indiscriminate data transfers that could aggravate confidentiality risks. In other cases, AI systems may transmit the wrong data. Some AI systems can also entrench data lock-in, as they may be designed in ways that make it impossible for data to be ported or for individuals to switch to other services.
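For illustration, a "machine-readable format" in the portability context typically means structured data such as JSON or CSV that a person or a receiving service can ingest without manual re-entry. A minimal sketch of such an export, using hypothetical field names:

```python
import json

# Hypothetical sketch of a portability export: a controller serialises a
# data subject's records into a structured, machine-readable format (JSON)
# that the individual or a receiving service can parse programmatically.

def export_personal_data(records: dict) -> str:
    """Return the subject's data as machine-readable JSON (illustrative)."""
    return json.dumps(records, indent=2, sort_keys=True)

subject_records = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "transactions": [{"date": "2023-01-15", "amount": 120.50}],
}
exported = export_personal_data(subject_records)
# A receiving service can round-trip the data without loss of structure.
assert json.loads(exported) == subject_records
```

The lock-in concern arises when a system instead exposes data only through proprietary or non-parseable formats, making such a round trip impossible.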

Where data controllers and processors commit data breaches, the right to an effective remedy arises. However, AI may not have clear mechanisms for analysing cases of breach and issuing appropriate remedies. Determining a violation requires human intervention, since AI is largely untrained in how rules of remedy are applied. Additionally, AI may not comprehend the various languages and unique procedures involved, as it is often not adapted to different contexts or conceptions of justice.

Ultimately, AI presents both opportunities and challenges for personal data protection. Accordingly, a balance must be struck between innovation and privacy protection, ensuring transparency and accountability in data collection, management and processing while maximising the benefits that AI presents. This requires coordinated efforts by governments, decision makers, developers, service providers, civil society organisations and academia to develop, adopt and apply policies and other measures to that end.

Read the full brief here.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order that content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don't see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa's data protection law – the Protection of Personal Information Act 4 of 2013 (POPIA). The law provides a right not to be subject to a decision based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one: it ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated means, without human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how they used AI for data profiling if at all. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where companies responded at all – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to achieve meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.