By Edrine Wanyama
Artificial Intelligence (AI) is playing a critical role in digitalisation in Africa and has the potential to fundamentally impact various aspects of society. However, countries on the continent lack specific laws on AI, with front-runners such as Egypt, Ghana, Kenya, Mauritius and Rwanda having only policies or strategic plans but no legislation.
Despite its potential, AI poses challenges for data protection, notably in sectors such as transportation, banking, health care, retail, and e-commerce, where data is collected en masse. Yet it is unclear how prepared African governments are to deal with AI-enabled data and privacy breaches.
Today, at least 36 African countries have enacted data protection and privacy laws that regulate the collection and processing of personal data. At the continental level, the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) entered into force in June 2023.
The laws adopted by states and the Malabo Convention stipulate various data rights for individuals. They include the right to access personal information, the right to prevent the processing of personal data, and the right of individuals to be informed of the intended use of their personal data, including in cases of automated data processing where the decision significantly affects the data subject.
Others include the right to access personal data in the custody of data collectors, controllers and processors; the right to object to the processing of all or part of one’s personal data; the right to rectification, blocking, erasure and destruction of personal data; and the right to a remedy in case of data privacy breaches.
In a new brief, CIPESA notes that AI raises concerns of bias and discrimination in the handling of data, perpetuating abusive data practices, spreading misinformation and disinformation, enhancing real-time surveillance, and aggravating cyber-attacks such as phishing. The brief makes recommendations on striking a balance between innovation and privacy protection by reinforcing legal and regulatory frameworks, advocating for transparency and accountability, and cultivating education and awareness.
The right to information requires that data subjects are provided with information, including the justification for collecting data, the identity of the data controller, and the categories of data to be collected. According to the brief, AI may not adequately facilitate this right, since AI systems may not adhere to the steps and precautions required to observe and guarantee the right to access personal information. Most access-related rights require skilled and competent staff to administer. AI systems, on the other hand, are usually programmed to handle specific kinds of data, and their capacities are limited to the built-in competencies of the tasks they can perform.
With AI systems, it may be difficult for individuals to object to the processing of their personal data. As such, AI may not guarantee the accuracy of the data or the lawfulness and purpose of its processing. These challenges arise because there is no assurance that the technology has been designed to comply with data rights and principles.
In relation to automated decision making, decisions affecting the data subject may be made solely by AI technologies with no human involvement. AI may thus interpret and audit data in an inaccurate or unfair manner, which can entrench discriminatory practices based on personal data relating to tribe, race, gender, religion, and political inclination.
The right to rectification concerns inaccurate, outdated, misleading, incomplete or erroneous data, and allows an individual to request that the data controller stop any further processing or erase the personal data. However, AI systems may not fully accommodate this right: rectification and erasure require human intervention to correctly diagnose problematic data if its accuracy is to be guaranteed.
The right to data portability, on the other hand, requires that data held by a controller be provided in a simple, machine-readable format. Such data transfers can be used by governments to inform development and to promote healthy competition across sectors. However, AI presents privacy challenges to portability, such as indiscriminate data transfers that aggravate confidentiality risks. In other cases, AI systems may transmit the wrong data. Some AI systems can also entrench data lock-in, as they may be designed to make it impossible for data to be ported or for individuals to switch to other services.
Where data controllers and processors commit data breaches, the right to an effective remedy arises. However, AI may not have clear mechanisms for analysing cases of breach and issuing appropriate remedies. Determining a violation requires human intervention, since AI is largely untrained in how rules of remedy are applied. Additionally, AI may not comprehend various languages and unique procedures, as it is often not adapted to different contexts or conceptions of justice.
Ultimately, AI presents both opportunities and challenges for personal data protection. A balance must therefore be struck between innovation and privacy protection, ensuring transparency and accountability in data collection, management and processing while maximising the benefits that AI presents. This requires coordinated efforts by governments, decision makers, developers, service providers, civil society organisations and academia in developing, adopting and applying policies and other measures that maximise the benefits of AI.
Read the full brief here.