Why It’s Not Yet Uhuru for Artificial Intelligence in Africa and What To Do About It

By CIPESA Writer

At the first Global Summit on Artificial Intelligence (AI) in Africa in Kigali, Rwanda earlier this month, it was evident that African countries are lagging far behind the rest of the world in developing and utilising AI. Also clear was that if the continent makes the right investments today, it stands to reap considerable benefits.

The challenges Africa faces were well-articulated at the summit, which brought together 2,000 participants from 97 countries, as were the solutions. Some important steps were taken, such as the issuance of the Africa Declaration on Artificial Intelligence, which aims to mobilise USD 60 billion for the prospective Africa AI Fund; the unveiling of the Gates Foundation’s investment in AI Scaling Labs in four African countries; the announcement of the Cassava AI Factory, said to be Africa’s first AI-enabled data centre; and the endorsement of the Africa Artificial Intelligence Council.

Just Where Does Africa Lie?

Crystal Rugege, Managing Director of the Rwanda Centre for the Fourth Industrial Revolution, which hosted the summit, noted that AI could unlock USD 2.9 trillion for Africa’s economy by 2030, thereby lifting 11 million Africans out of poverty and creating 500,000 jobs annually. However, Rugege added, “this will not happen by chance. It requires bold, decisive leadership and collective action.”

Some independent researchers and scholars feel most African countries are not doing enough to stimulate AI innovation and uptake. Indeed, speakers at an independent webinar held on the eve of the Kigali summit criticised the “ambitious prediction” of the USD 2.9 trillion AI dividend for Africa, citing the lack of inclusive AI policy-making and African countries’ failure to invest in a workforce fit for the AI age.

Several countries (including Ethiopia, Ghana, Nigeria, Senegal, Kenya, Mauritius, Egypt, Tunisia, Algeria, Rwanda and Zambia) have developed AI strategies and at least eight others are in the process of doing so, but government-funded AI innovation and deployment remains minimal. Africa receives only a pittance of global AI funding.

Key Hindrances

The summit was not blind to the key hindrances to AI development and deployment. Africa’s limited computational power (or compute), including a shortage of locally-based data centres, was repeatedly cited. Africa holds less than 1% of global data centre capacity, which is insufficient to train and run AI models. And while the continent has the world’s youngest population, its skills levels remain low; only 5% of the region’s AI talent has access to the computational power and resources needed to carry out complex tasks. Many countries also lack the energy supplies needed to power sustained AI development, while Africa’s 60% mobile internet usage gap is slowing AI adoption and economic growth.

Accordingly, the summit – and the declaration it issued – focussed on how to address these bottlenecks. Recommendations include focussing education systems on Fourth Industrial Revolution skills, including building for and adapting to AI; developing AI infrastructure (innovation labs, data centres, sustainable energy); scaling African AI businesses, including by enabling them to access affordable funding; and enhancing AI research.

Mahmoud Ali Youssouf, Chairperson of the African Union (AU) Commission, stressed the need to create a harmonised regulatory environment to enable cross-border AI trade and investment; and to leverage Africa’s rich and diverse datasets to fuel AI innovation and power global AI models.

Important Steps in Kigali
  • The Africa Declaration on Artificial Intelligence builds on the foundational strategies, policies and commitments of the AU (such as its AI Strategy and the Data Policy Framework) and the United Nations. It seeks to develop a comprehensive talent pipeline through AI education and research; establish frameworks for open, secure and inclusive data governance; provide for the deployment of affordable and sustainable computing infrastructure accessible to researchers, innovators and entrepreneurs across Africa; and create supportive ecosystems with regional AI incubation hubs driving innovation and scaling African AI enterprises domestically and globally.

    The Declaration envisages the establishment of a USD 60 billion Africa AI Fund, leveraging public, private, and philanthropic capital. The Fund would invest in developing and expanding AI infrastructure, scaling African AI enterprises, building a robust pipeline of AI practitioners, and strengthening domestic AI research capabilities, while upholding principles of equity and inclusion.
  • The AI Scaling Labs: The Gates Foundation and Rwanda’s Ministry of ICT and Innovation signed a Memorandum of Understanding (MoU) to establish the Rwanda AI Scaling Hub, in which the foundation will invest USD 7.5 million. It will initially focus on healthcare, agriculture, and education. Over the next 12 months, the foundation plans to establish similar centres in Kenya, Nigeria, and Senegal “to break down barriers to scale and help move promising AI innovations to impact.”
  • The Cassava AI Factory: Cassava Technologies announced the Cassava AI Factory, reportedly Africa’s first AI-enabled data centre, powered by NVIDIA accelerated computing. “Building digital infrastructure for the AI economy is a priority if Africa is to take full advantage of the fourth industrial revolution,” said Cassava Founder and Chairman, Strive Masiyiwa. “Our AI Factory provides the infrastructure for this innovation to scale, empowering African businesses, startups and researchers with access to cutting-edge AI infrastructure to turn their bold ideas into real-world breakthroughs – and now, they don’t have to look beyond Africa to get it.”

    By keeping AI infrastructure and data within Africa, Cassava Technologies says it is strengthening the continent’s digital independence, driving local innovation and supporting African AI talent and businesses. Its first deployment in South Africa (in June 2025) will be followed by expansion to Egypt, Kenya, Morocco, and Nigeria.
  • The Africa Artificial Intelligence Council: The Smart Africa Alliance Steering Committee Meeting, co-chaired by the International Telecommunication Union (ITU) Secretary-General and the AU Commissioner for Infrastructure and Energy, endorsed the creation of the Council to drive continental coordination on critical AI pillars, including AI computing infrastructure, data sets and data infrastructure development, skills development, market use cases, and governance/policy.
  • Use Cases and Sandboxes: Documenting tangible use cases and sandboxes that support innovation and regulation is vital to AI development on the continent. On the sidelines of the summit, CIPESA contributed to two co-creation initiatives. The Datasphere Initiative held a Co-creation Lab on the role of AI sandboxes in supporting regulatory innovation and ethical AI governance in Africa, while Qhala hosted a Digital Trade and Regulatory Sandbox session focused on digital health, smartphones, and cross-border trade. Separately, the Rwanda Health Intelligence Centre was unveiled; it enables AI-driven delivery of emergency medical services and real-time collection of data on healthcare outcomes in hospitals, strengthening evidence-based decision-making.

Ultimately, the promise of AI remains high, but for it to be realised, the ideas from the Kigali summit must be translated into action. Countries must stump up funds for research and for scaling innovations, support their citizens in acquiring AI-relevant skills, expand internet access and affordability, provide supportive infrastructure, and incentivise foreign investment and technology transfer. Moreover, they should ensure that national laws and regulations promote fair, safe, secure, inclusive and responsible AI, and conform to continental aspirations such as the African Union AI Strategy.

The Impact of Artificial Intelligence on Data Protection and Privacy in Africa

By Edrine Wanyama

Artificial Intelligence (AI) is playing a critical role in digitalisation in Africa and has the potential to fundamentally impact various aspects of society. However, countries on the continent lack specific laws on AI: front-runners such as Egypt, Ghana, Kenya, Mauritius and Rwanda have only policies or strategic plans, not legislation.

Despite its potential, AI poses challenges for data protection, notably in sectors such as transportation, banking, health care, retail, and e-commerce, where data is collected en masse. Yet it is unclear how prepared African governments are to deal with AI-enabled data and privacy breaches.

Today, at least 36 African countries have enacted data protection and privacy laws that regulate the collection and processing of personal data. At the continental level, the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) entered into force in June 2023.

The laws adopted by states and the Malabo Convention stipulate various data rights for individuals. They include the right to access personal information, the right to prevent the processing of personal data, and the right of individuals to be informed of the intended use of their personal data, including in cases of automated data processing where the decision significantly affects the data subject.

Others include the right to access personal data in the custody of data collectors, controllers and processors; the right to object to the processing of all or part of one’s personal data; the right to rectification, blocking, erasure and destruction of personal data; and the right to a remedy in case of data privacy breaches.

In a new brief, CIPESA notes that AI raises concerns about bias and discrimination in data handling, abusive data practices, the spread of misinformation and disinformation, enhanced real-time surveillance, and aggravated cyber-attacks such as phishing. The brief makes recommendations on striking a balance between innovation and privacy protection by reinforcing legal and regulatory frameworks, advocating for transparency and accountability, and cultivating education and awareness.

The right to information requires that data subjects be provided with information including the justification for collecting data, the identity of the data controller, and the categories of data to be collected. According to the brief, AI may not adequately facilitate this right, since it may not adhere to the steps and precautions required to observe and guarantee the right to access personal information. Most access-related rights require skilled and competent staff; AI systems, by contrast, are usually programmed to handle specific kinds of data, and their capacities are limited to the tasks they were built to perform.

With AI systems, it may be difficult for individuals to object to the processing of their personal data. As such, AI may not guarantee the accuracy of the data or the lawfulness and purpose of its processing. These challenges arise because there is no assurance that the technology has been designed to comply with data rights and principles.

In relation to automated decision-making, decisions may be made against the data subject solely on the basis of technology, with no human involvement. AI may thus interpret and audit data in an inaccurate or unfair manner, which can perpetuate discriminatory practices based on personal data relating to tribe, race, gender, religion, and political inclination.
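
To make this concrete, the sketch below shows one way a human-in-the-loop safeguard can be wired into an automated decision pipeline, so that no adverse or borderline outcome rests solely on automated processing. It is a minimal illustration under assumed names, thresholds and a placeholder score, not a description of any system discussed in the brief.

```python
from dataclasses import dataclass

# A minimal sketch of a human-in-the-loop safeguard. All names, values
# and thresholds are invented for illustration.

@dataclass
class Decision:
    applicant_id: str
    score: float     # output of an automated scoring model
    outcome: str     # "approved" or "pending_review"
    decided_by: str  # "model" or "human"

def automated_score(applicant_id: str) -> float:
    """Stand-in for a model's risk score in [0, 1]."""
    return 0.55  # placeholder value for this sketch

def decide(applicant_id: str, approve_threshold: float = 0.8) -> Decision:
    score = automated_score(applicant_id)
    if score >= approve_threshold:
        # Clear-cut, favourable outcomes may be fully automated.
        return Decision(applicant_id, score, "approved", "model")
    # Any adverse or borderline case is routed to a human reviewer, so
    # that no decision significantly affecting the data subject rests
    # solely on automated processing.
    return Decision(applicant_id, score, "pending_review", "human")

print(decide("A-001"))
```

In practice, the human reviewer would also need access to the inputs behind the score, which is precisely the kind of transparency the rights discussed here presuppose.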

The right to rectification relates to dealing with inaccurate, outdated, misleading, incomplete or erroneous data: an individual may request the data controller to stop any further processing or to erase personal data. However, AI systems may not be equipped to give full effect to this right. Rectification and erasure require human intervention to correctly diagnose problematic data if accuracy is to be guaranteed.

Data portability, meanwhile, is a right that requires data held by a controller to be provided in a simple, machine-readable format. Such data transfers can be used by governments to inform development and to promote healthy competition across sectors. However, AI presents privacy challenges to portability, such as indiscriminate data transfers that could aggravate confidentiality risks. In other cases, AI systems may transmit the wrong data. Some AI systems can also entrench data lock-in, as they may be designed in ways that make it impossible for data to be ported or for individuals to switch to other services.
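
For a concrete picture of what a “simple and machine-readable format” could mean, the sketch below serialises a hypothetical data subject’s record as JSON. Every field name and value is invented for illustration; a real export would follow whatever schema the controller and the applicable law define.

```python
import json

# Hypothetical, illustrative export of one data subject's record in a
# machine-readable format; all fields and values are invented.
record = {
    "data_subject": "example-user-123",
    "controller": "Example Services Ltd",
    "categories": ["contact_details", "transaction_history"],
    "contact_details": {"email": "user@example.com"},
    "exported_at": "2025-01-01T00:00:00Z",
}

# A structured format such as JSON lets the individual (or a new
# provider acting on their behalf) re-use the data directly, which a
# PDF or printout would not allow.
print(json.dumps(record, indent=2))
```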

Where data controllers and processors commit breaches, the right to an effective remedy arises. However, AI may not have clear mechanisms for analysing cases of breaches and issuing appropriate remedies. Determining a violation requires human intervention, since AI is largely untrained in how rules of remedy are applied. Additionally, AI may not comprehend various languages and unique procedures, as it is often not adapted to different contexts or conceptions of justice.

Ultimately, AI presents both opportunities and challenges for personal data protection. Accordingly, a balance has to be struck between innovation and privacy protection, ensuring transparency and accountability in data collection, management and processing while maximising the benefits that AI presents. This will require coordinated efforts by governments, decision makers, developers, service providers, civil society organisations and academia to develop, adopt and apply policies and other measures to that end.

Read the full brief here.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as which products you see when you open a shopping app or the order in which content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one. It ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made by automated means alone, without human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access-to-information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and whether and how they used AI for data profiling. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where companies responded at all – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.