African Economic Research Consortium (AERC) Summit 2025 

Update |

This year, the African Economic Research Consortium (AERC) is holding its first Summit under its new 10-year Strategic Plan (2025-2035) in Nairobi, Kenya. The three-day Summit, themed ‘A Renewed AERC for Africa’s New Development Priorities’, is designed to hardwire the research-policy bridge.

This event takes place from November 30 to December 2, 2025. For more information, click here.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhamad |

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. Such integration would encourage companies to uphold their responsibility to respect human rights by ensuring transparency and internal accountability mechanisms.

According to Nadhifah Muhamad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition deployed for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as the Kenyan workers who labelled data for OpenAI’s ChatGPT, are characterised by low pay and an absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of perspectives for minority and marginalised groups, including women, youth, children, indigenous communities and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption in order to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses map their AI use, harness opportunities and mitigate operational risk. These efforts complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, 80% of which publicly disclose information about their work. Despite this progress, Onyango noted that the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.

To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI, collaborating and sharing resources to support local digital startups in building sustainable solutions, investing in digital infrastructure, and undertaking robust literacy and capacity-building campaigns targeting both duty bearers and rights holders. Other recommendations included developing evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF) and the UN Global Compact. It brought together more than 500 individuals from over 75 countries, 32 of them African. The event built on the achievements of the previous ABHR Forums in Ghana (2022), Ethiopia (2023) and Kenya (2024).

Uganda Data Governance Capacity Building Workshop

Event |

The AU-NEPAD and GIZ in collaboration with CIPESA are pleased to convene this three-day capacity-building and stakeholder engagement workshop to support the Government of Uganda in its data governance journey.

The three-day workshop will focus on providing insights into data governance and the transformative potential of data to drive equitable socio-economic development, empower citizens, safeguard collective interests, and protect digital rights in Uganda. This will include aspects of foundational infrastructure, data value creation and markets, legitimate and trustworthy data systems, data standards and categorisation, and data governance mechanisms.

Participants will critically evaluate the regulatory approaches, institutional frameworks, and capacity-building strategies necessary for harnessing the power of data for socio-economic transformation and regional integration, in line with the African Union Data Policy Framework.

The workshop will take place from November 19th to 21st, 2025.

Safeguarding African Democracies Against AI-Driven Disinformation

ADRF Impact Series |

As Africa’s digital ecosystems expand, so too do the threats to its democratic spaces. From deepfakes to synthetic media and AI-generated misinformation, electoral processes are increasingly vulnerable to technologically sophisticated manipulation. Against this backdrop, THRAETS, a civic-tech pro-democracy organisation, implemented the Africa Digital Rights Fund (ADRF)-supported project, “Safeguarding African Elections – Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy.”

The initiative aimed to build digital resilience by equipping citizens, media practitioners, and civic actors with the knowledge and tools to detect and counter disinformation with a focus on that driven by artificial intelligence (AI) during elections across Africa.

At the heart of the project was a multi-pronged strategy to create sustainable solutions, built around three core pillars: public awareness, civic-tech innovation, and community engagement.

The project resulted in innovative civic-tech tools, each addressing a unique facet of AI misinformation. These include Spot the Fakes, a gamified, interactive quiz that trains users to differentiate between authentic and manipulated content. Designed for accessibility, it became a key entry point for public digital literacy, particularly among youth. The foundation for an open-source AI tracking hub was also developed: the “Expose the AI” portal will offer free educational resources to help citizens evaluate digital content and understand the mechanics of generative AI.

A third tool, “Community Fakes”, is a dynamic crowdsourcing platform for cataloguing and analysing AI-altered media that combines human intelligence and machine learning. Its goal is to support journalists, researchers, and fact-checkers in documenting regional AI disinformation. The inclusion of an API enables external organisations to access verified datasets, a unique contribution to the study of AI and misinformation in the Global South. However, THRAETS notes that the effectiveness of public-facing tools such as Spot the Fakes and Community Fakes is limited by wider digital literacy gaps in Africa.

Meanwhile, to demonstrate how disinformation intersects with politics and public discourse, THRAETS documented case studies that contextualised digital manipulation in real time. A standout example is the “Ruto Lies: A Digital Chronicle of Public Discontent”, which analysed over 5,000 tweets related to Kenya’s #RejectTheFinanceBill protests of 2024. The project revealed patterns in coordinated online narratives and disinformation tactics, achieving more than 100,000 impressions. This initiative provided a data-driven foundation for understanding digital mobilisation, narrative distortion, and civic resistance in the age of algorithmic influence.

THRAETS went beyond these tools and embarked upon a capacity building drive through which journalists, technologists, and civic leaders were trained in open-source intelligence (OSINT), fact-checking, and digital security.

In October 2024, THRAETS partnered with eLab Research to conduct an intensive online training programme for 10 Tunisian journalists ahead of their national elections. The sessions focused on equipping participants with tools to identify and counter tactics used to sway public opinion, such as detecting cheap fakes and deepfakes, and offered hands-on experience through an engaging fake content identification quiz. The training not only helped the journalists prepare for election coverage, but also equipped them to protect democratic processes and maintain public trust in the long run.

This training served as a framework for a subsequent training held in August 2025 as part of the Democracy Fellowship, a programme funded by USAID and implemented by the African Institute for Investigative Journalism (AIIJ), which aimed to enhance media capacity to leverage OSINT tools in reporting.

The THRAETS project enhanced regional collaboration and strengthened local investigative capacity to expose and counter AI-driven manipulation. This project demonstrates the vital role of civic-tech innovation that integrates participation and informed design. As numerous African countries navigate elections, initiatives like THRAETS provide a roadmap for how digital tools can safeguard truth, participation, and democracy.

Find the full project insights report here.

The Four Pillars Shaping The Trajectory of AI in Africa

By Juliet Nanfuka |

Mainstream narratives often frame Africa’s Artificial Intelligence (AI) rollout as a technological challenge. However, four key pillars are informing the trajectory of AI on the continent and, in so doing, laying bare a chasm that influences the broader digital ecosystem, including access, development, civic participation, and digital democracy. These pillars are a country’s democratic credentials; economic gaps; legacy governance structures and fragmented regulation; and in-built biases in the design of AI that serve to exclude, rather than include, users, particularly in Africa.

According to the 2025 edition of the State of Internet Freedom in Africa report, political regimes and their associated democratic credentials have come to play a key role in the trajectory of AI in various African countries. Countries categorised as democratic, such as South Africa, Ghana, Namibia, and Senegal, have displayed the capacity to deploy AI aimed at improving governance, accountability, and accessibility. 

For example, in South Africa, the South African Revenue Service (SARS) employs the Lwazi AI-powered assistant to streamline tax assessment processes, enhancing efficiency and reducing corruption. In Kenya, the Sauti ya Bajeti (Voice of the Budget) platform uses AI to help citizens query and track public expenditure, empowering civic participation and fiscal accountability. Meanwhile, Ghana has been a standout innovator with Khaya, an open-source AI translator supporting local languages and easing communication barriers, as well as DeafCanTalk, an app enabling real-time translation between sign language and spoken word. These apps have utilised AI to meet digital inclusion needs, improving accessibility and communication within the country.

In contrast, in more authoritarian regimes like Cameroon, Egypt, Ethiopia, and Rwanda, AI risks becoming another tool for the state to entrench digital authoritarianism and restrict civic freedoms. These countries rank as weak performers in the Freedom in the World report: Cameroon scored 15 points, Egypt 18, Ethiopia 18, and Rwanda 21, all rated Not Free. A similar pattern emerges for internet freedom, with Egypt scoring 28 points out of 100, Ethiopia 27 and Rwanda 36, each earning a Not Free ranking.

Examples of problematic uses of AI include the case of Rwanda, where pro-government propagandists used Large Language Models (LLMs) to mass-produce synthetic online messages that mimic grassroots support while suppressing dissent. Although Rwanda has also introduced AI in judicial and border management systems, these technologies have dual-use potential that blurs the line between governance and surveillance.

A second pillar that influences the trajectory of AI in African countries is economic and infrastructural inequality. Countries with stronger infrastructure, higher Gross Domestic Product (GDP) per capita, higher internet penetration levels, and better Human Development Index (HDI) scores have proven more likely to shape AI development. These include countries such as South Africa, Tunisia and Egypt. Countries with weaker digital infrastructure, limited data networks and high connectivity costs, face the risk of being left behind or becoming dependent on external technologies.

Africa still has a small share of global data centres and accounts for only 1% of global compute capacity, making it hard to train, fine-tune, or evaluate models locally and cheaply.

This power imbalance has resulted in a two-tier continent: some parts progressively adopt and integrate AI and benefit from AI infrastructure investment, while others lag behind, reliant on adopted systems that may not be responsive to their intended uses in different contexts. Overall, the bulk of the continent remains a consumer of AI and largely dependent on external funding to build its AI infrastructure.

Examples of private sector entities making significant investments in the African AI industry include Microsoft and G42, which in 2024 launched a USD 1 billion initiative to develop a sustainable AI data centre in Kenya. In September 2025, Airtel commenced construction of its 44 MW sustainable data centre in Kenya, expected to be the largest in East Africa once completed in 2027. Earlier this year, in March, Microsoft announced a USD 297 million investment to expand its cloud and AI systems in the country. Meanwhile, Google is funding the South African Centre for Artificial Intelligence Research (CAIR) with infrastructure and expertise to strengthen local AI capacity. In October 2025, Rwanda received a USD 17.5 million investment from the Bill & Melinda Gates Foundation to establish the Rwanda AI Scaling Hub, an initiative designed to drive AI innovation across various sectors, including health, agriculture, and education.

A third pillar, which also has direct consequences for democracy, is that AI governance carries an entrenched power imbalance that favours the state. In many countries, particularly those with weaker democratic credentials, civil society, media and private actors are often sidelined. The report notes that despite AI’s swift evolution, none of the 14 countries studied (Cameroon, Egypt, Ethiopia, Ghana, Kenya, Mozambique, Namibia, Nigeria, Rwanda, Senegal, South Africa, Tunisia, Uganda, and Zimbabwe) has enacted comprehensive AI-specific legislation, resulting in reliance on existing, fragmented legal frameworks that do not adequately regulate or address complex AI concerns.

The leading countries have developed guidelines, AI policies and strategies, data protection laws, and applied sector legislation to AI governance. In contrast, the lagging countries generally lack this foundational framework, creating a vacuum that could heighten AI-driven risks in the absence of effective oversight. Rwanda was among the first countries to adopt a national AI policy, in 2023. Since then, various other countries, including Egypt, Ethiopia, Ghana, Kenya, Nigeria, Senegal, South Africa, and Tunisia, have either launched national AI strategies or have been developing foundational policy frameworks over the last two years.

However, these policy processes, where they exist, often occur behind closed doors, without meaningful multi-stakeholder participation. In many instances, economic growth objectives dominate national AI strategies, while digital rights, transparency and accountability are sidelined.

The fourth pillar pertains to AI as an instrument of inequality and social fracturing. The spread of deepfakes, AI-generated misinformation and algorithmic exclusion has become a real threat to political participation and access. This has played out on several occasions, in countries regardless of their democratic credentials, such as during the 2024 elections and protests in Kenya. In Namibia and South Africa, AI-driven campaigns are believed to have influenced perceptions of legitimacy and outcomes.

Of the myriad languages spoken on the continent, only a handful are factored into the machinery of AI. As a result, low-resource languages get lost in the digital ecosystem, content moderation is designed around Western norms owing to the languages used in training AI, and many users on the continent lack the savvy or skills to challenge these systems. The result is an algorithmic second-class citizenship in which AI bypasses the needs of users in Africa, including the resources required to enable adequate civic engagement, transparency and accountability.

Through these four pillars, the State of Internet Freedom in Africa 2025 report highlights that AI design, deployment, and impact are ultimately reflections of the power structures that define it globally. This power imbalance plays out within the continent at the national level, where decision-making on AI’s trajectory remains largely confined.

The report calls for human-centred AI governance in Africa, through deliberate and inclusive approaches. Find the full report here.