Why Data and AI Governance Are Central to Africa’s Digital Trade Ambitions

By CIPESA Writer

Digital technologies are changing how African businesses trade and connect across borders. However, digital trade on the continent remains hugely constrained, including by regulatory fragmentation, infrastructure gaps, and bureaucratic hurdles. How then should African countries leverage the growing digitalisation and emerging technologies such as Artificial Intelligence (AI) to boost their digital economies?

According to the World Trade Organization (WTO), in 2024, Africa’s exports of digitally delivered services (DDS) were valued at USD 41.3 billion, representing just one percent of global exports. Nonetheless, the continent’s prospects are promising. The WTO and the World Bank project that greater use of digital technologies could boost Africa’s digital services exports by USD 74 billion between 2023 and 2040, doubling Africa’s share of global exports.
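
As a rough sense-check of these figures (an illustrative back-of-envelope calculation of ours, not part of the WTO analysis), the one-percent share implies a global DDS export market of roughly USD 4 trillion:

```python
africa_dds_2024 = 41.3e9  # USD; WTO figure for Africa's DDS exports in 2024
share_of_global = 0.01    # "just one percent of global exports" (rounded)

# Implied size of the global DDS export market.
global_dds = africa_dds_2024 / share_of_global
print(f"implied global DDS exports: ~USD {global_dds / 1e12:.1f} trillion")

# The projected USD 74 billion boost would nearly triple Africa's 2024 exports;
# whether that doubles Africa's *share* depends on how fast the global market grows.
boosted = africa_dds_2024 + 74e9
print(f"boosted exports: ~USD {boosted / 1e9:.0f} billion")
```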

Evidently, if African countries do not address existing barriers and take decisive action, the continent risks becoming an even more marginal player in the global digital trade ecosystem. How to bridge the barriers and leverage data and AI to shape digital trade and Africa’s economic future was at the centre of discussions at the African Economic Research Consortium (AERC) Summit 2025, held in Nairobi, Kenya, last December.

A panel on digital trade and the governance of digital and AI economies, in which the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated, stressed that although frameworks such as the African Continental Free Trade Area (AfCFTA) Digital Trade Protocol are a step in the right direction, they could fail to significantly grow digital trade if member states lack enabling data and AI governance systems and practices.

Today, DDS account for approximately 35% of Africa’s total services export value and have been rising at a double-digit rate, outpacing growth in other regions globally. However, growth in digital services trade remains uneven, concentrated in a handful of countries, chiefly South Africa, Morocco, Ghana, Egypt, and Mauritius. Kenya, Nigeria, and Tunisia are also notable players, but with lower export values than the leaders.

Regional initiatives such as the AfCFTA Digital Trade Protocol can help to expand digital trade beyond domestic markets, including in countries that currently lag. The protocol, which was adopted two years ago, aims to harmonise rules for cross-border digital trade across Africa, including on electronic transactions, data governance, and digital payments. Meanwhile, the African Guidelines on Integrating Data Provisions in Protocols on Digital Trade of 2024 emphasise harmonised data governance as an enabler of secure and inclusive digital trade across Africa.

The African Union Data Policy Framework (AUDPF) similarly provides for interoperable data ecosystems across the continent, enabled by harmonised laws that support both innovation and rights protection. These various regional efforts support the dream of a Digital Single Market by 2030, as envisaged by the African Union’s Digital Transformation Strategy for Africa.

Barriers Galore

The region currently lacks an operational continent-wide harmonised framework for data protection, e-commerce regulation, digital taxation, or AI governance. This gap raises compliance costs for businesses seeking to scale operations across borders and undermines cross-border digital trade and data flows. Moreover, the lack of regulations for paperless trade, including on electronic invoicing, e-signatures, and e-contracts, presents an additional hurdle.

Meanwhile, high taxes on goods, services, data, and devices drive up costs for businesses, and many entrepreneurs struggle to access affordable digital financial services, including for effecting cross-border payments. These challenges are compounded by low internet speeds, unreliable electricity supply, and weak understanding of export regulations, data protection, and cybersecurity.

Addressing these barriers would offer entrepreneurs a range of benefits. Businesses can reach new customers beyond national borders without investing much in physical export infrastructure, which can reduce costs and expand their market reach. Also, interoperable digital payments can help to minimise settlement delays and overcome currency conversion hurdles.

Priorities on AI and Data Governance

A 2025 WTO report projects that AI could boost the value of cross-border flows of goods and services by around 40% by 2040, owing to productivity gains and lower trade costs. However, Africa’s readiness for AI regulation and uptake, particularly by small and medium enterprises, remains low. The report also points to AI’s potential to reduce logistics costs, overcome language barriers, ease regulatory compliance, and boost productivity.

In a March 2025 survey among firms from across the world, the most cited benefits of AI were improved trade efficiency (22%), optimised trade decision-making (14%), expanding the foreign customer base (10%), enhanced supply chain management (9%), and broader import and export product ranges (9% and 8% respectively).

How data and AI are governed is therefore key to the future of Africa’s digital economy. If African countries do not put in place robust, harmonised legislation, they risk perpetuating patterns of so-called “AI colonialism”, in which African data and users fuel global AI markets while African economies receive no proportionate economic benefit. Many African countries are adopting AI in the public and private sectors but lack comprehensive AI-specific laws and governance frameworks, often relying instead on outdated laws that pre-date current technologies.

The State of Internet Freedom in Africa 2025 report calls for human‑centred AI laws that ensure transparency in algorithms, clear accountability, and effective mechanisms for liability and redress. The report urges governments to strengthen independent AI and data oversight institutions, invest in digital infrastructure and inclusion, expand internet access, and ensure AI tools serve local languages. The report also highlights that Africa’s AI market is projected to grow from USD 4.51 billion in 2025 to USD 16.5 billion by 2030.
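
For a sense of the pace that projection implies, the figures above correspond to roughly a 30% compound annual growth rate; here is a quick back-of-envelope calculation (ours, not the report’s) using the standard compound-growth formula:

```python
# Implied compound annual growth rate (CAGR) of Africa's AI market,
# growing from USD 4.51 billion (2025) to USD 16.5 billion (2030).
start, end, years = 4.51e9, 16.5e9, 5

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # about 30% per year
```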

Africa thus urgently needs cross-border data governance frameworks that support trusted data flows, reduce fragmented national rules, and establish interoperable standards to boost regional digital trade under initiatives such as AfCFTA and the AUDPF. At the same time, investments in affordable connectivity, local cloud capacity, public digital platforms, and datasets in African languages are essential.

The Role of Civil Society and Think Tanks

The Summit discussion stressed the urgent need for research to inform policy, particularly on cross-border data flows, AI adoption, and ways for Africa to avoid new forms of dependency while getting greater value from its data and digital innovation.

Also essential is civil society engagement in monitoring the implementation of continental digital trade and data initiatives, supporting harmonisation of policies and standards, and building the capacity of policymakers, regulators, and businesses.

Actions to Grow Digital Trade in Africa

  • Embrace digital transformation and connectivity by investing in robust networks and backup systems.
  • Implement robust cybersecurity frameworks while ensuring effective cyber leadership and prioritising investments in cyber infrastructure, skilling, and awareness.
  • Recognise data as a trade enabler by ensuring trade agreements have provisions that prevent unnecessary restrictions on data flows.
  • Harmonise data protection standards to reduce compliance costs for businesses and build trust among different stakeholders.
  • Adopt and implement Intellectual Property (IP) laws to ensure that local innovators and individuals in the region benefit.
  • Build robust digital infrastructure with a focus on Digital Public Infrastructure (DPI) and data privacy.
  • Assess and address the impact of emerging technologies like artificial intelligence, blockchain and IoT, ensuring they foster innovation and address ethical challenges.

Source: CIPESA – Policy Considerations for Enhancing Digital Trade in East Africa

The Four Pillars Shaping The Trajectory of AI in Africa

By Juliet Nanfuka

Mainstream narratives often frame Africa’s Artificial Intelligence (AI) rollout as a technological challenge. However, four key pillars are informing the trajectory of AI in Africa and, in so doing, laying bare a chasm that influences the broader digital ecosystem, including access, development, civic participation, and digital democracy. These pillars are a country’s democratic credentials; economic gaps; legacy governance structures and fragmented regulation; and built-in influences in the design of AI that serve to exclude more than to include users, particularly in Africa.

According to the 2025 edition of the State of Internet Freedom in Africa report, political regimes and their associated democratic credentials have come to play a key role in the trajectory of AI in various African countries. Countries categorised as democratic, such as South Africa, Ghana, Namibia, and Senegal, have displayed the capacity to deploy AI aimed at improving governance, accountability, and accessibility. 

For example, in South Africa, the South African Revenue Service (SARS) employs the Lwazi AI-powered assistant to streamline tax assessment processes, enhancing efficiency and reducing corruption. In Kenya, the Sauti ya Bajeti (Voice of the Budget) platform uses AI to help citizens query and track public expenditure, empowering civic participation and fiscal accountability. Meanwhile, Ghana has been a standout innovator with Khaya, an open-source AI translator supporting local languages and easing communication barriers, as well as DeafCanTalk, an app enabling real-time translation between sign language and spoken word. These apps have utilised AI to meet digital inclusion needs and have improved accessibility and communication within the country.

In contrast, in more authoritarian regimes such as Cameroon, Egypt, Ethiopia, and Rwanda, AI risks becoming another tool for the state to entrench digital authoritarianism and restrict civic freedoms. These countries also rank as weak performers in the Freedom in the World report: Cameroon scored 15 points, Egypt and Ethiopia 18 each, and Rwanda 21, all rating as Not Free. A similar pattern emerges on internet freedom, with Egypt scoring 28 points out of 100, Ethiopia 27, and Rwanda 36, all ranked Not Free.

Examples of problematic uses of AI include Rwanda, where pro-government propagandists used Large Language Models (LLMs) to mass-produce synthetic online messages that mimic grassroots support while suppressing dissent. Although Rwanda has also introduced AI in judicial and border management systems, these technologies have dual-use potential that blurs the line between governance and surveillance.

A second pillar that influences the trajectory of AI in African countries is economic and infrastructural inequality. Countries with stronger infrastructure, higher Gross Domestic Product (GDP) per capita, higher internet penetration levels, and better Human Development Index (HDI) scores have proven more likely to shape AI development. These include countries such as South Africa, Tunisia, and Egypt. Countries with weaker digital infrastructure, limited data networks, and high connectivity costs face the risk of being left behind or becoming dependent on external technologies.

Africa still has a small share of global data centres and accounts for only 1% of global compute capacity, making it hard to train, fine-tune, or evaluate models locally and cheaply.

This power imbalance has resulted in a two-tier continent: some parts progressively adopt and integrate AI and benefit from AI infrastructure investment, while others lag behind, reliant on imported systems that may not be responsive to local contexts. Overall, the bulk of the continent remains a consumer of AI, largely dependent on external funding to build its AI infrastructure.

Examples of private sector entities making significant investments in the African AI industry include Microsoft and G42, which in 2024 launched a USD 1 billion initiative to develop a sustainable AI data centre in Kenya. In September 2025, Airtel commenced construction of its 44 MW sustainable data centre in Kenya, expected to be the largest in East Africa once completed in 2027. In March 2025, Microsoft announced a USD 297 million investment to expand its cloud and AI systems in South Africa. Meanwhile, Google is funding the South African Centre for Artificial Intelligence Research (CAIR), providing infrastructure and expertise to strengthen local AI capacity. In October 2025, Rwanda received a USD 17.5 million investment from the Bill & Melinda Gates Foundation to establish the Rwanda AI Scaling Hub, an initiative designed to drive AI innovation across sectors including health, agriculture, and education.

A third pillar, which also has direct consequences for democracy, is that AI governance carries an entrenched power imbalance favouring the state. In many countries, particularly those with weaker democratic credentials, civil society, media, and private actors are often sidelined. The report notes that despite AI’s swift evolution, none of the 14 countries studied (Cameroon, Egypt, Ethiopia, Ghana, Kenya, Mozambique, Namibia, Nigeria, Rwanda, Senegal, South Africa, Tunisia, Uganda, and Zimbabwe) has developed comprehensive AI-specific legislation, resulting in reliance on existing, fragmented legal frameworks that do not adequately regulate or address complex AI concerns.

The leading countries have developed guidelines, AI policies and strategies, and data protection laws, and have applied sector legislation to AI governance. In contrast, the lagging countries generally lack this foundational framework, creating a vacuum that could heighten AI-driven risks in the absence of effective oversight. Rwanda was among the first countries to adopt a national AI policy, in 2023. Since then, various other countries, including Egypt, Ethiopia, Ghana, Kenya, Nigeria, Senegal, South Africa, and Tunisia, have either launched national AI strategies or have been developing foundational policy frameworks over the last two years.

However, these policy processes, where they exist, often occur behind closed doors, without meaningful multi-stakeholder participation. In many instances, economic growth objectives dominate national AI strategies, while digital rights, transparency, and accountability are sidelined.

The fourth pillar pertains to AI as an instrument of inequality and social fracturing. The spread of deepfakes, AI-generated misinformation, and algorithmic exclusion has become a real threat to political participation and access. This has played out on several occasions and across countries regardless of their democratic credentials, such as during the 2024 elections and protests in Kenya. In Namibia and South Africa, AI-driven campaigns are believed to have influenced perceptions of electoral legitimacy and outcomes.

Of the myriad languages spoken on the continent, only a handful are factored into the machinery of AI. As a result, low-resource languages get lost in the digital ecosystem, content moderation is designed around Western norms owing to the languages used to train AI, and many users on the continent lack the savvy or skills to challenge these systems. The result is an algorithmic second-class citizenship in which AI bypasses the needs of users in Africa, including the resources required to enable adequate civic engagement, transparency, and accountability.

Through these four pillars, the State of Internet Freedom in Africa 2025 report highlights that AI design, deployment, and impact are ultimately reflections of the power structures that define it globally. This power imbalance also plays out within the continent at the national level, to which decision-making on AI’s trajectory remains largely confined.

The report calls for human-centred AI governance in Africa through deliberate and inclusive approaches. Find the full report here.

Applications are Open for a New Round of Africa Digital Rights Funding!

Announcement

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is calling for proposals to support digital rights work across Africa.

This call for proposals is the 10th under the CIPESA-run Africa Digital Rights Fund (ADRF) initiative that provides rapid response and flexible grants to organisations and networks to implement activities that promote digital rights and digital democracy, including advocacy, litigation, research, policy analysis, skills development, and movement building.

The current call is particularly interested in proposals for work related to:

  • Data governance including aspects of data localisation, cross-border data flows, biometric databases, and digital ID.
  • Digital resilience for human rights defenders, other activists and journalists.
  • Censorship and network disruptions.
  • Digital economy.
  • Digital inclusion, including aspects of accessibility for persons with disabilities.
  • Disinformation and related digital harms.
  • Technology-Facilitated Gender-Based Violence (TFGBV).
  • Platform accountability and content moderation.
  • Implications of Artificial Intelligence (AI).
  • Digital Public Infrastructure (DPI).

Grant amounts available range between USD 5,000 and USD 25,000 per applicant, depending on the need and scope of the proposed intervention. Cost-sharing is strongly encouraged, and the grant period should not exceed eight months. Applications will be accepted until November 17, 2025. 

Since its launch in April 2019, the ADRF has provided initiatives across Africa with more than one million US Dollars and contributed to building capacity and traction for digital rights advocacy on the continent.  

Application Guidelines

Geographical Coverage

The ADRF is open to organisations/networks based or operational in Africa and with interventions covering any country on the continent.

Size of Grants

Grant size shall range from USD 5,000 to USD 25,000. Cost sharing is strongly encouraged.

Eligible Activities

The activities that are eligible for funding are those that protect and advance digital rights and digital democracy. These may include but are not limited to research, advocacy, engagement in policy processes, litigation, digital literacy and digital security skills building. 

Duration

The grant funding shall be for a period not exceeding eight months.

Eligibility Requirements

  • The Fund is open to organisations and coalitions working to advance digital rights and digital democracy in Africa. This includes but is not limited to human rights defenders, media, activists, think tanks, legal aid groups, and tech hubs. Entities working on women’s rights, or with youth, refugees, persons with disabilities, and other marginalised groups are strongly encouraged to apply.
  • The initiatives to be funded will preferably have formal registration in an African country, but in some circumstances, organisations and coalitions that do not have formal registration may be considered. Such organisations need to show evidence that they are operational in a particular African country or countries.
  • The activities to be funded must be in/on an African country or countries.

Ineligible Activities

  • The Fund shall not fund any activity that does not directly advance digital rights or digital democracy.
  • The Fund will not support travel to attend conferences or workshops, except in exceptional circumstances where such travel is directly linked to an activity that is eligible.
  • Costs that have already been incurred are ineligible.
  • The Fund shall not provide scholarships.
  • The Fund shall not support equipment or asset acquisition.

Administration

The Fund is administered by CIPESA. An internal and external panel of experts will make decisions on beneficiaries based on the following criteria:

  • If the proposed intervention fits within the Fund’s digital rights priorities.
  • The relevance to the given context/country.
  • Commitment and experience of the applicant in advancing digital rights and digital democracy.
  • Potential impact of the intervention on digital rights and digital democracy policies or practices.

The deadline for submissions is Monday, November 17, 2025. The application form can be accessed here.

CIPESA Delivers Training to Ugandan Editors on AI in the Newsroom

By CIPESA Writer

Artificial intelligence (AI)-related legal and national policy frameworks were the focus for Ugandan editors at an August 20, 2025 workshop organised by the Uganda Editors Guild and the World Association of News Publishers (WAN-IFRA). The training deliberated on the responsible adoption of AI tools by newsrooms and saw participants brainstorm how to effectively navigate the complexities that AI poses to the media industry and the practice of journalism.

WAN-IFRA WIN Deputy Executive, Operations, Jane Godia emphasised that artificial intelligence is evolving rapidly and media houses can no longer afford to ignore the shift. “What we’re really focused on is how to embrace AI in ways that strengthen the core of journalism, and not to replace it, but to enhance its usage while safeguarding credibility and editorial independence,” she said.

Godia urged newsrooms to establish practical, well-defined AI policies that guide ethical and responsible reporting in this new era and harness the power of AI without compromising journalistic ethics.

At the workshop, presentations by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) focused on the state of artificial intelligence regulation and noted with concern the lack of AI-specific legislation in the country. There are, however, several laws and policies from which provisions touching on the application and use of AI can be drawn. CIPESA highlighted existing legal frameworks enabling AI deployment, current regulatory gaps, and the consequent implications of AI for newsrooms.

The key legal instruments highlighted include the Uganda Data Protection and Privacy Act, enacted in 2019, which provides for the protection and regulation of personal data, and whose data protection rights and principles apply to the processing of data by AI systems. Section 27 of the Act specifically provides for rights related to automated decision-making, bringing AI applications directly within its scope.

The other instruments discussed include the Copyright and Neighboring Rights Act, which protects the rights of proprietors and authors from unfair use, and the National Payment Systems Act, which regulates payment systems and grants the Central Bank regulatory oversight over payments. Furthermore, the National Information Technology Authority, Uganda (NITA-U) Act establishes the National Information Technology Authority with a mandate to enhance public service delivery and to champion the transformation of livelihoods of Ugandans using information and communication technologies (ICT). While these laws do not specifically mention AI, some of their provisions can be utilised to regulate AI-related practices and processes.

Other laws discussed include the Uganda Communications Act, enacted in 2013, which establishes the Uganda Communications Commission as the communications sector regulator that, among other functions, oversees the deployment of AI in the sector. Meanwhile, the Regulation of Interception of Communications Act (RICA), enacted in 2010, requires telecommunication service providers in section 8(1)(b) to aid the interception of communications by installing hardware and software, which are essentially AI-operated. Also relevant are the Anti-Terrorism Act, which provides for the interception of communications of persons suspected of engaging in acts of terrorism, and the Computer Misuse Act, which provides for several offences committed using computers.

In addition to the laws, various AI-linked policy frameworks were also presented. These include Vision 2040, which is intended to drive Uganda to middle-income status by 2040; the National Fourth Industrial Revolution (4IR) Strategy (2020), which aims to position Uganda as a continental hub for 4IR technologies by 2040; and Uganda’s third National Development Plan (NDP III), a comprehensive framework to guide the country’s development. These strategic frameworks cover some areas of machine learning and AI integration by virtue of being technology-oriented.

Making reference to the Artificial Intelligence in Eastern Africa Newsrooms report, Edrine Wanyama, Programmes Manager-Legal at CIPESA, highlighted the advantages of AI in newsrooms as extending to increased productivity and efficiency in task performance, a decrease in daily workload, faster reporting of news stories, and quicker fact-checks and detection of disinformation and misinformation patterns.

On the flip side, the workshop also highlighted the current risks associated with use of AI in newsrooms, including facilitating disinformation and misinformation, the tradeoff of accuracy for speed by journalists and editors, over-reliance on AI tools at the cost of individual creativity, the erosion of journalistic ethics and integrity, and the threat of job loss that looms over journalists and editors.

Dr. Peter G. Mwesige, Chief of Party at CIPESA, urged editors to think beyond what AI can do for journalists and newsrooms, and treat AI itself as a beat to be covered critically. Citing trends from other markets, he observed that media coverage is often incomplete, swinging between hype and alarm, and called for explanatory, evidence-based reporting on the promise and limits of AI. He noted that one of AI’s most compelling capabilities is processing large data sets, such as election results, rapidly and at scale.

On the ethical front, Dr. Mwesige emphasised the need for transparency, saying journalists should disclose material use of AI in significant editorial tasks. He urged newsrooms to adopt clear internal policies or integrate AI guidance into existing editorial guidelines.

Dr. Mwesige concluded that while AI can assist with brainstorming story ideas, editing, and transcription, among others, “journalists must still put in the hard work.”

Following the deliberations, CIPESA presented recommendations to guide the use of AI in the newsroom and to protect journalists, if AI is to be used meaningfully and ethically without compromising integrity and professionalism.

  • Ethically use AI by, among other things, complying with acceptable standards such as the Paris Charter on AI and Journalism, respecting copyright, and acknowledging the sources of works.
  • In collaboration with other newsrooms and media houses, develop best practices including policies to guide the integration and application of AI in their work.
  • Media houses should collaboratively invest resources in training journalists in responsible and ethical use of AI.
  • Deploy fact-checkers to deal with information disorders such as misinformation, disinformation, and deepfakes.
  • Respect other people’s rights, such as intellectual property rights and the right to privacy, while using AI.
  • Exercise extra caution when using AI to generate content, to avoid unethical usage that undermines journalism’s ethical standards.
  • Prioritise human oversight of the application and use of AI, to guard against excessive intrusion by AI and to retain a human dimension in generated content.

Protecting Global Democracy in the Digital Age: Insights from PAI’s Community of Practice

By Christian Cardona

2024 was a historic year for global elections, with approximately four billion people eligible to vote in 72 countries. It was also a historic year for AI-generated content, which had a significant presence in elections all around the world. The use of synthetic media, or AI-generated media (visual, auditory, or multimodal content that has been generated or modified via artificial intelligence), can affect elections by impacting voting procedures and candidate narratives, and enabling the spread of harmful content. Widespread access to improved AI applications has increased the quality and quantity of the synthetic content being distributed, accelerating harm and distrust.

As we look toward global elections in 2025 and beyond, it is vital to recognize that one of the primary harms of generative AI in the 2024 elections was the creation of deepnudes of women candidates. Not only is this type of content harmful to the individuals depicted, but it also likely creates a chilling effect on women’s political participation in future elections. The AI and Elections Community of Practice (COP) has provided us with key insights such as these, as well as actionable data that can help inform policymakers and platforms as they seek to safeguard future elections in the AI age.

To understand how various stakeholders and actors anticipated and addressed the use of generative AI during elections and are responding to potential risks, the COP provided an avenue for Partnership on AI (PAI) stakeholders to present their ongoing efforts, receive feedback from peers, and discuss difficult questions and tradeoffs when it comes to deploying this technology. In the last three meetings of the eight-part series, PAI was joined by the Center for Democracy & Technology (CDT), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), and Digital Action to discuss AI’s use in election information and AI regulations in the West and beyond.

Investigating the Spread of Election Information with Center for Democracy & Technology (CDT)

The Center for Democracy & Technology has worked for thirty years to improve civil rights and civil liberties in the digital age, including through almost a decade of research and policy work on trust, security, and accessibility in American elections. In the sixth meeting of the series, CDT provided an inside look into two recent research reports published on the confluence of democracy, AI, and elections.

The first report investigates how chatbots from companies such as OpenAI, Anthropic, MistralAI, and Meta handle responses to election-related queries, specifically for voters with disabilities. The report found that 61% of the chatbot responses tested were insufficient in at least one of the four ways assessed by the study: they contained incorrect information, omitted key information, had structural issues, or were evasive. Notably, 41% of responses contained factual errors, such as incorrect voter registration deadlines, and in one case a chatbot cited a non-existent law. A quarter of the responses were likely to prevent or dissuade voters with disabilities from voting, raising concerns about the reliability of chatbots in providing important election information.
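
The rubric-style evaluation described above can be illustrated with a small sketch. The code below is a hypothetical reconstruction rather than CDT’s actual methodology or tooling: the Judgement class, the category names, and the toy reviewer verdicts are all illustrative assumptions, standing in for the expert human review a real study would use.

```python
from dataclasses import dataclass

# The four insufficiency categories assessed in the study.
CATEGORIES = ("incorrect_information", "omission", "structural_issues", "evasion")

@dataclass
class Judgement:
    query: str
    flags: frozenset  # categories in which a reviewer judged the response to fail

    @property
    def insufficient(self) -> bool:
        # A response counts as insufficient if it fails in at least one category.
        return bool(self.flags)

def summarise(judgements: list[Judgement]) -> None:
    # Aggregate reviewer verdicts into headline percentages.
    n = len(judgements)
    insufficient = sum(j.insufficient for j in judgements)
    factual = sum("incorrect_information" in j.flags for j in judgements)
    print(f"insufficient in at least one way: {insufficient / n:.0%}")
    print(f"contained factual errors:         {factual / n:.0%}")

# Toy verdicts for illustration only (not CDT's data).
summarise([
    Judgement("When is the voter registration deadline?",
              frozenset({"incorrect_information"})),
    Judgement("Can I vote curbside with a mobility impairment?",
              frozenset({"omission", "evasion"})),
    Judgement("What ID do I need to vote by mail?", frozenset()),
])
```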

The second report explored political advertising across social media platforms and how changes in policies at seven major tech companies over the last four years have impacted US elections. As organizations seek more opportunities to leverage generative AI tools in an election context, whether for chatbots or political ads, they must continue investing in research on user safety, implementing evaluation thresholds for deployment, and ensuring full transparency on product limitations once deployed.

AI Regulations and Trends in African Democracy with CIPESA

A “think and do tank,” the Collaboration on International ICT Policy for East and Southern Africa focuses on technology policy and practice as it intersects with society, human rights, and livelihoods. In the seventh meeting of the series, CIPESA provided an overview of its work on AI regulations and trends in Africa, touching on topics such as national and regional AI strategies, and elections and harmful content.

As the use of AI continues to grow in Africa, most AI regulation across the continent focuses on the ethical use of AI and human rights impacts, while lacking specific guidance on the impact of AI on elections. Case studies show that AI is undermining electoral integrity on the continent, distorting public perception given the limited skills of many to discern and fact-check misleading content. A June 2024 report by Clemson University’s Media Forensics Hub found that the Rwandan government used large language models (LLMs) to generate pro-government propaganda during elections in early 2024. Over 650,000 messages attacking government critics, designed to look like authentic support for the government, were sent from 464 accounts.

The 2024 general elections in South Africa saw similar misuse of AI, with AI-generated content targeting politicians and leveraging racial and xenophobic undertones to sway voter sentiment. Examples include a deepfake depicting Donald Trump supporting the uMkhonto weSizwe (MK) party and a manipulated 2009 video of rapper Eminem supporting the Economic Freedom Fighters Party (EFF). The discussion emphasized the need to maintain a focus on AI as it advances in the region with particular attention given to mitigating the challenges AI poses in electoral contexts.

AI tools are lowering the barrier to entry for those seeking to sway elections, whether individuals, political parties, or ruling governments. As the use of AI tools grows in Africa, countries must take steps to implement stronger regulation around the use of AI and elections (without stifling expression) and ensure country-specific efforts are part of a broader regional strategy.

Catalyzing Global AI Change for Democracy with Digital Action

Digital Action is a nonprofit organization that mobilizes civil society organizations, activists, and funders across the world to call out digital threats and take joint action. In the eighth and final meeting in the PAI AI and Elections series, Digital Action shared an overview of the organization’s Year of Democracy campaign. The discussions centered on protecting elections and citizens’ rights and freedoms across the world, as well as exploring how social media content has had an impact on elections.

The main focus of Digital Action’s work in 2024 was supporting the Global Coalition For Tech Justice, which called on Big Tech companies to fully and equitably resource efforts to protect 2024 elections through a set of specific, measurable demands. While the media expected to see very high-profile examples of generative AI swaying election results around the world, what emerged instead were corrosive effects on political campaigning, harms to individual candidates and communities, and likely broader harms to trust and future political participation.

Many elections around the world were impacted by AI-generated content shared on social media, including in Pakistan, Indonesia, India, South Africa, and Brazil, with minorities and female political candidates being particularly vilified. In Brazil, deepnudes depicting two female politicians appeared on a social media platform and adult content websites in the leadup to the 2024 municipal elections. While one of the politicians took legal action, the slow pace of court processes and the lack of proactive steps by social media platforms prevented a timely remedy.

To mitigate future harms, Digital Action called for each Big Tech company to establish and publish fully and equitably resourced Action Plans, globally and for each country holding elections. By doing so, tech companies can provide greater protection to groups, such as female politicians, that are often at risk during election periods.

What’s To Come

PAI’s AI and Elections COP series has concluded after eight convenings with presentations from industry, media, and civil society. Over the course of the year, presenters provided attendees with different perspectives and real-world examples on how generative AI has impacted global elections, as well as how platforms are working to combat harm from synthetic content.

Some of the key takeaways from the series include:

  1. Down-ballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections. While there were some attempts to use generative AI to influence national elections (you can read more about this in PAI’s case study), down-ballot candidates were often more susceptible to harm than nationally recognized ones. Often, local candidates with fewer resources were unable to effectively combat harmful content. Deepfakes were also shown to deter the participation of female politicians in some general elections.
  2. Platforms should dedicate more resources to localizing generative AI policy enforcement. Platforms are attempting to protect users from harmful synthetic content by being transparent about the use of generative AI in election ads, providing resources to elected officials to tackle election-related security challenges, and adopting many of the disclosure mechanisms recommended in PAI’s Synthetic Media Framework. However, they have fallen short in localizing enforcement policies with a lack of language support and in-country collaboration with local governments, civil society organizations, and community organizations that represent minority and marginalized groups such as persons with disabilities and women. As a result, generative AI has been used to cause real-world harm before being addressed.
  3. Globally, countries need to adopt more coherent regional strategies to regulate the use of generative AI in elections, balancing free expression and safety. In the U.S., a lack of federal legislation on the use of generative AI in elections has led to various individual efforts from states and industry organizations. As a result, there is a fractured approach to keeping users safe without a cohesive overall strategy. In Africa, attempts by countries to regulate AI are very disparate. Some countries such as Rwanda, Kenya, and Senegal have adopted AI strategies that emphasize infrastructure and economic development but fail to address ways to mitigate risks that generative AI presents in free and fair elections. While governments around the world have shown some initiative to catch up, they must work with organizations, both at the industry and state level, to implement best practices and lessons learned. These government efforts cannot exist in a vacuum. Regulations must cohere and contribute to broader global governance efforts to regulate the use of generative AI in elections while ensuring safety and free speech protections.

While the AI and Elections Community of Practice has come to an end, we continue to push forward in our work to responsibly develop, create, and share synthetic media.

This article was initially published by Partnership on AI on March 11, 2025.