Protecting Global Democracy in the Digital Age: Insights from PAI’s Community of Practice

By Christian Cardona

2024 was a historic year for global elections, with approximately four billion people eligible to vote in 72 countries. It was also a historic year for AI-generated content, which had a significant presence in elections around the world. The use of synthetic media, or AI-generated media (visual, auditory, or multimodal content that has been generated or modified via artificial intelligence), can affect elections by impacting voting procedures and candidate narratives, and by enabling the spread of harmful content. Widespread access to improved AI applications has increased the quality and quantity of the synthetic content being distributed, accelerating harm and distrust.

As we look toward global elections in 2025 and beyond, it is vital that we recognize that one of the primary harms of generative AI in 2024 elections was the creation of deepnudes of women candidates. Not only is this type of content harmful to the individuals depicted, but it also likely creates a chilling effect on women’s political participation in future elections. The AI and Elections Community of Practice (COP) has provided us with key insights such as this, as well as actionable data that can help inform policymakers and platforms as they seek to safeguard future elections in the AI age.

To understand how various stakeholders anticipated and addressed the use of generative AI during elections, and how they are responding to potential risks, the COP provided an avenue for Partnership on AI (PAI) stakeholders to present their ongoing efforts, receive feedback from peers, and discuss difficult questions and tradeoffs involved in deploying this technology. In the last three meetings of the eight-part series, PAI was joined by the Center for Democracy & Technology (CDT), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), and Digital Action to discuss AI’s use in election information and AI regulations in the West and beyond.

Investigating the Spread of Election Information with Center for Democracy & Technology (CDT)

The Center for Democracy & Technology has worked for thirty years to improve civil rights and civil liberties in the digital age, including through almost a decade of research and policy work on trust, security, and accessibility in American elections. In the sixth meeting of the series, CDT provided an inside look into two recent research reports published on the confluence of democracy, AI, and elections.

The first report investigates how chatbots from companies such as OpenAI, Anthropic, MistralAI, and Meta handle responses to election-related queries, specifically for voters with disabilities. The report found that 61% of the chatbot responses tested were insufficient in at least one of the four ways assessed by the study (incorrect information, omission of key information, structural issues, or evasion), and that 41% of responses contained factual errors, such as incorrect voter registration deadlines. In one case, a chatbot cited a non-existent law. A quarter of the responses were likely to prevent or dissuade voters with disabilities from voting, raising concerns about the reliability of chatbots in providing important election information.

The second report explored political advertising across social media platforms and how policy changes at seven major tech companies over the last four years have impacted US elections. As organizations seek more opportunities to leverage generative AI tools in an election context, whether for chatbots or political ads, they must continue investing in research on user safety, implement evaluation thresholds for deployment, and ensure full transparency about product limitations once deployed.

AI Regulations and Trends in African Democracy with CIPESA

A “think and do tank,” the Collaboration on International ICT Policy for East and Southern Africa focuses on technology policy and practice as it intersects with society, human rights, and livelihoods. In the seventh meeting of the series, CIPESA provided an overview of its work on AI regulations and trends in Africa, touching on topics such as national and regional AI strategies, elections, and harmful content.

As the use of AI continues to grow in Africa, most AI regulation across the continent focuses on the ethical use of AI and its human rights impacts, while lacking specific guidance on AI’s impact on elections. Case studies show that AI is undermining electoral integrity on the continent, distorting public perception, particularly given the limited capacity of many citizens to discern and fact-check misleading content. A June 2024 report by Clemson University’s Media Forensics Hub found that the Rwandan government used large language models (LLMs) to generate pro-government propaganda ahead of the country’s 2024 elections. Over 650,000 messages attacking government critics, designed to look like authentic support for the government, were sent from 464 accounts.

The 2024 general elections in South Africa saw similar misuse of AI, with AI-generated content targeting politicians and leveraging racial and xenophobic undertones to sway voter sentiment. Examples include a deepfake depicting Donald Trump supporting the uMkhonto weSizwe (MK) party and a manipulated 2009 video of rapper Eminem appearing to support the Economic Freedom Fighters (EFF) party. The discussion emphasized the need to maintain a focus on AI as it advances in the region, with particular attention to mitigating the challenges AI poses in electoral contexts.

AI tools are lowering the barrier to entry for those seeking to sway elections, whether individuals, political parties, or ruling governments. As the use of AI tools grows in Africa, countries must take steps to implement stronger regulation of AI’s use in elections (without stifling expression) and ensure that country-specific efforts are part of a broader regional strategy.

Catalyzing Global AI Change for Democracy with Digital Action

Digital Action is a nonprofit organization that mobilizes civil society organizations, activists, and funders across the world to call out digital threats and take joint action. In the eighth and final meeting of the PAI AI and Elections series, Digital Action shared an overview of the organization’s Year of Democracy campaign. The discussions centered on protecting elections and citizens’ rights and freedoms across the world, as well as on how social media content has affected elections.

The main focus of Digital Action’s work in 2024 was supporting the Global Coalition For Tech Justice, which called on Big Tech companies to fully and equitably resource efforts to protect 2024 elections through a set of specific, measurable demands. While the media expected to see high-profile examples of generative AI swaying election results around the world, what emerged instead were corrosive effects on political campaigning, harms to individual candidates and communities, and likely broader harms to trust and future political participation.

Elections in many countries, including Pakistan, Indonesia, India, South Africa, and Brazil, were affected by AI-generated content shared on social media, with minorities and female political candidates being particularly vilified. In Brazil, deepnudes depicting two female politicians appeared on a social media platform and adult content websites in the leadup to the 2024 municipal elections. While one of the politicians took legal action, the slow pace of court processes and the lack of proactive steps by social media platforms prevented a timely remedy.

To mitigate future harms, Digital Action called for each Big Tech company to establish and publish fully and equitably resourced Action Plans, both globally and for each country holding elections. By doing so, tech companies can provide greater protection to groups, such as female politicians, that are often at risk during election periods.

What’s To Come

PAI’s AI and Elections COP series has concluded after eight convenings with presentations from industry, media, and civil society. Over the course of the year, presenters provided attendees with different perspectives and real-world examples on how generative AI has impacted global elections, as well as how platforms are working to combat harm from synthetic content.

Some of the key takeaways from the series include:

  1. Down-ballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections. While there were some attempts to use generative AI to influence national elections (you can read more about this in PAI’s case study), down-ballot candidates were often more susceptible to harm than nationally recognized ones. Local candidates with fewer resources were often unable to effectively combat harmful content. Deepfakes were also shown to discourage greater participation by female politicians in some general elections.
  2. Platforms should dedicate more resources to localizing generative AI policy enforcement. Platforms are attempting to protect users from harmful synthetic content by being transparent about the use of generative AI in election ads, providing resources to elected officials to tackle election-related security challenges, and adopting many of the disclosure mechanisms recommended in PAI’s Synthetic Media Framework. However, they have fallen short in localizing enforcement, lacking language support and in-country collaboration with local governments, civil society organizations, and community organizations that represent minority and marginalized groups, such as persons with disabilities and women. As a result, generative AI has been used to cause real-world harm before being addressed.
  3. Globally, countries need to adopt more coherent regional strategies to regulate the use of generative AI in elections, balancing free expression and safety. In the U.S., the lack of federal legislation on the use of generative AI in elections has led to assorted individual efforts by states and industry organizations, resulting in a fractured approach to keeping users safe without a cohesive overall strategy. In Africa, country-level attempts to regulate AI are highly disparate. Some countries, such as Rwanda, Kenya, and Senegal, have adopted AI strategies that emphasize infrastructure and economic development but fail to address ways of mitigating the risks generative AI poses to free and fair elections. While governments around the world have shown some initiative to catch up, they must work with organizations, at both the industry and state level, to implement best practices and lessons learned. These government efforts cannot exist in a vacuum. Regulations must cohere with and contribute to broader global governance efforts to regulate the use of generative AI in elections while ensuring safety and free speech protections.

While the AI and Elections Community of Practice has come to an end, we continue to push forward in our work to responsibly develop, create, and share synthetic media.

This article was initially published by Partnership on AI on March 11, 2025.

Policy Alternatives for an Artificial Intelligence Ecosystem in Uganda

CIPESA

Economic projections show that by 2030, artificial intelligence (AI) will add USD 15.7 trillion to the global economy. Of this, USD 1.2 trillion will be generated in Africa, which could boost the continent’s Gross Domestic Product by 5.6%. Despite AI’s transformative potential, there are concerns about the risks it poses to individuals’ rights and freedoms. There is therefore a need to foster a trusted and ethical AI ecosystem that elicits people’s confidence while guaranteeing an enabling atmosphere for innovation, so as to harness AI for the greater public good.

The discussion on AI in Uganda is still in its early stages. Nonetheless, the country needs to develop a comprehensive, AI-specific legal and institutional governance framework to provide regulatory oversight of AI and the diverse actors in the AI ecosystem. Currently, the legal framework relevant to AI consists of various pieces of legislation that largely focus on general-purpose technologies. These laws do not provide sufficient regulatory cover for AI, its associated benefits, or the mitigation of risks to human security, rights and freedoms.

In a new policy brief, the Collaboration on ICT Policy for East and Southern Africa (CIPESA) reviews the AI policymaking journeys of various countries, such as Kenya, South Africa, Singapore, Luxembourg, France and Germany, and proposes 11 actions Uganda could take to fulfil its aspiration to effectively regulate and harness AI.

The existing key policy frameworks include the Uganda Vision 2040, which emphasises the importance of Science, Technology, Engineering and Innovation (STEI) as critical drivers of economic growth and social transformation, and the National Fourth Industrial Revolution (4IR) Strategy, which aims to accelerate Uganda’s development into an innovative, productive and competitive society using 4IR technologies, with emphasis on using AI in the public sector to improve financial management and tax revenue collection. Meanwhile, the third National Development Plan (NDP III) identifies the promotion of digital transformation and the adoption of 4IR technologies, including AI, as critical components for achieving Uganda’s vision of becoming a middle-income country.

The legal frameworks that impact AI-related oversight include the Constitution, which lays out crucial benchmarks for the regulation of AI. It provides for the role of the state in stimulating agricultural, industrial, technological and scientific development by adopting appropriate policies and enacting enabling legislation. The Constitution also provides for the right to privacy, freedom from discrimination, and the right to equality.

Other key laws include the Data Protection and Privacy Act of 2019 which, even though it was not drafted with AI in mind, is directly relevant to the regulation of AI technologies through the lens of data protection. The Computer Misuse Act of 2011 provides a framework that addresses the unlawful use of computers and electronic systems. Relevant to the governance of AI is section 12, which criminalises unauthorised access to a computer or electronic system.

The National Information Technology Authority, Uganda (NITA-U) Act established NITA-U, the body responsible for regulating, coordinating, and promoting information technology in the country, and offers a foundation for improving the infrastructure needed to support AI regulation efforts.

Overall, the current policy and legal framework, however fragmented, provides a starting point for enacting comprehensive, AI-specific legislation.

The growing adoption of AI brings a host of opportunities that positively impact society, including improved productivity and efficiency for individuals, the health sector, civil society organisations, the media, financial institutions, manufacturing industries, supply chains, agriculture, climate and weather research, and academia. AI is also being used by public agencies such as the Uganda Revenue Authority to support more effective revenue collection. Uganda’s telecommunications operators are also utilising AI, for example to send targeted messages that encourage users to subscribe to loan offers such as Airtel Wewole and MTN MoKash.

Prospects for AI Regulation in Uganda

As Uganda’s journey of AI adoption and usage gains traction, the following guiding actions, which underlie progressive AI frameworks in various countries, could help accelerate and give direction to Uganda’s AI aspirations.

  1. Establishment of an AI governance institutional framework to guide the national adoption and usage of AI.
  2. Development and implementation of a “living” framework of best practices on AI that operates across the diverse sectors affected by AI. Singapore provides a best practice in this regard: as a national agenda, best practices that inform the safe evolution of AI in different spheres are consistently codified. Such a framework complements the regulatory framework and would allow Uganda to keep pace with the evolution of AI without necessarily undertaking statutory amendments, which is especially valuable given the rapid pace of change in AI and technology.
  3. Implementation of checks and balances through the creation of specific policies, regulations, guidelines, and laws to manage AI effectively and address the existing significant gaps in its regulation and oversight. Key stakeholders – including the Ministry of ICT and National Guidance, the Uganda Communications Commission, NITA-U, and the Personal Data Protection Office – must collaborate to develop comprehensive and tailored regulations. This effort should focus on understanding AI’s specific dynamics, impacts, and challenges within the Ugandan context, rather than wholesale adoption or replication of legislation from other jurisdictions, given the divergences in context at the continental, regional and national levels.
  4. Tap into African and international AI frameworks for inspiration. Drawing on regional and international frameworks, such as the African Union’s AI Policy and the European Union’s AI Act, will offer key strategic guidelines and intervention measures to shape robust and effective AI legislation in Uganda.
  5. Establish a National Research and Innovation Fund on AI to effectively tap into and harvest the dividends that come with AI. Direct government funding of this kind is warranted by the high levels of uncertainty around outcomes in tech innovation.
  6. Develop and implement a National Strategy for AI to enhance policy coordination and coherence and offer direction and guidance. This would encompass the national vision for AI in Uganda’s social and economic development, and guide all other initiatives on progressive AI regulation.
  7. Develop and implement a National Citizenry Awareness and Public Education Programme on AI to better prepare citizens to engage with AI responsibly, ensure inclusion and advocate for ethical practices.
  8. Apply a human rights-protective approach to AI, influencing the design of AI systems with fairness, transparency, and accountability, and employing diverse and representative datasets to mitigate biases related to ethnicity, gender, and socioeconomic status.
  9. Establish a mechanism that can enforce the ethical use of AI by the various stakeholders, including through emphasising transparency and accountability in AI deployment.
  10. Establish cybersecurity protocols to counter the inherent vulnerability to cyber attacks and other attendant digital security risks that come with AI.
  11. Create a conducive atmosphere for citizen platforms for AI engagement. These platforms can be conduits for sharing best practices and the latest research, among other emerging issues on AI that could benefit the country. An AI ecosystem should thus favour and strategically support such inter-agency, inter-sector and public-private collaboration and formal linkages, to facilitate the transfer of AI technology from exploration, study and innovation to actual application.

Read the full brief here.

Introducing “Community Fakes”, A Crowdsourcing Platform to Combat AI-Generated Deepfakes

ADRF Grantee Update

As the world enters the era of artificial intelligence, the rise of deepfakes and AI-generated media presents significant threats to the integrity of democratic processes, particularly in fragile democracies. These processes are vital for ensuring fairness, accountability, and citizen engagement. 

When compromised, the foundational values of democracy—and society’s trust in its leaders and institutions—are at risk. Safeguarding democracy in the AI era requires vigilance, collaboration, and innovative solutions, such as building a database of verified AI manipulations to protect the truth and uphold free societies.

In the Global South, where political stability is often tenuous, the stakes are even higher. Elections can easily be influenced by mis/disinformation, now accessible at minimal cost and requiring little technical skill. Malicious actors can easily use these tools to create and amplify false content at scale. This risk is amplified in authoritarian regimes, where AI-generated mis/disinformation is increasingly weaponised to manipulate public opinion, undermine elections, or silence dissent. From fabricated videos of political figures to manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust, further destabilising already fragile democracies.

Despite ongoing efforts by social media platforms and AI companies to develop detection tools, these solutions remain inadequate, particularly in culturally and linguistically diverse regions like the Global South. Detection algorithms often rely on patterns trained on Western datasets, which fail to account for local cultural cues, dialects, and subtleties. This gap allows deepfake creators to exploit these nuances, leaving communities vulnerable to disinformation, especially during critical events like elections.

Recognising the urgency of addressing these challenges, Thraets developed Community Fakes, an incident database and central repository for researchers to submit, share, and analyse deepfakes and other AI-altered media. This platform enables collaboration, combining human insights with AI tools to create a robust defence against disinformation. By empowering users to identify, upload, and discuss suspect content, Community Fakes offers a comprehensive, adaptable approach to protecting the integrity of information.

The initiative was made possible through the CIPESA-run African Digital Rights Fund (ADRF), which supports innovative interventions to advance digital rights across Africa. The grant to Thraets for the project titled “Safeguarding African Elections—Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy” aims to counter the increasing risks posed by AI-generated disinformation, which could jeopardise free and fair elections. 

The project has conducted research on elections in Tunisia and Ghana, with the findings feeding into tutorials for journalists and fact-checkers on identifying and countering AI-generated electoral disinformation, as well as into awareness campaigns on the need for transparency about the capabilities of AI tools and their risks to democracy.

Additionally, the project held an Ideathon to generate novel ideas for combating AI-generated disinformation and developed the Spot the Fakes quiz, which gives users the opportunity to dive into the world of AI-generated synthetic media and learn how to distinguish the authentic from the fake.

Community Fakes will crowdsource human intelligence to complement AI-based detection, allowing users to leverage their unique insights to spot inconsistencies in AI-generated media that machines may overlook, while discussing the observed patterns with other experts. Users can submit suspected deepfakes to the platform, which the global community can then scrutinise, verify, and expose. According to Thraets, this approach ensures that even the most convincing deepfakes can be exposed before they do irreparable harm.
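To make that submit-scrutinise-verify workflow concrete, below is a minimal sketch in Python of how such an incident database might represent submissions and community reviews. All names here (DeepfakeSubmission, CommunityReview, the review quorum) are hypothetical illustrations, not Thraets’ actual implementation or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommunityReview:
    reviewer: str
    is_fake: bool   # the reviewer's verdict on the submitted media
    notes: str = "" # observed inconsistencies, e.g. lighting, lip-sync, artefacts

@dataclass
class DeepfakeSubmission:
    media_url: str
    submitted_by: str
    context: str    # e.g. "circulated on social media before an election"
    reviews: List[CommunityReview] = field(default_factory=list)

    def add_review(self, review: CommunityReview) -> None:
        self.reviews.append(review)

    def status(self, quorum: int = 3) -> str:
        """Label the submission once enough community reviews are in."""
        if len(self.reviews) < quorum:
            return "under review"
        fake_votes = sum(r.is_fake for r in self.reviews)
        return "likely fake" if fake_votes > len(self.reviews) / 2 else "likely authentic"

# Example: a suspect clip is submitted, then scrutinised by three reviewers.
clip = DeepfakeSubmission("https://example.org/clip.mp4", "analyst_01",
                          "shared on social media before an election")
for name, verdict in [("fact_checker_a", True), ("forensics_b", True), ("journalist_c", False)]:
    clip.add_review(CommunityReview(name, verdict))
print(clip.status())  # -> "likely fake"
```

The quorum rule in this sketch simply reflects the article’s emphasis on community scrutiny rather than a single automated verdict; a real platform would add moderation, provenance metadata, and AI-assisted checks on top.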

Find a full outline of Community Fakes here.

Towards Ethical AI Regulation in Africa

By Tusi Fokane

The ubiquity of generative artificial intelligence (AI)-based applications across various sectors has led to debates on the most effective regulatory approach to encourage innovation whilst minimising risks. The benefits and potential of AI are evident in various industries ranging from financial and customer services to education, agriculture and healthcare. AI holds particular promise for developing countries to transform their societies and economies. 

However, there are concerns that, without adequate regulatory safeguards, AI technologies could further exacerbate existing governance concerns around ethical deployment, privacy, algorithmic bias, workforce disruptions, transparency, and disinformation. Stakeholders have called for increased engagement and collaboration between policymakers, academia, and industry to develop legal and regulatory frameworks and standards for ethical AI adoption. 

The Global North has taken a leading position in exploring various regulatory modalities. These include risk-based or proportionate regulation, as proposed in the European Commission’s AI Act. Countries such as Finland and Estonia have opted for a greater focus on maintaining trust and collaboration at the national level by adopting a human-centric approach to AI. The United Kingdom (UK) has taken a “context-specific” approach, embedding AI regulation within existing regulatory institutions. Canada has prioritised bias and discrimination, whereas other jurisdictions such as France, Germany and Italy have opted for greater emphasis on transparency and accountability in developing AI regulation.

On the other hand, China has taken a firmer approach to AI regulation, distributing policy responsibility amongst existing standards bodies. The United States of America (USA) has adopted an incremental approach, introducing additional guidance to existing legislation and emphasising rights and safety.

Whilst there are divergent approaches to AI regulation, there is at least some agreement, at a multilateral level, on the need for a human rights-based approach to ensure ethical AI deployment that respects basic freedoms, fosters transparency and accountability, and promotes diversity and inclusivity through actionable policies and specific strategies.

Developments in AI regulation in Africa

Regulatory responses in Africa have been disparate, although the publication of the African Union Development Agency (AUDA-NEPAD) White Paper: Regulation and Responsible Adoption of AI for Africa Towards Achievement of AU Agenda 2063 is anticipated to introduce greater policy coherence. The White Paper follows the 2021 AI blueprint and the African Commission on Human and Peoples’ Rights Resolution 473, which calls for a human-rights-centred approach to AI governance. 

The White Paper calls for a harmonised approach to AI adoption and underscores the importance of developing an enabling governance framework to “provide guidelines for implementation and also keep AI development in check for negative impacts.” Furthermore, the White Paper calls on member states to adopt national AI strategies that emphasise data safety, security and protection in an effort to promote the ethical use of AI. 

The White Paper proposes a mixed regulatory and governance framework, depending on the AI use-case. First, the proposals encompass self-regulation, which would be enforced through sectoral codes of conduct, and which offer a degree of flexibility to match an evolving AI landscape. Second, the White Paper suggests the adoption of standards and certification to establish industry benchmarks. The third proposal is for a distinction between hard and soft regulation, depending on the identified potential for harm. Finally, the White Paper calls for AI regulatory sandboxes to allow for testing under regulatory supervision. 

Figure 1: Ethical AI framework

However, there are still concerns that African countries are lagging behind in fostering AI innovation and putting in place the necessary regulatory frameworks. According to the 2023 Government AI Readiness Index, Benin, Mauritius, Rwanda, Senegal, and South Africa lead in government efforts around AI among the 24 African countries assessed. The index measures a country’s progress against four pillars: government/strategy, data & infrastructure, technology sector, and global governance/international collaboration.

The national AI strategies of Mauritius, Rwanda, Egypt, Kenya, Senegal and Benin have a strong focus on infrastructure and economic development whilst also laying the foundation for AI regulation within their jurisdictions. For its part, Nigeria has adopted a more collaborative approach in co-creating its National Artificial Intelligence Strategy, with calls for input from AI researchers. 

Thinking beyond technical AI regulation 

Despite increasingly positive signs of AI adoption in Africa, there are concerns that the pace of AI regulation on the continent is too slow, and that it may not be fit for purpose for local and national conditions. Some analysts have warned against wholesale adoption of policies and strategies imported from the Global North, and which may fail to consider country-specific contexts.

Some Global South academics and civil society organisations have raised questions regarding the importation of regulatory standards from the Global North, some even referring to the practice as ‘data colonialism’. The apprehension about copy-pasting Global North standards is premised on the continent’s over-reliance on Big Tech digital ecosystems and infrastructure. Researchers indicate that “These context-sensitive issues raised on various continents can best be understood as a combination of social and technical systems. As AI systems make decisions about the present and future through classifying information and developing models from historical data, one of the main critiques of AI has been that these technologies reproduce or heighten existing inequalities involving gender, race, coloniality, class and citizenship.”

Other stakeholders caution against ‘AI neocolonialism’, which replicates Western conventions, often resulting in poor labour outcomes and stifling the potential for the development of local AI approaches.  

Proposals for African solutions for AI deployment 

There is undoubtedly a need for ethical and effective AI regulation in Africa. This calls for the development of strategies and a context-specific regulatory and legal foundation, which has been taking shape in various stages. African policy-makers should ensure a multi-stakeholder and collaborative approach to designing AI governance solutions in order to ensure ethical AI that respects human rights and is transparent and inclusive. This becomes even more significant given the potential risks posed by AI during election seasons on the continent.

Beyond framing local regulatory solutions, stakeholders have called for African governments to play a greater role in global AI discussions to guard against regulatory blindspots that may emerge from importing purely Western approaches. Perhaps the strongest call is for African leaders to leverage expertise on the continent and promote greater collaboration amongst African policymakers.

CIPESA Joins International Initiative to Develop “AI Charter in Media”

By CIPESA Writer

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has joined a coalition of international organisations and experts to develop a charter aimed at guiding the use of Artificial Intelligence (AI) in the media.

According to Reporters Without Borders (RSF), the group coordinating the development of the Charter, 16 partner organisations, as well as 31 media, AI and academic professionals representing 18 different nationalities, are involved in the process. The CIPESA Executive Director, Dr. Wairagala Wakabi, is among the experts on the committee, which is led by journalist and Nobel Peace Prize laureate Maria Ressa.

RSF stated that the growing interest in the project highlights the real need to clearly and collaboratively develop an ethical framework to safeguard information integrity, at a time when generative AI and other algorithm-based technologies are being rapidly deployed in the news and information sphere.

Part of the committee’s responsibility is to develop a set of principles, rights, and obligations for information professionals regarding the use of AI-based systems, by the end of 2023. This is a response to the realisation that the rapid deployment of AI in the media industry presents a major threat to information integrity.

See here for details about the initiative, the partner organisations and the experts involved.