Protecting Global Democracy in the Digital Age: Insights from PAI’s Community of Practice

By Christian Cardona

2024 was a historic year for global elections, with approximately four billion eligible voters casting a vote in 72 countries. It was also a historic year for AI-generated content, with a significant presence in elections all around the world. The use of synthetic media, or AI-generated media (visual, auditory, or multimodal content that has been generated or modified via artificial intelligence), can affect elections by impacting voting procedures and candidate narratives, and enabling the spread of harmful content. Widespread access to improved AI applications has increased the quality and quantity of the synthetic content being distributed, accelerating harm and distrust.

As we look toward global elections in 2025 and beyond, it is vital that we recognize that one of the primary harms of generative AI in 2024 elections was the creation of deepnudes of women candidates. Not only is this type of content harmful to the individuals depicted, but it also likely creates a chilling effect on women's political participation in future elections. The AI and Elections Community of Practice (COP) has provided key insights such as these, along with actionable data that can help inform policymakers and platforms as they seek to safeguard future elections in the AI age.

To understand how various stakeholders and actors anticipated and addressed the use of generative AI during elections and are responding to potential risks, the COP provided an avenue for Partnership on AI (PAI) stakeholders to present their ongoing efforts, receive feedback from peers, and discuss difficult questions and tradeoffs when it comes to deploying this technology. In the last three meetings of the eight-part series, PAI was joined by the Center for Democracy & Technology (CDT), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), and Digital Action to discuss AI’s use in election information and AI regulations in the West and beyond.

Investigating the Spread of Election Information with Center for Democracy & Technology (CDT)

The Center for Democracy & Technology has worked for thirty years to improve civil rights and civil liberties in the digital age, including through almost a decade of research and policy work on trust, security, and accessibility in American elections. In the sixth meeting of the series, CDT provided an inside look into two recent research reports published on the confluence of democracy, AI, and elections.

The first report investigates how chatbots from companies such as OpenAI, Anthropic, MistralAI, and Meta handle election-related queries, specifically from voters with disabilities. The report found that 61% of the chatbot responses tested were insufficient in at least one of the four ways assessed by the study: they contained incorrect information, omitted key information, had structural issues, or evaded the question. Notably, 41% of responses contained factual errors, such as incorrect voter registration deadlines; in one case, a chatbot cited a non-existent law. A quarter of the responses were likely to prevent or dissuade voters with disabilities from voting, raising concerns about the reliability of chatbots in providing important election information.

The second report explored political advertising across social media platforms and how policy changes at seven major tech companies over the last four years have impacted US elections. As organizations seek more opportunities to leverage generative AI tools in an election context, whether for chatbots or political ads, they must continue investing in research on user safety, implementing evaluation thresholds for deployment, and ensuring full transparency about product limitations once deployed.

AI Regulations and Trends in African Democracy with CIPESA

A “think and do tank,” the Collaboration on International ICT Policy for East and Southern Africa focuses on technology policy and practice as it intersects with society, human rights, and livelihoods. In the seventh meeting of the series, CIPESA provided an overview of their work on AI regulations and trends in Africa, touching on topics such as national and regional AI strategies, elections, and harmful content.

As the use of AI continues to grow in Africa, most AI regulation across the continent focuses on the ethical use of AI and human rights impacts, while lacking specific guidance on AI's impact on elections. Case studies show that AI is undermining electoral integrity on the continent, distorting public perception in contexts where many citizens lack the skills to discern and fact-check misleading content. A June 2024 report by Clemson University’s Media Forensics Hub found that the Rwandan government used large language models (LLMs) to generate pro-government propaganda during elections in early 2024. Over 650,000 messages attacking government critics, designed to look like authentic support for the government, were sent from 464 accounts.

The 2024 general elections in South Africa saw similar misuse of AI, with AI-generated content targeting politicians and leveraging racial and xenophobic undertones to sway voter sentiment. Examples include a deepfake depicting Donald Trump supporting the uMkhonto weSizwe (MK) party and a manipulated 2009 video of rapper Eminem supporting the Economic Freedom Fighters Party (EFF). The discussion emphasized the need to maintain a focus on AI as it advances in the region with particular attention given to mitigating the challenges AI poses in electoral contexts.

AI tools are lowering the barrier to entry for those seeking to sway elections, whether individuals, political parties, or ruling governments. As the use of AI tools grows in Africa, countries must take steps to implement stronger regulation around the use of AI and elections (without stifling expression) and ensure country-specific efforts are part of a broader regional strategy.

Catalyzing Global AI Change for Democracy with Digital Action

Digital Action is a nonprofit organization that mobilizes civil society organizations, activists, and funders across the world to call out digital threats and take joint action. In the eighth and final meeting in the PAI AI and Elections series, Digital Action shared an overview of the organization’s Year of Democracy campaign. The discussions centered on protecting elections and citizens’ rights and freedoms across the world, as well as exploring how social media content has had an impact on elections.

The main focus of Digital Action’s work in 2024 was supporting the Global Coalition For Tech Justice, which called on Big Tech companies to fully and equitably resource efforts to protect 2024 elections through a set of specific, measurable demands. While the media expected to see high-profile examples of generative AI swaying election results around the world, what emerged instead were corrosive effects on political campaigning, harms to individual candidates and communities, and likely broader harms to trust and future political participation.

Many elections around the world, including in Pakistan, Indonesia, India, South Africa, and Brazil, were impacted by AI-generated content shared on social media, with minorities and female political candidates being particularly vilified. In Brazil, in the leadup to the 2024 municipal elections, deepnudes depicting two female politicians appeared on a social media platform and on adult content websites. While one of the politicians took legal action, the slow pace of court processes and the lack of proactive steps by social media platforms prevented a timely remedy.

To mitigate future harms, Digital Action called on each Big Tech company to establish and publish fully and equitably resourced Action Plans, both globally and for each country holding elections. By doing so, tech companies can provide greater protection to groups, such as female politicians, that are often at risk during election periods.

What’s To Come

PAI’s AI and Elections COP series has concluded after eight convenings with presentations from industry, media, and civil society. Over the course of the year, presenters provided attendees with different perspectives and real-world examples on how generative AI has impacted global elections, as well as how platforms are working to combat harm from synthetic content.

Some of the key takeaways from the series include:

  1. Down-ballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections. While there were some attempts to use generative AI to influence national elections (you can read more about this in PAI’s case study), down-ballot candidates were often more susceptible to harm than nationally recognized ones. Often, local candidates with fewer resources were unable to effectively combat harmful content. Deepfakes were also shown to deter greater participation by female politicians in some general elections.
  2. Platforms should dedicate more resources to localizing generative AI policy enforcement. Platforms are attempting to protect users from harmful synthetic content by being transparent about the use of generative AI in election ads, providing resources to elected officials to tackle election-related security challenges, and adopting many of the disclosure mechanisms recommended in PAI’s Synthetic Media Framework. However, they have fallen short in localizing enforcement policies with a lack of language support and in-country collaboration with local governments, civil society organizations, and community organizations that represent minority and marginalized groups such as persons with disabilities and women. As a result, generative AI has been used to cause real-world harm before being addressed.
  3. Globally, countries need to adopt more coherent regional strategies to regulate the use of generative AI in elections, balancing free expression and safety. In the U.S., the lack of federal legislation on the use of generative AI in elections has led to a patchwork of individual efforts by states and industry organizations, resulting in a fractured approach to keeping users safe without a cohesive overall strategy. In Africa, countries' attempts to regulate AI are similarly disparate. Some countries, such as Rwanda, Kenya, and Senegal, have adopted AI strategies that emphasize infrastructure and economic development but fail to address ways to mitigate the risks generative AI poses to free and fair elections. While governments around the world have shown some initiative to catch up, they must work with organizations, both at the industry and state level, to implement best practices and lessons learned. These government efforts cannot exist in a vacuum. Regulations must cohere and contribute to broader global governance efforts to regulate the use of generative AI in elections while ensuring safety and free speech protections.

While the AI and Elections Community of Practice has come to an end, we continue to push forward in our work to responsibly develop, create, and share synthetic media.

This article was initially published by Partnership on AI on March 11, 2025

Advancing Advocacy and Awareness on Digital Rights for Businesses in Uganda

By Nadhifah Muhammad and Tendo Racheal

Imagine running a business in today’s fast-paced digital world, where almost everything, from customer data and marketing to financial transactions, happens online. Now imagine having little or no knowledge of how to protect that data or of the relevant laws and regulations, or worse, unknowingly violating digital rights. That is the reality for many businesses in Uganda today.

Data protection, data privacy, cybersecurity, and surveillance are not just techy buzzwords; they are essential to building a safe and inclusive digital economy. Yet many small and medium enterprises (SMEs), which account for 90% of Uganda’s private sector, either do not fully understand responsible digital practices or lack the tools to adopt them.

That’s where the Advancing Respect for Human Rights by Businesses in Uganda (ARBHR) project comes in. With support from Enabel and the European Union, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is co-implementing this project which seeks to reduce human rights abuses connected to business activities in Uganda, particularly those impacting women and children. 

Among others, CIPESA is working to raise awareness on digital rights in the business context. As businesses increasingly rely on digital technologies to operate and innovate, their role in upholding digital rights becomes paramount. Yet many Ugandan businesses, particularly SMEs, lack a comprehensive understanding of digital rights principles and their obligations in upholding them. 

Early this year, CIPESA published a call for applications to the Civil Society (CSO) Fund for entities interested in championing digital rights in the business sector. Six CSOs were selected through the competitive process and, together with four innovation hubs and SME, employer, and employee associations, will be supported to implement awareness-raising activities. These include Evidence and Methods Lab, Boundless Minds, Wakiso District Human Rights Committee, Media Focus on Africa Uganda, Girls for Climate Action, Recreation for Development and Peace Uganda, Private Sector Foundation Uganda, Federation of Uganda Employers and The Innovation Village.

To ensure that the partners effectively undertake their interventions, CIPESA convened a three-day bootcamp on March 4–8, 2025, aimed at enhancing their knowledge and skills in implementing awareness-raising and advocacy campaigns as part of advancing the business and human rights agenda. The bootcamp brought together 35 participants.

Key topics of discussion included Trends in Business and Digital Rights in Uganda, such as Privacy and Data Protection, Cybersecurity, Inclusion and Labour Rights; Impact Communications and Storytelling for Awareness and Advocacy; as well as Digital Content Creation.

The discussions were framed under the Uganda National Action Plan on Business and Human Rights (NAPBHR), which seeks to protect human rights, enhance corporate digital responsibility to respect human rights, and ensure access to remedy for victims of human rights violations and abuses resulting from non-compliance by business entities in the country.

The project is very timely to create more awareness on business and human rights issues especially in regards to labour rights, effective redress mechanisms for BHR [Business and Human Rights] violations and engendering of digital rights. –  Training Participant

Uganda’s ARBHR aligns with the United Nations Guiding Principles on Business and Human Rights (UNGPs), which rest on three pillars: the state duty to protect human rights, the corporate responsibility to respect them, and access to remedy for abuses arising from business operations. By equipping businesses with the knowledge and tools to integrate digital rights into their policies and practices, the ARBHR project is contributing to a global movement that ensures businesses operate ethically, respect fundamental freedoms, and uphold human dignity in the digital space.

For Uganda’s business sector to thrive in a digitally connected world, businesses must align with these principles, creating a culture where human rights are not an afterthought but a core business responsibility. 

Therefore, as partners roll out their awareness-raising action plans over the next eight months, it is envisaged that over 200,000 individuals will be reached in the Albertine, Busoga, and Kampala Metropolitan regions. Through radio talk shows, skits, social media campaigns, community meetings, capacity-building trainings, visualised Information, Education, and Communication (IEC) products, and digital clinics, these stakeholders will gain an enhanced appreciation of digital rights protection, fostering a more informed and active community of advocates for rights-respecting practices among businesses in Uganda.

So, if you’re a business owner, a CSO representative, or just someone passionate about digital rights, this is your chance to be part of something bigger. Join the conversation, and let’s build a digital future we can all trust.

NEW BRIEF: Policy Considerations for Enhancing Digital Trade in East Africa

By Lillian Nalwoga

The East African region is on the cusp of a digital revolution, with significant strides being made in digital trade and payments. This is driven by remarkable growth in internet penetration, mobile money services, and the adoption of emerging technologies like 5G and Artificial Intelligence (AI).

Further, initiatives such as the African Continental Free Trade Area (AfCFTA) and the East African Community (EAC) e-Commerce Strategy are laying the groundwork for a thriving digital economy. The World Bank projects digital services exports from Africa to reach USD 74 billion by 2040, highlighting the immense opportunity at hand. Despite these strides, there are several key challenges that need to be addressed to fully unlock the region’s digital potential.

In this brief, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) outlines barriers to digital trade and presents key policy recommendations for promoting a human rights-based digital economy in the region.

According to the brief, the key barriers hindering the advancement of digital trade in East Africa include:

  • Limited Digital Infrastructure and Internet Access: While mobile internet penetration is growing, issues like internet subsea cable cuts, network disruptions, low digital literacy, and low affordability persist. Uneven distribution of infrastructure, high deployment costs, and slow adoption of new technologies further exacerbate the digital divide.
  • Fragmented Approaches to Digital Economy Taxation: Differing digital service taxes (DST) across countries create complexities and may impede innovation and cross-border trade. Kenya, Uganda, and Tanzania all levy DST, with Kenya’s rate being the highest in the region.
  • Data Governance and Privacy Concerns: While some countries have adopted data protection laws, harmonised action is lacking. Issues like data localisation requirements and the need for a comprehensive regional approach to data privacy and management remain.
  • Limited Local Data Centres: The region has a limited number of data centres, which hinders data localisation efforts and the advancement of AI and other data-intensive technologies. Restrictive regulatory frameworks in some countries further complicate the use of cloud solutions.
  • Rising Cybersecurity Threats: Cyber risks are a major concern, with increasing cyber attacks targeting various sectors. Cybercrime laws, while necessary, sometimes contain vague provisions that can be used to curtail online freedoms.

To overcome these challenges and fully leverage the digital economy, the policy brief offers several key recommendations:

  • Embrace Digital Transformation and Connectivity: Invest in robust networks, backup systems, and address single points of failure in internet connectivity.
  • Implement Robust Cybersecurity Frameworks: Prioritise investments in cyber infrastructure, skilling, and awareness.
  • Recognise Data as a Trade Enabler: Ensure trade agreements prevent unnecessary restrictions on data flows and adopt balanced data localisation policies.
  • Harmonise Data Protection Standards: Reduce compliance costs and build trust by harmonising data protection standards across the region.
  • Build Robust Digital Infrastructure: Focus on Digital Public Infrastructure (DPI), data policy, privacy, and protection.
  • Speed up the Adoption of the EAC Data Governance Policy Framework: Secure resources for its implementation.
  • Assess and Address the Impact of Emerging Technologies: Ensure policies foster innovation while addressing ethical and legal challenges.

The East African region has the potential to become a major player in the global digital economy. By addressing the existing barriers and implementing these recommendations, the region can create a thriving digital ecosystem that benefits all its residents.

Read the full brief here.

The Surveillance Footprint in Africa Threatens Privacy and Data Protection

By Edrine Wanyama 

Digital and physical surveillance, conducted globally and across Africa by states, by private companies that develop surveillance technology or supply it to governments, and by unscrupulous individuals, is a major threat to the digital civic space and to the operations of civil society organisations (CSOs), human rights defenders (HRDs), activists, political opposition, government critics, and the media. This highly intrusive technology, often facilitated by biometric data collection systems such as those used for processing national identification documents, voter cards, and travel documents, as well as mandatory SIM card registration and the installation of CCTV cameras for “smart cities”, further erodes the digital civic space.

Given these developments, the Digital Rights Alliance Africa (DRAA), a network of CSOs, media, lawyers, and tech specialists from across Africa that seeks to champion digital civic space and counter threats to digital rights on the continent, recently held a learning session on “Understanding Surveillance Trends, Threats and Challenges for Civil Society.” The Alliance was created by the International Center for Not-for-Profit Law (ICNL) and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) in response to rising digital authoritarianism. It currently has members from more than 12 countries, who collectively conduct research and advocacy and share experiences around navigating digital threats and influencing strategic digital policy reforms in line with the alliance’s outcome declaration.

The virtual learning session built capacity among the Alliance members to better understand digital surveillance and the related threats facing democracy actors. Discussions delved into the nature of surveillance, the regulatory environment, and strategies to navigate and counter surveillance risks and threats. The threats and risks include harassment, arbitrary arrests, persecution and prosecution on trumped up charges. 

While emphasising the need to understand emerging surveillance technologies, ecosystem and deployment tactics, Richard Ngamita, the Team Leader at Thraets, highlighted the huge investment (estimated at USD 1 billion annually) which African governments have made in acquiring surveillance technologies from China, Israel, the United States of America and Europe. Ngamita urged CSOs, HRDs and other actors to build digital security capacity to protect against illegal surveillance.

Victoria Ibezim-Ohaeri, the Executive Director of Spaces for Change, referencing the organisation’s report Proliferation of Dual-Use Surveillance Technologies in Nigeria: Deployment, Risks & Accountability, highlighted weak regulation and unaccountable practices by states that facilitate unlawful surveillance across the continent, and their implications for rights. According to the report,

“The greatest concern around surveillance technologies is their potential misuse for political repression and human rights abuses. Surveillance practices also undermine the citizens’ dignity, autonomy, and security, translating to significant reductions in citizens’ agency. Agency reductions are magnified by the state’s power to punish dissent. This creates a chilling effect as citizens self-censor or avoid public engagement for fear of being surveilled or punished. The citizens have little agency to challenge or resist the state’s surveillance because of low digital literacy, poverty and broader limitations in access to justice.”

Michaela Shapiro, the Global Engagement and Advocacy Officer at Article 19, United Kingdom, discussed the governing norms of surveillance globally while paying particular attention to the common gaps that need policy action at the country level in Africa. Recalling the intensification of digital and physical surveillance as part of state responses to curb the spread of Covid-19 in the absence of clear oversight mechanisms, Michaela emphasised the role of CSOs in advocating for data and privacy protection. 

To date, the leading data protection instrument on the continent, the African Union Convention on Cyber Security and Personal Data Protection, has been ratified by only 16 of 55 states, while only 36 states have enacted specific laws on privacy and data protection rights.

Surveillance in Africa poses a major threat to individuals’ data and privacy rights, as governments exercise wide-ranging access to data subjects’ information. National security claims and loopholes in the law are routinely exploited to abuse and violate data rights. While regional and international standards exist, they are often overlooked, with governments taking measures not provided for by law, rendering them unlawful, arbitrary, and disproportionate under human rights law.

By way of progressive action, speakers made the following recommendations to state and non-state actors:

States and Governments 

  • Address surveillance and bolster personal data and privacy protections through adopting robust legal and regulatory frameworks and repealing restrictive digital laws and policies.
  • Promote and enhance transparency and accountability through the establishment of independent surveillance oversight boards.
  • Strictly regulate the use of surveillance technologies by law enforcement and intelligence agencies to ensure accountability.
  • Collaborate with other countries to develop harmonised privacy standards, within established regional and international frameworks, so as to reach settled positions on cross-border surveillance controls.

Civil Society Organisations

  • Build and enhance capacities of HRDs and other players in data governance and accountability to equip them with knowledge to counter common data privacy threats by governments and corporate entities.
  • Push for ethical and responsible use of technology to prevent and minimise technology-related violations. 
  • Challenge all forms of unlawful surveillance practices, including through strategic litigation and other legal action.

Tech Sector

  • Conduct regular audits and impact assessments to address potential privacy breaches and enhance accountability and transparency. 
  • Prioritise privacy by integrating privacy protections into products and services, including data minimisation, and establish strong security measures.
  • Prioritise ethical considerations in the development and deployment of new technologies to guarantee strong protections against potential violations.

Kenya’s Digital Crossroads: Surveillance, Activism, and the Urgent Fight for Digital Rights in 2025

By Victor Kapiyo

In East Africa, Kenya has over the years been regarded as a model of excellence in digital rights. More recently, however, the country has been plagued by alarming practices that threaten that standing. These include a heightened crackdown on activism, including the abduction and intimidation of activists and journalists; politically motivated internet censorship; rising disinformation; cyber threats and data breaches; and declining media freedom. Nonetheless, it has not been all doom and gloom: there are glimmers of hope, aided by increasing internet use and a population that has displayed remarkable resilience and pushback against continued threats to digital rights.

In this brief, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) explores these trends and presents some recommendations for consideration by stakeholders.

The key trends that the brief highlights include the following:

  1. Increased Internet Usage: Kenya’s internet and social media usage has been on the rise with 22.7 million internet users, of which 13 million are active on social media. Likewise, cellular mobile connections stood at 66 million in 2024. However, digital access remains uneven across the country, with urban areas reporting the highest adoption, and affordability being the main barrier to access.
  2. Growing Crackdowns on Activism: The #OccupyParliament and #RejectFinanceBill2024 protests were met with excessive force, arrests, abductions, and crackdowns on organisers and participants. The government also invoked various laws and deployed sophisticated digital tools to monitor protestors without adequate oversight.
  3. Spiralling Censorship of Online Speech: The government imposed a nationwide internet shutdown during the #RejectFinanceBill2024 protests and later blocked access to Telegram for two weeks to prevent cheating during the national examination period. Further, government officials issued several warnings to the public over the “irresponsible use of social media” and threatened to regulate social media platforms and block websites.
  4. Disinformation Persists: Kenya’s disinformation enterprise remains sophisticated, lucrative and largely funded by political actors that exploit the divisions around ideological, ethnic, economic, and demographic lines while harnessing the power of social media. However, government responses to disinformation through the enforcement of the Computer Misuse and Cybercrimes Act (2018) continue to raise concerns about censorship due to the misapplication of the law to muzzle legitimate speech.
  5. Gaps in Access to Information: Access to information about key government projects has remained inconsistent, with widespread secrecy, delays, or outright refusals characterising many projects. However, the delivery of government services online through the eCitizen portal has enhanced access to information and services, even as the digital divide persists.
  6. Growing Data Breaches and Cyber Threats: As the population embraces the digital economy, the number of cyber threats recorded increased in 2024, with the majority being system vulnerabilities, malware and brute force attacks. Also, there are concerns over unchecked state surveillance, and the adequacy of safeguards to protect citizens’ data amidst rising data breaches.
  7. Media under Siege: Kenya’s media rankings have declined, with the Media Council of Kenya reporting a total of 74 cases of press freedom violations in 2024. The violations included cases of harassment, intimidation, and arbitrary arrests of journalists, particularly those reporting on politically sensitive topics such as corruption, protests, and human rights abuses.
  8. Change of Guard at the MoICDE: William Kabogo was appointed as the third Cabinet Secretary for the Ministry of Information, Communications, and the Digital Economy (MoICDE) in a span of two years. The billionaire politician announced his readiness to regulate social media and shut down the internet if national security is threatened. 

In conclusion, the brief calls for continued vigilance and action to stem the downward spiral. 

Summary of Recommendations:

  1. The government should commit to maintaining free, open, and secure internet access in line with international human rights standards.
  2. The government should take measures to expand ICT infrastructure in rural and underserved areas.
  3. The Computer Misuse and Cybercrimes Act should be amended to narrow its scope and ensure that response measures comply with the three-part test and the law is not used to censor or suppress digital rights.
  4. The capacity of the Office of the Data Protection Commissioner should be strengthened to ensure the compliance of all government online services and digital initiatives with the Data Protection Act.
  5. Kenya should address its cybersecurity constraints.
  6. Stakeholders should work to strengthen legal protections for journalists and media outlets.
  7. The government, including at the county level, should continue to invest in the digitisation of public records and services to facilitate efficiency, transparency and accountability.

Read the full brief here.