ADRF Grantee Update
As the world enters the era of artificial intelligence, the rise of deepfakes and AI-generated media presents significant threats to the integrity of democratic processes, particularly in fragile democracies. These processes are vital for ensuring fairness, accountability, and citizen engagement.
When compromised, the foundational values of democracy—and society’s trust in its leaders and institutions—are at risk. Safeguarding democracy in the AI era requires vigilance, collaboration, and innovative solutions, such as building a database of verified AI manipulations to protect the truth and uphold free societies.
In the Global South, where political stability is often tenuous, the stakes are even higher. Elections can easily be swayed by mis/disinformation, as AI generation tools are now accessible at minimal cost and require little technical skill, allowing malicious actors to create and amplify false content at scale. This risk is amplified in authoritarian regimes, where AI-generated mis/disinformation is increasingly weaponised to manipulate public opinion, undermine elections, or silence dissent. From fabricated videos of political figures to manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust, further destabilising already fragile democracies.
Despite ongoing efforts by social media platforms and AI companies to develop detection tools, these solutions remain inadequate, particularly in culturally and linguistically diverse regions like the Global South. Detection algorithms are often trained on Western datasets and fail to account for local cultural cues, dialects, and subtleties. Deepfake creators exploit this gap, leaving communities vulnerable to disinformation, especially during critical events like elections.
Recognising the urgency of addressing these challenges, Thraets developed Community Fakes, an incident database and central repository where researchers can submit, share, and analyse deepfakes and other AI-altered media. The platform enables collaboration, combining human insights with AI tools to create a robust defence against disinformation. By empowering users to identify, upload, and discuss suspect content, Community Fakes offers a comprehensive, adaptable approach to protecting the integrity of information.
The initiative was made possible through the CIPESA-run African Digital Rights Fund (ADRF), which supports innovative interventions to advance digital rights across Africa. The grant to Thraets for the project titled “Safeguarding African Elections—Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy” aims to counter the increasing risks posed by AI-generated disinformation, which could jeopardise free and fair elections.
The project has conducted research on elections in Tunisia and Ghana. The findings are feeding into tutorials that train journalists and fact-checkers to identify and counter AI-generated electoral disinformation, as well as awareness campaigns on the need for transparency about the capabilities of AI tools and the risks they pose to democracy.
Additionally, the project held an Ideathon to generate novel ideas for combating AI-generated disinformation and developed the Spot the Fakes quiz, which lets users explore AI-generated synthetic media and practise distinguishing the authentic from the fake.
Community Fakes will crowdsource human intelligence to complement AI-based detection, allowing users to apply their unique insights to spot inconsistencies in AI-generated media that machines may overlook and to discuss the observed patterns with other experts. Users can submit suspected deepfakes to the platform, where the global community can scrutinise, verify, and expose them. According to Thraets, this approach ensures that even the most convincing deepfakes can be exposed before they do irreparable harm.
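To picture how such a crowdsourced-review workflow might fit together, here is a minimal sketch in Python. Everything in it is an assumption for exposition: the Submission record, the add_review and verdict methods, and the simple majority-with-quorum rule are hypothetical, not Thraets' actual schema or code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Verdict(Enum):
    """Provisional outcome of community review (hypothetical labels)."""
    UNREVIEWED = "unreviewed"
    LIKELY_AUTHENTIC = "likely_authentic"
    LIKELY_FAKE = "likely_fake"


@dataclass
class Submission:
    """A suspected deepfake submitted for community scrutiny.

    Field names are illustrative, not the platform's real data model.
    """
    media_url: str
    submitter: str
    context: str  # where and when the media circulated
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    notes: list[str] = field(default_factory=list)        # written analyses
    votes: dict[str, bool] = field(default_factory=dict)  # reviewer -> is_fake

    def add_review(self, reviewer: str, is_fake: bool, note: str = "") -> None:
        """Record one reviewer's judgement and optional observation."""
        self.votes[reviewer] = is_fake
        if note:
            self.notes.append(f"{reviewer}: {note}")

    def verdict(self, quorum: int = 3) -> Verdict:
        """Aggregate reviews into a provisional verdict once a quorum is met."""
        if len(self.votes) < quorum:
            return Verdict.UNREVIEWED
        fake_votes = sum(self.votes.values())  # True counts as 1
        if fake_votes > len(self.votes) / 2:
            return Verdict.LIKELY_FAKE
        return Verdict.LIKELY_AUTHENTIC


if __name__ == "__main__":
    item = Submission(
        media_url="https://example.org/suspect-clip.mp4",
        submitter="analyst_01",
        context="Circulated on social media ahead of an election",
    )
    item.add_review("expert_a", is_fake=True, note="Lip-sync drifts at 0:14")
    item.add_review("expert_b", is_fake=True)
    item.add_review("expert_c", is_fake=False)
    print(item.verdict())  # Verdict.LIKELY_FAKE
```

The point the sketch captures is that human observations (the notes) travel with the media item itself, so later reviewers build on earlier reasoning rather than starting from scratch.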
Find a full outline of Community Fakes here.