ADRF Grantee Update
As the world enters the era of artificial intelligence, the rise of deepfakes and AI-generated media poses a significant threat to the integrity of democratic processes, particularly in fragile democracies. Those processes underpin fairness, accountability, and citizen engagement; when they are compromised, the foundational values of democracy, and society's trust in its leaders and institutions, are at risk. Safeguarding democracy in the AI era therefore requires vigilance, collaboration, and innovative solutions, such as a database of verified AI manipulations that helps protect the truth and uphold free societies.
In the Global South, where political stability is often tenuous, the stakes are even higher. Elections can easily be swayed by mis/disinformation, since the tools for generating false content are now available at minimal cost and require little technical skill, allowing malicious actors to create and amplify such content at scale. The risk is amplified in authoritarian regimes, where AI-generated mis/disinformation is increasingly weaponised to manipulate public opinion, undermine elections, or silence dissent. From fabricated videos of political figures to other manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust, further destabilising already fragile democracies.
Despite ongoing efforts by social media platforms and AI companies to develop detection tools, these solutions remain inadequate, particularly in culturally and linguistically diverse regions such as the Global South. Detection models are typically trained on Western datasets and fail to account for local cultural cues, dialects, and subtleties. Deepfake creators can exploit this gap, leaving communities vulnerable to disinformation, especially during critical events like elections.
Recognising the urgency of addressing these challenges, Thraets developed Community Fakes, an incident database and central repository where researchers can submit, share, and analyse deepfakes and other AI-altered media. The platform enables collaboration, combining human insights with AI tools to create a robust defence against disinformation. By empowering users to identify, upload, and discuss suspect content, Community Fakes offers a comprehensive, adaptable approach to protecting the integrity of information.
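To make the submit-and-share workflow concrete, the sketch below models what a minimal incident record in such a repository might look like. It is an illustrative assumption only: the `Incident` fields, the in-memory `DATABASE`, and the `submit_incident` helper are hypothetical, not Community Fakes' actual schema or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from hashlib import sha256

# Hypothetical incident record; Community Fakes' real schema is not
# public in this update, so these fields are illustrative assumptions.
@dataclass
class Incident:
    media_url: str        # where the suspect content was encountered
    media_bytes: bytes    # archived copy of the media itself
    country: str          # regional context matters for local cues
    language: str         # language or dialect of the content
    description: str      # submitter's notes on why it looks manipulated
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def content_hash(self) -> str:
        """Fingerprint the media so duplicate reports can be linked."""
        return sha256(self.media_bytes).hexdigest()


# Toy in-memory store standing in for the shared repository.
DATABASE: dict[str, list[Incident]] = {}


def submit_incident(incident: Incident) -> str:
    """File a report, grouping resubmissions of the same media together."""
    key = incident.content_hash()
    DATABASE.setdefault(key, []).append(incident)
    return key


if __name__ == "__main__":
    report = Incident(
        media_url="https://example.com/suspect-video",
        media_bytes=b"...raw media bytes...",
        country="UG",
        language="Luganda",
        description="Audio does not match the speaker's lip movements.",
    )
    key = submit_incident(report)
    print(f"Filed under {key[:12]} with {len(DATABASE[key])} report(s)")
```

Keying reports by a content hash, as in this toy example, is one way a shared database could let independent researchers discover that they are analysing the same piece of media.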
The initiative was made possible through the African Digital Rights Fund (ADRF), a programme by CIPESA that supports innovative interventions to advance digital rights across Africa. Now in its eighth round, the Fund is supporting projects focused on Artificial Intelligence (AI), hate speech, disinformation, microtargeting, network disruptions, data access, and online violence against women journalists and politicians. Find a full outline of Community Fakes here.