Introducing “Community Fakes”, A Crowdsourcing Platform to Combat AI-Generated Deepfakes

ADRF Grantee Update | 

As the world enters the era of artificial intelligence, the rise of deepfakes and AI-generated media presents significant threats to the integrity of democratic processes, particularly in fragile democracies. These processes are vital for ensuring fairness, accountability, and citizen engagement. 

When compromised, the foundational values of democracy—and society’s trust in its leaders and institutions—are at risk. Safeguarding democracy in the AI era requires vigilance, collaboration, and innovative solutions, such as building a database of verified AI manipulations to protect the truth and uphold free societies.

In the Global South, where political stability is often tenuous, the stakes are even higher. Elections can easily be influenced by mis/disinformation, which is now accessible at minimal cost and requires little technical skill. Malicious actors can use these tools to create and amplify false content at scale. This risk is amplified in authoritarian regimes, where AI-generated mis/disinformation is increasingly weaponised to manipulate public opinion, undermine elections, or silence dissent. From fabricated videos of political figures to manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust, further destabilising already fragile democracies.

Despite ongoing efforts by social media platforms and AI companies to develop detection tools, these solutions remain inadequate, particularly in culturally and linguistically diverse regions like the Global South. Detection algorithms often rely on patterns trained on Western datasets, which fail to account for local cultural cues, dialects, and subtleties. This gap allows deepfake creators to exploit these nuances, leaving communities vulnerable to disinformation, especially during critical events like elections.

Recognising the urgency of addressing these challenges, Thraets developed Community Fakes, an incident database and central repository for researchers to submit, share, and analyse deepfakes and other AI-altered media. The platform enables collaboration, combining human insights with AI tools to create a robust defence against disinformation. By empowering users to identify, upload, and discuss suspect content, Community Fakes offers a comprehensive, adaptable approach to protecting the integrity of information.

The initiative was made possible through the CIPESA-run African Digital Rights Fund (ADRF), which supports innovative interventions to advance digital rights across Africa. The grant to Thraets for the project titled “Safeguarding African Elections—Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy” aims to counter the increasing risks posed by AI-generated disinformation, which could jeopardise free and fair elections. 

The project has conducted research on elections in Tunisia and Ghana. The findings have fed into tutorials for journalists and fact-checkers on identifying and countering AI-generated electoral disinformation, as well as awareness campaigns on the need for transparency about the capabilities of AI tools and their risks to democracy.

Additionally, the project held an Ideathon to generate novel ideas for combating AI-generated disinformation and developed the Spot the Fakes quiz, which invites users to explore AI-generated synthetic media and learn to distinguish the authentic from the fake.

Community Fakes will crowdsource human intelligence to complement AI-based detection, allowing users to leverage their unique insights to spot inconsistencies in AI-generated media that machines may overlook, and to discuss the observed patterns with other experts. Users can submit suspected deepfakes to the platform, which the global community can then scrutinise, verify, and expose. According to Thraets, this approach ensures that even the most convincing deepfakes can be exposed before they do irreparable harm.
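To make that submit-review-verdict workflow concrete, here is a minimal sketch in Python of how an incident record might move through such a pipeline. It is purely illustrative: the class names, fields, and statuses are assumptions made for this example, not Thraets’ actual schema or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReviewStatus(Enum):
    """Lifecycle of a reported item, from raw submission to community verdict."""
    SUBMITTED = "submitted"
    UNDER_REVIEW = "under_review"
    VERIFIED_FAKE = "verified_fake"
    VERIFIED_AUTHENTIC = "verified_authentic"


@dataclass
class Submission:
    """A suspected deepfake reported by a community member (hypothetical model)."""
    media_url: str
    submitter: str
    notes: str  # inconsistencies the submitter spotted
    status: ReviewStatus = ReviewStatus.SUBMITTED
    reviews: list = field(default_factory=list)
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def add_review(self, expert: str, comment: str) -> None:
        """Record an expert's analysis and move the item into review."""
        self.reviews.append(f"{expert}: {comment}")
        self.status = ReviewStatus.UNDER_REVIEW

    def close(self, verdict: ReviewStatus) -> None:
        """Record the community's final verdict on the item."""
        self.status = verdict


# Example: a reported clip moves through the review pipeline.
clip = Submission(
    media_url="https://example.org/suspect-clip.mp4",
    submitter="fact_checker_01",
    notes="Lip movement drifts out of sync around 0:42",
)
clip.add_review("forensics_expert", "Audio spectrogram shows splice artefacts")
clip.close(ReviewStatus.VERIFIED_FAKE)
print(clip.status.value)  # verified_fake
```

In a real deployment the verdict would come from many reviewers rather than a single call, but the state machine above captures the crowdsourcing idea: human observations accumulate on a record until the community can publish a conclusion.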

Find a full outline of Community Fakes here.

Towards Ethical AI Regulation in Africa

By Tusi Fokane |

The ubiquity of generative artificial intelligence (AI)-based applications across various sectors has led to debates on the most effective regulatory approach to encourage innovation whilst minimising risks. The benefits and potential of AI are evident in various industries ranging from financial and customer services to education, agriculture and healthcare. AI holds particular promise for developing countries to transform their societies and economies. 

However, there are concerns that, without adequate regulatory safeguards, AI technologies could further exacerbate existing governance concerns around ethical deployment, privacy, algorithmic bias, workforce disruptions, transparency, and disinformation. Stakeholders have called for increased engagement and collaboration between policymakers, academia, and industry to develop legal and regulatory frameworks and standards for ethical AI adoption. 

The Global North has taken a leading position in exploring various regulatory modalities. These include risk-based or proportionate regulation, as proposed in the European Commission’s AI Act. Countries such as Finland and Estonia have opted to focus on maintaining trust and collaboration at the national level by adopting a human-centric approach to AI. The United Kingdom (UK) has taken a “context-specific” approach, embedding AI regulation within existing regulatory institutions. Canada has prioritised bias and discrimination, whereas other jurisdictions such as France, Germany and Italy have placed greater emphasis on transparency and accountability in developing AI regulation.

On the other hand, China has taken a firmer approach to AI regulation, distributing policy responsibility amongst existing standards bodies. The United States of America (USA) has adopted an incremental approach, introducing additional guidance to existing legislation and emphasising rights and safety.

Whilst approaches to AI regulation diverge, there is at least some agreement at the multilateral level on the need for a human-rights-based approach to ensure ethical AI deployment: one that respects basic freedoms, fosters transparency and accountability, and promotes diversity and inclusivity through actionable policies and specific strategies.

Developments in AI regulation in Africa

Regulatory responses in Africa have been disparate, although the publication of the African Union Development Agency (AUDA-NEPAD) White Paper: Regulation and Responsible Adoption of AI for Africa Towards Achievement of AU Agenda 2063 is anticipated to introduce greater policy coherence. The White Paper follows the 2021 AI blueprint and the African Commission on Human and Peoples’ Rights Resolution 473, which calls for a human-rights-centred approach to AI governance. 

The White Paper calls for a harmonised approach to AI adoption and underscores the importance of developing an enabling governance framework to “provide guidelines for implementation and also keep AI development in check for negative impacts.” Furthermore, the White Paper calls on member states to adopt national AI strategies that emphasise data safety, security and protection in an effort to promote the ethical use of AI. 

The White Paper proposes a mixed regulatory and governance framework, depending on the AI use-case. First, the proposals encompass self-regulation, enforced through sectoral codes of conduct, which offers the flexibility to match an evolving AI landscape. Second, the White Paper suggests the adoption of standards and certification to establish industry benchmarks. Third, it proposes a distinction between hard and soft regulation, depending on the identified potential for harm. Finally, the White Paper calls for AI regulatory sandboxes to allow testing under regulatory supervision.

Figure 1: Ethical AI framework

However, there are still concerns that African countries are lagging behind in fostering AI innovation and putting the necessary regulatory frameworks in place. According to the 2023 Government AI Readiness Index, Benin, Mauritius, Rwanda, Senegal, and South Africa lead the 24 African countries assessed in government efforts around AI. The index measures a country’s progress against four pillars: government/strategy, data & infrastructure, technology sector, and global governance/international collaboration.

The national AI strategies of Mauritius, Rwanda, Egypt, Kenya, Senegal and Benin have a strong focus on infrastructure and economic development whilst also laying the foundation for AI regulation within their jurisdictions. For its part, Nigeria has adopted a more collaborative approach in co-creating its National Artificial Intelligence Strategy, with calls for input from AI researchers. 

Thinking beyond technical AI regulation 

Despite increasingly positive signs of AI adoption in Africa, there are concerns that the pace of AI regulation on the continent is too slow, and that it may not be suited to local and national conditions. Some analysts have warned against the wholesale adoption of policies and strategies imported from the Global North, which may fail to consider country-specific contexts.

Some Global South academics and civil society organisations have raised questions about the importation of regulatory standards from the Global North, with some referring to the practice as ‘data colonialism’. This apprehension about copy-pasting Global North standards is premised on the continent’s over-reliance on Big Tech digital ecosystems and infrastructure. Researchers note that “These context-sensitive issues raised on various continents can best be understood as a combination of social and technical systems. As AI systems make decisions about the present and future through classifying information and developing models from historical data, one of the main critiques of AI has been that these technologies reproduce or heighten existing inequalities involving gender, race, coloniality, class and citizenship.”

Other stakeholders caution against ‘AI neocolonialism’, which replicates Western conventions, often resulting in poor labour outcomes and stifling the potential for the development of local AI approaches.  

Proposals for African solutions for AI deployment 

There is undoubtedly a need for ethical and effective AI regulation in Africa. This calls for the development of strategies and a context-specific regulatory and legal foundation, work that has been occurring in various stages. African policymakers should ensure a multi-stakeholder, collaborative approach to designing AI governance solutions in order to achieve ethical AI that respects human rights, is transparent, and is inclusive. This is all the more significant given the risks posed by AI during election seasons on the continent.

Beyond framing local regulatory solutions, stakeholders have called for African governments to play a greater role in global AI discussions to guard against regulatory blindspots that may emerge from importing purely Western approaches. Perhaps the strongest call is for African leaders to leverage expertise on the continent and promote greater collaboration amongst African policymakers.

CIPESA Joins International Initiative to Develop “AI Charter in Media”

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has joined a coalition of international organisations and experts to develop a charter aimed at guiding the use of Artificial Intelligence (AI) in the media.

According to Reporters Without Borders (RSF), the group coordinating the development of the Charter, 16 partner organisations, as well as 31 media, AI and academic professionals representing 18 different nationalities, are involved in the process. The CIPESA Executive Director, Dr. Wairagala Wakabi, is among the experts on the committee, which is led by journalist and Nobel Peace Prize laureate Maria Ressa.

RSF stated that the growing interest in the project highlights the real need to clearly and collaboratively develop an ethical framework to safeguard information integrity, at a time when generative AI and other algorithm-based technologies are being rapidly deployed in the news and information sphere.

Part of the committee’s responsibility is to develop, by the end of 2023, a set of principles, rights, and obligations for information professionals regarding the use of AI-based systems. This responds to the realisation that the rapid deployment of AI in the media industry presents a major threat to information integrity.

See details about the initiative, the partner organisations and the experts here.