By Risper Arose

In the ever-evolving digital landscape, technology has transformed how we communicate, access information, and engage with the world. But with these advances comes a darker side – one that disproportionately affects women in journalism and politics, particularly during pivotal moments such as elections. One such dark side is deepfakes – synthetic media generated by Artificial Intelligence (AI) that manipulates voices, faces, and actions – which have become a powerful tool for deception.

Media manipulation in itself is not new and has long influenced politics and relations, helping create and spread narratives. For instance, in the 20th century, photos were manually edited to manipulate public opinion during political repression campaigns. At the time, this was a slow and meticulous process requiring skilled labour. With the rise of digital technology, manipulating media has become much easier and cheaper. This proliferation of deepfakes is further fuelled by the unprecedented power of the internet and social media platforms to rapidly and virally disseminate digital content.

While deepfakes can be used for creative and educational purposes, in the age of information warfare they have increasingly been weaponised, causing significant disruption to the stability, integrity, and trust in institutions, the media, and society as a whole. They have the potential to further undermine norms of truth and trust on individual, organisational, and societal levels.

Additionally, the fact that deepfakes initially appeared as non-consensual pornography highlights their malicious potential, particularly as a gendered tool that disproportionately targets women. It also demonstrates the unique and troubling ability of AI to mimic real people without consent, undermining victims' credibility and the agendas they champion, tarnishing their reputations, and inciting harassment. The effects ripple far beyond the screen, manifesting in offline violence, professional fallout, and psychological scars.

This month, Tanda Community Network launched a new research report that provides a comprehensive overview of the critical issue, shedding light on the interplay between deepfake technology, Technology-Facilitated Gender-Based Violence (TFGBV), and democratic processes.

The report highlights case studies from three African countries – Ghana, Senegal and Namibia – revealing how deepfakes are weaponised during elections. With support from the Africa Digital Rights Fund (ADRF), Tanda Community Network carried out focus group discussions, interviewed policymakers, technologists, women journalists, women politicians and civil society in the digital sector, and surveyed over 100 women in the three countries.

The findings indicate that deepfake attacks inflict lasting socio-cultural, professional and psychological harm. For female journalists and politicians, the stakes are even higher, with the violence often spilling over into offline spaces.

Many victims of TFGBV, including those targeted by deepfake attacks, fear the stigma associated with speaking out and so suffer in silence. The result is severe underreporting, fuelled by a lack of trust, inadequate support systems, and the absence of effective tools or expertise to detect and combat these threats.

Compounding this is alarmingly low media literacy, which leaves the public vulnerable to manipulation and unable to differentiate authentic content from fake.

Perhaps the most concerning finding is the lack of robust legal frameworks to address deepfakes and online harassment. Across the study countries, there is no specific, enforceable legislation to hold perpetrators accountable for TFGBV involving deepfakes.

There is, therefore, an urgent need for safeguards against such attacks. These attacks, which often manifest as targeted online harassment, have severe implications – not just for the individuals involved but also for public trust in information.

To address these challenges, a multi-pronged approach is essential.

  • Education must play a central role in combating the threat of deepfakes. Awareness campaigns and media literacy programs should aim to teach users to critically analyse the content they consume, recognise digital manipulations, and navigate online spaces safely. Toolkits and training programs must be tailored for different stakeholder groups, including journalists, policymakers, and grassroots communities, to equip them with the skills needed to identify and respond to these threats effectively.
  • Policymakers must prioritise creating enforceable legal frameworks that clearly define and punish perpetrators of digital violence. These laws should also address the broader spectrum of TFGBV, acknowledging its various manifestations and far-reaching impacts.
  • Social media platforms also have a critical responsibility to ensure accountability. They must develop and enforce robust policies to address the spread of deepfakes and harassment, investing in technologies that can detect manipulated content before it causes harm. Transparency and collaboration with civil society organisations would enhance these platforms’ ability to mitigate risks.
  • Institutions, including research organisations and academia, need to focus on developing actionable, evidence-based solutions. Research should be targeted toward understanding the nuances of deepfake attacks on different stakeholder groups and providing implementable recommendations that can influence policy and practice.

The key to combating deepfakes lies in people-centred solutions that empower everyone. This requires conscious efforts that put community consultation and community leadership front and centre in the decision-making process, moving away from siloed interventions that are often top-down and driven by external motivations.

By addressing these challenges, women can participate fully and fearlessly in shaping our democracy and society. This report is a call to action to lawmakers, civil society and tech platforms to create safer, more inclusive spaces for women politicians and journalists and, more broadly, the general public.

As deepfakes grow smarter and harder to detect, the challenges they pose will only intensify. Yet, we are not starting from scratch. Existing knowledge and past experiences in combating digital threats provide a foundation we can build upon. The fight against deepfakes and digital violence is not just about protecting women in journalism and politics; it is about safeguarding democracy, fostering inclusion, and ensuring that technology serves humanity rather than undermining it.

Digital Shadows is more than a research report; it is a call to action for governments, tech companies, and civil society, among other stakeholders, to advocate for meaningful policies and equip the general public with the tools they need to navigate this evolving landscape.

Let’s act now – because the longer we wait, the more entrenched these digital shadows will become.

Download the full Digital Shadows report
Risper Arose is the Partnership Lead at Tanda Community Network.