The ADRF Awards USD 134,000 to 10 Initiatives to Advance Tech Accountability in Africa

Announcement |

The grant recipients of the eighth round of the Africa Digital Rights Fund (ADRF) will implement projects focused on Artificial Intelligence (AI), hate speech, disinformation, microtargeting, network disruptions, data access, and online violence against women journalists and politicians. The work of the 10 initiatives, which were selected from 130 applications, will span the breadth of the African continent in advancing tech accountability.

“The latest round of the ADRF is supporting catalytic work in response to the urgent need to counter the harms of technology in electoral processes,” said Ashnah Kalemera, the Programmes Manager at the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) – the administrators of the Fund. She added that, for many of the initiatives being supported, tech accountability was a new area of work, but the various projects’ advocacy, research and storytelling efforts would prove instrumental in pushing for tech justice.

Established in 2019 as a rapid response and flexible funding mechanism, the ADRF aims to overcome the limitations of reach, skills, resources, and consistency in engagement faced by new and emerging initiatives working to defend and promote rights and innovation in the face of growing digital authoritarianism and threats to digital democracy in Africa. The sum of USD 134,000 awarded in the latest round, which was administered by CIPESA in partnership with Digital Action, brings to USD 834,000 the total amount awarded by the ADRF since inception to 62 initiatives across the continent.

According to Kalemera, the growth in the number of applicants to the ADRF reflects the demand for seed funding for digital rights work on the continent. Indeed, whereas the call for proposals for the eighth round was limited to tech accountability work, many applicants submitted strong proposals on pertinent issues such as digital inclusion, media and information literacy, digital safety and security, surveillance, data protection and privacy, and access and affordability – underscoring the critical role of the ADRF.

Here’s What the Grantees Will Be Up To

In the lead-up to local government elections in Tanzania, Jamii Forums will engage content hosts, creators and journalists on obligations to tackle hate speech and disinformation online as a means to safeguard electoral integrity. In parallel, through its Jamii Check initiative, Jamii Forums will raise public awareness about the harms of disinformation and hate speech.

Combating hate speech and disinformation is also the focus of interventions supported in Senegal and South Sudan. Ahead of elections in the world’s youngest nation, DefyHateNow will monitor and track hate speech online in South Sudan, host a stakeholder symposium in commemoration of the International Day for Countering Hate Speech as a platform for engagement on collective action, and run multimedia campaigns to raise public awareness on the harms of hate speech. Following elections in Senegal, Jonction will analyse the link between disinformation and network disruptions and engage stakeholders on alternatives to disruptions in future elections.

In the Sahel region, events leading up to coups in Chad, Burkina Faso, Mali and Niger have been characterised by restrictions on media and internet freedom, amidst which disinformation and violent extremism have thrived. As some of the states in the region, notably Burkina Faso and Mali, move towards an end to military rule and head to the polls, the Thoth Media Research Institute will research disinformation and its role in sustaining authoritarian narratives and eroding human rights. The findings will form the basis of stakeholder convenings on strategies to combat disinformation in complex political, social, and security landscapes. Similarly, Internet Sans Frontières (ISF) will study the role of political microtargeting in shaping campaign strategies and voter behaviour, and its ultimate impact on the rights to privacy and participation in Mali.

In South Africa, the Legal Resources Centre (LRC) will raise awareness about the adequacy and efficacy of social media platforms’ content moderation policies and safeguards, as well as online political advertising models, in the country’s upcoming elections. The centre will also provide legal services for reparations and litigate for reforms related to online harms.

A study has found that Africa’s access to data from tech platforms, for research and for monitoring electoral integrity, lags behind that of Europe and North America. Increased access to platform data for African researchers, civil society organisations, and Election Management Bodies (EMBs) would enable a deeper understanding of online content and its harms on the continent, and inform mitigation strategies. Accordingly, the ADRF will support Research ICT Africa to coordinate an alliance to advocate for increased data access for research purposes on the continent and to develop guidelines for ethical and responsible access to data to study elections-related content.

The impact of AI on the information ecosystem and democratic processes in Africa is the focus of two grantees’ work. On the one hand, the Eastern Africa Editors Society will assess how editors and journalists in Kenya, Uganda, Tanzania and Ethiopia have adopted AI and to what extent they adhere to best practice and the principles of the Paris Charter on AI and Journalism. On the other hand, the Outbox Foundation through its Thraets initiative will research the risks of AI-generated disinformation on elections, with a focus on Ghana and Tunisia. The findings will feed into tutorials for journalists and fact checkers on identifying and countering AI-generated disinformation as part of elections coverage, and awareness campaigns on the need for transparency on the capabilities of AI tools and their risks to democracy. 

Meanwhile, a group of young researchers under the stewardship of the Tanda Community-Based Organisation will research how deep fakes and other forms of manipulated media contribute to online gender-based violence against women journalists and politicians in the context of elections in Ghana, Senegal, and Namibia. The study will also compare the effectiveness of the legal and regulatory environment across the three countries in protecting women online. The researchers will hold consultations and make recommendations for policymakers, platforms and civil society on how to promote a safe and inclusive digital election environment for women.

Past and present supporters of the ADRF include the Centre for International Private Enterprise (CIPE), the Ford Foundation, the Swedish International Development Cooperation Agency (Sida), the German Agency for International Cooperation (GIZ), the Omidyar Network, the Hewlett Foundation, the Open Society Foundations, the Skoll Foundation and the New Venture Fund (NVF).

Towards Ethical AI Regulation in Africa

By Tusi Fokane |

The ubiquity of applications based on generative artificial intelligence (AI) across various sectors has led to debates on the most effective regulatory approach to encourage innovation whilst minimising risks. The benefits and potential of AI are evident in various industries ranging from financial and customer services to education, agriculture and healthcare. AI holds particular promise for developing countries to transform their societies and economies.

However, there are concerns that, without adequate regulatory safeguards, AI technologies could further exacerbate existing governance concerns around ethical deployment, privacy, algorithmic bias, workforce disruptions, transparency, and disinformation. Stakeholders have called for increased engagement and collaboration between policymakers, academia, and industry to develop legal and regulatory frameworks and standards for ethical AI adoption. 

The Global North has taken a leading position in exploring various regulatory modalities. These include risk-based, proportionate regulation, as proposed in the European Commission’s AI Act. Countries such as Finland and Estonia have opted for a greater focus on maintaining trust and collaboration at the national level by adopting a human-centric approach to AI. The United Kingdom (UK) has taken a “context-specific” approach, embedding AI regulation within existing regulatory institutions. Canada has prioritised bias and discrimination, whereas other jurisdictions such as France, Germany and Italy have opted for greater emphasis on transparency and accountability in developing AI regulation.

China, meanwhile, has taken a firmer approach to AI regulation, distributing policy responsibility amongst existing standards bodies. The United States of America (USA) has adopted an incremental approach, introducing additional guidance to existing legislation and emphasising rights and safety.

Whilst there are divergent approaches to AI regulation, there is at least some agreement, at a multilateral level, on the need for a human rights-based approach to ensure ethical AI deployment that respects basic freedoms, fosters transparency and accountability, and promotes diversity and inclusivity through actionable policies and specific strategies.

Developments in AI regulation in Africa

Regulatory responses in Africa have been disparate, although the publication of the African Union Development Agency (AUDA-NEPAD) White Paper: Regulation and Responsible Adoption of AI for Africa Towards Achievement of AU Agenda 2063 is anticipated to introduce greater policy coherence. The White Paper follows the 2021 AI blueprint and the African Commission on Human and Peoples’ Rights Resolution 473, which calls for a human-rights-centred approach to AI governance. 

The White Paper calls for a harmonised approach to AI adoption and underscores the importance of developing an enabling governance framework to “provide guidelines for implementation and also keep AI development in check for negative impacts.” Furthermore, the White Paper calls on member states to adopt national AI strategies that emphasise data safety, security and protection in an effort to promote the ethical use of AI. 

The White Paper proposes a mixed regulatory and governance framework, depending on the AI use-case. First, the proposals encompass self-regulation, which would be enforced through sectoral codes of conduct, and which offer a degree of flexibility to match an evolving AI landscape. Second, the White Paper suggests the adoption of standards and certification to establish industry benchmarks. The third proposal is for a distinction between hard and soft regulation, depending on the identified potential for harm. Finally, the White Paper calls for AI regulatory sandboxes to allow for testing under regulatory supervision. 

Figure 1: Ethical AI framework

However, there are still concerns that African countries are lagging behind in fostering AI innovation and in putting in place the necessary regulatory frameworks. According to the 2023 Government AI Readiness Index, Benin, Mauritius, Rwanda, Senegal, and South Africa lead in government efforts around AI among the 24 African countries assessed. The index measures a country’s progress against four pillars: government/strategy, data & infrastructure, technology sector, and global governance/international collaboration.

The national AI strategies of Mauritius, Rwanda, Egypt, Kenya, Senegal and Benin have a strong focus on infrastructure and economic development whilst also laying the foundation for AI regulation within their jurisdictions. For its part, Nigeria has adopted a more collaborative approach in co-creating its National Artificial Intelligence Strategy, with calls for input from AI researchers. 

Thinking beyond technical AI regulation 

Despite increasingly positive signs of AI adoption in Africa, there are concerns that the pace of AI regulation on the continent is too slow, and that it may not be fit for purpose for local and national conditions. Some analysts have warned against wholesale adoption of policies and strategies imported from the Global North, which may fail to consider country-specific contexts.

Some Global South academics and civil society organisations have raised questions regarding the importation of regulatory standards from the Global North, with some even referring to the practice as ‘data colonialism’. The apprehension about copy-pasting Global North standards is premised on the continent’s over-reliance on Big Tech digital ecosystems and infrastructure. Researchers indicate that “These context-sensitive issues raised on various continents can best be understood as a combination of social and technical systems. As AI systems make decisions about the present and future through classifying information and developing models from historical data, one of the main critiques of AI has been that these technologies reproduce or heighten existing inequalities involving gender, race, coloniality, class and citizenship.”

Other stakeholders caution against ‘AI neocolonialism’, which replicates Western conventions, often resulting in poor labour outcomes and stifling the potential for the development of local AI approaches.  

Proposals for African solutions for AI deployment 

There is undoubtedly a need for ethical and effective AI regulation in Africa. This calls for the development of strategies and a context-specific regulatory and legal foundation, work that is already underway in various stages. African policymakers should ensure a multi-stakeholder and collaborative approach to designing AI governance solutions in order to ensure ethical AI that respects human rights and is transparent and inclusive. This is all the more significant given the potential risks posed by AI during election seasons on the continent.
Beyond framing local regulatory solutions, stakeholders have called for African governments to play a greater role in global AI discussions to guard against regulatory blindspots that may emerge from importing purely Western approaches. Perhaps the strongest call is for African leaders to leverage expertise on the continent, and promote greater collaboration amongst African policymakers. 

African Commission Resolution A Boon For Fight Against Unlawful Surveillance

By CIPESA Writer |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) welcomes the resolution by the African Commission on Human and Peoples’ Rights, which urges African governments to cease undertaking unlawful communications surveillance.

The resolution is timely, as it comes amidst an unprecedented spike in the scale and nature of state surveillance that is often unlawful, excessive, and inadequately supervised by oversight bodies. As CIPESA research has found, the expansion in state surveillance in various African countries is denying citizens their rights to freedom of expression, association and assembly, and undermining their participation in democratic processes.

The Resolution on the deployment of mass and unlawful targeted communication surveillance and its impact on human rights in Africa, adopted last November at the Commission’s 77th Ordinary Session held in Arusha, Tanzania, expresses concern about the unrestrained acquisition of communication surveillance technologies by states without adequate regulation. It also notes the lack of adequate national frameworks on privacy, communication surveillance, and personal data protection. 

Furthermore, the resolution notes the Commission’s concern about the disproportionate targeting of journalists, human rights defenders, civil society organisations, whistleblowers and opposition political activists by state surveillance.

CIPESA welcomes the resolution, which reflects the findings of our research and the recommendations we have variously made to African governments regarding the conduct of state surveillance. CIPESA has previously called upon stakeholders, including governments, to take all measures that buttress the right to privacy in order to guarantee and enhance free expression, access to information, freedom of association, and freedom of assembly in accordance with international human rights standards.

Notably, the African Commission resolution urges African countries to ensure that all restrictions on privacy and other fundamental freedoms are necessary and proportionate, and in line with international human rights law and standards. It also urges states to consider safeguards such as the requirement for prior authorisation of surveillance by an independent and impartial judicial authority and the need for effective monitoring and regular review by independent oversight mechanisms.

According to CIPESA’s Legal Officer Edrine Wanyama, “The resolution is a step forward to buttressing data rights and privacy on the continent. States should take advantage of the resolution and overhaul regressive surveillance practices while embracing all internationally recognised efforts and standards for strengthening the right to privacy.”

According to the Declaration of Principles on Freedom of Expression and Access to Information in Africa, states should only engage in targeted surveillance in conformity with international human rights law (principle 41), and every individual shall have legal recourse to effective remedies in relation to the violation of their privacy and the unlawful processing of their personal information (principle 42 (7)). In addition, principle 20 requires states to guarantee the safety of journalists and other media practitioners by taking measures that prevent threats and unlawful surveillance.
See related CIPESA resources: Privacy Imperilled: Analysis of Surveillance, Encryption and Data Localisation Laws in Africa; Effects of State Surveillance on Democratic Participation in Africa; Compelled Service Provider Assistance for State Surveillance in Africa: Challenges and Policy Options; Mapping and Analysis of Privacy Laws in Africa.

A Decade of Internet Freedom in Africa: Report Documents Reflections and Insights from Change Makers

CIPESA Writer |

Over the last decade, Africa’s journey to achieve internet freedom has not been without challenges. There have been significant threats to internet freedom, evidenced by the rampant state censorship through internet shutdowns, surveillance, blocking and filtering of websites, and the widespread use of repressive laws to suppress the voices of key actors.

However, amidst all this, there is a community of actors who have dedicated their efforts to advancing digital rights on the continent, with the goal of ensuring that more Africans can enjoy the full benefits of the internet.

As part of our efforts to recount the work of the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) over the years, we are pleased to share this special edition report, A Decade of Digital Rights in Africa: Reflections and Insights from 10 Change Makers, in which we document reflections and insights from ten collaborators who have been instrumental in shaping Africa’s digital and internet freedom advocacy landscape over the last ten years.

These changemakers have driven change by advocating for a freer, more secure, and open internet in Africa and working to ensure that no one is left behind.

Read the full report: A Decade of Digital Rights in Africa: Reflections and Insights from 10 Change Makers!

Meet the Changemakers

‘Gbenga Sesan, the Executive Director of Paradigm Initiative, is an eloquent advocate for internet freedom across the continent, leading efforts to push back against repressive laws and promoting digital inclusion while speaking truth to power. He continues to champion the transformative power of technology for social good and to drive positive change in society.

Arthur Gwagwa is a Research Scholar at Utrecht University, Netherlands, and a long-standing advocate for digital rights and justice. His work in the philanthropic sector has been instrumental in supporting various grassroots initiatives to promote internet freedom in Africa. Similarly, his pioneering research work and thought leadership continue to inspire and transform the lives of people in Africa.

Edetaen Ojo, the Executive Director of Media Rights Agenda, is a prominent advocate for advancing media rights and internet freedom. Known for his strategic vision and dedication to media freedom, he pioneered the conceptualisation and development of the African Declaration on Internet Rights and has been a key voice in shaping Internet policy-making in Africa.

Emilar Gandhi, the Head of Stakeholder Engagement and Global Strategic Policy Initiatives at Meta, built a strong foundation in civil society as an advocate for Internet freedom. She is a prominent figure in technology policy in Africa whose expertise and dedication have made her a valuable voice for inclusivity and responsible technology development in the region.

Dr. Grace Githaiga, the CEO and Convenor of Kenya ICT Action Network (KICTANet), has been a leading advocate for media freedom and digital rights in Africa. Her tireless advocacy in shaping internet policy has earned her recognition for her pivotal roles in championing internet freedom, digital inclusion, multistakeholderism, and women’s rights online.

Julie Owono, the Executive Director of Internet Sans Frontières (Internet Without Borders), is a passionate and respected digital rights advocate and thought leader in the global digital community. She is not only a champion for internet freedom in Africa but is also a symbol of hope for many communities standing at the forefront of the battle for internet freedom and connectivity in Africa.

Neema Iyer, the founder of Pollicy, is well known for her advocacy efforts in bringing feminist perspectives into data and technology policy. Her dynamic and multi-faceted approach to solving social challenges exemplifies the potential of data and technology to advance social justice and promote digital inclusion and internet freedom in Africa.

Dr. Tabani Moyo, the Regional Director of the Media Institute of Southern Africa (MISA), is a distinguished media freedom advocate and influential leader in guiding a community of changemakers in Southern Africa. He has played an extensive and formidable role in pushing back against restrictive and repressive laws, supporting journalists under threat, empowering young Africans, and shaping internet governance policies.

Temitope Ogundipe, the Founder and Executive Director of TechSocietal, has been a champion for digital rights and inclusion in Africa. She is an advocate for women’s rights online and uses her expertise to contribute to the development of youth and address digital inequalities affecting vulnerable groups across the continent.

Wafa Ben-Hassine, the Principal, Responsible Technology at Omidyar Network, is a recognised human rights defender and visionary leader dedicated to promoting human rights and responsible technological development. Her relentless advocacy and valuable contributions to defending digital rights, civil liberties, and technology policy continue to inspire many across the continent.

Join the Report Launch Webinar:

When: January 31, 2024
Time: 14:00-16:00 (Nairobi Time)
Location: Zoom (Register here)
After registering, you will receive a confirmation email containing information about joining the webinar.

Updated: Watch the report launch webinar.

Introducing the Tech Accountability Fund and a Call for Proposals

Announcement |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has partnered with Digital Action to support work on tech accountability in Sub-Saharan Africa in the run-up to and during the “Year of Democracy” in 2024. This support will be channelled through the Tech Accountability Fund, which will be administered under the auspices of the Africa Digital Rights Fund (ADRF).

Numerous African countries, including the Comoros, Senegal, Mauritania, Rwanda, Mozambique, Ghana, Algeria, Botswana, Chad, Guinea Bissau, Mali, Mauritius, Namibia, South Africa, South Sudan and Tunisia, are headed to the polls during 2024. Electoral processes are essential to building democracy, and given growing threats to information integrity and the use of technology in elections, it is crucial to pursue platform accountability around electoral processes. The Fund responds to these key concerns in the Year of Democracy and to the scant resources available to African civil society entities that are working to counter tech harms.

In 2022, Africa had around 570 million internet users, of whom 384 million (67%) were social media users. These users, most of whom are young people, are increasingly using social media applications such as WhatsApp, Facebook, Twitter, YouTube, Instagram and TikTok for content creation and entertainment, business, advertising and entrepreneurship, communication and connection, education and learning, and civic engagement and activism. As the number of users increases, reports from social media companies indicate a rise in harmful, illegal or offensive content on the platforms.

In response, social media companies have employed various measures to review, screen, and filter content to ensure it meets their community guidelines or policies and does not adversely affect the user experience on the platforms. The content moderation tools and techniques applied include keyword filtering, machine learning algorithms and human review. 

Despite these efforts, the inadequacy of the measures undertaken by social media platforms and social networking sites in moderating illegal, harmful or offensive content has increasingly been questioned. In Ethiopia for instance, social media companies have been accused of not doing enough to moderate such content, which has gone on to cause real-world harm, such as fuelling killings. Starkly, platforms such as Facebook and Twitter are accused of deploying minuscule resources and measures in content moderation in Africa, relative to investments in the United States and Europe. 

Key concerns about content moderation in Africa include platforms’ limited understanding of cultural contexts on the continent, the lack of cultural sensitivity, labour rights violations, algorithmic bias and discrimination, the non-application of local laws, and the lack of transparency and accountability in content moderation, all of which have an impact on freedom of expression and civic participation.

Call for Proposals
Applications are now open for the Tech Accountability Fund as the eighth edition of the ADRF. Grant sizes will range from USD 5,000 to USD 20,000, subject to demonstrated need. Cost sharing is strongly encouraged. Funding shall be for periods of between six and 12 months.

The Fund is particularly interested in, but not limited to, work in the following areas:

  • Online gender-based violence, particularly against women politicians and women journalists
  • Network disruptions
  • Content moderation
  • Microtargeting and political advertising
  • Hate speech
  • Electoral disinformation
  • Election-specific harms, e.g. effects on freedom of expression and citizens’ ability to make independent choices and participate in electoral processes.

The deadline for applications is February 16, 2024. Read more about the Fund Guidelines here. The application form can be accessed here.

Only shortlisted applicants will be contacted directly. Feedback on unsuccessful applications will be available upon request.