CIPESA Announces Largest ADRF Grants – USD 320,000 to 18 Initiatives

By Ashnah Kalemera |

The Africa Digital Rights Fund (ADRF) has awarded USD 320,000 to 18 initiatives in 14 countries to support efforts to advance digital rights, inclusion, and online safety.

The grant recipients will promote responsible data governance, advance accessibility for persons with disabilities, counter Technology-Facilitated Gender-Based Violence (TFGBV), and support digital equity for refugees. Others will build digital resilience among at-risk groups, deepen youth engagement in digital democracy, and promote women’s participation in the governance of Artificial Intelligence (AI).

Based in the Democratic Republic of Congo, Ethiopia, Ghana, Guinea, Kenya, Liberia, Madagascar, Rwanda, South Africa, South Sudan, Tanzania, Uganda, Zambia and Zimbabwe, the awardees will tackle some of Africa’s most pressing digital challenges.

The latest awards, made under the 10th funding round, bring the total amount the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has disbursed under the ADRF to USD 1.3 million. The fund was launched in 2019 to support organisations advancing digital rights in the face of limitations in reach, skills, resources and consistency of engagement.

“The overwhelming number of applications received in this round reflects the changing funding landscape for digital rights and democracy in Africa,” said Dr. Wairagala Wakabi, CIPESA’s Executive Director. “We are excited that the ADRF continues to bridge the prevailing funding gap and expand into new geographies and constituencies.”

The funding round received the largest number of applicants ever (430) and has expanded the ADRF’s footprint into new countries such as Guinea, Liberia and Madagascar, and to new beneficiary groups including youth, migrants, and a National Human Rights Institution (NHRI). The four most recent calls for proposals – Rounds Eight, Seven, Six and Five – received 130, 280, 283 and 120 applications respectively.

Applications went through several rounds of reviews by two internal committees at CIPESA and an external committee of independent experts.

Overview of ADRF 10 Grantees

Digital Accessibility for Persons with Disabilities

The Rwanda-based Organisation d’Integration et de Promotion des Personnes Atteintes d’Albinisme (OIPPA) will build digital literacy and online safety skills for youth with disabilities, and conduct accessibility audits of government online platforms. In Ghana, Open Knowledge will enhance capacity and awareness of accessibility standards among civil society, parliamentary committees, and communications service providers.

Meanwhile, the Zimbabwe Council for the Blind will conduct accessibility audits of public sector websites, provide training in inclusive design, and advocate for the implementation of inclusion and equity under the country’s recently launched AI Strategy. The three initiatives will be anchored in CIPESA’s Disability and ICT Accessibility Framework Indicators.

The ADRF’s first NHRI grantee, the Ethiopia Human Rights Commission (EHRC), will strengthen staff capacity in digital inclusion and accessibility for persons with disabilities, and mainstream these principles into its broader human rights monitoring and oversight mandate.

Youth Engagement

Restless Development in Uganda will empower young media professionals and influencers to champion digital rights. Using its Youth Hack Methodology, the initiative will co-create innovative digital rights campaigns that combat disinformation and promote platform accountability.

AI Governance

As AI development and governance conversations continue to take root in Africa, women remain largely excluded. Women in Data Science and AI Zambia will build skills in ethical AI and algorithmic bias detection, and establish a national network to amplify women’s voices in Zambia’s AI policy conversations.

Technology-Facilitated Gender-Based Violence (TFGBV)

Given the gaps in state actors’ understanding of digital harms and the need to equip them with practical tools and guidance, ALT Advisory will develop and pilot adjudication training materials for judicial officers in Kenya and South Africa. These materials will address online harms such as TFGBV, disinformation, and digital rights violations.

Research ICT Africa will examine the drivers of TFGBV in South Africa, identifying regulatory and AI ethics gaps. Findings will inform workshops and policy discussions aimed at strengthening national responses and safer digital environments.

In Zambia, Asikana Network will develop a safety toolkit with reporting guides, evidence collection tips and referral resources. This will be complemented by digital safety labs for women to build skills in managing online risks and responding to incidents.

In Madagascar, Communication Idea Development (CID) will counter gender-based disinformation and hate speech through digital literacy campaigns and workshops targeting organisations and activists working in Antananarivo, Boeny, and Vakinankaratra.

Data Governance

Building on Liberia’s ongoing national data governance journey, including support from the African Union and CIPESA to develop a Data Governance Policy, the West Africa ICT Action Network (WAICTNet) will build awareness of data rights and support stakeholder readiness ahead of the launch of the Policy and the enactment of the Data Protection and Privacy Act (2024).

Similarly, Amnesty International Kenya’s Privacy First Team will engage Kenyan university students to understand data rights and promote transparent data governance in line with the Data Protection Act of 2019.

Technology and Migration

In Kenya, Haki Na Sheria will examine cross-border data collection and sharing under the Shirika Plan, which promotes refugee inclusion and settlement, highlighting risks such as surveillance and exclusion. The project will focus on the Dadaab Complex – the world’s largest refugee camp – offering digital rights literacy sessions and producing data rights guides in Somali and Swahili.

In South Sudan, the Lim Nguen Foundation will build digital literacy and safety among refugees and Internally Displaced Persons (IDPs) in the Gorom and Juba camps. The project will establish “Digital First Responders” to support survivors of TFGBV, particularly women and girls.

Digital Resilience

Hexabelt, in partnership with Eleza Fact, a Congolese disinformation and fact-checking initiative, will strengthen the digital resilience of journalists in Kinshasa and Lubumbashi through hands-on training, newsroom security audits, and cybersecurity drills.

Across the border, the Tanzania Human Rights Defenders Coalition (THRDC) will combine legal assistance, strategic litigation, and emergency support to safeguard environmental defenders and journalists from digital threats.

Information Integrity

In the aftermath of Guinea’s presidential election, tensions remain high ahead of the May 2026 legislative and municipal elections. Djikke Media will deliver workshops on fact-checking, open source investigations, digital hygiene, and deepfake detection.

In Uganda, the House of Seshat, a new knowledge agency, will explore how social media and generative AI are shaping political discourse and political accountability.

Applications are Open for a New Round of Africa Digital Rights Funding!

Announcement |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) is calling for proposals to support digital rights work across Africa.

This call for proposals is the 10th under the CIPESA-run Africa Digital Rights Fund (ADRF) initiative that provides rapid response and flexible grants to organisations and networks to implement activities that promote digital rights and digital democracy, including advocacy, litigation, research, policy analysis, skills development, and movement building.

The current call is particularly interested in proposals for work related to:

  • Data governance including aspects of data localisation, cross-border data flows, biometric databases, and digital ID.
  • Digital resilience for human rights defenders, other activists and journalists.
  • Censorship and network disruptions.
  • Digital economy.
  • Digital inclusion, including aspects of accessibility for persons with disabilities.
  • Disinformation and related digital harms.
  • Technology-Facilitated Gender-Based Violence (TFGBV).
  • Platform accountability and content moderation.
  • Implications of Artificial Intelligence (AI).
  • Digital Public Infrastructure (DPI).

Grant amounts available range between USD 5,000 and USD 25,000 per applicant, depending on the need and scope of the proposed intervention. Cost-sharing is strongly encouraged, and the grant period should not exceed eight months. Applications will be accepted until November 17, 2025. 

Since its launch in April 2019, the ADRF has provided initiatives across Africa with more than USD 1 million and contributed to building capacity and traction for digital rights advocacy on the continent.

Application Guidelines

Geographical Coverage

The ADRF is open to organisations/networks based or operational in Africa and with interventions covering any country on the continent.

Size of Grants

Grant size shall range from USD 5,000 to USD 25,000. Cost sharing is strongly encouraged.

Eligible Activities

The activities that are eligible for funding are those that protect and advance digital rights and digital democracy. These may include but are not limited to research, advocacy, engagement in policy processes, litigation, digital literacy and digital security skills building. 

Duration

The grant funding shall be for a period not exceeding eight months.

Eligibility Requirements

  • The Fund is open to organisations and coalitions working to advance digital rights and digital democracy in Africa. This includes but is not limited to human rights defenders, media, activists, think tanks, legal aid groups, and tech hubs. Entities working on women’s rights, or with youth, refugees, persons with disabilities, and other marginalised groups are strongly encouraged to apply.
  • The initiatives to be funded will preferably have formal registration in an African country, but in some circumstances, organisations and coalitions that do not have formal registration may be considered. Such organisations need to show evidence that they are operational in a particular African country or countries.
  • The activities to be funded must be in/on an African country or countries.

Ineligible Activities

  • The Fund shall not fund any activity that does not directly advance digital rights or digital democracy.
  • The Fund will not support travel to attend conferences or workshops, except in exceptional circumstances where such travel is directly linked to an eligible activity.
  • Costs that have already been incurred are ineligible.
  • The Fund shall not provide scholarships.
  • The Fund shall not support equipment or asset acquisition.

Administration

The Fund is administered by CIPESA. Internal and external panels of experts will make decisions on beneficiaries based on the following criteria:

  • If the proposed intervention fits within the Fund’s digital rights priorities.
  • The relevance to the given context/country.
  • Commitment and experience of the applicant in advancing digital rights and digital democracy.
  • Potential impact of the intervention on digital rights and digital democracy policies or practices.

The deadline for submissions is Monday, November 17, 2025. The application form can be accessed here.

Why It’s Not Yet Uhuru for Artificial Intelligence in Africa and What To Do About It

By CIPESA Writer |

At the first Global Summit on Artificial Intelligence (AI) in Africa, held in Kigali, Rwanda earlier this month, it was evident that African countries are lagging far behind the rest of the world in developing and utilising AI. Also clear was that if the continent makes the right investments today, it stands to reap considerable benefits.

The challenges Africa faces were well articulated at the summit, which brought together 2,000 participants from 97 countries, as were the solutions. Some important steps were taken, such as the issuance of the Africa Declaration on Artificial Intelligence, which aims to mobilise USD 60 billion for the prospective Africa AI Fund; the unveiling of the Gates Foundation’s investment in AI Scaling Labs in four African countries; the announcement of the Cassava AI Factory, said to be Africa’s first AI-enabled data centre; and the endorsement of the Africa Artificial Intelligence Council.

Just Where Does Africa Lie?

Crystal Rugege, Managing Director of the Rwanda Centre for the Fourth Industrial Revolution, which hosted the summit, noted that AI could unlock USD 2.9 trillion for Africa’s economy by 2030, thereby lifting 11 million Africans out of poverty and creating 500,000 jobs annually. However, Rugege added, “this will not happen by chance. It requires bold, decisive leadership and collective action.”

Some independent researchers and scholars feel most African countries are not doing enough to stimulate AI innovation and uptake. Indeed, speakers at an independent webinar held on the eve of the Kigali summit criticised the “ambitious prediction” of the USD 2.9 trillion AI dividend for Africa, citing the lack of inclusive AI policy-making, and African countries’ failure to invest in a workforce that is fit for the AI age.

A handful of countries (including Ethiopia, Ghana, Nigeria, Senegal, Kenya, Mauritius, Egypt, Tunisia, Algeria, Rwanda and Zambia) have developed AI Strategies and at least eight others are in the process of doing so, but there is minimal government-funded AI innovation and deployment. Africa receives only a pittance of the global AI funding.

Key Hindrances

The summit was not blind to the key hindrances to AI development and deployment. Africa’s limited computational power (or compute), including a shortage of locally based data centres, was repeatedly cited: Africa holds less than 1% of global data centre capacity, which is insufficient to train and run AI models. And while the continent has the world’s youngest population, its skills levels remain low, and only 5% of the region’s AI talent has access to the computational power and resources needed to carry out complex tasks. Many countries also lack the energy supplies required to power sustained AI development, while Africa’s 60% mobile internet usage gap is slowing AI adoption and economic growth.

Accordingly, the summit – and the declaration it issued – focussed on how to address these bottlenecks. Recommendations include focusing education systems on Fourth Industrial Revolution skills, including building for and adapting to AI; developing AI infrastructure (innovation labs, data centres, sustainable energy); scaling African AI businesses, including by enabling them to access affordable funding; and enhancing AI research.

Mahmoud Ali Youssouf, Chairperson of the African Union (AU) Commission, stressed the need to create a harmonised regulatory environment to enable cross-border AI trade and investment; and to leverage Africa’s rich and diverse datasets to fuel AI innovation and power global AI models.

Important Steps in Kigali
  • The Africa Declaration on Artificial Intelligence builds on the foundational strategies, policies and commitments of the AU (such as its AI Strategy and the Data Policy Framework) and the United Nations. It seeks to develop a comprehensive talent pipeline through AI education and research; establish frameworks for open, secure and inclusive data governance; provide for the deployment of affordable and sustainable computing infrastructure accessible to researchers, innovators and entrepreneurs across Africa; and create supportive ecosystems with regional AI incubation hubs driving innovation and scaling African AI enterprises domestically and globally.

    The Declaration envisages the establishment of a USD 60 billion Africa AI Fund, leveraging public, private, and philanthropic capital. The Fund would invest in developing and expanding AI infrastructure, scaling African AI enterprises, building a robust pipeline of AI practitioners, and strengthening domestic AI research capabilities, while upholding principles of equity and inclusion.
  • The AI Scaling Labs: The Gates Foundation and Rwanda’s Ministry of ICT and Innovation signed a Memorandum of Understanding (MoU) to establish the Rwanda AI Scaling Hub, in which the foundation will invest USD 7.5 million. It will initially focus on healthcare, agriculture, and education. Over the next 12 months, the foundation plans to establish similar centres in Kenya, Nigeria, and Senegal “to break down barriers to scale and help move promising AI innovations to impact.”
  • The Cassava AI Factory: Cassava Technologies announced the Cassava AI Factory, reportedly Africa’s first AI-enabled data centre, powered by NVIDIA accelerated computing. “Building digital infrastructure for the AI economy is a priority if Africa is to take full advantage of the fourth industrial revolution,” said Cassava Founder and Chairman, Strive Masiyiwa. “Our AI Factory provides the infrastructure for this innovation to scale, empowering African businesses, startups and researchers with access to cutting-edge AI infrastructure to turn their bold ideas into real-world breakthroughs – and now, they don’t have to look beyond Africa to get it.”

    By keeping AI infrastructure and data within Africa, Cassava Technologies says it is strengthening the continent’s digital independence, driving local innovation and supporting African AI talent and businesses. Its first deployment in South Africa (in June 2025) will be followed by expansion to Egypt, Kenya, Morocco, and Nigeria.
  • The Africa Artificial Intelligence Council: The Smart Africa Alliance Steering Committee Meeting, co-chaired by the International Telecommunication Union (ITU) Secretary-General and the AU Commissioner for Infrastructure and Energy, endorsed the creation of the Council to drive continental coordination on critical AI pillars, including AI computing infrastructure, data sets and data infrastructure development, skills development, market use cases, and governance/policy.
  • Use Cases and Sandboxes: Documentation of tangible use cases and sandboxes that support innovation and regulation is vital in AI development on the continent. On the sidelines of the summit, CIPESA contributed to two co-creation initiatives. The Datasphere Initiative held a Co-creation Lab on the role of AI sandboxes in supporting regulatory innovation and ethical AI governance in Africa. Meanwhile, Qhala hosted a Digital Trade and Regulatory Sandbox session focused on digital health, smartphones, and cross-border trade. Separately, the Rwanda Health Intelligence Centre was unveiled, which enables AI-driven emergency medical services delivery and real-time collection of data on healthcare outcomes in hospitals, thus strengthening evidence-based decision-making.

Ultimately, the AI promise remains high but for it to be realised, the ideas from the Kigali summit must be translated into actions. Countries must stump up funds for research and scaling innovations, support their citizens in acquiring AI-relevant skills, expand internet access and affordability, provide supportive infrastructure, and incentivise foreign investment and technology transfer. Moreover, they should ensure that national laws and regulations promote fair, safe, secure, inclusive and responsible AI, and conform to continental aspirations such as the African Union AI Strategy.

The Impact of Artificial Intelligence on Data Protection and Privacy in Africa

By Edrine Wanyama |

Artificial Intelligence (AI) is playing a critical role in digitalisation in Africa and has the potential to fundamentally impact various aspects of society. However, countries on the continent lack specific laws on AI, with front-runners such as Egypt, Ghana, Kenya, Mauritius and Rwanda only having policies or strategic plans but no legislation.

Despite its potential, AI poses challenges for data protection, notably in sectors such as transportation, banking, health care, retail, and e-commerce, where mass data is collected. Yet it is unclear how prepared African governments are to deal with AI-enabled data and privacy breaches.

Today, at least 36 African countries have enacted data protection and privacy laws that regulate the collection and processing of personal data. Similarly, the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) entered into force in June 2023.

The laws adopted by states and the Malabo Convention stipulate various data rights for individuals. They include the right to access personal information, the right to prevent the processing of personal data, and the right of individuals to be informed of the intended use of their personal data, including in cases of automated data processing where the decision significantly affects the data subject.

Others include the right to access personal data in the custody of data collectors, controllers and processors; the right to object to the processing of all or part of one’s personal data; the right to rectification, blocking, erasure and destruction of personal data; and the right to a remedy in case of data privacy breaches.

In a new brief, CIPESA notes that AI raises concerns of bias and discrimination in data handling, abusive data practices, the spread of misinformation and disinformation, enhanced real-time surveillance, and aggravated cyber-attacks such as phishing. The brief makes recommendations on striking a balance between innovation and privacy protection by reinforcing legal and regulatory frameworks, advocating for transparency and accountability, and cultivating education and awareness.

The right to information requires that data subjects are provided with information including the justification for collecting data, the identity of the data controller, and the categories of data to be collected. According to the brief, AI may not adequately facilitate this right since it may not adhere to the steps and precautions required to observe and guarantee the right to access personal information. Most access-related rights require skilled and competent staff, whereas AI systems are usually programmed to handle specific kinds of data and their capabilities are limited to the tasks they were built to perform.

With AI systems, it may be difficult for individuals to object to the processing of their personal data. As such, AI may not guarantee the accuracy of the data or the lawfulness and purpose of data processing. These challenges arise because there is no assurance that the technology has been designed to comply with data rights and principles.

In relation to automated decision making, decisions by AI may be made against the data subject solely on the basis of technology, with no human involvement. Thus, AI may interpret and audit data in an inaccurate or unfair manner, which can perpetuate discriminatory practices based on personal data relating to tribe, race, gender, religion, and political inclination.

The right to rectification relates to dealing with inaccurate, outdated, misleading, incomplete or erroneous data, and an individual may request the data controller to stop any further processing or to erase personal data. However, AI may not fully accommodate this right: rectification and erasure require human intervention to correctly diagnose problematic data if the accuracy of data is to be guaranteed.

The right to data portability, meanwhile, requires that data held by a controller be provided in a simple, machine-readable format. Such data transfers can be used by governments to inform development and promote healthy competition across sectors. However, AI presents privacy challenges to portability, such as indiscriminate data transfers that could aggravate confidentiality risks. In other cases, AI systems may transmit the wrong data, and some can entrench data lock-in if they are designed in ways that make it impossible for data to be ported or for individuals to switch to other services.

Where data controllers and processors commit data breaches, the right to an effective remedy arises. However, AI may not have clear mechanisms for analysing cases of breaches and issuing appropriate remedies. Determining a violation requires human intervention, since AI is largely untrained in how rules of remedy are applied. Additionally, AI may not comprehend the various languages and unique procedures involved, as it is often not adapted to different contexts or conceptions of justice.

Ultimately, AI presents both opportunities and challenges for personal data protection. Accordingly, a balance has to be struck between innovation and privacy protection to ensure transparency and accountability in data collection, management and processing, while maximising the benefits presented by AI. This requires coordinated efforts by governments, decision makers, developers, service providers, civil society organisations and academia in developing, adopting and applying policies and other measures that maximise the benefits of AI.

Read the full brief here.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter |

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order that content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one: it ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated means, without human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how they used AI for data profiling if at all. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where companies responded at all – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.