Towards Inclusive AI Policies in Africa’s Digital Transformation

By CIPESA Writer

On November 13, 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) took part in the global PILnet summit, where it spoke on Artificial Intelligence (AI) and its impact on the work of Civil Society Organisations (CSOs). Over three days, the summit assembled stakeholders from across the world in Rome, Italy, to deliberate on various topics under the theme “Amplifying Impact: Pro Bono & Public Interest Law in a Shifting World.”

CIPESA contributed to a session titled “Pro bono for AI: Addressing legal risks and enhancing opportunities for CSOs”, which focused on AI and its potential impacts on the work and operations of CSOs. CIPESA emphasised the need for a universally acceptable and adaptable framework to guide the growing application of AI in a fast-evolving technological era. It also highlighted its efforts to develop a model policy on AI for CSOs in Africa, which is being undertaken with the support of the Thomson Reuters Foundation through its global pro bono network.

Edrine Wanyama, Programme Manager Legal at CIPESA, centred his remarks on ethical and rights-respecting AI adoption, emphasising the need for CSOs to enhance their knowledge and accountability measures while navigating the AI ecosystem.

Mara Puacz, the Head of Impact at Tech To The Rescue, Ana de la Cruz Cubeiro, a Legal Officer at PILnet, and Megan King, a Senior Associate at Norton Rose Fulbright, shared similar sentiments on the benefits of AI, which include expanding advocacy work and initiatives of CSOs.

They noted the increased demand for transparency and accountability in AI development and use, and the need to minimise the harms marginalised communities face from AI-enabled analysis of datasets that often perpetuate bias, contain data gaps, and draw on limited or poorly digitalised language data.

The session cited various benefits of AI for CSOs, such as enabling human rights monitoring, documentation and reporting on various fronts, including the Universal Periodic Review, aiding democratic participation, and tracking and documenting trends. Others include enhancing environmental protection, for instance through pollution monitoring, and providing real-time support to agri-business and the health sector through pest and disease identification and diagnosis.

However, funding constraints affect not only AI deployment but also the capacity building needed to address limited skills and expertise. In Africa, inadequate infrastructure, states’ data sovereignty fears, and the irresponsible use of AI and related technologies present additional challenges.

Earlier, on October 23 and 24, 2025, CIPESA joined KTA Advocates and the Centre for Law, Policy and Innovation Initiative (CeLPII) to co-host the 8th Annual Symposium under the theme “Digital Trade, AI and the Creative Economy as Drivers for Digital Transformation”.

The symposium explored the role of AI in misinformation and disinformation, as well as its potential to transform Uganda’s creative economy and digital trade. CIPESA emphasised the need to make AI central to discussions among all relevant actors, including governments, innovators, CSOs and the private sector, so as to identify strategies, such as policy formulation and adoption, to check AI’s potential excesses.

Conversations at the PILnet summit and the KTA Symposium align with CIPESA’s ongoing initiatives across the continent, where countries and regional blocs are developing AI strategies and policies to inform national adoption and application. At the continental level, the African Union (AU) in 2024 adopted the Continental AI Strategy, which provides a unified framework for using AI to drive Africa’s digital transformation and socio-economic development.

Key recommendations from the discussions include the need for:

  • Wide adoption of policies guiding the use of AI by civil society organisations, firms, the private sector, and innovators.
  • National and global participation of individuals and stakeholders, including governments, CSOs, the private sector, and innovators, in AI processes, to ensure inclusive participation and that no one is left behind.
  • Awareness creation and continuous education of citizens, CSOs, innovators, firms, and the private sector on the application and value of AI in their work.
  • The adoption of policies and laws that specifically address the application of AI at the national, regional and international levels, as well as at organisational and institutional levels, to mitigate the potential adverse impacts of AI rollout.

#BeSafeByDesign: A Call To Platforms To Ensure Women’s Online Safety

By CIPESA Writer

Across Eastern and Southern Africa, activists, journalists, and women human rights defenders (WHRDs) are leveraging online spaces to mobilise for justice, equality, and accountability. However, the growth of online harms such as Technology-Facilitated Gender-Based Violence (TFGBV), disinformation, digital surveillance, and Artificial Intelligence (AI)-driven discrimination and attacks has outpaced the development of robust protections.

Notably, human rights defenders, journalists, and activists face unique and disproportionate digital security threats, including harassment, doxxing, and data breaches, that limit their participation and silence dissent.

It is against this background that the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with the Irene M. Staehelin Foundation, is implementing a project aimed at combating online harms so as to advance digital rights. Through upskilling, advocacy, research, and movement building, the initiative addresses growing threats in digital spaces, particularly those affecting women journalists and human rights defenders.

The first of the upskilling engagements kicked off in Nairobi, Kenya, at the start of December 2025, with 25 women human rights defenders and activists taking part in a three-day digital resilience skills-share workshop hosted by CIPESA and the Digital Society of Africa. Participants came from the Democratic Republic of Congo, Madagascar, Malawi, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe. The workshop coincided with the 16 Days of Activism campaign, which this year is themed “Unite to End Digital Violence against All Women and Girls”.

According to the United Nations Population Fund (UNFPA), TFGBV is “an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated, and amplified in part or fully by the use of information and communication technologies or digital media against a person based on their gender.” It includes cyberstalking, doxxing, non-consensual sharing of intimate images, cyberbullying, and other forms of online harassment.

Women in Sub-Saharan Africa are 32% less likely than men to use the internet, with the key impediments being literacy and digital skills, affordability, safety, and security. On top of this gender digital divide, more women than men face various forms of digital violence. Accordingly, the African Commission on Human and Peoples’ Rights (ACHPR) Resolution 522 of 2022 has underscored the urgent need for African states to address online violence against women and girls.

Women who advocate for gender equality, feminism, and sexual minority rights face higher levels of online violence. Indeed, women human rights defenders, journalists and politicians are the most affected by TFGBV, and many have withdrawn from the digital public sphere due to gendered disinformation, trolling, cyber harassment, and other forms of digital violence. The online trolling of women is growing rapidly and often takes the form of gendered and sexualised attacks and body shaming.

Several specific challenges must be considered when designing interventions to combat TFGBV. These challenges are shaped by legal, social, technological, and cultural factors, which affect both the prevalence of digital harms and violence and the ability to respond effectively. They include weak and inadequate legal frameworks; a lack of awareness about TFGBV among policymakers, law enforcement officers, and the general public; the gender digital divide; and normalised online abuse against women, with victims often blamed rather than supported.

Moreover, there is a shortage of comprehensive response mechanisms and support services for survivors of online harassment, such as digital security helplines, psychosocial support, and legal aid. In addition, regional and cross-sector collaboration between CSOs, government agencies, and the private sector (including tech companies) remains limited.

A guiding strand for these efforts will be the #BeSafeByDesign campaign, which highlights the necessity of safe platforms for women as well as the consequences when safety is missing. The campaign shifts the burden of ensuring safety in online spaces away from women and onto platforms, which are expected to step up risk assessments, provide accessible and stronger reporting pathways, proactively detect abuse, and maintain transparent accountability mechanisms. The initiative will also involve practical upskilling of at-risk women in cybersecurity.

CIPESA @African Economic Research Consortium (AERC) Summit 2025

Update

This year, the African Economic Research Consortium (AERC) is holding its first Summit under its new 10-year Strategic Plan (2025-2035) in Nairobi, Kenya. The three-day Summit, themed ‘A Renewed AERC for Africa’s New Development Priorities’, is designed to hardwire the research-policy bridge.

This event is taking place from November 30 to December 2, 2025. For more information, click here.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhammad

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed integrating digital governance and voluntary initiatives, such as the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI), into National Action Plans (NAPs) on business and human rights. Such integration would encourage companies to uphold their responsibility to respect human rights by ensuring transparency and establishing internal accountability mechanisms.

According to Nadhifah Muhammad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%, which is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition deployed for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, the employment conditions of platform workers on the continent, such as the Kenyan data workers contracted to support OpenAI’s ChatGPT, are characterised by low pay and an absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks into NAPs. He cited the frameworks of Chile, Costa Rica and South Korea as examples of striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on the inclusion of perspectives from minority and marginalised groups, including women, youth, children, indigenous communities and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, the AICDI helps businesses to map their AI use, harness opportunities and mitigate operational risks. These efforts would complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The initiative has attracted about 1,000 companies, 80% of which publicly disclose information about their work. Despite this progress, Onyango noted that the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.

To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI, collaborating and sharing resources to support local digital startups in building sustainable solutions, investing in digital infrastructure, and undertaking robust literacy and capacity building campaigns targeting both duty bearers and rights holders. Other recommendations included developing evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF) and the UN Global Compact. The Forum brought together more than 500 individuals from over 75 countries, 32 of them African, and built on the achievements of the previous ABHR Forums in Ghana (2022), Ethiopia (2023) and Kenya (2024).