Digital inclusion is often framed in terms of access and numbers – how many people are trained, how many own devices, and how many users are connected. In Somalia, however, the reality is far more complex. While recent data suggest that internet penetration has reached approximately 55 percent of the population, and there are over 10 million internet users, social media adoption remains low and skewed toward male users, with women constituting a smaller proportion of those who are online.
Meanwhile, the political and civic space remains constrained. Due to protracted conflict, fragmented governance and insecurity, Somalia is classified as “Not Free” in global democracy assessments. The country also ranks near the bottom in press freedom indices, with journalists and media houses facing threats, harassment, arbitrary closures, and censorship pressures, particularly in conflict-affected regions, making open expression online and offline perilous.
Young Somali women are joining digital spaces shaped by these fragile conditions, coupled with unequal power relations and persistent safety concerns. Many are navigating unstable job markets, expectations to contribute to family livelihoods, and social norms that continue to question women’s visibility and voice, both online and offline. In such a context, digital upskilling is not merely technical but rather deeply social, economic, and political. If approached narrowly, it risks reproducing existing exclusions by focusing only on tools and outputs.
The Digital Skills for Girls (DS4G) programme by Digital Shelter is designed with this in mind, treating digital skilling and inclusion not as isolated competencies but as entry points into broader questions of participation, agency, and voice within Somalia’s evolving digital ecosystem. Combining practical digital skills with digital safety and rights awareness, DS4G has supported 35 women and girls and conducted monthly meet-ups and stakeholder engagements to empower young Somali women.
As noted by Ali, “At a time when many organisations were forced to scale back activities due to funding instability, CIPESA’s discretionary support allowed Digital Shelter to remain operational and responsive, ensuring that young women continued to access skills and learning spaces designed to support meaningful participation in digital, social and civic life”. He added that through DS4G, Digital Shelter had strengthened its role as a trusted, women-centered digital rights actor with a replicable programme model.
DS4G’s sessions included graphic design, personal branding, emerging technologies, data protection and privacy, online threats and risks, and career development. A key component of DS4G was the Cyber Safety for Women event, which reinforced digital safety as a collective concern. The event featured a documentary screening on lived digital experiences and panel discussions on gender, online safety, and participation.
“DS4G recognised that technical skills alone are insufficient unless young women are also equipped to navigate digital environments safely, communicate confidently and position themselves for future opportunities,” said Digital Shelter’s Executive Director, Abdifatah Ali.
According to Digital Shelter, the decision to include graphic design in the DS4G programme was a strategic one. The team argues that, sitting at the intersection of creativity, communication, and influence, design shapes how information is interpreted, whose stories are amplified, and which messages gain traction. For the participants of DS4G, many of whom were students or recent graduates, it offered an accessible entry into digital work.
“As the training progressed, participants moved beyond executing tasks to interrogating purpose and impact, asking who messages are for, what they communicate, and how design can support causes, campaigns, and community conversations,” said Ayan Khalif, Digital Shelter’s Program Manager.
Indeed, participant feedback reflects positive outcomes – both skills acquisition and agency. “Before this project, I used social media without thinking much about safety. Now I understand how to protect myself online and how important digital security is for women like us,” said one participant. As part of reflection exercises, participants explored how design could support community initiatives and advocacy efforts and communicate messages. Another participant stated, “The monthly meetups helped me gain confidence. Speaking in front of others was difficult at first, but now I feel more comfortable expressing my ideas.”
The DS4G initiative has empowered a cohort of young women to navigate digital spaces with confidence and security, equipped with skills to pursue economic opportunities, advocate for change, and engage safely in community affairs.
Ugandan journalists are increasingly facing intertwined physical and digital threats, which intensify during moments of heightened public interest such as elections and protests. These threats are compounded by internet shutdowns, targeted surveillance, account hacking, online harassment, and regulatory censorship that directly undermine their safety and work. A study on the Daily Monitor’s experience found that the 2021 general election shutdown constrained news gathering, data-driven reporting, and online distribution, effectively acting as digital censorship. These practices restrict news gathering, production, and dissemination and have been documented repeatedly from the 2021 general election through the run‑up to the 2026 polls.
Over the years, CIPESA has documented digital rights violations, challenged internet shutdowns, and worked directly with media practitioners to strengthen their ability to operate safely and independently. This work has deepened as the threats to journalism have evolved.
In recent months, CIPESA has conducted extensive journalist safety and digital resilience trainings, reaching more than 200 journalists from diverse media houses and districts across the country: the Acholi subregion (Gulu, Kitgum, Amuru, Lamwo, Agago, Nwoya, Pader, and Omoro), the Ankole subregion (Buhweju, Bushenyi, Ibanda, Isingiro, Kazo, Kiruhura, Mbarara (City & District), Mitooma, Ntungamo, Rubirizi, Rwampara, and Sheema), the Central region (Kampala, Wakiso), the Busoga region (Bugiri, Bugweri, Buyende, Iganga, Jinja, Kaliro, Kamuli, Luuka, Mayuge, Namayingo, and Namutumba), and the Elgon, Bukedi, and Teso subregions (Mbale, Bududa, Bulambuli, Manafwa, Namisindwa, Sironko, Tororo, Busia, Butaleja, Kapchorwa, Soroti, and Katakwi).
The trainings aimed to strengthen the capacity of media actors to mitigate digital threats and push back against rising online censorship that enables digital authoritarianism. They helped journalists and the wider media sector understand the media’s role in democratic and electoral processes, ensure legal compliance and navigate common restrictions, build digital and physical security resilience, sharpen skills to identify and counter disinformation, and develop newsroom safety frameworks for the media sector.
The various trainings were tailored to respond to the needs of the journalists, covering media, democracy, and elections; electoral laws and policies; and peace journalism, with attention to transparent reporting and the effects of military presence on journalism in post-conflict settings.
In Mbale and Jinja, for instance, reporters unpacked election-day risks, misinformation circulating on social media, and the legal boundaries that are often used to intimidate them. Across the different regions, newsroom managers, editors and reporters worked through practical exercises on digital hygiene, safer communication, and physical-digital risk intersections.
CIPESA’s digital security trainings respond to the real conditions journalists work under. The sessions focus on election-day and post-election reporting, verifying information and claims under pressure, protecting sources, and strengthening everyday digital security through strong passwords, two-factor authentication, and safe device handling. Journalists also develop newsroom safety protocols and examine how peace journalism can help de-escalate tension rather than inflame it during contested political moments.
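One of the everyday protections the sessions cover, two-factor authentication, is worth demystifying: the six-digit codes produced by authenticator apps are not magic but the standard HOTP/TOTP construction (RFC 4226/6238). Below is a minimal Python sketch for illustration only; in practice, journalists should rely on audited authenticator apps rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """One-time password from a shared secret and counter (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: pick 4 bytes from the MAC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Time-based variant used by authenticator apps (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // time_step, digits)
```

Because each code depends on both the shared secret and the current time window, a phished password alone is not enough to take over an account, which is why the trainings pair strong passwords with two-factor authentication.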
One of the most important shifts for participants was that safety stopped being treated as an individual burden and came to be understood as an organisational responsibility. Through protocol-development sessions, journalists mapped threats, identified vulnerabilities such as predictable routines and weak passwords, and designed “if-then” responses for incidents like account hacking, detention, or device theft. For many journalists, this was the first time safety had been written down rather than improvised.
Beyond the trainings for journalists, CIPESA hosted several digital security clinics and help desks for human rights defenders and activists. At separate engagements, close to 70 journalists received one-on-one support, including during a digital security clinic hosted at Ukweli Africa on December 15, 2025, and at the Uganda Media Week. These efforts sought to enhance their digital security practices. The support provided during these interventions included checking journalists’ devices for vulnerabilities, removing malware, securing accounts, enabling encryption, and adopting secure data management approaches.
“Some journalists who had arrived unsure, even embarrassed, about their digital habits, left lighter, not because the risks had vanished, but because they now understood the tools and how to manage risks.”
These engagements serve as avenues to build the digital resilience of journalists in Uganda, especially as the media faces heightened online threats amidst a shrinking civic space. Trainings that speak the language of lived experience often travel further than policy alone. In Uganda, where laws can be used to narrow civic space, where the internet can be switched off, and where surveillance blurs the line between public and private, practical digital security becomes a necessity.
By training journalists across Uganda, supporting them through digital security desks, and standing with them during moments like Media Week, CIPESA has helped journalists strengthen their resilience to keep reporting in spite of the challenges and threats they encounter daily.
Information integrity work is only as strong as the methods behind it. In Ethiopia’s fast-changing information environment, fact-checkers and researchers are expected to move quickly while maintaining accuracy, transparency, and ethical care. Inform Africa has expanded two practical capabilities to address this reality: advanced OSINT-based fact-checking training and structured disinformation research using the DISARM framework, in collaboration with the Collaboration on International ICT Policy for East and Southern Africa (CIPESA).
This work was advanced with support from the Africa Digital Rights Fund (ADRF), administered by CIPESA. At a time when many civic actors face uncertainty, the fund’s adaptable support helped Inform Africa sustain day-to-day operations and protect continuity, while still investing in verification and research methods designed to endure beyond a single project cycle.
The collaboration with CIPESA was not only administrative. It was anchored in shared priorities around digital rights, information integrity, and capacity building. Through structured coordination and learning exchange, CIPESA provided a partnership channel that strengthened the work’s clarity and relevance, and helped position the outputs as reusable methods that can be applied beyond a single team. The collaboration also reinforced a regional ecosystem approach: improving practice in one context while keeping the methods legible for peer learning, adaptation, and future joint work.
The implementation followed a phased timetable across the project activity period from April through November 2025. Early work focused on scoping and method design, aligning the training and research approaches with practical realities in newsrooms and civil society. Mid-phase work concentrated on developing the OSINT module and applying DISARM as a structured research lens, with iterative refinement as materials matured. The final phase focused on consolidation, documentation discipline, and packaging the outputs to support repeatable use, including onboarding, internal training, and incident review workflows.
A central focus has been an advanced OSINT training module built to move beyond tool familiarity into a complete verification workflow. Verification is treated as a chain of decisions that must be consistent and auditable: how to intake a claim, determine whether it is fact-checkable, plan the evidence, trace sources, verify images and videos, confirm the place and time, and document each step clearly enough for an editor or peer to reproduce the work. The aim is not only to reach accurate conclusions but also to show the route taken, including which evidence was prioritized and how uncertainty was handled.
This documentation discipline is not bureaucracy. It is a trust technology. In high-risk information environments, preserved sources, verification logs, and clear decision trails protect credibility, strengthen editorial oversight, and reduce avoidable errors. The module prioritizes hands-on, production-style assignments that mirror real newsroom constraints and trains participants to avoid overclaiming, communicate uncertainty responsibly, and present evidence in ways that non-expert audiences can follow.
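To make the idea of an auditable verification chain concrete, here is a minimal sketch of what such a log could look like in code. The class and field names are illustrative assumptions, not Inform Africa’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationStep:
    action: str      # e.g. "reverse image search", "geolocation check"
    evidence: str    # what was found, with a source reference
    decision: str    # how the finding moved the claim forward
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ClaimLog:
    claim: str
    fact_checkable: bool
    steps: list[VerificationStep] = field(default_factory=list)

    def record(self, action: str, evidence: str, decision: str) -> None:
        self.steps.append(VerificationStep(action, evidence, decision))

    def audit_trail(self) -> list[str]:
        """Render the chain of decisions for an editor or peer to reproduce."""
        return [f"{s.timestamp} | {s.action} -> {s.decision}" for s in self.steps]
```

Even a structure this simple captures the point made above: every conclusion carries the route taken to reach it, so a peer can replay the steps rather than take the verdict on trust.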
In parallel, Inform Africa has applied the DISARM framework to disinformation research. DISARM provides a shared language for describing influence activity through observable behaviors and techniques, without drifting into assumptions. The priority has been to remain evidence-bound: collecting and preserving artifacts responsibly, maintaining a structured evidence log, reducing harm by avoiding unnecessary reproduction of inflammatory content, and avoiding claims of attribution beyond what the evidence supports. This DISARM-informed approach has improved internal briefs, strengthened consistency, and made incidents easier to compare over time and across partners.
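One practical payoff of tagging incidents with a shared technique vocabulary is that they become directly comparable. A small sketch of the idea follows; the technique IDs are placeholders written in DISARM’s ID style, not verified mappings from the framework:

```python
# Hypothetical incidents tagged with DISARM-style technique IDs (placeholders).
incidents: dict[str, set[str]] = {
    "incident_a": {"T0016", "T0049", "T0085"},
    "incident_b": {"T0049", "T0085", "T0114"},
}

def technique_overlap(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two incidents' technique sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Incidents sharing most of their observed behaviours score close to 1.0,
# making recurring influence patterns easier to surface over time and
# across partners.
similarity = technique_overlap(incidents["incident_a"], incidents["incident_b"])
```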
Three lessons stand out from this work with CIPESA and ADRF. First, quality scales through workflow, not only through talent. Second, evidence discipline is a strategic choice that protects credibility and reduces harm in both fact-checking and research. Third, shared frameworks reduce friction by improving clarity and consistency across teams. Looking ahead, Inform Africa will integrate the OSINT module into routine training and onboarding and continue to apply DISARM-informed analysis in future incident reviews and deeper studies, reinforcing information integrity as a public good.
This article was first published by Inform Africa on December 15, 2025.
As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.
Multi-stakeholder and participatory regulatory sandboxes to test innovative technology and data practices are among the mechanisms to ensure ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes and how harmonised approaches that embed multistakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.
As of October 2024, 25 national regulatory sandboxes have been established across 15 African countries, signalling growing interest in this governance mechanism. However, concerns remain about the extent to which African civil society contributes to the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.
At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, convened a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.
Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet, when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are being deployed without inclusive testing environments, the consequences can be severe.
The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences that follow include bias and exclusion, erosion of public trust, surveillance overreach and a lack of recourse mechanisms.
However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda serves as a key example in the development of a National AI Policy framework through participatory regulatory processes.
Case Study: Rwanda’s Participatory AI Policy Development
The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.
By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable.
Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes.
Consequences of Excluding CSOs in AI Regulatory Sandbox Development
The CIPESA-Datasphere session explored the various obstacles that civil society faces in AI regulatory sandbox processes in Africa as it sought to establish ways to advance meaningful participation.
The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation as opposed to passively waiting for an invitation.
Such proactive measures require CSOs to move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.
Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?
Perhaps most critically, there is no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.
Case Study: Uganda’s Failures Without Sandbox Testing
Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. Uganda’s rollout of its digital ID was not tested in a sandbox, which, according to the Datasphere Initiative’s analysis, could have made a difference given sandboxes’ potential as trust-building mechanisms for digital public infrastructure (DPI) systems. The rollout has been marred by controversy, including concerns over the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022, and a 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case on the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.
Similarly, Uganda’s rushed deployment of its Electronic Payment System (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.
Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public, eroded trust, and costly reversals that delay progress far more than inclusive design processes would have.
Models of Participatory Sandboxes
Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”
However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.
Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential.
Pathways Forward: Building Truly Inclusive Sandboxes
Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.
Creating the legal foundation
Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.
Building capacity and awareness
Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.
Institutionalising cross-sector learning
Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.
Redesigning governance structures
True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.
The core issue is not if civil society should engage with regulatory sandboxes, but rather the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.
Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.
Civil society engagement:
Surfaces lived experiences regulators often miss;
Strengthens the legitimacy of governance frameworks;
Pushes for transparency in AI design and data use;
Ensures frameworks reflect African values and protect vulnerable communities; and
Enables oversight that prevents exploitative arrangements.
While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.
The cost of exclusion can be seen in Uganda’s EPS rollout: public backlash, eroded trust, and potential system failure, ultimately delaying progress far more than inclusive design processes would have. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.
The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.
Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.
The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”
The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. That integration would encourage companies to uphold their responsibility to respect human rights through ensuring transparency and internal accountability mechanisms.
According to Nadhifah Muhammad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%, deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition deployed for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as those who performed data work for OpenAI’s ChatGPT in Kenya, have been characterised by low pay and an absence of social welfare protections.
To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.
For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of perspectives for minority and marginalised groups including women, youth, children, indigenous communities and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”
Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.
NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses to map their AI use, harness opportunities and mitigate operational risk. These efforts would complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite the progress, Onyango added that the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.
To ensure NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI, collaboration and sharing of resources to support local digital startups for sustainable solutions, investment in digital infrastructure, and undertaking robust literacy and capacity building campaigns for both duty bearers and rights holders. Other recommendations were the development of evidence-based research to shape the deployment of new technologies and supporting underfunded state agencies that are responsible for regulating data protection.
The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF) and the UN Global Compact. It brought together more than 500 individuals from over 75 countries – 32 of them African. The event built on the achievements of the previous ABHR Forums in Ghana (2022), Ethiopia (2023) and Kenya (2024).