Towards Inclusive AI Policies in Africa’s Digital Transformation

By CIPESA Writer |

On November 13, 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) took part in the global PILnet summit on Artificial Intelligence (AI) and its impact on the work of Civil Society Organisations (CSOs). Over three days, the summit assembled stakeholders from across the world in Rome, Italy, to deliberate on various topics under the theme, “Amplifying Impact: Pro Bono & Public Interest Law in a Shifting World.”

CIPESA contributed to a session titled, “Pro bono for AI: Addressing legal risks and enhancing opportunities for CSOs”. The session focused on AI and its potential impacts on the work and operations of CSOs. CIPESA emphasised the need for a universally acceptable and adaptable framework to guide the increased application of AI in the fast-evolving technological era. Furthermore, CIPESA highlighted its efforts in developing a model policy on AI for CSOs in Africa, which is being undertaken with the support of the Thomson Reuters Foundation through its global pro bono network.

Edrine Wanyama, Programme Manager Legal at CIPESA, centred his discussion on ethical and rights-respecting AI adoption, and emphasised the need for CSOs to enhance their knowledge and strengthen accountability measures while navigating the AI ecosystem.

Mara Puacz, the Head of Impact at Tech To The Rescue, Ana de la Cruz Cubeiro, a Legal Officer at PILnet, and Megan King, a Senior Associate at Norton Rose Fulbright, shared similar sentiments on the benefits of AI, which include expanding advocacy work and initiatives of CSOs.

They noted the increased demand for transparency and accountability in AI development and use, and the need to minimise the harms that marginalised communities face from AI-enabled analysis of datasets, which often perpetuates bias due to data gaps and limited or poorly digitised language sets.

The session cited various benefits of AI for CSOs, such as enabling human rights monitoring, documentation and reporting on various fronts, including the Universal Periodic Review, aiding democratic participation, and tracking and documenting trends. Others are enhancing environmental protection, for instance through pollution monitoring, and providing real-time support to agri-business and the health sector by facilitating pest and disease identification and diagnosis.

However, funding constraints affect not only AI deployment but also the capacity building needed to address limited skills and expertise. In Africa, inadequate infrastructure, states’ data sovereignty fears, and the irresponsible use of AI and related technologies present additional challenges.

Meanwhile, on October 23 and 24, 2025, CIPESA joined KTA Advocates and the Centre for Law, Policy and Innovation Initiative (CeLPII) to co-host the 8th Annual Symposium under the theme “Digital Trade, AI and the Creative Economy as Drivers for Digital Transformation”.

The symposium explored the role of AI in misinformation and disinformation, as well as its potential to transform Uganda’s creative economy and digital trade. CIPESA emphasised the need to make AI central to discussions among all relevant actors, including governments, innovators, CSOs and the private sector, so as to identify strategies, such as policy formulation and adoption, to check AI’s potential excesses.

Conversations at the PILnet summit and the KTA Symposium align with CIPESA’s ongoing initiatives across the continent, where countries and regional blocs are developing AI strategies and policies to inform national adoption and application. At the continental level, in 2024, the African Union (AU) adopted the Continental AI Strategy, which provides a unified framework for using AI to drive digital transformation and socio-economic development in Africa.

Amongst the key recommendations from the discussions is the need for:

  • Wide adoption of policies guiding the use of AI by civil society organisations, firms, the private sector, and innovators.
  • Nationwide and global participation of individuals and stakeholders, including governments, CSOs, the private sector, and innovators, in AI processes and in understanding how AI works, to ensure inclusive participation and that no one is left behind.
  • Awareness creation and continuous education of citizens, CSOs, innovators, firms, and the private sector on the application and value of AI in their work.
  • The adoption of policies and laws that specifically address the application of AI at national, regional and international levels and at organisational and institutional levels to mitigate the potential adverse impacts of AI rollout.

Now More Than Ever, Africa Needs Participatory AI Regulatory Sandboxes 

By Brian Byaruhanga and Morine Amutorine |

As Artificial Intelligence (AI) rapidly transforms Africa’s digital landscape, it is crucial that digital governance and oversight align with ethical principles, human rights, and societal values.

Multi-stakeholder and participatory regulatory sandboxes to test innovative technology and data practices are among the mechanisms to ensure ethical and rights-respecting AI governance. Indeed, the African Union (AU)’s Continental AI Strategy makes the case for participatory sandboxes and how harmonised approaches that embed multistakeholder participation can facilitate cross-border AI innovation while maintaining rights-based safeguards. The AU strategy emphasises fostering cooperation among government, academia, civil society, and the private sector.

As of October 2024, 25 national regulatory sandboxes had been established across 15 African countries, signalling growing interest in this governance mechanism. However, concerns remain about the extent to which African civil society is involved in contributing to the development of responsive regulatory sandboxes. Without the meaningful participation of civil society in regulatory sandboxes, AI governance risks becoming a technocratic exercise dominated by government and private actors. This creates blind spots around justice and rights, especially for marginalised communities.

At DataFest25, a data rights event hosted annually by Uganda-based civic-rights organisation Pollicy, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), alongside the Datasphere Initiative, hosted a session on how civil society can actively shape and improve AI governance through regulatory sandboxes.

Regulatory sandboxes, designed to safely trial new technologies under controlled conditions, have primarily focused on fintech applications. Yet, when AI systems that determine access to essential services such as healthcare, education, financial services, and civic participation are being deployed without inclusive testing environments, the consequences can be severe. 

CIPESA’s 2025 State of Internet Freedom in Africa report reveals that AI policy processes across the continent are “often opaque and dominated by state actors, with limited multistakeholder participation.” This pattern of exclusion contradicts the continent’s vibrant civil society landscape, where various organisations in 29 African countries are actively working on responsible AI issues and frequently outpacing government efforts to protect human rights.

The Global Index on Responsible AI found that civil society organisations (CSOs) in Africa are playing an “outsized role” in advancing responsible AI, often surpassing government efforts. These organisations focus on gender equality, cultural diversity, bias prevention, and public participation, yet they face significant challenges in scaling their work and are frequently sidelined from formal governance processes. The consequences include bias and exclusion, erosion of public trust, surveillance overreach and a lack of recourse mechanisms.

However, when civil society participates meaningfully from the outset, AI governance frameworks can balance innovation with justice. Rwanda offers a key example, having developed its National AI Policy framework through participatory regulatory processes.

Case Study: Rwanda’s Participatory AI Policy Development

The development of Rwanda’s National AI Policy (2020-2023) offers a compelling model for inclusive governance. The Ministry of ICT and Innovation (MINICT) and Rwanda Utilities Regulatory Agency (RURA), supported by GIZ FAIR Forward and The Future Society, undertook a multi-stakeholder process to develop the policy framework. The process, launched with a collective intelligence workshop in September 2020, brought together government representatives, private sector leaders, academics, and members of civil society to identify and prioritise key AI opportunities, risks, and socio-ethical implications. The Policy has since informed the development of an inclusive, ethical, and innovation-driven AI ecosystem in Rwanda, contributing to sectoral transformation in health and agriculture, over $76.5 million in investment, the establishment of a Responsible AI Office, and the country’s role in shaping pan-African digital policy.

By embedding civil society in the process from the outset, Rwanda ensured that its AI governance framework, which would guide the deployment of AI within the country, was evaluated not just for performance but for justice. This participatory model demonstrates that inclusive AI governance through multi-stakeholder regulatory processes is not just aspirational; it’s achievable. 

Rwanda’s success demonstrates the power of participatory AI governance, but it also raises a critical question: if inclusive regulatory processes yield better outcomes for AI-enabled systems, why do they remain so rare across Africa? The answer lies in systemic obstacles that prevent civil society from accessing and influencing sandbox and regulatory processes. 

Consequences of Excluding CSOs from AI Regulatory Sandbox Development

The CIPESA-Datasphere session explored the various obstacles that civil society faces in AI regulatory sandbox processes in Africa, as it sought to establish ways to advance meaningful participation.

The session noted that CSOs are often simply unaware that regulatory sandboxes exist. At the same time, authorities bear responsibility for proactively engaging civil society in such processes. Participants emphasised that civil society should also take proactive measures to demand participation, rather than passively waiting for an invitation.

In taking such proactive measures, CSOs must move beyond a purely activist or critical role, developing technical expertise and positioning themselves as co-creators rather than external observers.

Several participants highlighted the absence of clear legal frameworks governing sandboxes, particularly in African contexts. Questions emerged: What laws regulate how sandboxes operate? Could civil society organisations establish their own sandboxes to test accountability mechanisms?

Perhaps most critically, there is no clearly defined role for civil society within existing sandbox structures. While regulators enter sandboxes to provide legal oversight and learn from innovators, and companies bring solutions to test and refine, civil society’s function remains ambiguous, with little structural clarity about its role. This risks civil society being positioned as an optional stakeholder rather than an essential actor in the process.

Case Study: Uganda’s Failures Without Sandbox Testing

Uganda’s recent experiences illustrate what happens when digital technologies are deployed without inclusive regulatory frameworks or sandbox testing. The country’s digital ID rollout was not tested in a sandbox, even though, according to the Datasphere Initiative’s analysis, such testing could have made a difference given sandboxes’ potential as trust-building mechanisms for digital public infrastructure (DPI) systems. The rollout has been marred by controversy, with concerns including the exclusion of poor and marginalised groups from access to fundamental social rights and public services. As a result, CSOs sued the government in 2022. A 2023 ruling by the Uganda High Court allowed expert civil society intervention in the case on the human rights red flags around the country’s digital ID system, underscoring the necessity of civil society input in technology governance.

Similarly, Uganda’s rushed deployment of its Electronic Payment System (EPS) in June 2025 without participatory testing led to public backlash and suspension within one week. CIPESA’s research on digital public infrastructure notes that such failures could have been avoided through inclusive policy reviews, pre-implementation audits, and transparent examination of algorithmic decision-making processes and vendor contracts.

Uganda’s experience demonstrates the direct consequences of the obstacles outlined above: lack of awareness about the need for testing, failure to shift mindsets about who belongs at the governance table, and the absence of legal frameworks mandating civil society participation. The result? Public systems that fail to serve the public, eroded trust, and costly reversals that delay progress far more than inclusive design processes would have.

Models of Participatory Sandboxes

Despite the challenges, some African countries are developing promising approaches to inclusive sandbox governance. For example, Kenya’s Central Bank established a fintech sandbox that has evolved to include AI applications in mobile banking and credit scoring. Kenya’s National AI Strategy 2025-2030 explicitly commits to “leveraging regulatory sandboxes to refine AI governance and compliance standards.” The strategy emphasises that as AI matures, Kenya needs “testing and sandboxing, particularly for small and medium-sized platforms for AI development.”

However, Kenya’s AI Readiness Index 2023 reveals gaps in collaborative multi-stakeholder partnerships, with “no percentage scoring” recorded for partnership effectiveness in the AI Strategy implementation framework. This suggests that, while Kenya recognises the importance of sandboxes, implementation challenges around meaningful participation remain.

Kenya’s evolving fintech sandbox and the case study from Rwanda above both demonstrate that inclusive AI governance is not only possible but increasingly recognised as essential. 

Pathways Forward: Building Truly Inclusive Sandboxes

Session participants explored concrete pathways toward building truly inclusive regulatory sandboxes in Africa. The solutions address each of the barriers identified earlier while building on the successful models already emerging across the continent.

Creating the legal foundation

Sandboxes cannot remain ad hoc experiments. Participants called for legal frameworks that mandate sandboxing for AI systems. These frameworks should explicitly require civil society involvement, establishing participation as a legal right rather than a discretionary favour. Such legislation would provide the structural clarity currently missing—defining not just whether civil society participates, but how and with what authority.

Building capacity and awareness

Effective participation requires preparation. Participants emphasised the need for broader and more informed knowledge about sandboxing processes. This includes developing toolkits and training programmes specifically designed to build civil society organisation capacity on AI governance and technical engagement. Without these resources, even well-intentioned inclusion efforts will fall short.

Institutionalising cross-sector learning

Rather than treating each sandbox as an isolated initiative, participants proposed institutionalising sandboxes and establishing cross-sector learning hubs. These platforms would bring together regulators, innovators, and civil society organisations to share knowledge, build relationships, and develop a common understanding about sandbox processes. Such hubs could serve as ongoing spaces for dialogue rather than one-off consultations.

Redesigning governance structures

True inclusion means shared power. Participants advocated for multi-stakeholder governance models with genuine shared authority—not advisory roles, but decision-making power. Additionally, sandboxes themselves must be transparent, adequately resourced, and subject to independent audits to ensure accountability to all stakeholders, not just those with technical or regulatory power.

The core issue is not whether civil society should engage with regulatory sandboxes, but rather the urgent need to establish the legal, institutional, and capacity frameworks that will guarantee such participation is both meaningful and effective.

Why Civil Society Participation is Practical

Research on regulatory sandboxes demonstrates that participatory design delivers concrete benefits beyond legitimacy. CIPESA’s analysis of digital public infrastructure governance shows that sandboxes incorporating civil society input “make data governance and accountability more clear” through inclusive policy reviews, pre-implementation audits, and transparent examination of financial terms and vendor contracts. 

Academic research further argues that sandboxes should move beyond mere risk mitigation to “enable marginalised stakeholders to take part in decision-making and drafting of regulations by directly experiencing the technology.” This transforms regulation from reactive damage control to proactive democratic foresight.

Civil society engagement:

  • Surfaces lived experiences regulators often miss.
  • Strengthens the legitimacy of governance frameworks.
  • Pushes for transparency in AI design and data use.
  • Ensures frameworks reflect African values and protect vulnerable communities.
  • Enables oversight that prevents exploitative arrangements.

While critics often argue that broad participation slows innovation and regulatory responsiveness, evidence suggests otherwise. For example, Kenya’s fintech sandbox incorporated stakeholder feedback through 12-month iterative cycles, which not only accelerated the launch of innovations but also strengthened the country’s standing as Africa’s premier fintech hub.

The cost of exclusion can be seen in Uganda’s EPS rollout: public backlash, eroded trust, and potential system failure that ultimately delay progress far more than inclusive design processes would. The window for embedding participatory principles is closing. As Nigeria’s National AI Strategy notes, AI is projected to contribute over $15 trillion to global GDP by 2030. African countries establishing AI sandboxes now without participatory structures risk locking in exclusionary governance models that will be difficult to reform later.

The future of AI in Africa should be tested for justice, not just performance. Participatory regulatory sandboxes offer a pathway to ensure that AI governance reflects African values, protects vulnerable communities, and advances democratic participation in technological decision-making.

Join the conversation! Share your thoughts. Advocate for inclusive sandboxes. The decisions we make today about who participates in AI governance will shape Africa’s digital future for generations.

CIPESA Participates in the 4th African Business and Human Rights Forum in Zambia

By Nadhifah Muhamad |

The fourth edition of the African Business and Human Rights (ABHR) Forum was held from October 7-9, 2025, in Lusaka, Zambia, under the theme “From Commitment to Action: Advancing Remedy, Reparations and Responsible Business Conduct in Africa.”

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated in a session titled “Leveraging National Action Plans and Voluntary Disclosure to Foster a Responsible Tech Ecosystem,” convened by the B-Tech Africa Project under the United Nations Human Rights Office and the Thomson Reuters Foundation (TRF). The session discussed the integration of digital governance and voluntary initiatives like the Artificial Intelligence (AI) Company Disclosure Initiative (AICDI) into National Action Plans (NAPs) on business and human rights. That integration would encourage companies to uphold their responsibility to respect human rights through ensuring transparency and internal accountability mechanisms.

According to Nadhifah Muhammad, Programme Officer at CIPESA, Africa’s participation in global AI research and development is estimated at only 1%. This is deepening inequalities and resulting in a proliferation of AI systems that barely suit the African context. In law enforcement, AI-powered facial recognition for crime prevention has led to arbitrary arrests and unchecked surveillance during periods of unrest. Meanwhile, employment conditions for platform workers on the continent, such as those who worked on OpenAI’s ChatGPT in Kenya, have been characterised by low pay and an absence of social welfare protections.

To address these emerging human rights risks, Prof. Damilola Olawuyi, Member of the UN Working Group on Business and Human Rights, encouraged African states to integrate ethical AI governance frameworks in NAPs. He cited Chile, Costa Rica and South Korea’s frameworks as examples in striking a balance between rapid innovation and robust guardrails that prioritise human dignity, oversight, transparency and equity in the regulation of high-risk AI systems.

For instance, Chile’s AI policy principles call for AI centred on people’s well-being, respect for human rights, and security, anchored on inclusivity of the perspectives of minority and marginalised groups, including women, youth, children, indigenous communities and persons with disabilities. Furthermore, it states that the policy “aims for its own path, constantly reviewed and adapted to Chile’s unique characteristics, rather than simply following the Northern Hemisphere.”

Relatedly, Dr. Akinwumi Ogunranti from the University of Manitoba commended the Ghana NAP for being alive to emerging digital technology trends. The plan identifies several human rights abuses and growing concerns related to the Information and Communication Technology (ICT) sector and online security, although it has no dedicated section on AI.

NAPs establish measures to promote respect for human rights by businesses, including conducting due diligence and being transparent in their operations. In this regard, the AI Company Disclosure Initiative (AICDI), supported by TRF and UNESCO, aims to build a dataset on corporate AI adoption so as to drive transparency and promote responsible business practices. According to Elizabeth Onyango from TRF, AICDI helps businesses to map their AI use, harness opportunities and mitigate operational risk. These efforts would complement states’ efforts by encouraging companies to uphold their responsibility to respect human rights through voluntary disclosure. The Initiative has attracted about 1,000 companies, with 80% of them publicly disclosing information about their work. Despite this progress, Onyango added that the initiative still grapples with convincing some companies to accept support in mitigating the risks of AI.

To ensure that NAPs contribute to responsible technology use by businesses, states and civil society organisations were advised to consider developing an African Working Group on AI, collaborating and sharing resources to support local digital startups in building sustainable solutions, investing in digital infrastructure, and undertaking robust literacy and capacity building campaigns for both duty bearers and rights holders. Other recommendations were the development of evidence-based research to shape the deployment of new technologies, and support for underfunded state agencies responsible for regulating data protection.

The Forum was organised by the Office of the United Nations High Commissioner for Human Rights (OHCHR), the United Nations (UN) Working Group on Business and Human Rights and the United Nations Development Programme (UNDP). Other organisers included the African Union, the African Commission on Human and Peoples’ Rights, the United Nations Children’s Fund (UNICEF) and the UN Global Compact. It brought together more than 500 individuals from over 75 countries, 32 of them African. The event built on the achievements of the previous African ABHR Forums in Ghana (2022), Ethiopia (2023) and Kenya (2024).

Championing Internet Freedom and Universal Periodic Review (UPR) at #FIFAfrica2019

By Sandra Acheng |
Due to the rise in internet usage, there is an increasing rate of abuse, threats, and attacks on internet users, to which women usually fall prey. The internet can also be used for disruption and disinformation, which intensifies the need to defend against digital human rights violations, with a focus on freedom of expression, press freedom and digital rights. Human rights online are often forgotten in discussions of human rights violations because the concept is still quite new, and the reality that human rights offline translate online is rarely acknowledged. Many people face human rights violations online but are not even aware that their rights are being violated, or do not know where and to whom to report in case this happens.
The growing rate of human rights violations online is a result of more people getting online, and this greatly affects internet freedom. Uganda experienced government-ordered internet shutdowns during the 2016 elections, which affected users’ freedom of expression and digital rights. Also, the introduction of the Over The Top (OTT) tax by the Ugandan government in June 2018 has limited users from accessing and using the internet. The increased use of ICT has led to women facing abuse and violence more than ever. For instance, the non-consensual distribution of intimate images (NCII), commonly done by intimate partners and often referred to as “revenge porn”, is usually framed around the morality and decency of women; it violates their privacy and women’s rights, while criminalising and undermining women who exercise their right to sexual expression.
This year, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with Small Media, convened a two-day pre-event workshop on the UPR for human rights defenders on September 23 and 24, 2019, at the Ethiopian Skylight Hotel ahead of FIFAfrica2019. It was an intense, interactive and fun training that gathered over 30 male and female participants from different African countries who are doing incredible work to champion freedom of expression, press freedom, and human rights online, also known as digital rights, and to influence government and non-government actors in their own and other countries in defence of human rights. The two-day workshop tackled:

  • Introduction to the Universal Periodic Review Advocacy Assembly
  • Making an Impact with the Universal Periodic Review

Understanding the UPR
The human rights system that previously existed was criticised for being selective (not all states, not all rights), non-collaborative, and paralysed by political games. The UPR came as a response to that system: it is non-selective, collaborative and applies no double standards. The Universal Periodic Review (UPR) is a United Nations human rights mechanism that works towards a more just world for everyone, and because it requires every UN member state to be reviewed, it gives an entry point to CSOs, including on human rights online.
The UPR Cycle
The Universal Periodic Review of each state happens every four to five years, with the review process running over about six months. However, countries have limited time to make recommendations.
How can internet freedom be championed in the UPR?
Human rights review mechanisms such as the United Nations Universal Periodic Review provide a good opportunity to review human rights violations online in African countries. They can also be a good way of arguing that digital human rights are universal, and of holding African governments accountable for how they treat their citizens in this regard. Here are some ways internet freedom can be championed in the UPR:

  • Building the capacity of more human rights defenders and activists in African civil society organisations.
  • Encouraging collaboration at the national level by bringing in new people doing similar work during the periodic reviews of states, so as to raise awareness. Working as a group increases credibility, for instance where local NGOs collaborate with international bodies.
  • Increasing participation by involving different actors in CSO activism and human rights across African countries to lobby governments to submit recommendations.
  • Making the UPR more of an expert mechanism, since it is currently a political mechanism focused on states, which tend to consider only national priorities.
  • Increasing the time each state has to raise issues and make recommendations in Geneva; usually 50-100 states take the floor over 3-5 hours, each making 4-5 recommendations, which leaves each state only a few minutes to present them.
  • Gathering more activists and human rights defenders to convince African states to raise issues of policy and laws during the UPR and to take up the recommendations made.
  • Showcasing more research to make UPR recommendations more valuable.
  • Forming coalitions across different African countries so that groups get accurate and real-time information and participate in the UPR, which will build a sense of responsibility and commitment by different states to make submissions in time.

There are also relevant tips and rules for advocating on issues to be considered when preparing recommendations for a state under review:

  • Recommendations must be specific, mentioning particular laws, and should be easy to follow up on.
  • It is good if each recommendation focuses on only one issue.
  • It is better if recommendations are action-oriented.
  • Use human rights language, and avoid wording that makes recommendations impossible to implement.
  • Make recommendations stronger by backing them up with references from your country, for instance treaties or laws.
  • Back up your recommendations with positions of the African Commission or other international bodies.
  • It is useful to put yourself in the shoes of the person you are trying to interest in your issues, for instance regarding their limited time.
  • Know your country very well in terms of internet freedom.
  • Prepare a statement of UPR for two weeks in about 10 countries.
  • It is good to make the presentation of recommendations a dialogue.
  • It is always good to contact the person you are trying to convince beforehand, because they may not support your issues.
  • Note that not all diplomats are familiar with human rights online.
  • Pitch your idea.

There is a need to build capacity, encourage collaboration and increase participation in the Universal Periodic Review (UPR) process.

FIFAfrica19: The Sessions, The Lessons, and Takeaways

By Hilda Nyakwaka |
The Forum on Internet Freedom in Africa (FIFAfrica) was this year hosted in Addis Ababa between September 23 and 26. The event was considered monumental because, only a few months prior, Ethiopia had experienced internet shutdowns, and hosting the forum was a testament to the progressive strides the country was making in creating an open and accessible internet for its citizens.
The first two days of the week were dedicated to a training on the Universal Periodic Review (UPR) hosted by Small Media and Data4Change, with representatives from over 10 African countries. The UPR, an innovation of the UN Human Rights Council, is a periodic review of the human rights records of the 193 member states of the UN. The main purpose of this training was to explore how to champion digital rights at the coming UPR.
Over these two days participants went over different concepts such as the importance of making recommendations to countries through the UN in an effort to improve the digital space, how to build capacity for recommendation-making processes, how to increase participation in the UPR process and how to foster collaborations between participants from different countries.
Some common concerns that participants shared about previous UPR cycles included the focus of civil society organisations on report writing as opposed to lobbying, the inaccessibility of stakeholder reports to diplomats and other concerned citizens, and the lack of focus on digital rights and other human rights when making recommendations.
Some of the key takeaways were the tools built by Data4Change to guide interested parties in creating advocacy strategy plans. These tools help users not only to see what recommendations have been made to their countries of interest, but also to discover and fill the gaps in the recommendations made, and to identify which countries would be important to partner with. During this session, all participants were able to make and rate each other’s recommendations to their individual countries, with specific attention to digital rights.
There was also a general agreement to increase local stakeholder mobilisation and awareness workshops, to jointly document abuses of digital rights so as to have greater impact, and lastly to properly document all our work. At the end of the training, we were all awarded certificates of completion for attending the trainings and completing online courses pertaining to the UPR process.
The next two days marked the official start of the forum, which was open to everyone who had registered to attend. There were several panel sessions occurring simultaneously, focusing on different internet issues across Africa. The opening panel was attended by H.E. Dr. Getahun Mekuria, Ethiopia’s Minister of Innovation and Technology, who reiterated the government’s plans to liberalise the telecommunications industry in Ethiopia and to reduce cases of shutdowns in a move to increase and improve citizens’ access to the internet.
Sessions that I was particularly interested in narrowed their focus to how marginalised and targeted communities interact with technology, and some of the solutions they have adapted to their situations.
One such session was on the importance of African feminist movements in improving women’s voices in the digital space, moderated by Rosebell Kagumire, an award-winning blogger and Pan-African feminist. In this session, feminists from across the continent reiterated the ways in which offline patriarchal systems were replicated online against women. One point that resonated with most attendees was that women were still being policed and bullied out of many social apps such as Twitter, and that most women preferred to be invisible. Cases were brought up of how hypervisible women were facing a lot of challenges.
In addition, there was a celebratory moment when the panelists mentioned some of the movements that have succeeded in championing women’s rights both offline and online, some as big and visible as #TotalShutdownKE, which fought against femicide in Kenya, and the revolution in Sudan that had women at its forefront and gained support from across the world. A major gap identified in this conversation was the need to build more local movements and to make them inclusive of women from the LGBTQI community, whose online experiences remain largely unacknowledged despite being equally important.
Another key session was held on technology and disability, where persons living with disabilities spoke about how to use ICTs to reduce accessibility barriers. Some of the challenges the panelists cited included insufficient ICT training, the lack of accessibility features on even the most basic layouts of sites and apps, and the affordability of devices with accessibility features; one of the panelists noted that a mobile phone with accessibility features could cost $800, which most Africans, both abled and disabled, cannot comfortably afford.
An Ethiopian case study was used to explain just how crucial it is to provide holistic education and inclusive support to people living with disabilities within our communities. Some of the solutions the panelists offered to improve accessibility included proper policy implementation and strict punitive measures for those who fail to implement policies; being intentional by teaching accessibility features right from software development classes at the college level; and making funding conditional, requiring companies and organisations to provide opportunities for and hire people living with disabilities, and to provide ample accessibility features, before receiving full funding. There was also a call to create awareness campaigns to make citizens aware of the challenges that a large section of our communities experience and how to develop protection mechanisms for them.
In conclusion, FIFAfrica19 was a great opportunity to share experiences and solutions to the challenges we as Africans seem to be experiencing when it comes to digital rights and internet freedom. In case you weren’t able to join us in Addis for this edition of FIFAfrica, find here the agenda for the forum, some live tweets from the event, and the report on the State of Internet Freedom in Africa by CIPESA, launched at the forum.