How the WHO Digital Health Strategy Should Govern Data, AI and Digital Public Infrastructure

By Raylenne Kambua |

As the World Health Organization (WHO) develops the Global Digital Health Strategy for 2028–2033, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) has submitted recommendations urging that the strategy be anchored on human rights, equity, and accountability, alongside technological innovation. 

Across Africa, Artificial Intelligence (AI), telemedicine, disease surveillance systems, and automated diagnostic systems are transforming healthcare delivery. However, CIPESA pointed out in the submission to the WHO Regional Office for Africa that technological innovation without proper governance can worsen exclusion, undermine privacy protections, and reinforce inequalities in healthcare delivery.

The submission comes at a time when key global and regional digital health governance frameworks are being reshaped. Last year, the World Health Assembly extended the Global Strategy on Digital Health 2020–2025, and simultaneously mandated a successor framework to be completed in 2027. 

Furthermore, global initiatives such as the World Summit on the Information Society (WSIS) and the Global Digital Compact emphasise that digital transformation must integrate the Sustainable Development Goals and ensure inclusive development.

At the continental level, the Africa Centres for Disease Control and Prevention (Africa CDC) has rolled out the Africa CDC Digital Transformation Strategy, which, alongside the African Union (AU) Data Policy Framework, advances interoperability, transparency, privacy, and the ethical deployment of digital systems in the health sector. However, CIPESA notes that despite these commitments, implementation gaps remain significant, particularly regarding health data governance, accountability, and protection against algorithmic harm.

CIPESA’s work on health data governance in Uganda, on patient data privacy in Ghana, Rwanda, and Uganda, and its analysis of Kenya’s Digital Health Act all point to the same reality. The rules governing who controls health data, who is included in digital health systems, and who is held accountable when data is mishandled are still weak across most of the continent.

“As countries embrace AI, digital public infrastructure, and data-driven healthcare systems, the real test will be whether these technologies strengthen confidence in public health systems or deepen concerns about exclusion, surveillance, and the misuse of personal data,” said CIPESA Executive Director Dr. Wairagala Wakabi.

He added: “Trustworthy digital health systems require transparent digital infrastructure, accountable AI systems, and strong data protection safeguards. Africa has the chance to shape a digital health governance model that is innovative, inclusive, and based on the public interest and human dignity.”

CIPESA’s Core Positions and Recommendations

Digital health offers significant potential to enhance Universal Health Coverage and strengthen health systems across Africa. However, without governance anchored in rights, equity, inclusion, and accountability, this promise will remain unfulfilled. It is against this backdrop that CIPESA submitted the following recommendations:

1. Digital Public Infrastructure (DPI)

Digital health infrastructure should be open, interoperable, transparent, and rights-respecting, with safeguards to prevent exclusion and misuse of shared systems.

    2. Health Data Governance

    Most African countries lack specific laws that govern health data. Countries should therefore establish clear legal frameworks governing health data, including informed and meaningful patient consent, limits on data sharing, independent oversight mechanisms, and enforceable accountability structures.

    3. Artificial Intelligence (AI)

    CIPESA warns that most AI systems used in healthcare are trained on non-African datasets, which increases the risk of inaccurate diagnoses and exclusion. The submission recommends that AI tools and systems be tested and validated in Africa, that mandatory “explainability” standards be adopted so that health professionals understand how an AI system reaches its conclusions, and that safeguards against bias be built into clinical decision-making tools.

    4. Interoperability

    Many digital health tools are isolated across countries and institutions, meaning they cannot share data with each other. In this light, CIPESA recommends the adoption of national interoperability standards, including the WHO SMART Guidelines, to ensure secure and efficient health data exchange. All digital health vendors should also adhere to interoperability standards and utilise shared infrastructure.
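To illustrate what interoperability means in practice, the sketch below shows a minimal FHIR-style "Patient" record of the kind that standards such as HL7 FHIR (on which the WHO SMART Guidelines build) define, so that independent health systems can exchange and verify records. All names, identifiers, and the validation check are illustrative assumptions, not a real implementation.

```python
import json

# Illustrative only: a minimal FHIR-style Patient resource. Shared
# standards fix the field names and structure so any compliant system
# can parse records produced by another. (Patient details are invented.)
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Mwangi", "given": ["Amina"]}],
    "gender": "female",
    "birthDate": "1990-04-12",
}

def is_valid_patient(resource: dict) -> bool:
    """Tiny structural check, not a real FHIR validator: does the
    record carry the fields a receiving system would expect?"""
    return (
        resource.get("resourceType") == "Patient"
        and isinstance(resource.get("id"), str)
        and isinstance(resource.get("name"), list)
    )

serialized = json.dumps(patient)   # what travels between systems
received = json.loads(serialized)  # the receiving system parses it
print(is_valid_patient(received))  # True
```

Without an agreed structure like this, each vendor invents its own record format and the "isolated tools" problem described above follows.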

    5. Equity and Inclusion

    The digital divide continues to expand in most African countries and limits access to digital health services. CIPESA recommends conducting “equity impact assessments” before launching new systems, continued availability of offline options, and supporting digital literacy initiatives.

    6. Stronger Governance

    CIPESA holds that technology fails without clear leadership and coordination between health and technology departments. Therefore, creating clear governance structures for accountability and embracing a multi-stakeholder approach in decision-making processes are vital for resilient health systems. Other recommendations are the publication of institutional AI and digital health use policies and mandatory human rights impact assessments for high-risk systems.

    7. Sustainable Financing

    Many digital health initiatives rely on short-term donor funding, leaving countries dependent and unable to scale such programmes. Additionally, gaps in workforce capacity constrain implementation. CIPESA urges governments to invest in domestic financing of digital health systems and in the training of health and technical personnel.

    In conclusion, CIPESA’s submission emphasises that while digital technologies offer significant opportunities to strengthen health systems and improve service delivery, without strong safeguards, digital health risks reproducing and scaling existing inequalities in new, technologically mediated forms. A rights-based, inclusive, and accountable approach is therefore essential to ensure that Africa’s digital health future is not only innovative, but also equitable and just.

    Read the full submission here.

    Outpaced by Its Own Ambition: Can Kenya Bridge Its AI Regulation Gap?

    By Raylenne Kambua |

    The raw paradox at the heart of Kenya’s Artificial Intelligence (AI) moment is that the country is simultaneously sprinting ahead in AI adoption while grappling with a shrinking space for the very digital voices that AI empowers.

    According to the Digital Global Update Report, Kenya recorded the world’s highest usage rate of AI tools in 2025, with 42.1% of internet users aged 16 and above reporting active use of AI-powered technologies. This level of usage indicates that AI is increasingly being woven into the daily life of Kenyans.

    However, the Navigating the Implications of AI on Digital Democracy in Kenya report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) highlights that while AI empowers citizens, it also enables unprecedented surveillance and manipulation.

    A Nation Leading the Way in AI Adoption

    Kenya has made significant investments in digital services, innovation hubs, and connectivity under the National Digital Master Plan 2022–2032.

    These developments are also transforming how citizens interact with the government. Tools such as the Office of the Data Protection Commissioner’s Linda Data chatbot and platforms such as Sauti ya Bajeti have expanded access to rights information and budget tracking.

    Yet, even as AI delivered clear benefits, it also revealed its dual nature, most visibly during the 2024 #RejectFinanceBill protests, during which Gen Z protesters mobilised through AI-generated infographics, satire, and short-form videos. At the height of the protests on June 25, a nationwide internet disruption occurred despite assurances from the Communications Authority that it had no plans to switch off the internet. The disruption was confirmed by network monitors such as Cloudflare and NetBlocks, exposing the fragility of internet freedom in Kenya.

    Civil society condemned the internet shutdown as a violation of rights, while the telecoms Safaricom and Airtel attributed it to undersea cable outages. In the aftermath, reports of abductions and enforced disappearances of digital activists escalated, with the Kenya National Commission on Human Rights documenting at least 82 cases between June and December 2024.

    Kenya’s AI Policy Landscape

    The launch of the Kenya National AI Strategy 2025–2030 in March 2025 signalled the country’s ambition to position itself as Africa’s leading AI innovation hub. The strategy prioritises governance, ethics, investment, digital infrastructure, data ecosystem development, and support for AI research and innovation.

    Kenya has also strengthened its international profile through participation in programmes such as the United Nations High-Level Advisory Board on AI, joining the International Network of AI Safety Institutes, and assuming leadership in the World Summit on the Information Society (WSIS+20).

    At the national level, initiatives such as Digital Platforms Kenya (DigiKen) and the Kenya Bureau of Standards’ draft AI Code of Practice reflect growing momentum toward operationalising AI governance and skills building. The government is also developing an AI and Emerging Technologies Policy and a Data Governance Policy, both of which are expected to be in place by July 2026.

    However, the gap between ambition and readiness remains wide. Kenya ranks 93rd in the 2025 Government AI Readiness Index, due to persistent weaknesses in infrastructure, implementation, and institutional capacity.

    Moreover, Kenya’s legal framework for AI remains fragmented and incomplete. There is currently no standalone AI law in force, although a controversial Artificial Intelligence Bill, 2026, which has raised significant concerns about over-regulation and censorship, is under discussion. Additionally, regulation relies on broader laws such as the Data Protection Act, 2019 and the Computer Misuse and Cybercrimes Act, 2018, which were not designed to address AI-specific risks such as deepfakes, automated decision-making, algorithmic discrimination, or synthetic disinformation.

    As highlighted in the CIPESA report, critical gaps remain in the use of AI. These include the absence of mandatory algorithmic impact assessments, weak safeguards against AI-driven surveillance such as facial recognition, and scant measures to address AI-generated electoral misinformation. Furthermore, regulatory authorities lack sufficient capabilities to audit and monitor sophisticated AI systems, and there are no clear licensing or accountability frameworks for AI creators and deployers.

    “Without deliberate, inclusive, and rights-centred governance, AI risks entrenching authoritarianism and exacerbating inequalities.” (Navigating the Implications of AI on Digital Democracy in Kenya, 2025)

    The Way Ahead: AI Governance Focused on Human Rights

    The CIPESA report outlines a human rights–centred approach to AI governance that is built on the following key principles:

    1. Life-Centred and Human-Centred Design and Accountability: AI should support and not replace human judgment, with strong oversight to ensure transparency and accountability.
    2. Equity and Fairness: Design AI to prevent bias and expand inclusive access, especially for underrepresented groups.
    3. Transparency and Trust: Ensure AI systems are explainable, well-documented, and open to public scrutiny and challenge.
    4. Safety, Security and Resilience: Build resilient systems with ongoing risk assessments and strong protections against misuse.
    5. International Collaboration and Ethical AI Development: Advance ethical AI through international collaboration while upholding constitutional values and human oversight.
    6. Environmental sustainability: Align AI development with climate resilience and sustainable resource use.
    7. Inclusive Participation and Cultural Relevance: Reflect local diversity and involve marginalised communities in AI design.
    8. Robust Governance and Adaptive Regulation: Maintain flexible, responsive regulation that keeps pace with technological change.

    The report calls for a coordinated, multi-stakeholder approach to AI governance. It recommends that:

    • The government should enact a comprehensive AI law aligned with constitutional and international human rights standards, and establish a legally mandated National AI Advisory Council with inclusive representation and strong enforcement powers. It should also introduce clear prohibitions on high-risk practices such as real-time biometric surveillance without judicial oversight.
    • Civil society and the media should strengthen public awareness, promote accountability, and counter AI-driven disinformation.
    • Private sector actors should uphold transparency, fairness, and ethical standards across AI systems, including fair labour practices. Labour protections must be guaranteed for gig workers and data annotators within the AI value chain.
    • Academia and research institutions should continue generating evidence that can guide context-specific policy and regulation.
    • Across all stakeholders, digital literacy must be expanded, especially in underserved and rural communities, so that citizens can understand and challenge AI systems that affect them.

    With the ongoing legislative processes on AI, this is a pivotal moment for Kenya: it has the momentum and the attention of the world. But momentum without action will not be enough. The country cannot afford slow, fragmented debates while the technology progresses rapidly. At the same time, Kenya must strike a careful balance between regulation and innovation, as overly restrictive rules could limit access, slow local innovation, and lock the country out of AI’s economic and social benefits. The goal should be a flexible, forward-looking framework that protects rights while still enabling growth and opportunity.

    Read the full report, Navigating the Implications of AI on Digital Democracy in Kenya.

    Democratising Big Tech: Lessons from South Africa’s 2024 Election

    By Jean-Andre Deenik | ADRF

    South Africa’s seventh democratic elections in May 2024 marked a critical turning point — not just in the political sphere, but in the digital one too. For the first time in our democracy’s history, the information space surrounding an election was shaped more by algorithms, platforms, and private tech corporations than by public broadcasters or community mobilisation.

    We have entered an era where the ballot box is not the only battleground for democracy. The online world — fast-moving, largely unregulated, and increasingly dominated by profit-driven platforms — has become central to how citizens access information, express themselves, and participate politically.

    At the Legal Resources Centre (LRC), we knew we could not stand by as these forces influenced the lives, choices, and rights of South Africans — particularly those already navigating inequality and exclusion. Between May 2024 and April 2025, with support from the African Digital Rights Fund (ADRF), we implemented the Democratising Big Tech project: an ambitious effort to expose the harms of unregulated digital platforms during elections and advocate for transparency, accountability, and justice in the digital age.

    Why This Work Mattered

    The stakes were high. In the run-up to the elections, political content flooded platforms like Facebook, YouTube, TikTok, and X (formerly Twitter). Some of it was civic-minded and constructive — but much of it was misleading, inflammatory, and harmful.

    Our concern wasn’t theoretical. We had already seen how digital platforms contributed to offline violence during the July 2021 unrest, and how coordinated disinformation campaigns were used to sow fear and confusion. Communities already marginalised — migrants, sexual minorities, women — bore the brunt of online abuse and harassment.

    South Africa’s Constitution guarantees freedom of expression, dignity, and access to information. Yet these rights are being routinely undermined by algorithmic systems and opaque moderation policies, most of which are designed and governed far beyond our borders. Our project set out to change that.

    Centering People: A Public Education Campaign

    The project was rooted in a simple truth: rights mean little if people don’t know they have them — or don’t know when they’re being violated. One of our first goals was to build public awareness around digital harms and the broader human rights implications of tech platforms during the elections.

    We launched Legal Resources Radio, a podcast series designed to unpack the real-world impact of technologies like political microtargeting, surveillance, and facial recognition. Our guests — journalists, legal experts, academics, and activists — helped translate technical concepts into grounded, urgent conversations.


    Holding Big Tech to Account

    A cornerstone of the project was our collaboration with Global Witness, Mozilla, and the Centre for Intellectual Property and Information Technology Law (CIPIT). Together, we set out to test whether major tech companies (TikTok, YouTube, Facebook, and X) were prepared to protect the integrity of South Africa’s 2024 elections. To do this, we designed and submitted controlled test advertisements that mimicked real-world harmful narratives, including xenophobia, gender-based disinformation, and incitement to violence. These ads were submitted in multiple South African languages to assess whether the platforms’ content moderation systems, both automated and human, could detect and block them. The findings revealed critical gaps in platform preparedness and informed both advocacy and public awareness efforts ahead of the elections.
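One way to make the methodology concrete is to record each test-ad submission as a structured observation and tabulate approvals per platform. The sketch below is a hypothetical illustration of that bookkeeping, not the project's actual tooling, and the sample records are invented, not the study's data.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical sketch: one way to log the outcome of controlled
# test-ad submissions across platforms and languages so that
# moderation gaps can be tabulated afterwards.
@dataclass(frozen=True)
class TestAd:
    platform: str
    language: str
    category: str   # e.g. "xenophobia", "electoral-disinfo"
    approved: bool  # True = the harmful test ad got through moderation

# Invented example records (illustrative only).
results = [
    TestAd("TikTok", "isiZulu", "electoral-disinfo", approved=True),
    TestAd("YouTube", "Afrikaans", "electoral-disinfo", approved=True),
    TestAd("Facebook", "English", "xenophobia", approved=False),
]

# Count, per platform, how many harmful test ads were wrongly approved.
failures = Counter(ad.platform for ad in results if ad.approved)
print(dict(failures))  # {'TikTok': 1, 'YouTube': 1}
```

Structuring submissions this way lets the same harmful narrative be tested across every platform-language pair and the failure counts compared directly.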

    The results were alarming.

    • Simulated ads with xenophobic content were approved in multiple South African languages;
    • Gender-based harassment ads directed at women journalists were not removed;
    • False information about voting — including the wrong election date and processes — was accepted by TikTok and YouTube.

    These findings confirmed what many civil society organisations have long argued: that Big Tech neglects the Global South, failing to invest in local language moderation, culturally relevant policies, or meaningful community engagement. These failures are not just technical oversights. They endanger lives, and they undermine the legitimacy of our democratic processes.

    Building an Evidence Base for Reform

    Beyond exposing platform failures, we also produced a shadow human rights impact assessment. This report examined how misinformation, hate speech, and algorithmic discrimination disproportionately affect marginalised communities. It documented how online disinformation isn’t simply digital noise — it often translates into real-world harm, from lost trust in electoral systems to threats of violence and intimidation.

    We scrutinised South Africa’s legal and policy frameworks and found them severely lacking. Despite the importance of online information ecosystems, there are no clear laws regulating how tech companies should act in our context. Our report recommends:

    • Legal obligations for platforms to publish election transparency reports;
    • Stronger data protection and algorithmic transparency;
    • Content moderation strategies inclusive of all South African languages and communities;
    • Independent oversight mechanisms and civil society input.

    This work is part of a longer-term vision: to ensure that South Africa’s digital future is rights-based, inclusive, and democratic.

    Continental Solidarity

    In April 2025, we took this work to Lusaka, Zambia, where we presented at the Digital Rights and Inclusion Forum (DRIF) 2025. We shared lessons from South Africa and connected with allies across the continent who are also working to make technology accountable to the people it impacts.

    What became clear is that while platforms may ignore us individually, there is power in regional solidarity. From Kenya to Nigeria, Senegal to Zambia, African civil society is uniting around a shared demand: that digital technology must serve the public good — not profit at the cost of people’s rights.

    What Comes Next?

    South Africa’s 2024 elections have come and gone. But the challenges we exposed remain. The online harms we documented did not begin with the elections, and they will not end with them.

    That’s why we see the Democratising Big Tech project not as a one-off intervention, but as the beginning of a sustained push for digital justice. We will continue to build coalitions, push for regulatory reform, and educate the public. We will work with journalists, technologists, and communities to resist surveillance, expose disinformation, and uphold our rights online.

    Because the fight for democracy doesn’t end at the polls. It must also be fought — and won — in the digital spaces where power is increasingly wielded, often without scrutiny or consequence.

    Final Reflections

    At the LRC, we do not believe in technology for technology’s sake. We believe in justice — and that means challenging any system, digital or otherwise, that puts people at risk or threatens their rights. Through this project, we’ve seen what’s possible when civil society speaks with clarity, courage, and conviction.

    The algorithms may be powerful. But our Constitution, our communities, and our collective will are stronger.

    Kenyan Journalists Trained on Digital Rights and Addressing Online Harms

    By Lyndcey Oriko |

    The National Cohesion and Integration Commission (NCIC) and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) have trained 60 Kenyan journalists on addressing digital harms such as hate speech and disinformation.

    The training in Naivasha in June 2024 targeted journalists and media workers based in Nakuru County, which the Commission has identified as a conflict hotspot. The journalists were equipped with the knowledge and skills to navigate the complexities of reporting on digital rights and online harms in a more professional and ethical way, particularly during sensitive periods such as conflicts and protests.

    The training happened at a time when Kenya was experiencing protests and demonstrations dubbed #RejectTheFinanceBill2024. The protests saw significant mobilisation and engagement on social media platforms, predominantly TikTok and X. The country had also experienced internet throttling, despite assurances by the communications regulator that it had no plans to switch off the internet and despite calls by civil society actors for the government not to interrupt internet services.

    In his opening remarks, NCIC’s Commissioner, Dr. Danvas Makori, underscored the critical role journalists play in mitigating hate speech and fostering peace, particularly during sensitive periods such as conflicts and protests. He highlighted the importance of ethical reporting, particularly in the face of rising disinformation and online hate speech.

    Dr. Wairagala Wakabi from CIPESA discussed the challenges to internet freedom, including increased censorship and harassment of journalists and independent content creators. He challenged participants to engage in research to inform their reporting and to leverage the emerging technologies to always verify and fact-check as a way of combating disinformation and online hate speech.

    The workshops included in-depth sessions on balancing freedom of expression, which is guaranteed by Article 33 of the Constitution of Kenya 2010, with necessary limitations, such as those aimed at combating hate speech, as stipulated in the National Cohesion and Integration (NCI) Act, 2008. The training emphasised the importance of protecting both offline and online rights, and the journalists were reminded of their responsibility to uphold rights and freedoms while avoiding content that could harm others.

    Making reference to the #RejectTheFinanceBill2024 protests, the discussions also tackled various forms of online harm, emphasising the importance of civic education, policy enforcement, and ethical reporting.

    For his part, Kyalo Mwengi, the Director of Legal Services at the NCIC, emphasised the fundamental role of journalists in fostering peace. The training was essential to equip journalists with the skills to verify information, understand the nuances of conflict-sensitive reporting, and use social media effectively to promote cohesion rather than division, ensuring that the public receives reliable and truthful information.

    Liban Guyo, Director of Peace Building and Reconciliation at the Commission, highlighted the importance of contextualising stories, especially those about conflicts. He said the media can escalate or de-escalate a conflict through their reporting, underscoring the need for conflict-sensitive reporting.

    Mwengi also presented some of the Commission’s recommendations to the Parliamentary Cohesion and Equality Committee, which is considering amendments to the 2008 Act through the National Cohesion and Integration Bill, 2023. He noted that because the NCI Act was enacted prior to the passage of the 2010 Constitution, it lacks constitutional anchoring, which affects the Commission’s performance and effectiveness. Accordingly, the Commission was proposing that the NCIC be anchored within the Constitution, like other constitutional commissions, with clear funding mechanisms and guaranteed independence. In addition, the amendments should take account of the prevailing digital landscape in order to craft robust online hate speech regulations.

    In her remarks, Lucy Mwangi from the Media Council of Kenya (MCK) urged journalists to apply the training’s teachings daily, emphasising ethical standards and the promotion of peace and accuracy, both online and offline. She stressed the importance of journalists being registered and carrying press cards to uphold professional integrity and to help ensure their personal safety.

    Some of the key issues raised by the participants included the high cost of verifying information, low digital literacy, lack of awareness of conflict-sensitive reporting, and the reactive approach of social media platforms to hate speech and misinformation, which allows harmful content to spread quickly.

    The workshop not only provided valuable insights into the responsibilities of journalists in the digital age but also fostered a collaborative spirit among media professionals to address the challenges posed by online harms. Given the recent protests against proposed tax hikes in Kenya, the timing of the training was particularly apt, underscoring the need for responsible reporting amid heightened social tensions. Overall, the initiative represents a proactive step towards promoting ethical journalism and safeguarding digital rights in Kenya.

    Kenya’s 2022 Political Sphere Overwhelmed by Disinformation

    Ahead of the August 9, 2022 general elections, Kenya has been hit by a deluge of disinformation, which is fanning hate speech and threatening electoral integrity, and which is expected to persist well beyond the polls. Last month, the Kenya ICT Action Network (KICTANet) and CIPESA convened stakeholders in Nairobi to disseminate the findings of research on the nature, pathways, and effects of disinformation in the lead-up to the election, and on the actions required to combat it. Below is a summary of the report findings and takeaways from the dissemination event, as captured by KICTANet:

    False and misleading information has been circulating widely across the country, and this has been happening for a while. During the Kenya Internet Governance Forum (IGF) week, the Kenya ICT Action Network (KICTANet), in partnership with the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), held a workshop to disseminate a report on Disinformation in Kenya’s Political Sphere: Actors, Pathways and Effects. The research is part of a regional study conducted by CIPESA that explores the nature, perpetrators, and effects of misinformation in Cameroon, Ethiopia, Uganda, Nigeria, and Kenya.

    As Kenya nears the 2022 general elections, disinformation remains at its peak levels, both at grassroots and national levels. The availability of sophisticated technology and its ease of use has enabled a wide range of political actors to act as originators and spreaders of disinformation.

    Currently, there is no law that clearly defines or distinguishes between misinformation and disinformation. However, it is an offence to deliberately create and spread false or misleading information in the country: false publications and the publication of false information are punishable under Sections 22 and 23 of the Computer Misuse and Cybercrimes Act. It is a crime to relay false information with the intent that it be viewed as true, with or without monetary gain. However, these same laws can also be used to silence dissent, making them a double-edged sword.

    The study identifies different forms of disinformation, which take place both physically and online. They include deepfakes, text messages, WhatsApp messages, and physical copies such as pamphlets and fliers. These are spread through keyboard armies on social media, where politicians, down to the grassroots level, hire influencers and content creators to spread messages promoting them or attacking their opponents. This is done through mass brigading and the manipulation of documents and content. The practice is driven by the desire to get ahead politically or economically and is fuelled by an ecosystem that is fertile for the spread of this vice.

    According to Safaricom, in 2017 half of its communications department’s time was spent monitoring fraud and fake information. The instigators of this disinformation are influencers, politicians themselves, the people they work with, and their parties.

    Disinformation follows a clear flow before it reaches the audience: it does not start with the pictures but with a plan that is part of a bigger political strategy. It starts with identifying the target audience and choosing the personnel to push the message, after which the narrative is developed. This is followed by content development, which includes videos, pictures or memes, and audio files. The content is then strategically released to the unknowing public, who, without critically analysing the information, spread it far and wide to an even larger audience. The result is diminished trust in democratic and political institutions and restricted access to reliable and diverse information.

    This can be addressed through proactive, rather than merely reactive, government engagement on social media: the government needs to be an active contributor of accurate information, because disinformation thrives and rumours spread where official responses are lacking. Civil society should also engage with policymakers and media representatives on enhancing digital literacy and fact-checking skills. Intermediaries should increase transparency and accountability in content moderation measures and conduct periodic cross-sectoral policy reviews.

    Key Takeaways

    1. The weakest link in disinformation is the citizen, and therefore, one of the most effective ways to tackle the issue is to empower the citizenry to be able to detect and respond wisely to misinformation. If the general public is not informed, it is a lost battle.
    2. There is a thin line between misinformation and mal-information, and it can easily be blurred.
    3. The Computer Misuse and Cybercrimes Act 2018 is a double-edged sword: it censors even as it seeks some accountability from the general public with regard to the spread of misinformation.
    4. Safaricom reported that during the 2017 election, 50% of its time was spent monitoring fraudulent interactions.