By Reyhana Masters
The phrase “misinformation crisis” used to evoke images of shadowy troll farms and bot networks manipulating elections from afar. Today, the crisis is far closer to home – in WhatsApp groups, TikTok reels, and “breaking news” alerts that collapse under scrutiny. The more urgent question is no longer whether Africa faces a polluted information ecosystem but how the continent responds to it.
A February 2026 regional engagement convened by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) gathered members of the judiciary, data protection authorities, communications regulators, law enforcement officers and National Human Rights Institutions (NHRIs) to examine the scale and impact of digital harms.
CIPESA’s Victor Kapiyo set the tone with a reminder that disinformation is not simply about false content; it is about power, intent, amplification, and impact. Discussions focused on responses that separate genuine harm from protected expression.
Disinformation has become sophisticated and professionalised, often backed by political or commercial interests with the resources to manipulate narratives at scale. It moves across borders, shielded by opaque algorithms and corporate structures that complicate national oversight.
Nigeria’s elections illustrate this phenomenon, with political contestation unfolding not only at rallies and ballot boxes, but across encrypted messaging platforms, influencer networks and algorithm-driven feeds.
Fabricated audio recordings, doctored endorsements, and deepfake videos circulated widely. One false claim suggested that President Donald Trump would intervene in Nigeria’s election – a fabrication designed to exploit geopolitical anxieties as well as domestic political and religious tensions.
What makes the Nigerian case instructive is not only the scale of falsehoods, but the architecture behind them. Influencers are reportedly paid significant sums to seed and normalise partisan narratives. Political actors assemble coordinated digital teams to produce, test and amplify content across multiple platforms simultaneously.
“Elections and armed conflicts are key drivers of disinformation. Governments have used both disinformation and the response to it to entrench themselves in power, shrink civic space, and target opponents and critics.” Source: Disinformation Pathways and Effects: Case Studies from Five African Countries.
Even trained journalists, facing financial strain in struggling media markets, are sometimes recruited into propaganda networks that blur the line between professional reporting and political messaging. Moreover, some foreign state actors invest in narrative campaigns to advance their geopolitical interests, viewing African electoral environments as arenas for strategic influence.
A Wider Continental Pattern
Across Africa, disinformation thrives at the intersection of several reinforcing vulnerabilities: intense political competition, widening economic inequality, weak and underfunded media ecosystems, gaps in platform governance, low levels of media literacy and the growing entanglement of foreign geopolitical interests in domestic affairs.
In many contexts, independent newsrooms struggle financially, leaving audiences vulnerable to cheaper, sensationalist content engineered for virality. Regulatory frameworks are often outdated or overly broad, oscillating between under-enforcement and heavy-handed crackdowns that conflate criticism with criminality.
Meanwhile, global technology platforms operate across borders with inconsistent content moderation standards, creating jurisdictional grey zones that undermine accountability.
Beyond Criminalisation
Experience from across the continent suggests that criminalising individual users for “false information” is a blunt and frequently counter-productive response. Without clear legal definitions, disinformation laws can be weaponised against journalists, opposition figures and ordinary citizens exercising legitimate expression.
Indeed, this has been witnessed in countries such as Kenya and Uganda, where laws on “false news” or “computer misuse” have been invoked to arrest and prosecute individuals over what appears to be protected speech.
Effective responses to disinformation require a more layered approach. Clear and precise legal definitions are essential to distinguish between harmful coordinated manipulation and protected speech. Safeguards must be embedded to prevent abuse of disinformation laws for political ends. Platform accountability mechanisms need strengthening, particularly around transparency in political advertising, algorithmic amplification, and coordinated inauthentic behaviour.
Equally critical is sustained investment in media literacy so that citizens are better equipped to interrogate sources and narratives. Independent journalism must be protected and financially supported as a public good. Oversight of coordinated political digital campaigns – including disclosure of funding sources and sponsorship structures – is necessary to illuminate the financial and logistical structures behind viral content.
Following the Money
Focusing on individual users, such as those who forward or share content, misses the deeper architecture of harm. Without tracing and addressing the networks that design, fund and amplify these campaigns, regulatory responses risk treating symptoms rather than causes.
Participants were urged to draw careful distinctions between misinformation (false information shared without harmful intent), disinformation (deliberate deception), and malinformation (genuine information used to cause harm). Yet these distinctions are often blurred in law. As Kapiyo explained, “when legislation uses vague terms like ‘false news’, ‘annoying’, or ‘offensive’, it creates a net so wide that legitimate criticism can be trapped within it.”
Across several African countries, disinformation laws have been invoked not to dismantle coordinated fraud networks, but to prosecute critics, journalists and opposition voices. Such intervention in digital spaces tends to occur precisely when governments feel their political legitimacy is threatened, when electoral narratives are challenged, or when protest movements emerge.
However, the same urgency is not always visible when harmful misinformation spreads socially, when children are exposed to abuse content, or when online fraud syndicates operate at scale.
Several participants observed that enforcement patterns often mirror political anxieties rather than objective harm assessments. “We must ask ourselves,” one judicial officer reflected during the discussions, “are we responding to harm, or are we responding to discomfort?”
Another participant from an NHRI cautioned that credibility is eroded when states appear animated only by speech that threatens authority. “If citizens see that the law moves fastest against critics but slowest against fraudsters and child exploitation networks, trust collapses,” she noted. “And once trust collapses, regulation itself becomes suspect.”
Kapiyo urged the room to think beyond reactionary fixes and toward structural reform: “Digital harms are real, but so are constitutional protections. The challenge is not choosing one over the other; the solution lies in designing responses that respect both.”
This tension between legitimate regulation and opportunistic control formed a key undercurrent throughout the engagement. Participants repeatedly returned to the same conclusion: a polluted ecosystem cannot be cleaned with contaminated tools. If the response lacks proportionality, clarity and fairness, it risks becoming part of the problem it seeks to solve.
Participants agreed that responses must balance addressing harm with protecting constitutional rights. The test of legality, legitimacy and proportionality remains essential: if a restriction fails one, it fails entirely.
From Discussion to Duty
As the engagement drew toward its close, the conversation shifted from diagnosis to responsibility. Who, precisely, must act and how?
For legislators, the recommendation was unequivocal: draft narrowly tailored laws grounded in clear definitions. Avoid vague formulations such as “false news” that collapse complex categories into blunt offences. Embed explicit safeguards against abuse, including independent oversight and sunset clauses that require periodic review.
For the judiciary, the charge was equally clear: rigorously interrogate executive claims of harm. Apply constitutional proportionality tests consistently. Insist on evidence of coordinated manipulation rather than speculative assertions of public disorder. Judicial independence, several participants noted, is the difference between regulation and repression.
Communications regulators and data protection authorities were urged to strengthen transparency requirements for political advertising and algorithmic amplification. “If money is shaping narratives,” one regulator observed, “then disclosure must follow the money.” Cross-border cooperation will be essential, particularly where coordinated campaigns operate across jurisdictions.
Law enforcement agencies were encouraged to prioritise organised fraud networks, child exploitation rings and coordinated digital criminal enterprises – areas where harm is demonstrable and urgent – rather than focusing disproportionate energy on individual expression. Capacity-building in digital forensics and evidence preservation was identified as critical.
And for civil society and media institutions, the focus is on resilience: invest in investigative capacity to expose coordinated campaigns, strengthen fact-checking networks, and expand media literacy initiatives so that citizens can interrogate viral narratives without defaulting to cynicism.