Ahead of the August 9, 2022, general elections, Kenya has been hit by a deluge of disinformation, which is fanning hate speech, threatening electoral integrity, and is expected to persist well beyond the polls. Last month, the Kenya ICT Action Network (KICTANet) and CIPESA convened stakeholders in Nairobi to disseminate the findings of research on the nature, pathways, and effects of disinformation in the lead-up to the election, and the actions required to combat disinformation. Below is a summary of the report findings and takeaways from the dissemination event, as captured by KICTANet:
False and misleading information has been circulating widely across the country, and this has been happening for some time. During the Kenya Internet Governance Forum (IGF) week, the Kenya ICT Action Network (KICTANet), in partnership with the Collaboration on International ICT Policy in East and Southern Africa (CIPESA), held a workshop to disseminate a report on Disinformation in Kenya’s Political Sphere: Actors, Pathways and Effects. The research is part of a regional study conducted by CIPESA that explores the nature, perpetrators, and effects of misinformation in Cameroon, Ethiopia, Uganda, Nigeria, and Kenya.
As Kenya nears the 2022 general elections, disinformation is at its peak, both at the grassroots and national levels. The availability of sophisticated, easy-to-use technology has enabled a wide range of political actors to act as originators and spreaders of disinformation.
Currently, there is no law that clearly defines or distinguishes between misinformation and disinformation. However, it is an offense to deliberately create and spread false or misleading information in the country. False publications and the publication of false information are punishable under Sections 22 and 23 of the Computer Misuse and Cybercrimes Act. It is a crime to relay false information with the intent that it be viewed as true, with or without monetary gain. However, these same provisions can also be used to silence dissent, making them a double-edged sword.
The study identifies different forms of disinformation that circulate both physically and online. They include deepfakes, text messages, WhatsApp messages, and physical materials such as pamphlets and fliers. These are spread through keyboard armies on social media, where politicians down to the grassroots level hire influencers and content creators to spread messages promoting themselves or attacking their opponents. This is done through mass brigading and the manipulation of documents and content. The practice is driven by the desire to get ahead politically or economically and is fuelled by an ecosystem that is fertile for the spread of this vice.
According to Safaricom, in 2017, 50% of its communications department's time was spent monitoring fraud and fake information. The instigators of this disinformation are influencers, politicians themselves, the people they work with, and their parties.
There is a flow to how fake news reaches its audience: disinformation does not start with the content but with a plan that is part of a bigger political strategy. It begins with identifying the target audience and choosing the personnel to push the message, after which the narrative is developed. This is followed by content development, which includes videos, pictures or memes, and audio files. Once this is done, the content is strategically released to an unsuspecting public, who, without critically analyzing the information, spread it far and wide to an even broader audience. The result is diminished trust in democratic and political institutions and restricted access to reliable and diverse information.
This can be addressed through increased government engagement on social media, rather than purely reactive communication; the government needs to be an active contributor of accurate information, because disinformation thrives, and rumors spread, wherever there is a lack of official response. Civil society should also engage with policymakers and media representatives on enhancing digital literacy and fact-checking skills. Intermediaries should increase transparency and accountability in their content moderation measures and conduct periodic cross-sectoral policy reviews.
Key Takeaways
- The weakest link in disinformation is the citizen; therefore, one of the most effective ways to tackle the issue is to empower the citizenry to detect and respond wisely to misinformation. If the general public is not informed, the battle is lost.
- The line between misinformation and mal-information is thin and easily blurred.
- The Computer Misuse and Cybercrimes Act, 2018 is a double-edged sword: it can be used to censor, yet it also seeks to hold the general public accountable for spreading misinformation.
- Safaricom reported that during the 2017 election, 50% of its time was spent monitoring fraudulent interactions.