By Juliet Nanfuka
Content moderation has emerged as a critical global concern in the digital age. In Africa, rising digital penetration and digitalisation, combined with problematic platform responses to harmful content and retrogressive national legislation, have made the demand for robust and rights-respecting content moderation more urgent than ever.
Online content platforms and governments are increasingly locked in a contentious struggle over how to address content moderation, especially as online hate speech and disinformation become more pervasive. These harms regularly spill into the offline world, undermining human rights, social cohesion, democracy, and peace. They have also corroded public discourse and fragmented societies, with marginalised communities often bearing some of the gravest consequences.
The Social Media 4 Peace (SM4P) project run by the United Nations Educational, Scientific and Cultural Organization (UNESCO) was established to strengthen the resilience of societies to potentially harmful content spread online, in particular hate speech inciting violence, while protecting freedom of expression and promoting peace through digital technologies, notably social media. The project was piloted in four countries – Bosnia and Herzegovina, Colombia, Indonesia, and Kenya.
At the Forum on Internet Freedom in Africa (FIFAfrica) held in Tanzania in September 2023, UNESCO hosted a session where panellists interrogated the role of a multi-stakeholder coalition in addressing content moderation gaps, with a focus on Kenya. The session highlighted the importance of multi-stakeholder cooperation, accountability models, and safety by design in addressing harmful online content, particularly disinformation and hate speech, in Africa.
In March 2023, as a product of the SM4P project, UNESCO, in partnership with the National Cohesion and Integration Commission (NCIC), launched the National Coalition on Freedom of Expression and Content Moderation in Kenya. The coalition, whose membership has grown to include various government, civil society and private sector entities, is a testament to the need for content moderation efforts grounded in broad multi-stakeholder collaboration.
The coalition thus offered a learning model for FIFAfrica participants – legislators, regulators, civil society activists and policy makers whose countries are grappling with establishing effective, rights-based content moderation mechanisms. The session explored good practices from the SM4P project in Kenya with a view to replicating the coalition model across African countries. Discussions largely centred on content moderation challenges in the region and opportunities for addressing them.
Online content moderation presents a new regulatory challenge for governments and technology companies. Striking a balance between safeguarding freedom of expression and curtailing harmful and illegal content is particularly difficult because such decisions have largely fallen within the remit of platforms, whose content moderation practices state actors have frequently criticised.
This has brought governments and platforms to loggerheads, as witnessed during the 2021 Uganda elections. Six days before the election, Meta blocked various accounts and removed content for what it termed “coordinated inauthentic behavior”. Most of the affected accounts promoted pro-ruling party narratives or had links to the ruling party. In response, the Ugandan government blocked social media and then access to the entire internet. Nigeria similarly suspended Twitter in June 2021 after the platform deleted a post from the president’s account.
Social media companies are taking various measures to curb such content, including using artificial intelligence tools, employing human content moderators, collaborating with fact-checking and trusted partner organisations that identify false and harmful content, and relying on user reports of harmful and illegal content. However, these measures are often criticised as inadequate.
On the other hand, some African governments have also been criticised for enacting laws and issuing directives that, under the guise of combating harmful content, undermine citizens’ fundamental rights.
Indeed, speaking at the 2023 edition of FIFAfrica, Felicia Anthonio, #KeepItOn Campaign Manager at Access Now, stated that some governments cite weaknesses in content moderation to justify shutting down the internet, presenting shutdowns as a tool to control harmful content amidst concerns over hate speech, disinformation, and incitement to violence. This was reaffirmed by Thobekile Matimbe of Paradigm Initiative, who noted that “content moderation is a delicate test of speech”, stressing that content moderation which is not balanced against freedom of expression and access to information results in violations of fundamental human rights.

Beyond governments and social media companies, other stakeholders need to step up efforts to combat harmful content and advocate for balanced content moderation policies. Speakers at the FIFAfrica session were unanimous that civil society organisations, academic institutions, and media regulators need to enhance digital media literacy and increase collaborative efforts. They further stressed the need for these stakeholders to regularly conduct research to underpin evidence-based advocacy and to support the development of human rights-centred strategies in content moderation.