Smell The Coffee Kenya, Disinformation Is Brewing!

By Juliet Nanfuka |

Just over a year ago, in August 2022, Kenya held a bitterly contested general election. The electoral period was characterised by hate speech and disinformation, which remain prevalent today. Indeed, recent studies have highlighted a booming disinformation industry in the country, fuelled by political, economic and personal interests, with many actors, including politicians, content creators and citizens, churning out hate speech and disinformation on social media platforms.

During the election period, disinformation and hate speech circulated widely as social media personalities and ordinary citizens on various sides of the political divide coordinated and shared inflammatory and hateful content. Influencers with large followings are often bankrolled by politicians to recruit and coordinate micro-influencers, develop common disinformation and hate narratives, and push hashtags that frequently trend on social media. Further, social media trolls use Facebook posts, tweets and WhatsApp groups to direct hate at ethnic communities such as the Kalenjin, Kikuyu and Luo, the communities of current president William Ruto, former president Uhuru Kenyatta and former Prime Minister Raila Odinga, respectively.

Amidst the election-related disinformation blitz, social media platforms seemed to do too little or nothing at all to stop the spread of harmful and illegal content. An investigation by Global Witness and Foxglove in June 2022 showed that Facebook failed to detect inflammatory and violent hate speech ads posted on its platforms in Swahili and English. Further, the investigation found that even after Meta put out a statement in July 2022 on its efforts to combat harmful content, including the removal of 37,000 pieces of Kenyan content from Facebook and Instagram for violating its hate speech policies, similar hate speech ads were later approved on its platforms.

Likewise, in July 2022 local actors accused Twitter of profiting from its negligence by allowing its trending topics section to be exploited by paid influencers who amplified malicious, coordinated and inauthentic attacks designed to silence Kenya’s civil society, muddy its reputation and stifle the reach of its messaging. In September 2021, Twitter suspended 100 accounts from Kenya for violating the platform’s manipulation and spam policy after they were found to have been tweeting pre-determined hashtags meant to misinform the public and attack certain personalities. In June 2022, the company suspended 41 accounts promoting the hashtag #ChebukatiCannotBeTrusted, which suggested that the then Chairperson of the Independent Electoral and Boundaries Commission (IEBC) was supporting one of the presidential candidates.

TikTok, which has gained popularity among younger audiences, has also come under scrutiny after disinformation and hateful content was found on its platform ahead of the August 2022 election. A study by Mozilla found 132 videos, viewed collectively over four million times, that were spreading hate speech and inciting violence against some ethnic communities. Some featured synthetic and manipulated content couched as Netflix documentaries, news stories, fake opinion polls or fake tweets aimed at triggering fear and violence of the kind witnessed during the 2007 post-election period. According to the report, TikTok suffered from context bias and its content moderation practices were not robust enough to tackle the spread of such content on its platform. TikTok has since removed the videos highlighted in the report.

According to Kenya’s hate speech watchdog, the National Cohesion and Integration Commission (NCIC), hate speech content is most prevalent on Facebook and Twitter. In July 2022, the NCIC ordered Meta to address hate speech and incitement on its Facebook platform within a week or face a suspension of its operations in the country. In August 2022, the Commission also found an increase in hate content on TikTok. Some of the hate and disinformation hashtags it identified on the various platforms included #RejectRailaOdinga, #Riggagy and #RutoMalizaUfungwe, which propagated falsehoods against candidates in the presidential election.

Some critics have argued that social media platforms have shown a consistent failure to adequately moderate content in Kenya. Furthermore, the platforms’ attempts at content moderation are implemented through lacklustre, under-funded and opaque systems that are neither participatory nor rights-respecting. Other studies have also shown that platforms continue to enforce their own rules inconsistently through flawed content moderation practices, in essence permitting the spread of extreme, divisive and polarising content, partly due to their lack of understanding of Kenya’s cultural context, local languages and slang.

The government’s attempts at legislating on disinformation and hate speech have not been without setbacks. The Computer Misuse and Cybercrimes Act, 2018 imposed punitive fines and prison terms for the publication of false information and of false, misleading and fictitious data. Unfortunately, these provisions have been unjustly used to target bloggers for exposing corruption or seeking state accountability.

A case filed by the Bloggers Association of Kenya challenging the constitutionality of the law remains pending on appeal, following a February 2020 High Court decision that allowed the law’s enforcement. Meanwhile, section 13 of the National Cohesion and Integration Act constricts the definition of hate speech to “ethnic hatred” and fails to capture the constitutional limitations under Article 33, which include propaganda for war, incitement to violence, hate speech, and advocacy of hatred. This means various forms of hate speech remain lawful in the absence of a clear criminal prohibition.

Moreover, the NCIC, which was formed following the 2007 post-election violence, has been plagued by numerous challenges in its attempts to fight hate and promote peace and national cohesion. For most of its existence, the commission has been underfunded, hindering its ability to monitor hate speech online, investigate incidents and conduct public awareness and engagement activities. Further, political interference with its work means it has been unable to enforce the law and secure convictions of offenders, who are mostly members of the political elite.

More importantly, successive government administrations have failed to implement the recommendations of the Truth, Justice and Reconciliation Commission (TJRC) report to address the drivers of hate and disinformation. The report identified those drivers as Kenya’s historical inter-ethnic tensions, which are systemic and deep-rooted in the country’s social, cultural, religious, economic and political fabric. Disinformation and hate speech in Kenya thrive on these unresolved historical tensions around political ideology, ethnicity, economics and demography.

Today, a majority of social media users in Kenya are aware of, and many actively fuel, hate speech and disinformation on these platforms. To some, it is all fun and games, as they assume no feelings get hurt. To many others, however, disinformation triggers pain, fear, tension and hate. Last year, a local politician advised Kenyans to put matters of politics in their lungs, not their hearts. This attitude is also part of the problem, as such views may breed acceptance and normalisation of disinformation and hate speech by encouraging people to grow a ‘thick skin’ instead of addressing the root causes of the vice. People, Kenyans included, are known to act on their feelings. As we have seen in neighbouring countries such as the Democratic Republic of Congo, Ethiopia and Sudan, hate speech and disinformation can drive violence with devastating consequences.

The failure to resolve Kenya’s underlying tensions means the country risks further social division and fragmentation, as well as diminished progress, through a continuation of governance policies and practices that entrench discrimination and exclusion in access to opportunities, resources and essential services. The hate that arises from such policies and practices, and the disinformation deployed to justify and perpetuate them, affects people’s mental health and emotional well-being. Moreover, they cement long-held fears, suspicions and animosity that continue to undermine Kenyans’ ability to trust each other or the government, and could inhibit the willingness of sections of the public to cooperate in nation-building for the common good.

Be that as it may, there are some promising efforts, such as the recently launched local coalition on freedom of expression and content moderation and the draft guidelines for regulating digital platforms spearheaded by UNESCO that seek to promote engagement and tackle content that potentially damages human rights and democracy. The multistakeholder coalition is an initiative of UNESCO and ARTICLE 19 that aims to bridge the gap between local stakeholders and social media companies and to improve content moderation practices, including supporting regulatory reform, building the capacity of state and non-state actors, and raising awareness on the ethical use of digital platforms. While these twin initiatives are new and largely untested, they present an opportunity to ensure more rights-respecting content moderation practices, the application of common norms based on human rights standards and stronger multistakeholder engagement in the content moderation process.

Finally, it may be easy to blame social media companies for the weaknesses in their content moderation systems, and by all means they need to be held to account. However, better algorithms alone cannot fix our society or our social norms. Kenyans must wake up and smell the coffee. Leaders need to drop the divisive acts and work together with stakeholders and citizens to address historical tensions and foster a culture of inclusion, tolerance, respect and understanding. While at it, they should promote responsible social media use, fact-checking, and media literacy in order to counter the negative impact of hate speech and disinformation and ultimately build a more just, harmonious, democratic and equitable society.

Submit Your Session Proposal or Travel Support Application to the Forum on Internet Freedom in Africa 2023 (FIFAfrica23)

Announcement |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) invites interested parties to submit session proposals to the 2023 edition of the Forum on Internet Freedom in Africa (FIFAfrica23). Successful submissions will help to shape the agenda of the event, which will gather hundreds of policymakers, regulators, human rights defenders, journalists, academics, private sector players, global information intermediaries, bloggers, and developers.

FIFAfrica23, which is set to take place in Dar es Salaam, Tanzania on September 27-29, 2023, offers a platform for deliberation on gaps and opportunities for advancing privacy, free expression, inclusion, free flow of information, civic participation, and innovation online. This year will mark a decade of hosting the landmark event in various African countries, including Ethiopia, Ghana, South Africa, Uganda, and Zambia.

As part of the registration, we invite session proposals including panel discussions, lightning talks, exhibitions, and skills workshops to shape the FIFAfrica23 agenda. 

CIPESA is committed to ensuring diversity of voices, backgrounds and viewpoints among attendees, session organisers and panel speakers at FIFAfrica. In line with this, limited funding is available to support travel for participation at FIFAfrica23. Preference will be given to applicants who can partially support their attendance and to those organising sessions.

Submissions close at 18.00 (East Africa Time) on July 14, 2023. Successful session proposals and travel support applicants will be directly notified by August 14, 2023.

The session proposal and travel support form can be accessed here.

NOTE: All data collected as part of the registration and session proposal exercise will only be used for purposes of the FIFAfrica event management.   

Follow @cipesaug on Twitter and on the dedicated FIFAfrica website for regular updates on the Forum.

Tackling Threats to Media Freedom and Safety in The Digital Age

By Ashnah Kalemera | 

The proliferation of technology has created new opportunities for journalists and journalism in Africa, but it has also come with threats. For civil society, academia, media development practitioners, activists and development partners, it is critical to understand the key issues related to freedom of expression and the internet and possible ways to address them as part of programming and strategic intervention. 

At the Africa Media Convention (AMC), held in Lusaka, Zambia on May 11-13, 2023, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) convened a session that explored threats to media freedom and journalists’ safety in the digital age, and comprehensive measures to tackle them.

During the session, which brought together over 40 stakeholders from across Africa, participants acknowledged that technology had enabled groundbreaking journalism, easier reach to diverse audiences, more active audience engagement, and more accessible avenues for content production. With this evolution, new actors have joined the sector and new regulatory and economic sustainability models have emerged, all with implications for the future of media freedom and democracy.

However, the digital era has also seen an exponential increase in online harassment of journalists, criminalisation of aspects of journalism, surveillance of journalists, and the orchestration of disinformation campaigns. These threats have translated into offline risks of physical violence, thereby undermining the safety and independence of journalists, while also eroding freedom of expression. 

According to Kamufisa Manchishi, a lecturer in the Faculty of Journalism and Public Relations at Zambia’s Mulungushi University, digitalisation in the media had created several “crises”. One was an identity crisis, whereby journalists and media houses struggle to balance their online presence with upholding journalistic principles and ethics. Linked to the identity crisis was a financial crisis around generating revenue and sustaining operations. “This has caused clickbait journalism and led to compromise of ethics,” said Manchishi.

Manchishi added that fears of being surveilled by the state and private actors had led to increased self-censorship. He said: “Journalists do not want to talk about controversial issues on phone or virtual platforms. They are worried someone is listening and are opting to conduct interviews and investigations physically.”

Meanwhile, misinformation and disinformation continue to proliferate on legacy media as well as on social and digital media. CIPESA research indicates that disinformation from online platforms is often amplified through traditional print and broadcast media. Soren Johannsen, BBC Media Action’s Zambia Country Director, called for more innovative approaches to promoting digital literacy. 

Whilst applauding various stakeholders’ efforts in debunking and fact-checking, Johannsen advocated for more interventions designed around pre-bunking, an approach grounded in inoculation theory, to drive behavioural change. “We shouldn’t try to just correct and verify but help users understand where false information is coming from, the motivation and the consequences,” he said. Such efforts should be complemented with more research to help understand the originators, flows and uptake of misinformation and disinformation.

As disinformation and misinformation threaten democracy, public security and social cohesion, there has been an increase in legislative responses, including the enactment of laws on cybercrime, computer misuse, hate speech and “false news”. However, many such laws are vague, broadly criminalise “false news” or “offensive publications online” without, for instance, distinguishing between misinformation and disinformation, and have been weaponised against critics, journalists and media houses.

Citing various examples from West Africa, Dora Mawutor, the Programme Manager at the Media Foundation for West Africa (MFWA), said the determination of what qualifies as false news lies with the state and its “self-serving purposes”. She called for more solidarity among the media fraternity to push back against the selective application of such laws, through increased coverage of legislative developments and of attacks against journalists. Mawutor also called for advocacy for the review or repeal of such repressive laws.

Echoing Mawutor’s sentiments, Alfred Bulakali, the Regional Director of ARTICLE 19 West Africa, stated that laws across Africa that govern freedom of expression and media freedom often come with heavy sanctions against offenders, yet they fall short of the three-part test under international human rights standards. He noted that media freedom advocates around Africa had scored successes in securing the decriminalisation of libel and defamation provisions in traditional press laws. However, added Bulakali, “technology has given them [the decriminalised provisions] an opportunity to come back” through laws and regulations being developed to govern media and freedom of expression online. He called for renewed efforts to decriminalise libel and defamation online and offline, and to limit the power of law enforcers to interpret the laws.

On online harassment, Cecilia Maundu, a Kenyan broadcast journalist, digital rights researcher and digital security trainer, stated that online gender-based violence is under-reported, even though some newsrooms have dedicated gender desks. Meanwhile, newsroom policies are weak or non-existent, putting women journalists at increased risk. As a result, there is limited visibility of online gender-based violence in mainstream media and inadequate support for survivors. This calls for more response measures and programming that focus not only on newsroom policies and safety mechanisms, but also on psychological support.

The areas of joint responsibility that the various speakers pointed to, namely advocacy, movement building, institutional capacity building, skills and knowledge development, and research and documentation, are key planks of CIPESA’s programming and engagement at national, regional and international levels to advance access to information, privacy and data protection, and free expression online as enablers of citizen participation, resisting authoritarianism, protecting the rights of women and other marginalised groups, amplifying people’s voices, and engendering accountability.

Disinformation and Hate Speech Continue to Fuel the Conflict in Eastern DR Congo 

By Nadine Kampire Temba and CIPESA Writer |

The Democratic Republic of Congo (DR Congo) continues to witness an information war, characterised by spiralling incitement, misinformation, disinformation and hate speech on social media. The state of affairs has undermined cohesion between communities and continues unabated due to various factors. 

Firstly, the country’s long history of political instability has created an environment where misinformation, disinformation and hate speech thrive. Over the past three decades, the DR Congo has witnessed cyclic and indiscriminate violence, and internationalised conflict. These have been fuelled by impunity for atrocities, endemic corruption, poor governance, leadership wrangles and differences over the control of an estimated USD 24 trillion worth of mineral resources, pitting the government against neighbouring countries, at least 120 armed groups and other parties. 

The instability has left at least 24.6 million people (one in four Congolese) at risk of food insecurity and a further six million citizens internally displaced, the highest number on the continent. Human rights violations have remained commonplace despite years of humanitarian interventions. More recently, the conflict has escalated in the Ituri and Kivu provinces in the eastern part of the country, where violence between government forces and armed groups has killed at least 1,300 people since October 2022 and forced about 300,000 civilians to flee their homes and villages.

Secondly, divisions among the country’s diverse ethnic groups have contributed to the escalation of tensions and hostility, while disputes with neighbouring Rwanda have led to the deterioration of diplomatic relations between the two states. Congo, United Nations experts and western governments accuse Rwanda of backing the March 23 Movement (M23) rebel group which continues to extend its control in North Kivu – accusations Rwanda and the rebel group deny. 

The Congolese government has labelled the M23 a “terrorist movement” and blamed it for committing atrocities, including summary executions and forced enlistment of civilians. In January 2023, Congo accused Rwanda of shooting down its fighter jet and described this as a “deliberate act of aggression that amounts to an act of war”. Rwanda claimed the jet had violated its airspace on three occasions. This came eight months after Kinshasa banned Rwanda Air from its airspace.

For its part, the Rwanda government accuses the Congolese army of utilising proxy armed groups, such as the Democratic Forces for the Liberation of Rwanda (Les Forces démocratiques de libération du Rwanda, FDLR), to contain the M23 offensive and to destabilise Rwanda.

The strained relations between the two countries, coupled with social divisions based on ethnicity, religion, nationality and political affiliation, continue to be exploited by politicians and groups affiliated to both countries to create tension and fuel hate speech and disinformation online and offline. On October 30, 2022, the Congolese government ordered the Rwandan ambassador, Vincent Karega, to leave the country within 48 hours in retaliation for Kigali’s alleged support for the M23. The DR Congo also recalled its top diplomat from Rwanda in a further souring of relations. A day later, on October 31, 2022, thousands of Congolese citizens, mostly in Goma city, attended anti-Rwanda protests to denounce Kigali’s alleged support of the M23, a mostly Congolese Tutsi group, and what they called the “hypocrisy of the international community in the face of Rwanda’s aggression.”

During the protests, names of individuals identified as Rwandans were read out, resulting in attacks and lynching of some individuals. In response, online trolls affiliated to Rwanda have targeted Congolese political leaders, journalists and civil society leaders. The targets of attacks in Congo have included the Banyarwanda (Tutsi) and the Banyamulenge whose citizenship, equal rights and belonging in DR Congo have been constantly questioned. They often face threats, attacks, and dehumanising stereotypes as they are perceived as foreigners or Rwandan implants supporting the M23 rebellion. 

Amidst these tensions is a weak and underdeveloped media environment in both DR Congo and Rwanda, coupled with low media literacy among the population, which enables the spread of false information without it being challenged or fact-checked. The situation has been further complicated by the lack of skills and tools for content moderation and of editorial guidance on the part of local media outlets and journalists working on both sides of the border.

Media houses have to compete with social media platforms, where users have found a sense of “community” by connecting with a variety of actors at the national level and in the diaspora. These actors anonymously disseminate and amplify well-scripted radical messages, conspiracy theories and polarising narratives to wide audiences, appealing to ethnic loyalties and sowing discord among communities. Some of the rival groups have deployed bots and trolls to manipulate public opinion on social media.

According to Congolese journalist Desanges Kihuha, media outlets committed to providing truthful information are struggling, due to limited skills and funding, to match the speed at which conflict-related disinformation and misinformation spread on social media platforms. Thus, any actor intent on spreading false information can publish it online, where it can easily go viral without being fact-checked. “In the current context of war and insecurity in North Kivu province, misinformation continues to spread at a fast rate due to the use of digital and social media. Unfortunately, there is little press coverage of this phenomenon of hate speech and fake news,” says Kihuha.

Related to the above, a significant part of the population, especially in rural areas, lacks access to accurate, verifiable and reliable information, while the youth rely on social media for information. In addition, the social and economic challenges affecting the public, such as high poverty levels and limited access to basic services and infrastructure, create frustration, resentment, anger and distrust of the state, making the public vulnerable to exploitation.

As a result, politicians, armed groups and their allies exploit these vulnerabilities, manipulating public opinion to generate support for their extremist political views or groups and channelling public anger into hate speech and disinformation that further escalates ethnic and regional conflicts. Theogene Magarambe, a Rwandan journalist, describes this as the “instrumentalisation of the M23 insurgency” to distract the public from governance shortcomings and the failure to restore peace and the rule of law in Kivu.

The failure by governments on both sides of the border to create an environment that pushes back against political polarisation and disinformation online is widely acknowledged. “At the practical level, policies related to content moderation and regulation are currently inexistent, though we are engaging cross-border communities in order to create space for dialogue, hosting workshops and platforms where we exchange knowledge,” says Marion Ngavho Kambale, who heads the civil society of North Kivu province. Magarambe adds: “Today the true legitimacy test for any credible government is whether it can implement legal safeguards on privately developed technologies and hold platform operators accountable for failure to moderate content.”

Critics also point to the structural design of social media platforms, whose business models and algorithms mostly prioritise content based on its engagement value rather than its accuracy. Platforms such as Facebook have been criticised for inaction in the face of online ethnic incitement and massacres in Ethiopia, a potential risk in the DR Congo-Rwanda conflict. As Arsene Tungali, a digital rights and internet governance expert, observed, inadequate action by the likes of Facebook, Twitter and WhatsApp means “the devastating effects of political polarisation, hate speech and disinformation peddled on social media remain a problem.”

Louis Gitinywa, a digital rights and technology lawyer, says although the internet has offered citizens and private actors in the two countries a robust civic space for organising and engaging with key societal priorities, the “lawlessness and disinformation online … continue to contribute to fighting and killings”. 

Overall, addressing hate and disinformation in the DR Congo-Rwanda conflict will require a sustained and coordinated effort from multiple actors. Looking ahead, there is a critical need to build capacity and expertise amongst all stakeholders in order to formulate effective strategies for content moderation. This includes building the legal expertise and strategic litigation needed to hold social media platforms such as Facebook and Twitter liable for failing to put in place adequate measures to moderate content in native and indigenous languages.

Further, since media literacy is limited, it is important to build the capacity of journalists, media practitioners and civil society reporting on the conflict so that they understand the complex information environment, acquire skills in fact-checking, professional ethics and content moderation, and build professional networks for sharing credible information with counterparts across borders while avoiding sensationalism in reporting. They can also use the available platforms to promote responsible social media use, tolerance and dialogue between different groups in order to build trust. Moreover, the different actors should desist from propagating hate speech and disinformation.

Nadine Kampire Temba is a journalist and digital rights lawyer based in Goma city, DR Congo, and a fellow with CIPESA. She coordinates Afia Amani Grands Lacs, an online media outlet that undertakes fact-checking and defends press freedom in the Great Lakes Region. You can follow her on Twitter @nadineKampire

CIPESA Submits Comments on Uganda’s Proposed New Digital Tax 

By Edrine Wanyama |

On April 28, 2023, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) submitted comments on the Income Tax (Amendment) Bill, 2023 to the Committee on Finance, Planning and Economic Development of the Uganda Parliament. The comments argue that the proposed law would undermine access to and use of digital tools and services.

The bill, among other things, proposes to impose a tax of five percent on foreign-based entities that derive income from providing digital services to customers in Uganda. The proposal is contained in clause 16, which seeks to introduce a new section 86A into the Income Tax Act.

The proposed section 86A provides:

  (1) A tax is imposed on every non-resident person deriving income from providing digital services in Uganda to a customer in Uganda at the rate prescribed in Part IV of the Third Schedule to this Act.
  (2) For the purposes of subsection (1), income is derived from providing a digital service in Uganda to a customer in Uganda, if the digital service is delivered over the internet, electronic network or an online platform.
  (3) For the purposes of this section "digital service" includes—
    (a) online advertising services;
    (b) data services;
    (c) services delivered through an online marketplace or intermediation platform, including an accommodation online marketplace, a vehicle hire online marketplace and any other transport online marketplace;
    (d) digital content services, including accessing and downloading of digital content;
    (e) online gaming services;
    (f) cloud computing services;
    (g) data warehousing;
    (h) services, other than those services in this subsection, delivered through a social media platform or an internet search engine; and
    (i) any other digital services as the Minister may prescribe by statutory instrument made under this Act.

While the clause targets non-residents, if enacted it would add to the digital taxes ultimately borne by the already tax-burdened consumers of digital services in Uganda. Since July 1, 2022, consumers of web hosting, software and streaming services offered by platforms such as Amazon, Meta (Facebook), Twitter and Zoom have paid a mandatory value added tax (VAT) of 18% on those services.
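As a purely hypothetical illustration of how the burden could compound, the short sketch below assumes a USD 10 monthly subscription and assumes, for argument's sake, that the non-resident provider passes the proposed 5% digital services tax on to the consumer in full, with the existing 18% VAT then applied on top. The figures and the pass-through assumption are illustrative only and are not drawn from the bill or the CIPESA submission.

```python
# Hypothetical illustration only: what a USD 10 monthly digital subscription
# could cost a Ugandan consumer if the proposed 5% digital services tax (DST)
# were passed on in full, on top of the existing 18% VAT. The base price and
# the pass-through assumption are illustrative, not taken from the bill.

base_price = 10.00   # provider's pre-tax monthly price (assumed)
dst_rate = 0.05      # proposed tax on the non-resident provider's income
vat_rate = 0.18      # VAT already charged to consumers since July 2022

price_vat_only = base_price * (1 + vat_rate)                   # USD 11.80
price_with_dst = base_price * (1 + dst_rate) * (1 + vat_rate)  # USD 12.39

print(f"With VAT only:            USD {price_vat_only:.2f}")
print(f"With DST passed on + VAT: USD {price_with_dst:.2f}")
print(f"Extra monthly cost:       USD {price_with_dst - price_vat_only:.2f}")
```

Under these assumptions, the consumer would pay roughly USD 0.59 more per month on a USD 10 service; the actual incidence would depend on how providers price in the new tax.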

The tax would potentially hinder inclusive access to and use of digital technologies and negatively affect Uganda’s digital economy. According to the United Nations Capital Development Fund (UNCDF), Uganda’s digital economy score is low, particularly in areas such as digital inclusiveness. The UNCDF’s 2021 scorecard found that the groups most excluded from the digital economy in Uganda are the elderly (80%), rural communities (64%), persons with disabilities (74%), the youth (33%), refugees (80%) and migrants (75%). The inclusion gap, such as that facing persons with disabilities, is attributed to the high cost of technologies.

Innovation is a prerequisite for the provision of digital services, including advertising, data services, marketing, cloud computing and data warehousing. Most of these tools and services are developed outside Uganda, so imposing high taxes on the non-residents that provide them could limit access to these critical tools and services and push Ugandans further to the margins of the global digital economy.

The enjoyment of digital rights and freedoms, including freedom of expression, access to information, and association, could also be limited by the imposition of high digital taxes. 

Accordingly, the submission by CIPESA recommends that the Committee on Finance, Planning and Economic Development:

  1. Drops the entire proposed clause 16 of the Income Tax (Amendment) Bill, 2023;
  2. Conducts wide consultations with the affected stakeholders, including the tech community, innovators, the business community and civil society, on the potential effects of the proposed amendment;
  3. Conducts a tax impact assessment to weigh the potential effects of the proposed tax on access to and use of digital tools and services, specifically spelling out the anticipated positive impacts and weighing them against the anticipated negative effects; and
  4. Takes into consideration and supports progressive policies, such as tax incentives, that seek to increase accessibility and usage of digital tools and services, as these usually lower the costs borne by consumers in purchasing and using such tools and services.

See the full submission here.