Uganda’s Changes to Computer Misuse Law Spark Fears It Will Be Used to Silence Dissidents

By News Writer |

Uganda’s controversial Computer Misuse (Amendment) Bill 2022, which rights groups say will likely be used to silence dissenting voices online, has come into force after the country’s President Yoweri Kaguta Museveni signed it into law yesterday.

The country’s legislators had passed amendments to the 2011 Computer Misuse Act in early September, limiting writing or sharing of content on online platforms, and restricting the distribution of children’s details without the consent of their parents or guardians.

The bill was brought before the house to “deter the misuse of online and social media platforms.” A document tabled before the house stated that the move was necessitated by reasoning that “enjoyment of the right to privacy is being affected by the abuse of online and social media platforms through the sharing of unsolicited, false, malicious, hateful and unwarranted information.”

The new law, which also aims to curb the spread of hate speech online, prescribes several punitive measures, including a 10-year bar on offenders holding public office and imprisonment for individuals who “without authorization, accesses another person’s data or information, voice or video records and shares any information that relates to another person” online.

Rights groups and a section of online communities are worried the law might be abused by regimes, especially the current one, to limit free speech and punish persons that criticize the government. Some have plans to challenge it in court.

The fears expressed by various groups come amid increasing crackdowns on individuals who openly critique Museveni’s authoritarian regime online. Museveni, Uganda’s longest-serving president, also blocked social media in the run-up to last year’s general election.

Recently, a Ugandan TikToker, Teddy Nalubowa, was remanded in prison for recording and sharing a video that celebrated the death of a former security minister, who led the troops that killed 50 civilians protesting the arrest of opposition politician Robert Kyagulanyi Ssentamu (Bobi Wine) in 2020. Nalubowa, a member of Ssentamu’s National Unity Platform, was charged with offensive communication in contravention of the Computer Misuse Act 2011 amid public outcry over the harassment and intimidation of dissidents. Ssentamu, a critic of Museveni and the country’s opposition leader, recently said the new amendment targets his ilk.

The Committee to Protect Journalists (CPJ) had earlier called on Museveni not to sign the bill into law, saying that it was an added arsenal that authorities could use to target critical commentators, and punish media houses by criminalizing the work of journalists, especially those undertaking investigations.

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) had also made recommendations, including the deletion of Clause 5, which bars people from sending unsolicited information online, saying that it could be abused and misused by the government.

“In the alternative, a clear definition and scope of the terms ‘unsolicited’ and ‘solicited’ should be provided,” it said.

It also called for the scrapping of punitive measures, and the deletion of clauses on personal information and data, which duplicated the country’s data protection law.

CIPESA said the law is also likely to infringe on individuals’ digital rights, including freedom of expression and access to information, adding that the provisions did not address the issues brought forth by emerging technologies, such as trolling and harassment, which the law sought to address in the first place.

This article was first published by the Ghana Business on Oct 15, 2022.

New Law in Uganda Imposes Restrictions on Use of Internet

By Rodney Muhumuza |

Ugandan President Yoweri Museveni has signed into law legislation criminalizing some internet activity despite concerns the law could be used to silence legitimate criticism.

The bill, passed by the legislature in September, was brought by a lawmaker who said it was necessary to punish those who hide behind computers to hurt others. That lawmaker argued in his bill that the “enjoyment of the right to privacy is being affected by the abuse of online and social media platforms through the sharing of unsolicited, false, malicious, hateful and unwarranted information.”

The new legislation increases restrictions in a controversial 2011 law on the misuse of a computer. Museveni signed the bill on Thursday, according to a presidential spokesman’s statement.

The legislation proposes jail terms of up to 10 years in some cases, including for offenses related to the transmission of information about a person without their consent as well as the sharing or intercepting of information without authorization.
Opponents of the law say it will stifle freedom of expression in a country where many of Museveni’s opponents, for years unable to stage street protests, often raise their concerns on Twitter and other online sites.
Others say it will kill investigative journalism.

The law is “a blow to online civil liberties in Uganda,” according to an analysis by a watchdog group known as Collaboration on International ICT Policy for East and Southern Africa, or CIPESA.

The Committee to Protect Journalists is among groups that urged Museveni to veto the bill, noting its potential to undermine press freedom.

“Ugandan legislators have taken the wrong turn in attempting to make an already problematic law even worse. If this bill becomes law, it will only add to the arsenal that authorities use to target critical commentators and punish independent media,” the group’s Muthoki Mumo said in a statement after lawmakers passed the bill.

Museveni, 78, has held power in this East African country since 1986 and won his current term last year.

Although Museveni is popular among some Ugandans who praise him for restoring relative peace and economic stability, many of his opponents often describe his rule as authoritarian.

This article was first published by the Washington Post on Oct 13, 2022.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter |

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order that content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one: it ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated processing, without human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how they used AI for data profiling if at all. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where they responded – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.

Participant Reflection on #FIFAfrica22: Effective Engagement in the UPR Process for Digital Rights Promotion

By Murungi Judith |

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) and Small Media held a workshop on the Universal Periodic Review (UPR) process as part of the Forum on Internet Freedom in Africa (FIFAfrica22), held in Lusaka, Zambia from September 26-29, 2022. The workshop is a product of the UPROAR project, which aims to advance digital rights globally by supporting engagement in international advocacy at the UPR.

The 32 participants at the workshop represented a diverse array of backgrounds, including civil society, digital rights activism and advocacy, law, journalism, and academia. A total of 20 countries were represented: Benin, Burundi, Botswana, Cameroon, Democratic Republic of Congo, Ethiopia, Ghana, India, Kenya, Mozambique, Nigeria, Senegal, Sri Lanka, Sudan, Tanzania, Uganda, United Kingdom, United States of America, Zambia, and Zimbabwe.

The workshop entailed an overview of the UPR, its purpose and processes, as well as in-depth discussions on international and regional normative frameworks on digital rights. Specific attention was drawn to the Universal Declaration of Human Rights as the first normative framework on freedom of expression. The International Covenant on Civil and Political Rights (ICCPR) was also explored through its core tenets: the right to hold opinions without interference (freedom of opinion), the right to seek and receive information (access to information), and the right to impart information (freedom of expression).

It was noted that the right to freedom of expression is not absolute and that the three-part test is key in determining the circumstances that potentially justify limitations. Under Article 19(3) of the ICCPR, a limitation (i) must be provided for in law, (ii) must pursue a legitimate aim, and (iii) must be necessary for a legitimate purpose.

The three-part test formed the basis of a heated debate on electoral democracy and internet shutdowns in countries like Cameroon and Tanzania, compared with Kenya, where the government did not impose an internet shutdown during the recent elections. From these comparative discussions, participants concluded that certain government actions, such as the arrest, detention and assassination of journalists, remain a threat to internet freedom, and that it is the responsibility of civil society, activists and human rights defenders to hold governments accountable through increased participation in the UPR process.

The presence of Hon. Neema Lugangira, a member of the Tanzanian Parliament and Chairperson of the African Parliamentary Network on Internet Governance, was invaluable and a beacon of hope in bridging the gap between civil society and policymakers in promoting digital rights through the UPR.

The workshop also explored case law on freedom of expression in Africa, including precedents such as Lohé Issa Konaté v Burkina Faso. Participants deliberated on the importance of critically assessing the law and framing cases within the jurisdiction of the particular court approached, without which matters could be thrown out. This session gave participants a clear understanding of the link between offline and online rights, and of specific laws that apply to minority and marginalised groups such as children, women, persons with disabilities and other vulnerable communities.

The session on campaign and advocacy planning aimed to equip participants with the tools needed to engage partners in carrying out campaigns and executing advocacy strategies through the UPR. It highlighted eye-catching, precise advocacy materials, including fact sheets, that can be used on social media and other platforms for the UPR at the local level, and it prompted discussions on the critical role local stakeholders play in leveraging the UPR for digital rights development in their various contexts. The session helped participants understand how to engage local partners and ensure effective implementation of the recommendations made to their respective countries.

Participants engaged in a practical lobbying exercise in which they appeared before a UN delegate and presented the digital rights issues affecting their respective countries, along with recommendations for reform. This group exercise was beneficial and informative because it gave participants a chance to apply what they had learnt about the UPR process and to experience what the review process in Geneva is like.

Through the UPROAR website, participants were guided on how to leverage research and social media platforms for effective design and branding as part of UPR engagements related to digital rights. The workshop also offered guidance on stakeholder mapping and its importance.

In a subsequent panel entitled ‘Stemming the Tide: Has the Universal Periodic Review Mechanism Contributed to Changes in the Digital Rights Landscape of States Under Review?’, panelists shared experiences from Namibia, the Democratic Republic of Congo, Uganda, Rwanda and others. This gave workshop participants an understanding of how to prepare for stakeholder engagements and how to conduct evidence-based advocacy at the United Nations Human Rights Council.

It was noted that the Covid-19 pandemic led to travel restrictions that made it difficult to travel to Geneva to participate in the UPR process in person. Online participation was a welcome alternative, but unreliable internet access among civil society on the continent during the sessions presented an additional barrier.

Beyond making submissions and engaging during review sessions, participants were urged to also monitor the implementation of recommendations. Experiences were shared about governments, such as Uganda’s, that rejected all the recommendations made on digital rights. In such instances, participants were encouraged not to give up in the face of such responses but to continue their advocacy work on digital rights, since that is itself a notable step in the right direction. They were also encouraged to collaborate with lawmakers and policymakers to ensure that they know about the UPR process and can respond positively to the recommendations given, and to ensure that in-country pressure from civil society pushes governments to act on those recommendations. It was noted that in Tanzania, acceptance of recommendations has increased significantly following collaboration between civil society and parliamentarians.

The UPR sessions at FIFAfrica22 were informative and engaging, led by well-equipped workshop trainers. Experiences shared by those who had participated in Geneva engagements on digital rights stirred an urge for proactive engagement and participation among those coming up for review, such as Botswana.

State of Internet Freedom in Africa 2022: The Rise of Biometric Surveillance

FIFAfrica22 |

Digital biometric data collection programmes are becoming increasingly popular across the African continent. Governments are investing in diverse digital programmes to enable the capture of biometric information of their citizens for various purposes.

A new report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) documents the emerging and current trends in biometric data collection and processing in Africa. It focuses on the deployment of national biometric technology-based programmes in 16 African countries, namely Angola, Cameroon, Central African Republic, Democratic Republic of Congo, Kenya, Lesotho, Liberia, Mozambique, Nigeria, Senegal, Sierra Leone, Tanzania, Togo, Tunisia, Uganda, and Zambia.

The report published today is the ninth consecutive one issued by CIPESA since 2014 under the State of Internet Freedom in Africa series. It was released at the Forum on Internet Freedom in Africa (FIFAfrica), which is taking place in Lusaka, Zambia.

The biometric data collection programmes reviewed by the report include those related to civil registrations, such as the issuance of National Identity cards, biometric voter registration and identification programmes, government-led CCTV programmes with facial recognition capabilities, national ePassport initiatives, refugees’ registration, and mandatory biometric SIM card registration.

The report highlights the key trends, potential risks, challenges and gaps relating to biometric data collection projects in the continent. These include limited public engagement and awareness campaigns; inadequate legal frameworks that heighten risks to privacy; exclusion from accessing essential services; enhanced surveillance, profiling and targeting; conflicting interests and the wide powers of third parties; and limited capacity and training. 

Consequently, the study notes that these biometric programmes are being implemented in countries with poor digital rights records, declining democracy and rising digital authoritarianism, which casts doubt on the integrity of biometric data collection programmes and the resultant databases. Thus, viewed collectively, the developments, trends and risks outlined in the report heighten concern over the growing threats to the right to privacy of personal data and potential violations of digital rights on the continent. 

Finally, the report presents recommendations to various stakeholders including the government, civil society, the media, the private sector and academia, which, if implemented, will go a long way in addressing data protection and privacy gaps, risks and challenges in the study countries. 

The key recommendations include a call to:

  • Governments to implement the laws and policy frameworks on identity systems and data protection and privacy, paying keen attention to compliance with regionally and internationally recognised principles and minimum standards on data protection and privacy for biometric data collection, and to require the adoption of human rights-based approaches. 
  • Countries without data protection and privacy laws such as Liberia, Mozambique, Sierra Leone and Tanzania should expedite the process of enacting appropriate data protection laws so as to guarantee the data protection and privacy rights of their citizens. 
  • Governments to ratify the AU Convention on Cyber Security and Personal Data Protection (Malabo Convention) to ensure government commitment to regional data protection and privacy as a means to hold them accountable.
  • Governments to establish independent and robust oversight data protection bodies to regulate data and privacy protection including biometric data.
  • Civil society to engage in advocacy and lobby governments to develop, implement and enforce privacy and data protection policies, laws and institutional frameworks that are in compliance with regional and international minimum human rights standards.
  • Civil society to monitor, document and report on the risks, threats, abuses and violations of privacy and human rights associated with biometric data collection programmes, and propose effective solutions to safeguard rights in line with international human rights standards.
  • The media to progressively document and report on initiatives such as advocacy by civil society and other stakeholders to keep track of developments. 
  • The media to conduct investigative journalism to identify and expose privacy violations arising from the implementation of biometric data collection programmes.
  • The private sector to make deliberate efforts to ensure that all their respective biometric data collection programmes and systems are developed, implemented and managed in compliance with best practices prescribed by national, regional and international human rights standards and practices on privacy and data protection, including the UN Guiding Principles on Business and Human Rights.
  • The private sector to ensure that they progressively adopt and develop comprehensive internal privacy policies to guide the collection, storing and processing of personal data. 
  • The private sector to make deliberate efforts to involve data subjects in the control and management of their personal data, including by providing timely information on external requests for that data. 
  • Academia to conduct evidence-based research on data protection and privacy including biometrics, highlighting the challenges, risks, benefits and trends in biometric data collection programmes. 

The full State of Internet Freedom in Africa 2022 Report can be accessed here.