Are Tech Companies Skirting their Responsibilities to Journalists’ Safety?

By CIPESA Writer |

The proliferation of technology has created new opportunities for journalists and journalism in Africa, but also new threats. Online harassment, criminalisation of aspects of journalism, disinformation and misinformation, surveillance, and trolling are among the most common. Often, these threats translate into physical violence; they are undermining the safety and independence of journalists and eroding freedom of expression.

A report by the International Press Institute (IPI) and Konrad Adenauer Stiftung (KAS) on safety of journalists in Africa reveals that media freedom is under assault amidst an increase in attempts to stifle independent media and spiralling attacks on journalists. According to this report, “in a bid to control the public narrative and maintain their hold on power, authoritarian regimes and, in some cases, even democratically elected governments, have been brazenly silencing critical voices and undermining freedom of expression.”

In the lead up to the 10th anniversary of the United Nations (UN) Plan of Action on the Safety of Journalists, the UNESCO Section of Freedom of Expression and Safety of Journalists and the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) organised a dialogue on tech platform responsibilities for safety of journalists in Africa. The last 10 years have witnessed various social, economic and technological processes that have introduced new dimensions to democracy, governance and human rights. The exponential growth of digital technologies, for example, has given rise to new concerns about the use and misuse of digital platforms, as well as the role of internet companies in mediating freedom of expression.

In his address at the dialogue, which was held as part of the ninth edition of the Forum on Internet Freedom in Africa (FIFAfrica22), Guilherme Canela, Chief, Section of Freedom of Expression and Safety of Journalists at UNESCO, said the evolving digital ecosystem not only offers enormous opportunities for fostering human rights but also increases risks that compromise fundamental rights like freedom of expression. “Journalists are also part of this equation, benefitting a lot from these opportunities but also suffering from the problems of the digital ecosystem including the viability of the news media sector and the online violence against journalism, journalists, and in particular women journalists,” he said. “Our job therefore is to enhance the opportunities to mitigate the risks and to prosecute the harms.”

Meanwhile, Zoe Titus, Director at Namibia Media Trust, stated that authoritarian governments are closing democratic space and targeting journalists, especially their personal integrity, through laws and policies that contravene international norms.

But it is not only governments stifling journalists, as politicians and their supporters are unleashing targeted disinformation to undermine the credibility of independent media. For instance, the August 2022 general election in Kenya saw a spike in coordinated attacks against the media and its credibility. “There were fake news websites, and a continuous tug of war between different media and journalists depending on which candidate they supported. They would attack what the journalists were reporting, then attack their media house and finally the individual journalists and link them to a specific candidate,” said Catherine Muya, Programme Officer – Digital at ARTICLE 19.

According to Anriette Esterhuysen, of the Association for Progressive Communications (APC), with increased use of social media, many journalists are vulnerable “but the attacks take a special streak when directed [at] or targeting women”. The various violations were being compounded because tech companies were not being held sufficiently responsible for the harm perpetrated on their platforms. As a result, the tech companies were not taking swift and adequate measures to tackle content that undermines journalists’ safety.

On the other hand, there are concerns that, in pandering to state expectations and demands, some tech companies are targeting innocent and genuine content on the pretext that it offends the guidelines governing content on their platforms. “Legitimate content has been rejected on these flimsy grounds,” said Muya, citing research conducted by ARTICLE 19 in Kenya as part of the Social Media for Peace project.

Muya added that content or accounts flagged for allegedly offensive messages are temporarily or permanently blocked without notification or due process: “They just summarily do this and escalation or reactivation is hard.” Affected account holders have to go through intermediaries like the Oversight Board to seek redress, and reporting to platforms and receiving a response from them is tedious, Muya said.

In the circumstances, the role of technology companies in regulating content, protecting journalists and enabling the prosecution of the perpetrators of violations against journalists came under focus at the dialogue. Speakers called for more transparency and consistency in the moderation of content online by tech companies, arguing that the companies could do a lot more in sanitising the internet and in protecting the safety of journalists.

Tech companies can do more, especially on transparency and in anticipating and mitigating risks to journalists. Accordingly, UNESCO and its partners are developing a risk assessment framework for the safety of journalists, which could have two major components. The first would be identification of the principal risks faced by journalists by type and consequence. The second component could be a risk management strategy which would articulate the appropriate risk controls and mitigations, means of monitoring and methods of reporting such risks.

Further, platforms would need to document these attacks and be more transparent with data about the attacks, and how they were handled. “Documenting and sharing data is crucial, for instance on incidents of harmful content, including attacks on journalists such as by direct abuse and threats or disinformation campaigns, and actions taken,” said Wairagala Wakabi, CIPESA’s Executive Director. He added that it was essential to properly research safety concerns such as sexualised attacks against journalists, including the extent of the problem and its effects, in order to devise effective remedial measures.

Digital Rights Prioritised at The 73rd Session of The ACHPR

By CIPESA Writer |

Digital rights as key to the realisation and enforcement of human rights on the African continent was among the thematic focus areas of the Forum on the Participation of NGOs in the 73rd Ordinary Session of the African Commission on Human and Peoples’ Rights (ACHPR) held on October 17-18, 2022 in Banjul, the Gambia. Under the theme “Human Rights and Governance in Africa: A Multi-Dimensional Approach in Addressing Conflict, Crisis and Inequality”, the Forum also featured thematic discussions on conflict, the African Continental Free Trade Area, the environment, climate change, gender-based violence, post-Covid-19 strategies and civic space for human rights and good governance.

The Forum on the Participation of NGOs in the Ordinary Sessions of the ACHPR is an advocacy platform coordinated by the African Centre for Democracy and Human Rights Studies. It aims to promote advocacy, lobbying and networking among non-governmental organisations (NGOs) for the promotion and protection of human rights in Africa. The Forum allows for sharing updates on the human rights situation on the continent by African and international NGOs with a view of identifying responses as well as adopting strategies towards promoting and protecting human rights on the continent.

A session in which the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) participated alongside Paradigm Initiative (PIN), the International Center for Not-for-Profit Law (ICNL) and the Centre for Human Rights at the University of Pretoria discussed the relationship between human rights and technology.

Thobekile Matimbe from PIN observed that internet shutdowns in the region are worrying and a major threat to freedom of expression, access to information, freedom of association and peaceful assembly, contrary to Article 9 of the African Charter on Human and Peoples’ Rights and the ACHPR Declaration of Principles on Freedom of Expression and Access to Information in Africa. She expounded on the profound adverse impacts of internet shutdowns and disruptions on socio-economic rights, including the right to education, housing, health, and even social security. Matimbe specifically called for an end to the now two-year-long internet and phone shutdown in Ethiopia’s Tigray region, while also decrying the continued violation of international human rights standards by states in other parts of the continent.

Introducing digital rights as human rights and situating the different human rights groups within the digital rights discourse, Irene Petras from ICNL highlighted the technological evolution on the continent and the interrelatedness and interdependence of the internet with various rights and freedoms. According to her, internet shutdowns are an emerging concern that is adversely impacting the digital civic space. 

According to Access Now, at least 182 internet shutdowns were recorded in 34 countries across the globe in 2021. In Africa, shutdowns were recorded in 12 countries on up to 19 occasions. Chad, the Democratic Republic of the Congo, Ethiopia, Gabon, Niger, Uganda and Zambia experienced internet restrictions during elections, while Eswatini, Ethiopia, Gabon, Senegal and South Sudan experienced shutdowns due to protests and civil unrest.

According to CIPESA’s legal officer Edrine Wanyama, given the long-standing authoritarianism and democracy deficits in most parts of the continent, elections, protests and demonstrations, and examination periods are the key drivers of internet shutdowns in Africa. Wanyama also noted that the consequences of internet shutdowns are wide-ranging: they cause economic and financial losses; undermine freedom of expression, access to information and access to the internet; aggravate the digital exclusion gap; cast doubt on the credibility of elections; erode trust in governments; and often fuel disinformation and hate speech.

Given the social, economic and political benefits of the internet, Hlengiwe Dube of the Centre for Human Rights at the University of Pretoria urged states to guarantee its availability and access at all times, rather than imposing information blackouts and creating grounds for litigation. She noted that meaningful access and the creation of a facilitative environment for internet access have widely been advanced as part of the Sustainable Development Goals (SDGs).

The session called for active monitoring and documentation of internet shutdowns by NGOs, including through collaborative and partnership-building efforts, utilising investigative tools like the Open Observatory of Network Interference (OONI) and NetBlocks which help to detect disruptions, and engaging in strategic litigation.
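The monitoring approach described above can be sketched in code. The following is a deliberately minimal illustration, not OONI’s or NetBlocks’ actual methodology (real measurement platforms additionally control for DNS tampering, middleboxes and vantage-point bias); the test URLs and the majority threshold are assumptions chosen for the example.

```python
# Minimal sketch of a reachability probe for documenting suspected
# internet disruptions. Illustrative only: real tools such as OONI
# use far more rigorous measurement methodologies.
import urllib.request
import urllib.error


def probe(url, timeout=5):
    """Attempt to fetch a URL and record whether it was reachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"url": url, "status": resp.status, "reachable": True}
    except (urllib.error.URLError, TimeoutError) as exc:
        return {"url": url, "status": None, "reachable": False,
                "error": str(exc)}


def summarise(results):
    """Flag a possible disruption when most test URLs are unreachable."""
    unreachable = [r for r in results if not r["reachable"]]
    return {
        "tested": len(results),
        "unreachable": len(unreachable),
        "possible_disruption": len(unreachable) > len(results) // 2,
    }
```

Run from several vantage points over time, even a simple probe like this yields the kind of timestamped documentation the session recommended NGOs keep.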

The joint recommendations provided for inclusion in the NGOs Statement to the African Commission on Human and Peoples’ Rights (ACHPR) 73rd Ordinary Session by the thematic cluster on digital rights and security are to:

African Commission on Human and Peoples’ Rights (ACHPR) 

  1. In the event of an internet shutdown or any state-perpetrated network disruption, the ACHPR should condemn in the strongest terms such practices and reiterate the state obligations under international human rights law and standards. 
  2. In its assessment of State periodic reports, the ACHPR should engage States under assessment on issues of internet access including the occurrence of interferences through measures such as the removal, blocking or filtering of content and assess compliance with international human rights law and standards.
  3. The ACHPR should engage with stakeholders including State Parties, national human rights institutions and NGOs to develop guidance on internet freedom in Africa aimed at realising an open and secure internet in the promotion of freedom of expression and access to information online.

States Parties

  1. States should recognise and respect that universal, equitable, affordable and meaningful access to the internet is necessary for the realisation of human rights by adopting legal, policy and other measures to promote access to the internet and amend laws that unjustifiably restrict access to the internet.
  2. States parties should desist from unnecessarily implementing internet shutdowns and any other arbitrary actions that limit access to, and use of, the internet, and should restore all digital networks that have been disrupted on their orders. Where limitation measures that disrupt access to the internet and social media are inevitable, they should be narrowly applied, prescribed by law, serve a legitimate aim, and be a necessary and proportionate means of achieving that aim in a democratic society.
  3. The State, as the duty bearer, should create a conducive environment for business entities to operate in a manner that respects human rights. 

Non-Governmental Organisations 

  • NGOs and other stakeholders should monitor and document the occurrence of internet shutdowns including their impact on human rights and development; raise awareness of the shutdowns and continuously advocate for an open and secure internet.

The Private Sector

  • Telecommunications companies and internet service providers should take the relevant legal measures to avoid internet shutdowns. Whenever they receive shutdown requests from states, the companies should insist on human rights due diligence before such measures are taken, mitigate the measures’ impact on human rights, and ensure transparency.

Uganda’s Changes On Computer Misuse Law Spark Fears It Will Be Used To Silence Dissidents

By News Writer |

Uganda’s controversial Computer Misuse (Amendment) Bill 2022, which rights groups say will likely be used to silence dissenting voices online, has come into force after the country’s President Yoweri Kaguta Museveni signed it into law yesterday.

The country’s legislators had passed amendments to the 2011 Computer Misuse Act in early September, limiting writing or sharing of content on online platforms, and restricting the distribution of children’s details without the consent of their parents or guardians.

The bill was brought before the house to “deter the misuse of online and social media platforms.” A document tabled before the house stated that the move was necessitated by reasoning that “enjoyment of the right to privacy is being affected by the abuse of online and social media platforms through the sharing of unsolicited, false, malicious, hateful and unwarranted information.”

The new law, which also seeks to curb the spread of hate speech online, prescribes several punitive measures, including barring offenders from holding public office for 10 years and imprisonment for individuals who “without authorization, accesses another person’s data or information, voice or video records and shares any information that relates to another person” online.

Rights groups and a section of online communities are worried the law might be abused by regimes, especially the current one, to limit free speech and punish persons that criticize the government. Some have plans to challenge it in court.

Fears expressed by various groups come in the wake of increasing crackdowns on individuals who openly critique the authoritarian regime of Museveni, Uganda’s longest-serving president, who also blocked social media in the run-up to last year’s general election.

Recently, a Ugandan TikToker, Teddy Nalubowa, was remanded in prison for recording and sharing a video that celebrated the death of a former security minister, who led the troops that killed 50 civilians protesting the arrest of opposition politician Robert Kyagulanyi Ssentamu (Bobi Wine) in 2020. Nalubowa, a member of Ssentamu’s National Unity Platform, was charged with offensive communication in contravention of the Computer Misuse Act 2011 amid public outcry over the harassment and intimidation of dissidents. Ssentamu, a Museveni critic and the country’s opposition leader, recently said the new amendment targets his ilk.

The Committee to Protect Journalists (CPJ) had earlier called on Museveni not to sign the bill into law, saying that it would add to the arsenal authorities could use to target critical commentators and punish media houses by criminalizing the work of journalists, especially those undertaking investigations.

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA) had also made recommendations, including the deletion of Clause 5, which bars people from sending unsolicited information online, saying that the clause could be abused and misused by the government.

“In the alternative, a clear definition and scope of the terms “unsolicited” and “solicited” should be provided,” it said.

It also called for the scrapping of punitive measures, and the deletion of clauses on personal information and data, which duplicated the country’s data protection law.

CIPESA said the law is also likely to infringe on individuals’ digital rights, including freedom of expression and access to information, adding that its provisions did not address the issues brought forth by emerging technologies, such as trolling and harassment, which the law sought to address in the first place.

This article was first published by the Ghana Business on Oct 15, 2022.

New Law in Uganda Imposes Restrictions on Use of Internet

By Rodney Muhumuza |

Ugandan President Yoweri Museveni has signed into law legislation criminalizing some internet activity despite concerns the law could be used to silence legitimate criticism.

The bill, passed by the legislature in September, was brought by a lawmaker who said it was necessary to punish those who hide behind computers to hurt others. That lawmaker argued in his bill that the “enjoyment of the right to privacy is being affected by the abuse of online and social media platforms through the sharing of unsolicited, false, malicious, hateful and unwarranted information.”

The new legislation increases restrictions in a controversial 2011 law on the misuse of a computer. Museveni signed the bill on Thursday, according to a presidential spokesman’s statement.

The legislation proposes jail terms of up to 10 years in some cases, including for offenses related to the transmission of information about a person without their consent as well as the sharing or intercepting of information without authorization.

Opponents of the law say it will stifle freedom of expression in a country where many of Museveni’s opponents, for years unable to stage street protests, often raise their concerns on Twitter and other online sites.

Others say it will kill investigative journalism.

The law is “a blow to online civil liberties in Uganda,” according to an analysis by a watchdog group known as Collaboration on International ICT Policy for East and Southern Africa, or CIPESA.

The Committee to Protect Journalists is among groups that urged Museveni to veto the bill, noting its potential to undermine press freedom.

“Ugandan legislators have taken the wrong turn in attempting to make an already problematic law even worse. If this bill becomes law, it will only add to the arsenal that authorities use to target critical commentators and punish independent media,” the group’s Muthoki Mumo said in a statement after lawmakers passed the bill.

Museveni, 78, has held power in this East African country since 1986 and won his current term last year.

Although Museveni is popular among some Ugandans who praise him for restoring relative peace and economic stability, many of his opponents often describe his rule as authoritarian.

This article was first published by the Washington Post on Oct 13, 2022.

Opinion | What Companies and Government Bodies Aren’t Telling You About AI Profiling

By Tara Davis & Murray Hunter |

Artificial intelligence has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as the character Data from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order that content appears on your social media feed.

Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.

But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, and yet mostly shrouded in secrecy.

As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa’s financial services, retail and e-commerce sectors, to ask for details of how they used AI to profile their customers. (In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client.) We also approached two government bodies, Home Affairs and the Department of Health, with the same query.

Why AI transparency matters for privacy
The research was prompted by what we don’t see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa’s data protection law – the Protection of Personal Information Act 4 of 2013. The law provides a right not to be subject to a decision which is based solely on the automated processing of your information intended to profile you.

The exact wording of the relevant section is a bit of a mouthful and couched in caveats. But the overall purpose of the right is an important one: it ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made solely by automated means, without human intervention.

But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.
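To make the shape of this protection concrete, here is a small sketch of how a system could honour a POPIA-style right by refusing to finalise purely automated profiling decisions until a human has reviewed them. This is a hypothetical illustration of the principle, not an implementation prescribed by the Act; the class and field names are our own.

```python
# Illustrative sketch: a consequential decision (e.g. a loan application)
# is only finalised if a human was involved; solely automated outcomes
# are escalated for review instead of taking effect.
from dataclasses import dataclass


@dataclass
class Decision:
    applicant_id: str
    outcome: str          # e.g. "approve" or "decline"
    automated_only: bool  # True if no human took part in the decision


def finalise(decision: Decision) -> dict:
    """Escalate solely automated decisions; pass human-reviewed ones through."""
    if decision.automated_only:
        return {"status": "pending_human_review",
                "applicant": decision.applicant_id}
    return {"status": "final", "outcome": decision.outcome}
```

Note that even a guard like this depends on the data subject being told that automated profiling occurred, which is exactly the notification gap described above.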

What we found
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how they used AI for data profiling if at all. In policy jargon, this sort of query is called a “data subject request”.

The results shed little light on how companies actually use AI. The responses – where they responded – were often maddeningly vague, or even a bit confused. Rather, the exercise showed just how much work needs to be done to enact meaningful transparency and accountability in the space of AI and data profiling.

Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in basic implementation of the law. Among those companies that are widely understood to use AI for data profiling – notably, those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.

Yet, many other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to the fraud department. At another bank, our request was briefly directed to someone in their internal HR department. (Who was, it should be said, as surprised by this as we were.) In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.

Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, lack of AI transparency was actually given its own little section.

Led by Advocate Thembeka Ngcukaitobi, the inquiry’s interim findings concluded that a lack of algorithmic transparency made it impossible to say if AI played any role in the racial bias that it found. Two of the three schemes under investigation couldn’t actually explain how their own algorithms worked, as they simply rented software from an international provider.

The AI sat in a “black box” that even the insurers couldn’t open. The inquiry’s interim report noted: “In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems.”

What’s to be done
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling. We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality – under the watchful eye of our data protection watchdog, the Information Regulator, and other regulatory bodies.

The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.

Yet, these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools which will ensure transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)

Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information – whether by humans, robots, or – as is usually the case – a combination of the two. As our research shows, this is inordinately difficult for ordinary people to do.

ALT Advisory is an Africa Digital Rights Fund (ADRF) grantee.