By CIPESA Writer
Artificial intelligence (AI)-related legal and national policy frameworks were the focus for Ugandan editors at an August 20, 2025, workshop organised by the Uganda Editors Guild and the World Association of News Publishers (WAN-IFRA). The training deliberated on the responsible adoption of AI tools by newsrooms and saw participants brainstorm how to effectively navigate the complexities that AI poses to the media industry and the practice of journalism.
Jane Godia, WAN-IFRA Women in News (WIN) Deputy Executive, Operations, emphasised that artificial intelligence is evolving rapidly and that media houses can no longer afford to ignore the shift. “What we’re really focused on is how to embrace AI in ways that strengthen the core of journalism, and not to replace it, but to enhance its usage while safeguarding credibility and editorial independence,” she said.
Godia urged newsrooms to develop clear, practical, well-defined AI policies that guide ethical and responsible reporting in this new era and harness the power of AI without compromising journalistic ethics.
At the workshop, presentations by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) focused on the state of artificial intelligence regulation and noted with concern the lack of AI-specific legislation in the country. However, there are several laws and policies from which provisions touching on the application and use of AI can be drawn. CIPESA highlighted the existing legal frameworks enabling AI deployment, current regulatory gaps, and the consequent implications of AI for newsrooms.
The key legal instruments highlighted include the Uganda Data Protection and Privacy Act enacted in 2019, which provides for the protection and regulation of personal data, and whose data protection rights and principles apply to the processing of data by AI systems. Section 27 of the Act specifically provides for rights related to automated decision-making, which brings AI applications directly within its scope.
The other instruments discussed include the Copyright and Neighbouring Rights Act, which protects the rights of proprietors and authors against unfair use of their works, and the National Payment Systems Act, which regulates payment systems and grants the Central Bank regulatory oversight over payments. Furthermore, the National Information Technology Authority, Uganda (NITA-U) Act establishes the National Information Technology Authority with a mandate to enhance public service delivery and champion the transformation of Ugandans’ livelihoods using information and communication technologies (ICT). While these laws do not specifically mention AI, some of their provisions can be utilised to regulate AI-related practices and processes.
Other laws discussed include the Uganda Communications Act enacted in 2013, which establishes the Uganda Communications Commission as the communications sector regulator that, among other functions, oversees the deployment of AI in the sector. Meanwhile, the Regulation of Interception of Communications Act (RICA) enacted in 2010 requires telecommunication service providers, under section 8(1)(b), to aid the interception of communications by installing hardware and software, which are increasingly AI-driven. Also relevant are the Anti-Terrorism Act, which provides for the interception of the communications of persons suspected of engaging in acts of terrorism, and the Computer Misuse Act, which provides for several offences committed using computers.
In addition to the laws, various AI-linked policy frameworks were also presented. These include Vision 2040, which is intended to drive Uganda to middle-income status by 2040; the National Fourth Industrial Revolution (4IR) Strategy (2020), which aims to position Uganda as a continental hub for 4IR technologies by 2040; and Uganda’s third National Development Plan (NDP III), a comprehensive framework to guide the country’s development. These strategic frameworks cover some areas of machine learning and AI integration by virtue of being technology-oriented.
Making reference to the Artificial Intelligence in Eastern Africa Newsrooms report, Edrine Wanyama, Programmes Manager-Legal at CIPESA, highlighted the advantages of AI in newsrooms, including increased productivity and efficiency in task performance, a reduced daily workload, faster reporting of news stories, quicker fact-checks, and the detection of disinformation and misinformation patterns.
On the flip side, the workshop also highlighted the current risks associated with the use of AI in newsrooms, including facilitating disinformation and misinformation, the trade-off of accuracy for speed by journalists and editors, over-reliance on AI tools at the cost of individual creativity, the erosion of journalistic ethics and integrity, and the threat of job losses that looms over journalists and editors.
Dr. Peter G. Mwesige, Chief of Party at CIPESA, urged editors to think beyond what AI can do for journalists and newsrooms, and treat AI itself as a beat to be covered critically. Citing trends from other markets, he observed that media coverage is often incomplete, swinging between hype and alarm, and called for explanatory, evidence-based reporting on the promise and limits of AI. He noted that one of AI’s most compelling capabilities is processing large data sets, such as election results, rapidly and at scale.
On the ethical front, Dr. Mwesige emphasised the need for transparency, saying journalists should disclose material use of AI in significant editorial tasks. He urged newsrooms to adopt clear internal policies or integrate AI guidance into existing editorial guidelines.
Dr. Mwesige concluded that while AI can assist with brainstorming story ideas, editing, and transcription, among others, “journalists must still put in the hard work.”
Following the deliberations, CIPESA presented recommendations on the use of AI in the newsroom and the protection of practitioners, if AI is to be used meaningfully and ethically without compromising integrity and professionalism:
- Use AI ethically by, among other measures, complying with accepted standards such as the Paris Charter on AI and Journalism, respecting copyright, and acknowledging the sources of works.
- In collaboration with other newsrooms and media houses, develop best practices, including policies, to guide the integration and application of AI in their work.
- Media houses should collaboratively invest resources in training journalists in responsible and ethical use of AI.
- Employ and deploy fact-checkers to deal with information disorders such as misinformation, disinformation, and deepfakes.
- Respect other people’s rights, such as intellectual property rights and the right to privacy, while using AI.
- Exercise extra caution when using AI to generate content, to avoid unethical usage that undermines journalism’s ethical standards.
- Prioritise human oversight of the application and use of AI to ensure that excessive intrusion by AI is checked and a human dimension is added to generated content.

