Why It’s Not Yet Uhuru for Artificial Intelligence in Africa and What To Do About It

By CIPESA Writer |

At the first Global Summit on Artificial Intelligence (AI) in Africa, held in Kigali, Rwanda earlier this month, it was evident that African countries lag far behind the rest of the world in developing and utilising AI. Also clear was that if the continent makes the right investments today, it stands to reap considerable benefits.

The challenges Africa faces were well-articulated at the summit, which brought together 2,000 participants from 97 countries, as were the solutions. Some important steps were taken, such as the issuance of the Africa Declaration on Artificial Intelligence, which aims to mobilise USD 60 billion for the prospective Africa AI Fund; the unveiling of the Gates Foundation's investment in AI Scaling Labs in four African countries; the announcement of the Cassava AI Factory, said to be Africa's first AI-enabled data centre; and the endorsement of the Africa Artificial Intelligence Council.

Just Where Does Africa Lie?

Crystal Rugege, Managing Director of the Rwanda Centre for the Fourth Industrial Revolution, which hosted the summit, noted that AI could unlock USD 2.9 trillion for Africa’s economy by 2030, thereby lifting 11 million Africans out of poverty and creating 500,000 jobs annually. However, Rugege added, “this will not happen by chance. It requires bold, decisive leadership and collective action.”

Some independent researchers and scholars feel most African countries are not doing enough to stimulate AI innovation and uptake. Indeed, speakers at an independent webinar held on the eve of the Kigali summit criticised the “ambitious prediction” of the USD 2.9 trillion AI dividend for Africa, citing the lack of inclusive AI policy-making, and African countries’ failure to invest in a workforce that is fit for the AI age.

A handful of countries (including Ethiopia, Ghana, Nigeria, Senegal, Kenya, Mauritius, Egypt, Tunisia, Algeria, Rwanda and Zambia) have developed AI Strategies and at least eight others are in the process of doing so, but there is minimal government-funded AI innovation and deployment. Africa receives only a pittance of the global AI funding.

Key Hindrances

The summit was not blind to the key hindrances to AI development and deployment. Africa's limited computational power (or compute), including a shortage of locally-based data centres, was repeatedly cited. Africa holds less than 1% of global data centre capacity, which is insufficient to train and run AI models. And while the continent has the world's youngest population, that population remains largely low-skilled; only 5% of the region's AI talent has access to the computational power and resources needed to carry out complex tasks. Many countries also lack the energy supplies required to power sustained AI development, while Africa's 60% mobile internet usage gap is slowing AI adoption and economic growth.

Accordingly, the summit – and the declaration it issued – focussed on how to address these bottlenecks. Recommendations include focusing education systems on Fourth Industrial Revolution skills, including building for and adapting to AI; developing AI infrastructure (innovation labs, data centres, sustainable energy); scaling African AI businesses, including by enabling them to access affordable funding; and enhancing AI research.

Mahmoud Ali Youssouf, Chairperson of the African Union (AU) Commission, stressed the need to create a harmonised regulatory environment to enable cross-border AI trade and investment; and to leverage Africa’s rich and diverse datasets to fuel AI innovation and power global AI models.

Important Steps in Kigali
  • The Africa Declaration on Artificial Intelligence builds on the foundational strategies, policies and commitments of the AU (such as its AI Strategy and the Data Policy Framework) and the United Nations. It seeks to develop a comprehensive talent pipeline through AI education and research; establish frameworks for open, secure and inclusive data governance; deploy affordable and sustainable computing infrastructure accessible to researchers, innovators and entrepreneurs across Africa; and create supportive ecosystems with regional AI incubation hubs driving innovation and scaling African AI enterprises domestically and globally.

    The Declaration envisages the establishment of a USD 60 billion Africa AI Fund, leveraging public, private, and philanthropic capital. The Fund would invest in developing and expanding AI infrastructure, scaling African AI enterprises, building a robust pipeline of AI practitioners, and strengthening domestic AI research capabilities, while upholding principles of equity and inclusion.
  • The AI Scaling Labs: The Gates Foundation and Rwanda’s Ministry of ICT and Innovation signed a Memorandum of Understanding (MoU) to establish the Rwanda AI Scaling Hub, in which the foundation will invest USD 7.5 million. It will initially focus on healthcare, agriculture, and education. Over the next 12 months, the foundation plans to establish similar centres in Kenya, Nigeria, and Senegal “to break down barriers to scale and help move promising AI innovations to impact.”
  • The Cassava AI Factory: Cassava Technologies announced the Cassava AI Factory, reportedly Africa’s first AI-enabled data centre, powered by NVIDIA accelerated computing. “Building digital infrastructure for the AI economy is a priority if Africa is to take full advantage of the fourth industrial revolution,” said Cassava Founder and Chairman, Strive Masiyiwa. “Our AI Factory provides the infrastructure for this innovation to scale, empowering African businesses, startups and researchers with access to cutting-edge AI infrastructure to turn their bold ideas into real-world breakthroughs – and now, they don’t have to look beyond Africa to get it.”

    By keeping AI infrastructure and data within Africa, Cassava Technologies says it is strengthening the continent’s digital independence, driving local innovation and supporting African AI talent and businesses. Its first deployment in South Africa (in June 2025) will be followed by expansion to Egypt, Kenya, Morocco, and Nigeria.
  • The Africa Artificial Intelligence Council: The Smart Africa Alliance Steering Committee Meeting, co-chaired by the International Telecommunication Union (ITU) Secretary-General and the AU Commissioner for Infrastructure and Energy, endorsed the creation of the Council to drive continental coordination on critical AI pillars, including AI computing infrastructure, data sets and data infrastructure development, skills development, market use cases, and governance and policy.
  • Use Cases and Sandboxes: Documentation of tangible use cases and sandboxes that support innovation and regulation is vital in AI development on the continent. On the sidelines of the summit, CIPESA contributed to two co-creation initiatives. The Datasphere Initiative held a Co-creation Lab on the role of AI sandboxes in supporting regulatory innovation and ethical AI governance in Africa. Meanwhile, Qhala hosted a Digital Trade and Regulatory Sandbox session focused on digital health, smartphones, and cross-border trade. Separately, the Rwanda Health Intelligence Centre was unveiled, which enables AI-driven emergency medical services delivery and real-time collection of data on healthcare outcomes in hospitals, thus strengthening evidence-based decision-making.

Ultimately, the AI promise remains high but for it to be realised, the ideas from the Kigali summit must be translated into actions. Countries must stump up funds for research and scaling innovations, support their citizens in acquiring AI-relevant skills, expand internet access and affordability, provide supportive infrastructure, and incentivise foreign investment and technology transfer. Moreover, they should ensure that national laws and regulations promote fair, safe, secure, inclusive and responsible AI, and conform to continental aspirations such as the African Union AI Strategy.

Policy brief: Human Rights Implications of Health Care Digitalisation in Kenya

News Update |

This policy brief draws on the key findings of a human rights impact assessment of digital health services to make concrete recommendations for a human rights-based digitalisation of health care services in Kenya.

Drawing on a human rights impact assessment conducted in October-November 2024, the brief shows how the transition from the National Health Insurance Fund (NHIF) to the Social Health Insurance Fund (SHIF) has faced significant challenges that impact the right to health, particularly for vulnerable and marginalised groups. It also addresses broader concerns about the role of digitalisation in health care management and its implications for service delivery.

Notably, Kenya’s journey towards a rights-based digital health system requires a coordinated approach that addresses infrastructure, regulatory enforcement, gender equality, and resource allocation and management. By adopting the recommendations found in this brief, Kenya can create a digital health environment that not only advances healthcare service delivery but also protects, promotes and respects the rights of all its citizens, particularly those most at risk of exclusion.

Recommendations on the NHIF-SHIF Transition

1. Enhance digital infrastructure: Fully operationalize the SHA platform and integrate it with existing systems like Kenya Health Information System and Kenya Electronic Medical Records.

2. Conduct public awareness campaigns: Educate citizens on SHA benefits and processes to dispel misinformation and encourage enrolment.

3. Expedite empanelment of facilities: Increase the accreditation of healthcare providers to ensure uninterrupted access to services.

4. Strengthen National-County coordination: Align roles, resources, and responsibilities to streamline service delivery under the devolved healthcare framework as stipulated under the Fourth Schedule of the Constitution.

5. Review contribution models: Adjust means-testing mechanisms to ensure affordability, especially for vulnerable and marginalized populations.

6. Prioritize capacity building: Train healthcare workers and Community Health Promoters to effectively navigate the transition and support beneficiaries.

7. Incorporate stakeholder feedback: Deliberately establish clear communication channels and include healthcare workers, vulnerable and marginalized groups in the design and implementation of SHA systems to promote inclusivity.

8. Clarify referral pathways: Define roles for various healthcare levels under the Primary Health Care Act to simplify patient navigation.

9. Ensure accountability and transparency: Regularly audit the transition to address and mitigate inefficiencies and restore public trust.

Read the full policy brief here.

Research Partners

The research into the human rights impacts of digital health services in Kenya was conducted in partnership between the Kenya National Commission on Human Rights – Kenya’s National Human Rights Institution, CIPESA – The Collaboration on International ICT Policy for East and Southern Africa which works to promote effective and inclusive ICT policy, and the Danish Institute for Human Rights – Denmark’s national human rights institution which works internationally to address the human rights implications of technology use.

Call for Applications: DPI Journalism Fellowship for Eastern Africa

Call for Applications |

Date of Publication: 1 April 2025.

Application Deadline: 21 April 2025 – 18.00 East African Time.

The Collaboration on International ICT Policy for East and Southern Africa (CIPESA), in partnership with Co-Develop, invites applications for the Digital Public Infrastructure (DPI) Journalism Fellowship for Eastern Africa.

This regional fellowship aims to build a new generation of journalists with the knowledge and skills to investigate and report on Digital Public Infrastructure and Digital Public Goods (DPGs). The fellowship is inspired by a similar Co-Develop-funded initiative implemented by the Media Foundation for West Africa (MFWA), which supported fellows to produce over 100 impactful stories that spurred public debate and influenced policy.

Through rigorous training, mentorship, and financial support, selected journalists will explore the promises, challenges, and lived experiences related to DPI across Eastern Africa.

What is Digital Public Infrastructure?

DPI refers to foundational digital systems and services that enable secure, inclusive, and efficient delivery of both public and private services. These include, among others:

  • Digital ID systems.
  • Instant and interoperable payment platforms.
  • Open data platforms.
  • Data exchange frameworks.
  • e-Government systems.

Well-designed DPI holds transformative potential – but without public understanding and critical engagement, it can also deepen exclusion, enable surveillance, and limit adoption.

Fellowship Details

Duration: 6 months (June–December 2025).

Structure:

  • June 2025: Virtual training workshops and editorial guidance.
  • July 2025: Story development and mentoring.
  • August 2025: In-person workshop in Nairobi, Kenya, peer learning, and advanced training.

Outputs: Each Fellow is expected to produce at least three high-quality, published stories on DPI or DPGs during the fellowship.

Benefits

  • A grant of up to USD 1,500 to support story production.
  • Access to reporting grants post-fellowship.
  • Mentorship from senior journalists and digital policy experts.
  • Certificate of Completion.
  • Travel, accommodation, and incidental expenses for the in-person workshop.

Eligibility Criteria

The fellowship is open to journalists based in the following Eastern Africa countries:

Burundi, Democratic Republic of the Congo, Ethiopia, Kenya, Rwanda, Somalia, South Sudan, Tanzania, and Uganda.

Applicants must:

  • Be a practicing journalist with at least three years of professional experience.
  • Demonstrate strong interest or experience in reporting on digital technologies, governance, human rights, or development.
  • Be proficient in English or French. 
  • Be available to fully participate in the fellowship and in post-fellowship activities.
  • Be affiliated with a credible media outlet willing to support their reporting.

Selection Process

The selection will be based on merit and demonstrated interest in DPI-related reporting. The process includes:

  • Initial application screening.
  • Interviews with shortlisted candidates.
  • Final selection by a panel of media and policy experts.

Women and early-career journalists are strongly encouraged to apply.

How to Apply

Applicants should complete this form by 21 April 2025.

For more information, please visit: https://cipesa.org or contact [email protected]

Protecting Global Democracy in the Digital Age: Insights from PAI’s Community of Practice

By Christian Cardona |

2024 was a historic year for global elections, with approximately four billion eligible voters casting a vote in 72 countries. It was also a historic year for AI-generated content, with a significant presence in elections all around the world. The use of synthetic media, or AI-generated media (visual, auditory, or multimodal content that has been generated or modified via artificial intelligence), can affect elections by impacting voting procedures and candidate narratives, and enabling the spread of harmful content. Widespread access to improved AI applications has increased the quality and quantity of the synthetic content being distributed, accelerating harm and distrust.

As we look toward global elections in 2025 and beyond, it is vital that we recognize that one of the primary harms of generative AI in 2024 elections has been the creation of deepnudes of women candidates. Not only is this type of content harmful to the individuals targeted, but it also likely creates a chilling effect on women's political participation in future elections. The AI and Elections Community of Practice (COP) has provided us with key insights, such as these, and actionable data that can help inform policymakers and platforms as they seek to safeguard future elections in the AI age.

To understand how various stakeholders and actors anticipated and addressed the use of generative AI during elections and are responding to potential risks, the COP provided an avenue for Partnership on AI (PAI) stakeholders to present their ongoing efforts, receive feedback from peers, and discuss difficult questions and tradeoffs when it comes to deploying this technology. In the last three meetings of the eight-part series, PAI was joined by the Center for Democracy & Technology (CDT), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), and Digital Action to discuss AI’s use in election information and AI regulations in the West and beyond.

Investigating the Spread of Election Information with Center for Democracy & Technology (CDT)

The Center for Democracy & Technology has worked for thirty years to improve civil rights and civil liberties in the digital age, including through almost a decade of research and policy work on trust, security, and accessibility in American elections. In the sixth meeting of the series, CDT provided an inside look into two recent research reports published on the confluence of democracy, AI, and elections.

The first report investigates how chatbots from companies such as OpenAI, Anthropic, MistralAI, and Meta handle responses to election-based queries, specifically for voters with disabilities. The report found that 61% of the chatbot responses tested were insufficient in at least one of the four ways assessed by the study (defined as containing one or more of the following: incorrect information, omission of key information, structural issues, or evasion), and 41% of responses contained factual errors, such as incorrect voter registration deadlines. In one case, a chatbot cited a non-existent law. A quarter of the responses were likely to prevent or dissuade voters with disabilities from voting, raising concerns about the reliability of chatbots in providing important election information.

The second report explored political advertising across social media platforms and how policy changes at seven major tech companies over the last four years have impacted US elections. As organizations seek more opportunities to leverage generative AI tools in an election context, whether for chatbots or political ads, they must continue investing in research on user safety, implement evaluation thresholds for deployment, and ensure full transparency on product limitations once deployed.

AI Regulations and Trends in African Democracy with CIPESA

A “think and do tank,” the Collaboration on International ICT Policy for East and Southern Africa focuses on technology policy and practice as it intersects with society, human rights, and livelihoods. In the seventh meeting of the series, CIPESA provided an overview of its work on AI regulations and trends in Africa, touching on topics such as national and regional AI strategies, elections, and harmful content.

As the use of AI continues to grow in Africa, most AI regulation across the continent focuses on the ethical use of AI and human rights impacts, while lacking specific guidance on the impact of AI on elections. Case studies show that AI is undermining electoral integrity on the continent, distorting public perception given the limited skills of many to discern and fact-check misleading content. A June 2024 report by Clemson University’s Media Forensics Hub found that the Rwandan government used large language models (LLMs) to generate pro-government propaganda during elections in early 2024. Over 650,000 messages attacking government critics, designed to look like authentic support for the government, were sent from 464 accounts.

The 2024 general elections in South Africa saw similar misuse of AI, with AI-generated content targeting politicians and leveraging racial and xenophobic undertones to sway voter sentiment. Examples include a deepfake depicting Donald Trump supporting the uMkhonto weSizwe (MK) party and a manipulated 2009 video of rapper Eminem supporting the Economic Freedom Fighters Party (EFF). The discussion emphasized the need to maintain a focus on AI as it advances in the region with particular attention given to mitigating the challenges AI poses in electoral contexts.

AI tools are lowering the barrier to entry for those seeking to sway elections, whether individuals, political parties, or ruling governments. As the use of AI tools grows in Africa, countries must take steps to implement stronger regulation around the use of AI and elections (without stifling expression) and ensure country-specific efforts are part of a broader regional strategy.

Catalyzing Global AI Change for Democracy with Digital Action

Digital Action is a nonprofit organization that mobilizes civil society organizations, activists, and funders across the world to call out digital threats and take joint action. In the eighth and final meeting in the PAI AI and Elections series, Digital Action shared an overview of the organization’s Year of Democracy campaign. The discussions centered on protecting elections and citizens’ rights and freedoms across the world, as well as exploring how social media content has had an impact on elections.

The main focus of Digital Action's work in 2024 was supporting the Global Coalition For Tech Justice, which called on Big Tech companies to fully and equitably resource efforts to protect 2024 elections through a set of specific, measurable demands. While the media expected to see high-profile examples of generative AI swaying election results around the world, what emerged instead were corrosive effects on political campaigning, harms to individual candidates and communities, and likely broader harms to trust and future political participation.

Many elections around the world, including in Pakistan, Indonesia, India, South Africa and Brazil, were impacted by AI-generated content shared on social media, with minorities and female political candidates being particularly vilified. In Brazil, deepnudes depicting two female politicians appeared on a social media platform and adult content websites in the leadup to the 2024 municipal elections. While one of the politicians took legal action, the slow pace of court processes and the lack of proactive steps by social media platforms prevented a timely remedy.

To mitigate future harms, Digital Action called for each Big Tech company to establish and publish fully and equitably resourced Action Plans (globally and for each country holding elections). By doing so, tech companies can better protect groups, such as female politicians, that are often at risk during election periods.

What’s To Come

PAI’s AI and Elections COP series has concluded after eight convenings with presentations from industry, media, and civil society. Over the course of the year, presenters provided attendees with different perspectives and real-world examples on how generative AI has impacted global elections, as well as how platforms are working to combat harm from synthetic content.

Some of the key takeaways from the series include:

  1. Down-ballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections. While there were some attempts to use generative AI to influence national elections (you can read more about this in PAI’s case study), down-ballot candidates were often more susceptible to harm than nationally recognized ones. Local candidates with fewer resources were often unable to effectively combat harmful content. Deepfakes were also shown to deter the participation of female politicians in some general elections.
  2. Platforms should dedicate more resources to localizing generative AI policy enforcement. Platforms are attempting to protect users from harmful synthetic content by being transparent about the use of generative AI in election ads, providing resources to elected officials to tackle election-related security challenges, and adopting many of the disclosure mechanisms recommended in PAI’s Synthetic Media Framework. However, they have fallen short in localizing enforcement policies with a lack of language support and in-country collaboration with local governments, civil society organizations, and community organizations that represent minority and marginalized groups such as persons with disabilities and women. As a result, generative AI has been used to cause real-world harm before being addressed.
  3. Globally, countries need to adopt more coherent regional strategies to regulate the use of generative AI in elections, balancing free expression and safety. In the U.S., a lack of federal legislation on the use of generative AI in elections has led to various individual efforts from states and industry organizations. As a result, there is a fractured approach to keeping users safe without a cohesive overall strategy. In Africa, attempts by countries to regulate AI are very disparate. Some countries such as Rwanda, Kenya, and Senegal have adopted AI strategies that emphasize infrastructure and economic development but fail to address ways to mitigate risks that generative AI presents in free and fair elections. While governments around the world have shown some initiative to catch up, they must work with organizations, both at the industry and state level, to implement best practices and lessons learned. These government efforts cannot exist in a vacuum. Regulations must cohere and contribute to broader global governance efforts to regulate the use of generative AI in elections while ensuring safety and free speech protections.

While the AI and Elections Community of Practice has come to an end, we continue to push forward in our work to responsibly develop, create, and share synthetic media.

This article was initially published by Partnership on AI on March 11, 2025.

CIPESA-Run ADRF Awards USD 140,000 to Eleven Digital Democracy Non-Profits Amidst Funding Cuts

By Ashnah Kalemera |

With many funders shifting their priorities away from human rights, governance and livelihood issues, African Civil Society Organisations (CSOs), human rights defenders and activists have been severely impacted. As a result, critical programming on civic participation, tech accountability, digital rights and digital inclusion – which was scoring wins in the face of growing authoritarianism on the continent – has been crippled.

In response to this changing funding landscape, the Africa Digital Rights Fund (ADRF), managed by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), has awarded USD 140,000 in bridging funds to eleven non-profit organisations. The discretionary awards aim to bridge the gap in operations and programming faced by CIPESA's past and present partners and subgrantees. The funds bring the total awarded by CIPESA under the ADRF initiative since its launch in April 2019 to USD one million.

According to CIPESA’s Executive Director, Dr. Wairagala Wakabi, “anchor institutions such as CIPESA have lost funding and that means many crucial but smaller actors across the continent have equally been affected”. Nonetheless, CIPESA is committed to “defending digital democracy amidst the steady democratic regression we are witnessing, and the cruciality of funding organisations that are battling rising authoritarianism cannot be overemphasised,” said Wakabi.

The recipient organisations work on various digital democracy issues in 10 countries – Côte d’Ivoire, the Democratic Republic of Congo (DR Congo), Ethiopia, Kenya, Mozambique, Senegal, Somalia, South Sudan, Uganda and Zambia. They tackle catalytic issues in difficult contexts and have established track records. The selection of beneficiaries was guided by a survey on the impact of funding terminations by the United States (US) government.

Round Nine ADRF Beneficiaries:

  1. Action et Humanisme – based in Côte d’Ivoire, the organisation works to advance digital accessibility for persons with disabilities.
  2. Agora, an online activism initiative focused on social accountability in Uganda.
  3. Bloggers of Zambia, whose motto is “Keeping Online Spaces Open” and is pushing for progressive legislative reforms in Zambia.
  4. Digital Rights Frontlines (formerly DefyHateNow), which is at the frontline of countering hate speech and disinformation online in South Sudan.
  5. Digital Shelter, a Somali group working to advance the digital civic space.
  6. Forum de Organizacoes de Pessoas com Deficiencia – FAMOD, which works to promote the rights of persons with disabilities in Mozambique, including the right to information through web accessibility and inclusion through affordable access to technology.
  7. Inform Africa, a media integrity hub in Ethiopia.
  8. Jonction, a Senegalese digital rights advocacy organisation.
  9. Thraets, a tech research lab focused on election integrity and Artificial Intelligence (AI)-generated content.
  10. Rudi International, a Congolese digital rights advocacy and digital literacy organisation.
  11. Tanda Community Network – based in Kibera, Nairobi, Kenya, the community network champions work against Technology Facilitated Gender Based Violence (TFGBV) alongside efforts to bridge the digital divide.

The survey revealed that following the suspension and eventual termination of U.S. funding, many organisations had reduced the scope of their activities, scaled back staff salaries and benefits, and in a number of cases laid off staff. Over 90% of the organisations surveyed were uncertain about their ability to maintain operations beyond two months. Only one of the surveyed organisations said it would remain fully operational if it did not receive additional funding.

A staggering 92% of respondents had reduced their programming scope, and one in three respondent organisations reported that they had cut staff. For one recipient, over 60% of the team was “not able to continue working in any capacity going forward”. US funding accounted for between 20% and 60% of the surveyed organisations' annual budgets.

Even in the face of a grim funding future, civil society organisations that face harassment and operate in volatile political environments remain resilient. As the head of one of the grant beneficiary organisations stated: “Unfortunately, we do not have the luxury to cease activities”. The same unwavering commitment to continue operations was demonstrated by the DR Congo-based recipient whose digital literacy training centre was robbed during the January 2025 rebel attacks in Goma.

The ADRF provides financial support to organisations and networks to overcome barriers to accessing funding and to build a stronger movement of digital and human rights advocates in Africa. The Fund has also built the capacity of initiatives in advocacy, public communication, research and data-for-advocacy. Supported initiatives commend the ADRF as a unique funding initiative that has broken with the structures of traditional funders. See previous ADRF recipients here.

The discretionary round of the ADRF was supported by funding from the Skoll Foundation, the Wellspring Philanthropic Fund and the Ford Foundation. Past supporters of the ADRF include the Center for International Private Enterprise (CIPE), the Swedish International Development Cooperation Agency (Sida), the German Agency for International Cooperation (GIZ), the Omidyar Network, the Hewlett Foundation, the Open Society Foundations and the New Venture Fund (NVF).