Case Study 2: Huawei's Safe City Programs in Africa — AI Surveillance and Democratic Governance
The Safe City Promise
Huawei Technologies' Safe City solution is one of its most globally deployed products. The company's marketing materials describe an integrated "smart city" security platform that combines high-definition surveillance cameras, AI-powered facial recognition and behavior analysis, communications infrastructure, emergency response integration, and centralized command-and-control software. The stated purpose is crime prevention, traffic management, and urban safety — legitimate objectives that governments everywhere pursue.
In Africa, where urban crime rates in several cities are genuinely high, where government security capacity is stretched, and where international development discourse about smart city infrastructure has created demand for technology-enabled governance solutions, Huawei's Safe City pitch has found receptive governments. By 2023, Huawei reported Safe City deployments in more than 230 cities across more than 90 countries globally, with a significant concentration in Africa, Latin America, and Southeast Asia. In Africa alone, documented Safe City deployments span Zimbabwe, Uganda, Kenya, Ethiopia, Zambia, Ivory Coast, Angola, and several other countries. The scale of these deployments represents a significant penetration of AI-enabled surveillance infrastructure into African governance.
Understanding what these deployments actually consist of, how they have been used, and what accountability frameworks govern them requires moving beyond marketing materials to documented evidence — evidence that reveals a significant gap between the promise of safe, well-governed cities and the reality of surveillance capacity deployed without adequate democratic oversight.
The Zimbabwe Deployment
Zimbabwe's Huawei Safe City deployment is among the most documented and most troubling. The project began formally with a 2018 memorandum of understanding between Huawei and the Postal and Telecommunications Regulatory Authority of Zimbabwe, with Chinese state financing through a loan from Exim Bank of China. The deployment included approximately 10,000 surveillance cameras across Harare, the capital, equipped with facial recognition capability.
Zimbabwe's political context makes this surveillance infrastructure particularly concerning. The country has been ruled under authoritarian conditions since independence — first under Robert Mugabe until 2017, then under Emmerson Mnangagwa, who came to power through a military-assisted transition. The government has a documented history of using state institutions to monitor and suppress political opposition, civil society organizations, journalists, and labor movement leaders. Elections have been repeatedly marred by violence and intimidation. The consolidation of mass surveillance infrastructure in this political environment creates obvious and serious risks for political freedom.
Documented evidence of surveillance system use against political opposition in Zimbabwe has emerged through reporting by Human Rights Watch, Amnesty International, and investigative journalists. Security services have reportedly used camera and facial recognition infrastructure to identify and track individuals attending political opposition events. Journalists have reported cases of individuals being arrested following surveillance identification in ways that suggested systematic monitoring of political gatherings rather than targeted criminal investigation.
Huawei's position is that it sells technology, not governance, and that the use of its technology by government customers is the responsibility of those governments. This position — commercially convenient but ethically inadequate — fails to engage with the foreseeable consequences of deploying advanced surveillance technology in authoritarian governance contexts. A company that sells facial recognition surveillance infrastructure to a government with a documented history of using surveillance to repress political opposition cannot credibly claim to be surprised when that infrastructure is used for repression.
The Uganda Deployment
Uganda's case provides additional documentation of the political application of Safe City surveillance. President Yoweri Museveni has ruled Uganda since 1986 in what international observers have characterized as an increasingly authoritarian manner. The government's treatment of opposition politician Robert Kyagulanyi, known as Bobi Wine, provides specific documentation of surveillance technology's political application.
Bobi Wine, a popular musician who became a politician and opposition presidential candidate in the 2021 elections, has documented multiple instances of what he and his supporters describe as electronic surveillance and physical monitoring by Ugandan security services. His campaign events were monitored, his communications were reportedly intercepted, and surveillance footage was reportedly used to identify and detain supporters who attended his rallies. The Ugandan government's deployment of Huawei surveillance infrastructure — including cameras with facial recognition capability in Kampala — created the technical capacity for systematic surveillance at a scale that would otherwise have been impossible.
International observer missions monitoring Uganda's 2021 elections documented significant irregularities and restrictions on political activity, including restrictions on opposition campaigning that the government could enforce more effectively because of its surveillance capacity. The African Union election observer mission expressed concern about pre-election intimidation and violence. The surveillance infrastructure did not cause Uganda's democratic governance failures, but it gave those failures sharper teeth.
The Pattern Across Africa
Zimbabwe and Uganda are not isolated cases. Researchers at Freedom House, the Carnegie Endowment for International Peace, and the Oxford Internet Institute have documented patterns of AI surveillance export to governments with poor democratic governance records across Africa and globally.
Freedom House's research on "Freedom on the Net" has documented the global spread of Chinese surveillance technology and its correlation with democratic backsliding. Their analysis finds that governments that receive Chinese AI surveillance technology are statistically more likely to restrict internet freedom and increase surveillance of political opposition in subsequent years — a correlation consistent with (though not definitively proving) a causal relationship between technology access and political repression.
The Carnegie Endowment's "AI Global Surveillance Index" documented 75 countries using AI surveillance technology, with Chinese companies as the primary suppliers to governments in Africa, Southeast Asia, and the Middle East. Their analysis found that 51% of documented AI surveillance deployments were in countries rated as "partly free" or "not free" by Freedom House, suggesting that the technology is disproportionately deployed in contexts where it is most likely to be used for political repression.
The pattern extends beyond African contexts. In Ecuador, a Huawei Safe City system called ECU-911 has been documented as providing surveillance capacity used against political protesters. In Pakistan, Chinese-supplied surveillance infrastructure has been used in politically sensitive ways, both in regions adjacent to Xinjiang and in monitoring political opposition. In Serbia, a Huawei Safe City deployment in Belgrade has been the subject of civil society concern about surveillance of Roma communities and political dissidents.
The Technology Partnership vs. Technology Imposition Question
The framing of Huawei's African Safe City deployments as "technology partnerships" obscures an important asymmetry. Genuine technology partnership implies: shared governance of the technology and the data it generates; mutual benefit from the arrangement; respect for the host country's legal framework and human rights commitments; and accountability mechanisms if the technology is misused.
The actual structure of most Safe City deployments involves: Chinese state financing (often through Exim Bank of China loans that create financial dependence); Huawei's retained involvement in system maintenance and data management (creating ongoing access to surveillance data); contractual terms that are typically not publicly disclosed (preventing civil society or parliamentary oversight); and no meaningful accountability mechanism when the technology is used against human rights defenders.
The financing structure is particularly significant. Many African governments that have deployed Huawei Safe City systems have done so with Chinese state financing, creating debt obligations to Chinese state institutions. This creates a governance dynamic in which the borrowing government has financial incentives to maintain favorable relationships with the lending state, potentially including constraints on governance decisions about the surveillance infrastructure itself.
The data governance question is equally concerning. AI surveillance systems generate enormous quantities of data — imagery, movement patterns, behavioral analytics, identified individuals. In most African Safe City deployments, the governance of this data — who has access to it, how long it is retained, how it can be used, and whether it can be shared with Chinese state institutions — is not specified in publicly available documentation. Given the Chinese government's demonstrated interest in surveillance data on populations globally, and the provisions in Chinese law that potentially require Chinese companies to cooperate with Chinese intelligence services, the data governance question is not academic.
What Genuine Technology Partnership Would Require
The contrast between current Safe City deployments and what genuine technology partnership would look like is illuminating for understanding what responsible AI deployment in the Global South requires.
Genuine technology partnership in AI surveillance would require, at minimum: transparent documentation of what the system does and how it operates, publicly accessible to citizens and civil society; clear legal frameworks governing the system's use, with specific prohibitions on use against political opposition, journalists, and civil society; independent oversight by institutions with genuine authority to investigate and sanction misuse; data governance agreements specifying who has access to surveillance data, for how long, and for what purposes; and accountability mechanisms — including the ability to switch off or modify the system — that are under the governance of host-country institutions rather than supplier companies.
None of these elements is present in most documented Safe City deployments in Africa. The absence is not accidental — it reflects the governance interests of the deploying governments (surveillance capacity without accountability serves repressive purposes better than surveillance capacity with accountability) and the commercial interests of the supplier (comprehensive, dependency-creating deployments are more profitable than modular, accountable ones).
The implication for AI ethics is clear. Organizations involved in developing, supplying, financing, or supporting AI surveillance technology bear ethical responsibility for foreseeable uses of that technology — including use by governments with documented records of political repression. "We sell technology, not governance" is not an adequate ethical framework for companies whose technology is foreseeably used to identify and detain political dissidents.
The Broader Accountability Question
The Huawei Safe City controversy raises a governance question that extends beyond Huawei and beyond Africa: who is accountable when AI surveillance technology enables human rights violations?
The host government bears primary accountability, as the sovereign entity that decides to deploy the technology and direct its use. But sovereign accountability is limited by the fact that it is exercised by the same government engaging in the repressive behavior — governments do not typically hold themselves accountable for repressing political opposition.
The technology supplier bears accountability for foreseeable misuse. Huawei is not an innocent party whose technology has been hijacked for unexpected purposes. The political context of its African clients — authoritarian governments with documented histories of surveillance-enabled repression — was foreseeable at the point of sale. A company that sells facial recognition surveillance to such governments is not acting in good faith when it subsequently expresses surprise that the technology is used for surveillance.
The financing institutions — primarily China's Exim Bank and other state financial institutions — bear accountability for the governance conditions attached to their financing. Financing that enables surveillance infrastructure deployment without requiring human rights safeguards facilitates human rights violations.
The international community — including human rights bodies, democratic governments, and civil society organizations — bears accountability for establishing adequate export control frameworks, human rights impact assessment requirements, and accountability mechanisms for AI surveillance technology. The inadequacy of current export control frameworks, which allow powerful surveillance technology to flow to authoritarian governments without serious governance requirements, is a policy failure with real human costs.
For business professionals, the lesson is both specific and general. The specific lesson: involvement in AI surveillance deployment — as developer, supplier, financier, consultant, or indirect beneficiary — requires serious engagement with the human rights implications of that deployment in the specific political context where it will operate. The general lesson: AI systems are not politically neutral tools. They have governance implications that vary by context, and those implications must be assessed before deployment, not after documented harm has occurred.
Discussion Questions

1. Huawei argues that it sells technology and is not responsible for how its government customers use it. How would you evaluate this argument? Under what conditions, if any, would a supplier's claim that it is not responsible for its technology's use be credible?

2. Several African governments that have deployed Chinese surveillance technology also maintain bilateral relationships with the United States and the European Union, which have expressed concern about Chinese surveillance technology exports. What leverage do Western governments have to address this issue, and what would appropriate engagement look like?

3. Imagine you are advising a municipality in a developing country that is considering a Safe City deployment. What governance requirements — procurement conditions, legal frameworks, oversight mechanisms — would you recommend as minimum conditions for an ethically defensible deployment?