Case Study 38-1: Xinjiang — The World's Most Advanced Surveillance State as Laboratory and Warning
Background
The Xinjiang Uyghur Autonomous Region of northwestern China is, by most measures, the most comprehensively surveilled territory in the world. Since approximately 2017, the Chinese government has deployed an extraordinary range of surveillance technologies to monitor the region's roughly 12 million Uyghur, Kazakh, Kyrgyz, and other predominantly Muslim inhabitants. The resulting system — sometimes called the "Xinjiang surveillance state" — has become both a subject of human rights documentation and a case study in what comprehensive AI-powered surveillance of a population looks like in practice.
This case study examines the Xinjiang surveillance system as a real-world manifestation of the surveillance trajectories explored in Chapter 38. It is important to approach this material carefully: the surveillance system in Xinjiang is embedded in, and inseparable from, a broader campaign of mass detention, forced labor, cultural suppression, and what multiple governments and human rights organizations have characterized as crimes against humanity. Analyzing it as a surveillance case study does not reduce it to a technical question; the technology is deployed in service of what is, by any serious assessment, a system of ethnic and religious persecution.
The Infrastructure
The Xinjiang surveillance infrastructure comprises several interlocking systems:
Checkpoint system: Throughout Xinjiang, checkpoints require residents to scan their faces, submit to iris scans, and present their IDs. These checkpoints are present at the entrances to markets, mosques, schools, parks, and residential neighborhoods. Movement through public space requires passing through multiple surveillance checkpoints daily, creating a comprehensive record of every person's movements.
Wi-Fi sniffing devices: Devices mounted throughout public spaces capture the MAC addresses of mobile phones (unique identifiers for each device), creating a real-time map of the location of every mobile phone — and, by extension, every person — in the surveilled area.
Video surveillance integration: Some parts of Xinjiang have an estimated one camera for every 1.5 residents, integrated into a centralized system with facial recognition, gait recognition, and behavioral analytics capabilities. Research by the Australian Strategic Policy Institute has documented the deployment of cameras specifically designed to identify Uyghur faces through ethnicity classification algorithms.
Predictive policing integration: The Integrated Joint Operations Platform (IJOP) is a mobile app used by police in Xinjiang that aggregates surveillance data from multiple sources and generates alerts when individuals' behavior patterns deviate from norms. IJOP flags can trigger police contact, detention, or referral to the "re-education" camp system.
DNA, voice, and additional biometrics: Under programs described as public health initiatives, Uyghurs have been required to provide DNA, voice samples, iris scans, and other biometric data in systematic collection programs.
The Ethnicity Classification Problem
One of the most significant features of Xinjiang surveillance from a surveillance studies perspective is the documented development of AI algorithms specifically designed to identify people as Uyghur based on facial characteristics — so-called "ethnicity classification" algorithms.
Research by The Intercept, IPVM (a surveillance industry research organization), and academic researchers has documented that Chinese technology companies including Huawei, Dahua, and Hikvision filed patents for, or marketed, systems that could identify people as Uyghur and alert authorities to their presence. This is surveillance not of behavior — of what people do — but of identity — of who people are. It is the most explicit form of racializing surveillance: a system literally built to identify a racial/ethnic group and subject them to differential treatment.
Several of the companies documented as developing ethnicity classification algorithms market their products internationally. Dahua and Hikvision products have been installed in schools, shopping malls, and public spaces in the United States, Europe, and across Asia. The algorithms for ethnicity classification may or may not be present in internationally marketed products; the infrastructure capable of running them is.
Export of the Surveillance Model
The Xinjiang surveillance system has become a product. Chinese technology companies have exported surveillance infrastructure — cameras, AI systems, data integration platforms — to dozens of countries through the Digital Silk Road, China's technology-focused extension of the Belt and Road Initiative.
Research by the Carnegie Endowment for International Peace, the Australian Strategic Policy Institute, and Freedom House has documented the export of Chinese surveillance technology to governments in Africa, Central Asia, Southeast Asia, Latin America, and the Middle East. The technology is sold on commercial terms; in some cases, Chinese state financing makes it affordable to governments that could not otherwise afford it.
The surveillance systems exported through the Digital Silk Road are, in their base configuration, the same AI-powered cameras, facial recognition systems, and data integration platforms deployed in Chinese cities. They can be integrated into broader surveillance architectures; they can be used for the purposes that the purchasing government chooses.
This creates a significant analytic challenge: the same technology that is used to surveil Uyghur populations in Xinjiang is also used for traffic management in Malaysian cities and crime mapping in Kenyan police departments. The technology is not inherently an instrument of ethnic persecution; it becomes one when deployed for that purpose. But once the infrastructure is in place, the purposes to which it is put are determined by the political context — and political contexts change.
What Xinjiang Teaches
The Xinjiang case illustrates several things about the future of surveillance that are directly relevant to Chapter 38's analysis:
1. Comprehensiveness is achievable. The ambient surveillance condition that Chapter 38 describes as a trajectory is a present reality in Xinjiang. Full coverage of public space with integrated facial, gait, and behavioral surveillance, combined with checkpoint systems, has produced a near-totalizing surveillance environment. This demonstrates that the technical feasibility of comprehensive surveillance — whatever the cost and political will required — is not hypothetical.
2. Technology and purpose are separable at deployment, inseparable in consequence. The cameras in Xinjiang are the same cameras sold to American school districts. What makes them instruments of ethnic persecution in Xinjiang is not the technology itself but the political system that deploys it and the purposes it serves. This does not mean the technology is neutral; it means that technology-focused analysis must be accompanied by political-system analysis.
3. The algorithmic targeting of ethnic groups is commercially available. Ethnicity classification algorithms were not developed in secret military programs; they were developed by commercial companies competing for government contracts. The development of algorithms that identify people by race or ethnicity as an input to differential treatment represents a commercialization of the most explicit form of racializing surveillance.
4. Export creates proliferation risks that markets do not self-correct. Commercial export of surveillance infrastructure distributes capabilities that may be used for purposes the original developers would not endorse — and may not be able to prevent. The governance of surveillance technology export is a significant international policy question that existing frameworks address inadequately.
Discussion Questions
- The case study describes the Xinjiang surveillance system as a "real-world manifestation of the surveillance trajectories explored in Chapter 38." What specific trajectories does Xinjiang illustrate? Are there any trajectories from Chapter 38 that Xinjiang does NOT illustrate?
- The ethnicity classification algorithms developed for deployment in Xinjiang were produced by commercial companies responding to government contracts. What responsibility, if any, do technology companies have for the uses to which their products are put? Does this responsibility differ when the product is a general-purpose surveillance camera vs. an ethnicity classification algorithm?
- The same surveillance infrastructure is described as being used for both traffic management in Malaysia and ethnic persecution in Xinjiang. Does this dual-use nature of the technology change the analysis of responsibility for harmful uses?
- What governance mechanisms — at national, international, or industry levels — would be needed to prevent the export of surveillance infrastructure to governments that will use it for ethnic or political persecution? Are such mechanisms feasible?
- The case study notes that the scale of surveillance in Xinjiang demonstrates that "comprehensive surveillance is achievable." Does this fact change your assessment of the feasibility of the Democratic-Regulated scenario in Chapter 38? Does knowing that the technology exists make its regulation more or less achievable?