Chapter 36: Racial Surveillance and the Discriminatory Gaze
In This Chapter
- Opening: What the Camera Sees
- Section 1: The Argument — Surveillance Is Not Racially Neutral
- Section 2: The Historical Architecture — Surveillance as Racial Control
- Section 3: Contemporary Racial Surveillance Systems
- Section 4: The Intersectionality of the Surveillance Gaze
- Section 5: Jordan and Yara — Being Watched Differently
- Section 6: Structural Analysis — Why Surveillance Is Racially Unequal
- Section 7: Resistance, Documentation, and Counter-Surveillance
- Section 8: Implications for the Architecture Metaphor
- Chapter Summary
- Key Terms
- Discussion Questions
Opening: What the Camera Sees
In January 2020, Robert Williams, a Black man living in Farmington Hills, Michigan, was arrested in his driveway. Police officers handcuffed him in front of his wife and daughters and drove him to a detention facility, where he was held overnight. The charge: shoplifting watches from a Shinola store in Detroit.
Williams had never been to that store. He had never stolen anything. His face had been misidentified by a facial recognition algorithm deployed by the Detroit Police Department. An investigator had matched Williams's driver's license photo to grainy surveillance footage — a match that was, by any careful standard, clearly wrong. The investigator later admitted in a deposition that they had conducted the match themselves, untrained, by placing two photographs side by side.
Robert Williams's case was the first wrongful arrest attributed to facial recognition to come to public attention in the United States. It was not the only one: Michael Oliver and Nijeer Parks, both Black men, were also arrested on the basis of facial recognition errors in 2019. In each case, the algorithm failed. In each case, the failure landed on a Black man's body.
This is not a chapter about algorithms. It is a chapter about what algorithms inherit.
Every surveillance system is built by people, trained on data generated by society, and deployed in a world already structured by racial hierarchy. When we say that surveillance systems are "neutral" or "objective," we are saying something that is empirically false and historically illiterate. Surveillance systems have never watched everyone equally. They were not designed to.
This chapter traces the racial logic of surveillance from its earliest modern forms through its most sophisticated contemporary expressions. We begin with the argument at its most blunt: the watcher and the watched have never been randomly assigned. Then we follow that argument across centuries and technologies, ending with Jordan and Yara — two students who have been watched differently their entire lives, even at the same university.
Section 1: The Argument — Surveillance Is Not Racially Neutral
1.1 Visibility Asymmetry as Racial Structure
The concept of visibility asymmetry — the unequal distribution of surveillance burdens between watcher and watched — takes on a specific and urgent character when analyzed through race. In Chapter 1, we established that the fundamental power of surveillance lies in who sees without being seen. In Chapter 8, we examined how CCTV cameras are distributed unevenly across urban space, clustering in neighborhoods defined by poverty and Blackness. In Chapter 35, we documented the algorithmic bias of facial recognition systems that misidentify darker-skinned faces at rates dramatically higher than lighter-skinned ones.
These are not separate phenomena. They are expressions of a single structural logic: in a racially stratified society, surveillance systems are built to watch some people more than others, and that "more" follows racial lines with remarkable consistency.
The sociologist Simone Browne makes this argument with canonical force in her 2015 book Dark Matters: On the Surveillance of Blackness. Browne proposes that we cannot understand modern surveillance without understanding its racial genealogy. The technologies and techniques we associate with contemporary surveillance — the identification document, the biometric register, the checkpoint, the algorithmic flag — did not emerge in a racial vacuum. They emerged in a society organized around the management, control, and exploitation of Black bodies. To study surveillance without centering that history is to misunderstand what surveillance is for.
💡 Intuition Check: You may be tempted to argue that modern surveillance technologies are different from their historical predecessors — that a facial recognition algorithm is categorically distinct from a slave pass system. By the end of this chapter, evaluate that intuition carefully. What has changed? What has not?
1.2 The Racial Formation Framework
Before we proceed, it is worth establishing a conceptual frame from sociology. Michael Omi and Howard Winant's theory of racial formation holds that race is not a fixed biological reality but a socially constructed category that is continually remade through political, economic, and cultural processes. Race is not something people have; it is something that societies do.
Surveillance participates actively in racial formation. When a police department defines "suspicious behavior" in ways that map onto the ordinary movements of Black men, it does not merely respond to race — it produces and enforces a racial category. When a predictive policing algorithm is trained on historical arrest data from neighborhoods that were over-policed precisely because they were Black neighborhoods, it does not merely reflect the past — it extends and legitimates it. Surveillance categories are, in this sense, racial categories dressed in technical language.
This is what scholars mean when they say that surveillance is racializing: it does not merely respond to pre-existing racial identities but actively constructs them, marks them, and subjects them to differential treatment.
Section 2: The Historical Architecture — Surveillance as Racial Control
2.1 Lantern Laws and the Slave Pass System
Simone Browne's Dark Matters opens with an analysis of New York City's 1713 Lantern Laws, which required enslaved Black, mixed-race, and Indigenous people to carry a lantern after dark if they were not accompanied by a white person. The law was explicitly a visibility technology: it made certain bodies legible, trackable, and subject to intervention in ways that other bodies were not. To move through public space at night without a lantern was to be immediately identifiable as potentially unauthorized, potentially suspicious, potentially dangerous.
The lantern is a proto-surveillance device. It does not record, compute, or transmit. But it performs exactly the function that we associate with modern surveillance systems: it renders certain bodies visible to the gaze of authority, creates a mechanism for verifying identity and authorization, and subjects those bodies to potential detention if they cannot produce the right credentials.
The slave pass system made this logic even more explicit. Enslaved people required written authorization from enslavers to move through public space. The pass was both an identification document and a travel permit — a piece of paper that certified that the person carrying it was who they claimed to be, was authorized to be where they were, and could be detained by any white person if they lacked it or if it appeared forged. The slave pass system was the colonial American surveillance state, applied to the specific problem of controlling a population that the economic system required to move but that the political system required to control.
Browne reads these histories not as antiquarian curiosities but as structural precedents. The logic of the slave pass — prove your identity, prove your authorization, or be detained — is the logic of every subsequent identification regime, including the driver's license, the employee badge, the digital login, and the facial recognition checkpoint. The specific form changes. The underlying function — marking who may move freely and who must be verified — remains.
🔗 Connection to Chapter 3: We examined in Chapter 3 how colonial census systems were not neutral counting exercises but technologies of racial classification and control. The British colonial census in India created the administrative categories that became the basis for partition violence in 1947. The U.S. census has counted enslaved people, classified racial groups with evolving and politically motivated taxonomies, and been used to facilitate Japanese American internment during World War II. The surveillance capacity of the census and the surveillance capacity of the slave pass system are branches of the same tree.
2.2 Biometric Surveillance and the Measurement of Difference
The history of biometrics — the measurement of bodies for purposes of identification — is inseparable from the history of scientific racism. Francis Galton, whose work in the 1880s and 1890s established fingerprinting as a scientific method of identification, was the founder of eugenics and believed that biometric measurement could establish the hereditary hierarchy of races. Alphonse Bertillon, who developed the Bertillon system of anthropometric measurement used by European and American police forces in the late nineteenth century, applied measurement techniques drawn directly from the racial science of physical anthropology.
The fingerprint endured as a biometric identifier while the racial science that surrounded its invention was discredited. But the infrastructure built to collect, categorize, and store biometric data was built by racial states for racial purposes. When contemporary facial recognition systems are trained on datasets that underrepresent darker-skinned faces, this is not a neutral technical accident. It is the continuation of a historical pattern in which Black and Brown bodies are treated as data for white institutions rather than as persons deserving of accurate identification.
🎓 Advanced Note: Browne extends her analysis through what she calls racializing surveillance — processes by which surveillance practices "reify boundaries along racial lines." She distinguishes this from surveillance that merely reflects existing racial categories. Contemporary predictive policing algorithms do not merely reflect racial disparities in the criminal justice system; they reinforce and extend those disparities by treating historical over-policing as predictive data. The algorithm learns from a racist past and projects it into the future.
Section 3: Contemporary Racial Surveillance Systems
3.1 Stop and Frisk — Mass Surveillance on the Street
The New York Police Department's stop-and-frisk program, as practiced between approximately 2003 and 2013, was one of the largest mass surveillance operations conducted by a municipal police force in American history. At its peak in 2011, NYPD officers conducted 685,724 stops. About 87 percent of those stopped were Black or Latino, and about 88 percent were found to be entirely innocent of any crime.
Stop and frisk was surveillance in its most direct form: the physical interception of bodies in public space, the demand for identification and explanation, the recording of information about the person stopped (name, address, physical description, reason for stop). Each stop generated a UF-250 form — a paper surveillance record — that was stored in a database. The NYPD's stop-and-frisk database was, in effect, a surveillance registry of hundreds of thousands of Black and Latino men who had done nothing wrong.
The legal framework that authorized this practice, Terry v. Ohio (1968), permitted police to stop and briefly detain a person based on "reasonable articulable suspicion" of criminal activity. In practice, as the 2013 federal court decision in Floyd v. City of New York found, the NYPD had been conducting stops that were constitutionally unjustified, discriminatory, and — from a crime-prevention standpoint — largely ineffective. The program was not primarily about stopping crime. It was about maintaining a surveillance presence in certain neighborhoods, deterring movement, and asserting control over bodies that the city's racial imagination coded as threatening.
📊 Real-World Application: The surveillance logic of stop and frisk did not disappear when the program was formally curtailed. It migrated. Vehicle stops, pedestrian checks, surveillance camera density, and predictive policing algorithms have been documented as functionally replacing physical stops in many jurisdictions, achieving similar surveillance effects on similar populations with less legally visible accountability.
3.2 Predictive Policing — The Algorithm as Discriminatory Gaze
Predictive policing systems promise to bring scientific rigor to police resource allocation. If we can predict where crime will occur, the argument goes, we can deploy officers there proactively, preventing crime before it happens. PredPol (now Geolitica), one of the most widely deployed predictive policing platforms, generates "predictive boxes" — small geographic areas where the algorithm forecasts elevated crime risk.
The fundamental flaw in this logic is that predictive policing systems are trained on historical crime data — specifically, on arrest data. Arrest data is not crime data. It is data about where police have been, whom police have arrested, and what police have chosen to prosecute. In a country with a documented history of racially biased policing, this data reflects decades of discriminatory enforcement decisions. When a predictive policing algorithm is trained on that data, it learns to predict not where crime occurs but where police have historically focused their attention — which is to say, Black and Brown neighborhoods.
A 2016 study by researchers at the Human Rights Data Analysis Group, which applied a PredPol-style algorithm to Oakland's drug-crime records, found that the predictions directed police toward neighborhoods that were already among the most heavily policed in the city, creating what the researchers called a feedback loop: more policing generates more arrests, more arrests generate more data flagging the area as high-risk, which generates more policing. The communities inside these loops experience ongoing, intensified surveillance. The communities outside them are, comparatively, invisible to law enforcement.
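The feedback loop can be made concrete with a toy simulation. The sketch below uses entirely hypothetical numbers and neighborhood labels: two neighborhoods have identical underlying offense rates, but one begins with more recorded arrests because it was historically patrolled more heavily. When patrols are allocated in proportion to past arrest records, the disparity in the data never corrects itself, because the data measures where police have been rather than where crime occurs.

```python
import random

# Toy model with hypothetical numbers: two neighborhoods, identical true
# offense rates, but neighborhood A starts with more recorded arrests
# because it was historically patrolled more heavily.
TRUE_OFFENSE_RATE = 0.05          # same underlying rate in both neighborhoods
arrests = {"A": 120, "B": 40}     # historical arrest records, not crime records
TOTAL_PATROLS = 100

random.seed(0)
for year in range(10):
    total_arrests = sum(arrests.values())
    # "Predictive" allocation: patrols follow past arrest data.
    patrols = {n: round(TOTAL_PATROLS * arrests[n] / total_arrests) for n in arrests}
    for n in arrests:
        # An arrest is recorded only when an offense occurs AND a patrol is
        # present to observe it, so more patrols mean more recorded arrests
        # even though the offense rate is identical in both neighborhoods.
        arrests[n] += sum(
            1 for _ in range(patrols[n]) if random.random() < TRUE_OFFENSE_RATE
        )
    share_a = arrests["A"] / sum(arrests.values())
    print(f"year {year}: patrols={patrols}, share of arrest data from A = {share_a:.2f}")
```

Each simulated year, the allocation reproduces the historical 3-to-1 disparity, and the new arrests it generates write that disparity back into the training data. Nothing in the loop ever asks whether the underlying offense rates differ.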
ShotSpotter, an acoustic surveillance system that claims to detect gunshots and alert police, has been deployed almost exclusively in majority-Black and Latino neighborhoods in American cities. A 2021 audit by the Chicago Office of Inspector General found that roughly nine in ten ShotSpotter alerts in Chicago produced no evidence of a gun-related crime; the alerts nonetheless dispatched police to those locations, generating police-community contacts concentrated in communities of color.
⚠️ Common Pitfall: It is tempting to argue that predictive policing systems would be fixed by better data — by removing bias from the training set. This argument is appealing but insufficient. Even if we could construct an unbiased dataset (itself a deeply contested proposition), the underlying question of what behavior we are trying to predict, and in whose neighborhoods we are deploying surveillance resources, remains a political question that no technical fix resolves. The problem is not just the algorithm; it is the question the algorithm is built to answer.
3.3 The Ring Network and Racial Profiling at Scale
We examined the Ring Neighbors platform in Chapter 16 as an infrastructure for privatized neighborhood surveillance. Here we return to that analysis with a sharper racial lens.
Ring's Neighbors platform — a social media application linked to Ring doorbells and security cameras — functions as a distributed surveillance network in which users share video clips and posts about suspicious activity in their neighborhoods. Multiple investigative reports, including analysis by the Electronic Frontier Foundation and reporting by Gizmodo and The Guardian, have documented that posts on the Neighbors platform disproportionately flag Black and Brown people — particularly Black men — as suspicious.
The mechanism is not algorithmic in the narrow sense; it is human. Neighbors who have been conditioned by decades of racialized crime coverage, and by the implicit associations of a society in which Black men are over-represented in portrayals of crime and under-represented in portrayals of ordinary life, look at their camera footage and see threat where there is only a Black man walking. They post. Other neighbors see the post and are primed to view the next Black person walking in the neighborhood as a potential suspect. The network amplifies and distributes the individual racial panic.
Ring's relationship with law enforcement amplified this further. Until early 2024, Ring allowed police to request footage from Ring users through the Neighbors platform without a warrant, and the company maintained formal partnerships with over 2,000 U.S. police and fire departments. A user who received such a request could decline, but the interface was designed so that sharing, not declining, was the path of least resistance. The practical effect was that police departments had access to a private, distributed surveillance network operating in residential neighborhoods — with footage generated and reviewed through frameworks that were demonstrably racially biased.
🔗 Connection to Chapter 16: In Chapter 16, we examined how the Ring network privatizes the infrastructure of surveillance while outsourcing the labor of watching to individual homeowners. Here we see the racial dimension of that privatization: when watching is distributed to individual actors operating through their own frameworks of threat assessment, those frameworks reproduce the racial biases of the broader society. The architecture presents itself as neutral; the watchers are not, and the architecture gives their biases reach.
3.4 Facial Recognition and the Documented Failure of Objectivity
We examined in Chapter 35 the MIT Media Lab's Gender Shades study, which documented that commercially deployed facial analysis systems misclassified the gender of darker-skinned women at error rates up to 34 percentage points higher than those for lighter-skinned men. Joy Buolamwini and Timnit Gebru's finding was not a marginal statistical anomaly. It was a finding about the fundamental character of systems that had been marketed as objective.
The wrongful arrests of Robert Williams, Michael Oliver, and Nijeer Parks are the concrete human consequence of deploying systems with documented racial bias in high-stakes law enforcement contexts. In each case, the algorithm failed on a Black face. In each case, a Black man was arrested, detained, and subjected to the coercive apparatus of the criminal justice system on the basis of that failure. In each case, law enforcement treated the algorithmic output as credible evidence rather than as the output of a system with a documented failure mode.
The National Institute of Standards and Technology (NIST) conducted comprehensive evaluations of facial recognition algorithms in 2019 and found that most commercially available algorithms had higher false-positive rates for Black faces, Indigenous faces, and East Asian faces compared to white faces. False positives in facial recognition — identifying the wrong person as a match — are precisely the errors that generate wrongful arrests. NIST's finding was a direct assessment of the risk of deploying these systems in law enforcement contexts.
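To see what the NIST comparison measures, consider the sketch below. It computes a false positive rate per demographic group from a toy set of match decisions: among pairs of images that are in fact different people, how often did the system declare a match? The group labels, record layout, and numbers are all invented for illustration; a real evaluation like NIST's uses millions of image pairs and real demographic annotations.

```python
from collections import defaultdict

# Hypothetical records: (demographic_group, same_person_in_reality, system_declared_match).
# The data below is invented for illustration; it is not NIST's or any vendor's data.
records = [
    ("group_1", False, True),  ("group_1", False, False), ("group_1", True, True),
    ("group_1", False, True),  ("group_1", False, False), ("group_1", True, True),
    ("group_2", False, False), ("group_2", False, False), ("group_2", True, True),
    ("group_2", False, True),  ("group_2", False, False), ("group_2", True, True),
]

counts = defaultdict(lambda: {"false_positives": 0, "non_matching_pairs": 0})
for group, same_person, declared_match in records:
    if not same_person:  # only pairs of different people can produce false positives
        counts[group]["non_matching_pairs"] += 1
        if declared_match:
            counts[group]["false_positives"] += 1

for group, c in sorted(counts.items()):
    fpr = c["false_positives"] / c["non_matching_pairs"]
    print(f"{group}: false positive rate = {fpr:.2f}")
```

A gap between the two printed rates is the kind of disparity NIST reported: the group with the higher false positive rate bears more of the risk that a non-matching face will be declared a match, which in a law enforcement deployment is the risk of wrongful arrest.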
📊 Real-World Application: As of 2024, several U.S. cities — including San Francisco, Boston, and Portland — have enacted municipal bans on government use of facial recognition technology. These bans are not merely expressions of civil libertarian sentiment; they are policy responses to documented, quantified evidence of racial bias in deployed systems. The question of whether bans are the right policy response, or whether regulated use with strong accountability frameworks is preferable, is a genuine debate — but the starting point of that debate is the empirical record of failure.
3.5 Immigration Enforcement and the Surveillance of Latino Communities
Immigration enforcement in the United States operates as a comprehensive surveillance infrastructure targeting Latino communities regardless of immigration status. E-Verify, the federal employment verification system, functions as a workplace surveillance mechanism. Customs and Border Protection operates checkpoints up to 100 miles from any international border — a zone in which roughly two-thirds of the U.S. population lives. Biometric collection at ports of entry (fingerprints, iris scans, photographs) occurs for virtually all non-citizen arrivals.
The reactivation of Secure Communities in 2017 — a program under which fingerprints taken by local law enforcement at booking are automatically checked against federal immigration databases — integrated local police departments into the federal immigration surveillance infrastructure. This integration creates what advocates call a chilling effect: immigrant communities, including people with legal status, become reluctant to interact with police, report crimes, or participate in civic life, because any police contact becomes a potential point of immigration surveillance.
The surveillance burden here is explicitly tied to ethnicity. Border Patrol agents have been documented stopping people based on speaking Spanish, having dark hair, or appearing, in the agent's judgment, to "look Mexican." The formal legal standard — reasonable suspicion — is applied through racially biased perception.
🌍 Global Perspective: The European Union's Frontex border agency operates biometric databases and surveillance infrastructure at European borders that have been documented to disproportionately burden migrants from sub-Saharan Africa and the Middle East. The global pattern is consistent: border surveillance infrastructures target populations that are racially marked as foreign, as threatening, or as undeserving of mobility.
3.6 The NYPD Demographics Unit — Surveillance of Muslim Communities
Following the September 11 attacks, the NYPD created a covert Demographics Unit — also known as the Zone Assessment Unit — that conducted warrantless surveillance of Muslim communities throughout the northeastern United States. The unit sent undercover officers into mosques, cafes, student groups, and community organizations; mapped "ancestries of interest"; and compiled reports on the religious practices, political views, and social networks of Muslim New Yorkers.
The program operated for the better part of a decade before reporting by the Associated Press (which won the 2012 Pulitzer Prize for Investigative Reporting) brought it to public attention. In sworn 2012 testimony, the commanding officer of the NYPD Intelligence Division acknowledged that the unit's work had never generated a single criminal lead; the unit was finally disbanded in 2014. It had surveilled thousands of people, collected sensitive information about religious practice and political belief, and produced no law enforcement benefit whatsoever.
What it produced was fear. Muslim New Yorkers who learned of the program — many did not learn of it until the AP reporting — described the experience of knowing that their mosques, their student associations, their restaurants had been infiltrated as deeply destabilizing. The surveillance had targeted them not for any behavior but for their religion and their ethnicity. The experience of being watched that way — categorically, without individual cause, based on group membership — is precisely the experience of racialized surveillance.
🔗 Connection to Theme — Consent as Fiction: The NYPD Demographics Unit exemplifies the dimension of our consent framework that concerns information people have no capacity to withhold. The surveilled Muslims of New York had not consented to surveillance. They had no mechanism to withhold consent, no notice that they were being watched, and no recourse once the surveillance was revealed. Consent, in this context, was not merely absent; it was structurally impossible.
3.7 Surveillance and Settler Colonialism — Indigenous Land Defenders Under the Gaze
The monitoring of Indigenous land defenders and environmental activists represents a surveillance dimension that connects racial surveillance to the ongoing structures of settler colonialism. During the Standing Rock Sioux Tribe's resistance to the Dakota Access Pipeline (2016-2017), documented surveillance of water protectors included aerial surveillance by police helicopters and National Guard aircraft, infiltration by private security contractors, surveillance of social media communications, and collection of biometric data from arrested protesters.
The pipeline's owner, Energy Transfer Partners, hired TigerSwan — a private intelligence and security firm with roots in U.S. special operations — to conduct intelligence operations against pipeline opponents. TigerSwan's internal documents characterized the protest movement in counterterrorism language and described intelligence operations that would be recognizable to anyone familiar with COINTELPRO — the FBI's 1956-1971 program of covert surveillance and disruption of civil rights, Black Power, and Indigenous rights movements.
The continuity is not metaphorical. Many of the techniques used against Standing Rock water protectors — infiltration, communication monitoring, intelligence-sharing between private security and law enforcement — were developed in the COINTELPRO era, refined against environmental and Indigenous movements in the intervening decades, and deployed again in 2016. Indigenous land defenders have been subjected to continuous surveillance precisely because they represent a structural challenge to settler property claims and extractive capitalism.
Section 4: The Intersectionality of the Surveillance Gaze
4.1 Race × Class × Gender × Immigration Status
Surveillance burdens do not fall on race alone. They fall at the intersection of race, class, gender, immigration status, religion, and other axes of social position. Kimberlé Crenshaw's framework of intersectionality — the recognition that social categories are mutually constitutive and cannot be understood in isolation — is essential to a complete analysis of who gets watched.
Consider: A wealthy Black man in an expensive car in a wealthy neighborhood may be pulled over by police, but his class position gives him resources for legal defense, social capital that complicates police impunity, and access to media attention if an incident occurs. A poor Black man in the same car — or in no car at all — has none of these resources. Race is the constant; class shapes the texture of the surveillance experience.
Or consider a Black woman seeking healthcare. Research on racial bias in healthcare shows that Black women's pain is systematically undertreated, their symptoms systematically discounted. Electronic health records — surveillance systems in themselves — encode the biases of clinicians, creating persistent records that may follow Black women through subsequent healthcare encounters. The surveillance here is medical rather than criminal, but it is no less real, and no less racialized.
Undocumented immigrants face a surveillance burden that combines racial marking, class vulnerability, and legal precarity in ways that produce near-total surveillance exposure. They cannot interact with law enforcement without immigration risk. They may be reluctant to seek healthcare, use banks, or engage with institutions because each interaction creates a data trail. Their precarity is, in part, a surveillance product: the state has made them maximally legible to enforcement while making them maximally reluctant to become legible to services.
🎓 Advanced Note: Dorothy Roberts's work on surveillance and reproductive control connects racial surveillance to gender through the long history of state monitoring of Black women's reproduction — from forced reproduction under enslavement to coercive sterilization programs of the twentieth century to contemporary surveillance of welfare recipients' reproductive choices. The surveilled body is not only Black or only female; it is the intersection that generates specific surveillance attention.
Section 5: Jordan and Yara — Being Watched Differently
5.1 Yara's Story
Jordan has known Yara since their first semester at Hartwell. Yara is Palestinian American, the daughter of a family that came to the United States as refugees in the 1990s. She grew up in a small city in New Jersey, in a neighborhood that was, for a time, the subject of a "mapping" project — a program, later revealed and discontinued, in which the NYPD's Demographics Unit documented the demographics of New Jersey neighborhoods with significant Muslim populations.
Yara knew about the mapping, sort of, the way you know about weather — as something that exists in the environment, that shapes what you do without necessarily articulating why. Her parents had certain habits: they did not put their last name on the mailbox. Her father did not go to Friday prayers at the largest mosque in the city, choosing instead a smaller congregation. Her mother spoke Arabic at home but not in grocery stores. Yara absorbed these habits as children absorb anything — not through explicit teaching but through observing what the adults around her did to navigate the world.
When Yara came to Hartwell, she entered a different surveillance environment — nominally more equal, constitutionally protected. But she noticed things Jordan did not. She noticed that the campus police car seemed to slow when she walked with her cousin, who wore a hijab. She noticed that her name — Yara Khalil — seemed to produce a slightly longer pause at airport security. She noticed that when her family's community center applied for a permit for a fundraising event, the permitting process took longer than it did for comparable events held by other organizations.
None of these observations were individually conclusive. That was the point. Racialized surveillance rarely announces itself. It operates in the pause, the slow-down, the extra form, the longer wait. It is designed to be deniable, and its deniability is part of its power.
"It's not that I know I'm being watched," Yara tells Jordan one evening in the library. "It's that I know I might be watched, and I can never be sure, and that uncertainty is the whole thing. It changes what I say. It changes what I search online. It changes how I think about being Muslim in public."
5.2 Jordan's Integration
Jordan has been developing their analysis of surveillance across thirty-five chapters and what feels like a lifetime of thinking. They came into Dr. Osei's class believing that surveillance was a universal burden — that everyone was equally subject to the same watching. That was the "nothing to hide" framework applied to inequality: if surveillance is everywhere, no one is specially targeted.
Chapter 36 breaks that frame permanently.
It is not that Jordan had been unaware of racial inequality before. They had experienced it in their own mixed-race body: the moment in a convenience store when a security guard moved to follow them; the time in high school when they were stopped by a school resource officer for "looking suspicious" in the hallway of their own school. They had filed these experiences as individual incidents — bad luck, individual bias, aberrations from a fair system.
What the structural analysis of this chapter offers is a different framing: these were not aberrations. They were systemic expressions of a surveillance architecture designed to watch Black and Brown bodies more, to mark them as suspicious more readily, to subject them to intervention more automatically. The individual incidents were instances of a structural pattern.
Jordan's synthesis connects to all five of the book's recurring themes at once. Visibility asymmetry: they had always been more visible to security guards, police, and surveillance systems than their white peers. Consent as fiction: they had never consented to being followed in stores, surveilled by resource officers, or having their face run through a database — yet it happened. Normalization: they had, for years, accepted these experiences as simply part of moving through the world as a person of color, without recognizing them as surveillance events at all. Structural vs. individual: the incidents were not about individual prejudice but about systems that produced prejudiced outcomes regardless of the intentions of individual actors. Historical continuity: the Black male body as suspect, as requiring verification, as subject to preemptive intervention, is a pattern with a four-hundred-year history in American society.
Section 6: Structural Analysis — Why Surveillance Is Racially Unequal
6.1 Structural Racism and Technical Systems
Social scientists distinguish between individual racism (the prejudiced beliefs and actions of individual people) and structural racism (the way that policies, institutions, and practices perpetuate racial inequality regardless of the intentions of individual actors). A bank loan officer who applies a policy that disadvantages Black applicants because of zip code is participating in structural racism even if they hold no prejudiced views. A facial recognition system trained on biased data produces racially biased outputs regardless of the intentions of its engineers.
This distinction matters because it shifts the analysis from individuals to systems. The question is not whether the NYPD officers who ran the Demographics Unit were personally bigoted. The question is what structural conditions produced a surveillance program that targeted an entire religious community without cause and without benefit. The question is not whether the engineers who built PredPol are racist. The question is what the feedback loop between historical police data and algorithmic prediction does to communities of color regardless of the engineers' intentions.
Structural racism in surveillance systems is self-reinforcing in ways that individual racism is not. Individual prejudice can be corrected by changing individual behavior. Structural racism persists because it is embedded in institutions, practices, and — increasingly — in the technical systems that automate those practices. An algorithm learns from biased data and produces biased outputs; those outputs shape enforcement decisions that generate new biased data; that data trains the next version of the algorithm. The bias circulates and deepens with no individual actor "doing" anything.
✅ Best Practice: When evaluating any surveillance system for potential deployment, conducting a disparate impact analysis — examining whether the system produces different outcomes across racial groups — is necessary but not sufficient. It is also necessary to examine the training data for historical bias, the decision context for whether algorithmic outputs will be treated as conclusive or as one input among several, and the accountability mechanisms for challenging incorrect or biased outputs.
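A disparate impact check of the kind described above can be sketched in a few lines. The sketch below uses hypothetical field names, group labels, and records; "flagged" stands in for whatever adverse outcome the system produces, such as a stop, an alert, or a match. It computes the rate of adverse outcomes per group and the ratio of each group's rate to the lowest group's rate, which is one simple way to quantify a disparity before asking the harder questions about training data, decision context, and accountability.

```python
# Minimal sketch of a disparate impact check. Field names, group labels, and
# records are hypothetical; a real analysis would use the system's outcome logs.
records = [
    {"group": "group_1", "flagged": True},
    {"group": "group_1", "flagged": True},
    {"group": "group_1", "flagged": False},
    {"group": "group_2", "flagged": True},
    {"group": "group_2", "flagged": False},
    {"group": "group_2", "flagged": False},
]

# Rate of adverse outcomes within each group.
rates = {}
for group in sorted({r["group"] for r in records}):
    in_group = [r for r in records if r["group"] == group]
    rates[group] = sum(r["flagged"] for r in in_group) / len(in_group)

# Ratio of each group's rate to the lowest observed rate.
baseline = min(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline if baseline > 0 else float("inf")
    print(f"{group}: adverse-outcome rate {rate:.2f}, ratio to lowest-rate group {ratio:.2f}")
```

The point of the sketch is the shape of the question, not a threshold: the ratio quantifies a disparity, but it says nothing about why the disparity exists or whether the system should be deployed at all, which is exactly why the callout above treats disparate impact analysis as necessary but not sufficient.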
6.2 The Limits of Anti-Bias Technical Fixes
The standard industry response to documented racial bias in surveillance algorithms is to promise to fix the bias — to diversify training data, improve model architecture, add fairness constraints. These technical responses are real and sometimes effective at the margins. They are not solutions to the structural problem.
Joy Buolamwini, in Unmasking AI (2023), argues that the question is not merely whether we can build more accurate facial recognition systems but whether facial recognition should be used in law enforcement contexts at all. Even a system with dramatically reduced racial bias creates risks of wrongful arrest, mass surveillance, and the chilling of constitutionally protected behavior. The accuracy problem and the deployment problem are separate, and improving accuracy does not resolve the deployment question.
This is the structural vs. individual frame applied to technology: the question is not whether we can fix individual algorithms but what kind of surveillance infrastructure we want to build and on what communities we want to deploy it. Those are political questions, not technical ones.
Section 7: Resistance, Documentation, and Counter-Surveillance
7.1 Communities Under the Gaze — Strategies of Response
Communities that have been subjected to racialized surveillance have developed strategies of response across the entire history of modern surveillance. During enslavement, resistance to the pass system included forging passes, sharing passes, and building networks of people who could produce plausible credentials. During the civil rights movement, communities developed rapid-response legal networks to document and respond to police surveillance and violence. During COINTELPRO, movement organizations developed security culture practices to identify infiltrators and protect communications.
Contemporary responses include:
- Know Your Rights trainings in communities targeted by stop and frisk or immigration enforcement, teaching people what police can and cannot legally do and how to document encounters
- Copwatch programs: community organizations that follow police in targeted neighborhoods and document police-community encounters, creating counter-surveillance of the watchers
- Legal challenge: organizations like the ACLU, the Lawyers' Committee for Civil Rights, and the Electronic Frontier Foundation have litigated against racially biased surveillance systems, sometimes successfully
- Policy advocacy: campaigns against predictive policing, facial recognition, and ShotSpotter have achieved municipal bans in some cities
- Community oversight: some cities have created community oversight boards with real authority over police surveillance decisions
📝 Note: These strategies share a common logic: they attempt to redistribute some of the visibility asymmetry by making the watchers themselves visible, accountable, and subject to documentation. They do not resolve the structural problem, but they create friction in the operation of discriminatory surveillance systems and generate legal and political costs for those who deploy them.
7.2 The Abolitionist Analysis
Some scholars and activists argue that the problem with racial surveillance is not that it is biased but that it is surveillance — that the appropriate response is not reform but abolition of the surveillance apparatus. This position, associated with the prison abolition movement and scholars like Angela Davis, Ruth Wilson Gilmore, and Andrea Ritchie, holds that systems of surveillance and criminalization are not merely imperfect tools that can be fixed but are constitutive of racial oppression itself.
From this perspective, the call to "fix" facial recognition by making it less biased, or to "improve" predictive policing by using better data, is a call to build more efficient instruments of oppression. The goal is not more accurate surveillance of Black communities but the dismantling of the surveillance infrastructure that produces and enforces racial hierarchy.
This is a minority position among mainstream policy advocates, but it is an important intellectual challenge to reform-oriented approaches. It forces the question: what is the surveillance system for, and who does it serve? If the honest answer is that it serves the management and control of communities that the dominant society views as threatening, then making it technically better may not be progress.
Section 8: Implications for the Architecture Metaphor
Surveillance, we have argued throughout this book, is a built environment — an architecture of watching that shapes behavior, produces knowledge, and distributes power. In this chapter, we have seen that this architecture was not built for everyone equally. It was built, in significant part, to watch Black, Brown, Muslim, and Indigenous people more than others — to mark certain bodies as requiring verification, control, and intervention.
The architecture metaphor helps us see why this is structural rather than individual. Architecture reflects the values and decisions of the society that commissioned it. When American cities built highways through Black neighborhoods in the mid-twentieth century, they were expressing political choices about whose mobility mattered and whose neighborhoods could be destroyed for the benefit of those whose mobility mattered more. When those same cities deploy facial recognition in Black neighborhoods and ShotSpotter on Black blocks, they are making analogous choices — ones that are dressed in the language of public safety but that express the same underlying distribution of who is worth protecting and who is worth watching.
Understanding surveillance as racial architecture means recognizing that individual incidents of racially biased surveillance — the wrong arrest, the extra stop, the slower permit — are not aberrations from a neutral system. They are the expected outputs of a system designed, wittingly or not, to produce exactly those outcomes.
The question for Part 8 is what we do with that recognition. Chapter 39 will address design-level responses. Chapter 40 will ask what a person of conscience does with this knowledge. But the recognition itself — clear-eyed, historically grounded, structurally analyzed — is the necessary starting point.
Chapter Summary
This chapter has argued that surveillance systems do not watch everyone equally. The inequality is not accidental; it is structural, historical, and self-reinforcing. From the lantern laws and slave pass systems of the colonial era to the predictive policing algorithms and facial recognition systems of the present, surveillance technologies have been deployed with particular intensity on Black, Brown, Muslim, and Indigenous communities. Simone Browne's Dark Matters provides the canonical scholarly framework for this analysis: surveillance categories are racial categories, and modern surveillance is continuous with the racialized control technologies of earlier eras.
We examined stop and frisk as mass surveillance; predictive policing as algorithmic discrimination; Ring Neighbors as distributed racial profiling; facial recognition as a technology with documented, quantified racial failure modes; immigration enforcement as comprehensive surveillance of Latino communities; the NYPD Demographics Unit as the surveillance of Muslim identity; and the monitoring of Indigenous land defenders as the continuation of settler colonial control.
Through Yara's story and Jordan's synthesis, we connected these structural patterns to lived experience: the pause, the slow-down, the extra form, the uncertainty of never knowing whether you are being watched. That uncertainty — produced deliberately by systems designed for exactly this purpose — is itself a form of surveillance power.
The key recurring themes converge here: visibility asymmetry is racially distributed; consent is impossible where surveillance is categorical; normalization conceals the surveillance burden from those who do not bear it; structural analysis is the only framework adequate to the scale of the problem; and the history goes back further than the technology.
Key Terms
- Racializing surveillance (Browne): Processes by which surveillance reifies and enforces racial boundaries
- Slave pass system: Colonial-era identification and travel permit requirement for enslaved people, analyzed as proto-surveillance
- Lantern laws: 18th-century New York laws requiring enslaved people to carry lanterns at night, analyzed as visibility-control technology
- Predictive policing: Law enforcement approach using algorithmic analysis of historical crime data to forecast future criminal activity
- Feedback loop (predictive policing): Self-reinforcing cycle in which over-policing generates arrest data that trains algorithms to predict more policing in the same areas
- Disparate impact analysis: Examination of whether a policy or system produces different outcomes across racial or other protected groups
- Racial formation (Omi and Winant): Theory that race is a socially constructed category continually remade through political, economic, and cultural processes
- Intersectionality (Crenshaw): Framework recognizing that social categories (race, class, gender, etc.) are mutually constitutive and produce distinct experiences at their intersections
- Structural racism: Racial inequality embedded in institutions, policies, and practices rather than in individual beliefs or intentions
Discussion Questions
- Simone Browne argues that modern surveillance is continuous with the racial control technologies of earlier eras. Do you find this continuity argument persuasive? What evidence would challenge it?
- Predictive policing systems are trained on historical arrest data. Some researchers argue that this data could be "cleaned" to remove racial bias before training. What are the limits of this approach? What would a genuinely unbiased training dataset for predictive policing look like?
- The chapter distinguishes between structural racism and individual racism in surveillance systems. Can you identify a case in which individual racism in a surveillance context might have structural effects, and vice versa?
- Yara describes the surveillance of her community as operating through uncertainty rather than confirmed knowledge. How does the uncertainty of being surveilled — not knowing whether you are being watched — function as a form of social control?
- The chapter briefly presents the abolitionist argument against reforming surveillance systems. What are the strongest arguments for and against this position?