> "The NSA's goal is nothing less than total population control."
Learning Objectives
- Trace the evolution of surveillance from physical observation to digital dataveillance
- Explain the relationship between Bentham's panopticon and Foucault's theory of disciplinary power as applied to modern surveillance
- Analyze the Snowden revelations and their implications for the relationship between citizens and the state
- Distinguish between state surveillance, corporate surveillance, and their points of convergence
- Evaluate the capabilities and risks of facial recognition and biometric surveillance technologies
- Articulate the concept of surveillance capitalism and its relationship to traditional state surveillance
- Assess resistance strategies including encryption, anonymity tools, and legal challenges
In This Chapter
- Chapter Overview
- 8.1 The Panopticon: Architecture as Power
- 8.2 The Rise of Video Surveillance
- 8.3 Digital Surveillance: The Snowden Revelations
- 8.4 Corporate Surveillance and Dataveillance
- 8.5 Facial Recognition and Biometric Surveillance
- 8.6 Surveillance Capitalism and the Surveillance State: Convergence
- 8.7 Resistance: Encryption, Anonymity, and Legal Challenges
- 8.8 The Chilling Effect: Surveillance and Self-Censorship
- 8.9 Chapter Summary
- What's Next
- Chapter 8 Exercises -> exercises.md
- Chapter 8 Quiz -> quiz.md
- Case Study: The Snowden Revelations and Mass Surveillance -> case-study-01.md
- Case Study: Facial Recognition in Law Enforcement: The Detroit Case -> case-study-02.md
Chapter 8: Surveillance: From Panopticon to Platform
"The NSA's goal is nothing less than total population control." --- William Binney, former NSA Technical Director (2014)
Chapter Overview
In Chapter 7, we developed the theoretical tools to understand what privacy is --- why it matters, how it functions, and what happens when it breaks down. This chapter asks the complementary question: what is breaking privacy down?
The answer is surveillance --- the systematic monitoring, observation, and recording of people, places, and activities. But surveillance in the 21st century looks nothing like the surveillance of even a few decades ago. The guard in the watchtower has been replaced by a constellation of cameras, algorithms, data brokers, and platform architectures that together constitute the most comprehensive monitoring apparatus in human history.
This chapter traces the arc from Bentham's 18th-century prison blueprint to the vast digital surveillance infrastructure that now saturates daily life. We will examine how states watch their citizens, how corporations watch their users, and how these two modes of watching are increasingly difficult to distinguish. We will also examine what it means to resist --- and whether resistance is possible at all within a system where the architecture itself is the instrument of control.
In this chapter, you will learn to:
- Analyze the panopticon as both a physical design and a metaphor for modern surveillance
- Evaluate the scope and implications of mass digital surveillance programs revealed by Edward Snowden
- Distinguish between different modalities of surveillance: physical, digital, biometric, and predictive
- Apply Nissenbaum's contextual integrity framework (from Chapter 7) to specific surveillance practices
- Assess the convergence of state and corporate surveillance and its implications for democracy
- Evaluate the effectiveness and limitations of resistance strategies
8.1 The Panopticon: Architecture as Power
8.1.1 Bentham's Design
In 1791, the English philosopher and social reformer Jeremy Bentham published a detailed proposal for a new type of prison. He called it the panopticon --- from the Greek pan (all) and optikon (seeing). The design was elegantly simple: a circular building with cells arranged around the perimeter, each visible from a central inspection tower. The tower's windows were fitted with blinds so that prisoners could never tell whether they were being observed at any given moment.
The genius --- and the horror --- of the design lay in its economy. A single guard could not actually watch all prisoners simultaneously. But the prisoners could not know when they were being watched and when they were not. The result, Bentham predicted, would be automatic self-regulation: prisoners would behave as if they were being watched at all times, because the cost of being caught during an actual observation was too high to risk.
Bentham was explicit about the power dynamics at work. He called the panopticon "a new mode of obtaining power of mind over mind, in a quantity hitherto without example." He envisioned its application not only in prisons but in factories, schools, hospitals, and poorhouses --- anywhere that populations needed to be managed efficiently. The panopticon was, in Bentham's own utilitarian framework, an instrument of maximum social control at minimum cost.
The panopticon was never built exactly as Bentham designed it, though several prisons incorporated its principles --- most notably Pentonville Prison in London (1842) and the Presidio Modelo in Cuba (1926). Its lasting significance is not architectural but conceptual --- as a model for understanding how surveillance operates as a form of power.
8.1.2 Foucault's Analysis: Discipline and the Internalized Gaze
We introduced Foucault's analysis of the panopticon in Chapter 5 when examining power/knowledge. Here we extend that analysis with specific attention to surveillance.
In Discipline and Punish (1975), Foucault argued that the panopticon was not merely a prison design but a diagram of modern power itself. Pre-modern power was spectacular: the king demonstrated authority through public executions, elaborate ceremonies, and displays of military force. Modern power, Foucault argued, works differently. It is disciplinary: it operates through constant, subtle observation that causes individuals to internalize the norms of the observer and regulate their own behavior.
The critical insight is the inversion of visibility. In sovereign power, the ruler is visible and the population is invisible (the anonymous crowd). In disciplinary power, the population is rendered visible (observed, measured, recorded) and the power mechanism is invisible (you cannot see who is watching you, or when, or why).
Foucault identified three features of panoptic power that distinguish it from earlier forms of control:
- Automaticity. The surveillance mechanism operates whether or not anyone is actively watching. The prisoner self-regulates; the guard is almost superfluous.
- Individuation. Each prisoner is isolated in a separate cell, visible from the tower but invisible to other prisoners. The individual becomes an object of observation and knowledge --- a "case" to be studied, measured, and categorized.
- Normalization. The purpose of panoptic surveillance is not primarily punishment but the production of docile subjects --- people who internalize prevailing norms and regulate their own behavior accordingly.
"This is what I was trying to describe when I talked about my grandmother," Eli said during the class discussion of Foucault. "She doesn't need to see anyone watching her. The cameras are the architecture. The possibility of being watched is enough to change her behavior. Foucault is describing something she's lived."
Connection: In Chapter 5 (Section 5.1.2), we examined how the digital panopticon manifests through workplace monitoring, social media self-censorship, and smart city sensors. This chapter extends that analysis from the theoretical to the specific --- examining the actual technologies and institutions that constitute modern surveillance systems.
8.1.3 Beyond the Panopticon: Critiques and Extensions
Foucault's panopticon model has been enormously influential, but it has also been extended and challenged by subsequent scholars:
Kevin Haggerty and Richard Ericson argue that modern surveillance is better described as a "surveillant assemblage" --- not a single watchtower but a network of interconnected systems that together produce a comprehensive picture. Your phone tracks your location. Your credit card tracks your purchases. Your fitness tracker monitors your heartbeat. Your email provider scans your correspondence. Your smart TV records your viewing habits. No single system sees everything, but the assemblage --- the combination of all these systems --- creates a picture far more detailed than any panopticon guard could produce.
Haggerty and Ericson also describe the "disappearance of disappearance" --- the progressive elimination of spaces and activities that are not subject to any form of monitoring. In the panopticon, the prisoner could at least know the boundaries of observation (the cell). In the digital assemblage, there are no clear boundaries. Surveillance extends into private homes (smart speakers), intimate relationships (dating apps), physical bodies (wearables), and even sleep (sleep tracking apps).
Zygmunt Bauman, writing with the surveillance scholar David Lyon, argues that the panopticon metaphor overemphasizes confinement. Modern surveillance doesn't confine people --- it follows them. Bauman calls this "liquid surveillance": surveillance that flows into every crevice of daily life, adapting to the mobile, fluid character of contemporary society. You are not locked in a cell; you carry the surveillance device in your pocket.
Julie Cohen adds a crucial dimension: modern surveillance doesn't just watch behavior --- it shapes behavior through algorithmic feedback loops. The recommendation engine doesn't passively observe your preferences; it actively molds them. The fitness tracker doesn't just record your steps; it nudges you toward certain behaviors through gamification and social comparison. This is not the panopticon's passive observation but something new: surveillance that actively constructs the subject it observes.
Intuition: If Bentham designed the panopticon for prisons, who designed the smartphone panopticon? The answer is: no one, and everyone. The surveillance capacity of the modern phone was not planned by a single architect --- it emerged from the convergence of engineering decisions, business models, regulatory gaps, and consumer choices. This makes it both more pervasive and harder to challenge than any intentionally designed surveillance system.
8.2 The Rise of Video Surveillance
8.2.1 CCTV: From Department Stores to City Streets
Closed-circuit television (CCTV) cameras began appearing in commercial settings in the 1960s and in public spaces in the 1970s. Their widespread deployment in cities accelerated through the 1980s and 1990s, driven by crime prevention concerns and facilitated by declining hardware costs.
The United Kingdom became the paradigmatic CCTV society. By 2023, the UK had an estimated 7 million CCTV cameras --- roughly one for every ten people --- making it one of the most surveilled nations on Earth. London alone had over 900,000 cameras, and a person moving through central London could expect to be captured by over 300 cameras in a single day. China has surpassed this with an estimated 500 million surveillance cameras nationwide, many integrated with facial recognition capabilities.
The expansion of CCTV raises a fundamental question that Chapter 7's frameworks help us analyze: Does the mere act of recording activity in public spaces violate privacy?
Under Warren and Brandeis's "right to be let alone" framework, possibly not --- you are not being "intruded upon" in the traditional sense when you walk down a public street. Under Westin's framework, it clearly does --- CCTV eliminates the state of anonymity (being in public but free from identification). Under Nissenbaum's contextual integrity framework, the answer depends on whether the CCTV data flow matches established norms: a camera in a bank lobby may conform to the financial context's security norms, while a camera on a residential street may breach the neighborhood context's norms of relative anonymity.
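Nissenbaum's framework is structured enough to express in code: a data flow is a tuple of sender, recipient, subject, information type, and transmission principle, checked against the entrenched norms of its context. The following is a minimal sketch of that logic; the context norms and labels are invented for illustration, not drawn from Nissenbaum.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    """One data flow, described by Nissenbaum's five parameters."""
    sender: str       # who transmits the information
    recipient: str    # who receives it
    subject: str      # whom the information is about
    info_type: str    # what kind of information flows
    principle: str    # transmission principle, e.g. "consent", "security"

# Illustrative norms: the (recipient, info_type, principle) triples that
# are appropriate in each context. Entirely hypothetical values.
CONTEXT_NORMS = {
    "bank_lobby": {("bank_security", "video", "security")},
    "residential_street": {("passerby", "casual_observation", "reciprocity")},
}

def breaches_contextual_integrity(flow: DataFlow, context: str) -> bool:
    """A flow breaches contextual integrity when it matches no
    entrenched informational norm of the context it occurs in."""
    norms = CONTEXT_NORMS.get(context, set())
    return (flow.recipient, flow.info_type, flow.principle) not in norms

lobby_cam = DataFlow("camera", "bank_security", "customer", "video", "security")
street_cam = DataFlow("camera", "police_hub", "pedestrian", "video", "retention")

print(breaches_contextual_integrity(lobby_cam, "bank_lobby"))           # False
print(breaches_contextual_integrity(street_cam, "residential_street"))  # True
```

The point of the exercise: the same camera can be norm-conforming in one context and norm-breaching in another. The violation is a property of the flow, not the device.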
8.2.2 The Evidence Problem
A striking feature of the CCTV expansion is how thin the evidence base for its effectiveness has been. Multiple systematic reviews --- including a comprehensive analysis by Welsh and Farrington (2009) --- have found that CCTV has a modest effect on crime reduction in car parks and a minimal effect in other settings. A subsequent meta-analysis by Piza et al. (2019) reached similarly modest conclusions: CCTV was associated with a 13% reduction in crime overall, but the effect was driven almost entirely by car parks and public transit stations, with negligible impact in residential areas and city centers. CCTV is more effective as a forensic tool (identifying perpetrators after the fact) than as a deterrent.
Despite this evidence, CCTV expansion has continued largely unabated. This gap between evidence and policy reveals something important about surveillance: the desire to watch is driven by more than instrumental rationality. CCTV provides a visible symbol of security, a political response to public anxiety, and a mechanism of social control that serves institutional interests regardless of its crime-prevention efficacy.
Dr. Adeyemi posed the question sharply: "If a surveillance technology has limited proven effectiveness at achieving its stated goal, but it continues to expand, what does that tell you about the actual function it serves?"
Reflection: Consider the surveillance cameras in your own environment --- on campus, in stores, on streets. When was the last time you noticed one? The fact that cameras become invisible through familiarity is itself a form of normalization. Foucault would say that the internalization of the gaze has been so successful that you no longer register the watching. The camera has become part of the architecture --- which is precisely the point.
8.3 Digital Surveillance: The Snowden Revelations
8.3.1 The Architecture of Mass Surveillance
On June 5, 2013, The Guardian published the first of a series of articles based on classified documents leaked by Edward Snowden, a 29-year-old systems administrator who had worked as a contractor for the National Security Agency (NSA). Over the following months, Snowden's disclosures --- published in coordination with journalists Glenn Greenwald, Laura Poitras, and Barton Gellman --- revealed the existence of surveillance programs whose scope exceeded even the most alarming predictions of privacy advocates.
The key programs included:
PRISM --- A program through which the NSA obtained data directly from the servers of nine major technology companies (Microsoft, Yahoo, Google, Facebook, PalTalk, YouTube, Skype, AOL, and Apple). The data included emails, chat messages, videos, photos, stored data, VoIP conversations, file transfers, and social networking details. PRISM operated under Section 702 of the FISA Amendments Act and was authorized by the Foreign Intelligence Surveillance Court (FISC). Technology companies initially denied knowledge of the program; subsequent disclosures revealed that cooperation ranged from voluntary to legally compelled.
Upstream collection --- The NSA's tapping of undersea fiber optic cables to collect communications data in transit. This program captured communications not just of foreign targets but of millions of Americans whose data happened to flow through monitored infrastructure. The technical term for this is "incidental collection" --- a euphemism that obscures the fact that millions of innocent people's communications were captured as a routine operational feature, not as an exceptional byproduct.
XKeyscore --- A search system that allowed NSA analysts to search through vast databases of collected emails, chats, and browsing histories. One NSA training slide boasted that XKeyscore covered "nearly everything a typical user does on the internet." Analysts could search by name, email address, IP address, language, browser type, or keyword --- and could do so without prior judicial authorization.
TEMPORA --- The UK's Government Communications Headquarters (GCHQ) program that tapped over 200 fiber optic cables, each carrying data at 10 gigabits per second, and stored the content for three days and the metadata for thirty days --- creating a rolling buffer of virtually all internet traffic passing through the UK. The program was authorized under a broad interpretation of the Regulation of Investigatory Powers Act 2000 (RIPA) and was not publicly known until the Snowden disclosures.
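The scale these figures imply is worth making concrete. Below is a back-of-the-envelope calculation of TEMPORA's rolling content buffer, assuming (unrealistically) that all 200 tapped cables ran at full line rate.

```python
# Back-of-the-envelope upper bound for TEMPORA's three-day content
# buffer, using the figures reported above. Assumes full line rate on
# every tapped cable, which real traffic never approaches.
cables = 200
gbps_per_cable = 10        # gigabits per second
content_days = 3           # content retention window

bits_per_cable_day = gbps_per_cable * 1e9 * 86_400   # 86,400 s/day
total_bits = cables * bits_per_cable_day * content_days
petabytes = total_bits / 8 / 1e15

print(f"Upper-bound content buffer: {petabytes:.0f} PB")   # ~65 PB
# Actual volumes were far lower (only a subset of cables was processed
# at any one time), but the order of magnitude explains why content
# could be kept for only days while compact metadata could be kept
# for a month.
```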
8.3.2 The Metadata Debate
One of the most significant policy debates triggered by the Snowden revelations concerned metadata --- data about communications rather than the content of communications themselves. The U.S. government argued that collecting metadata (who called whom, when, for how long) was less intrusive than collecting content (what was said), and therefore required a lower legal threshold.
This distinction collapses under scrutiny. As former NSA and CIA Director Michael Hayden acknowledged in 2014: "We kill people based on metadata." Research by Stanford University's Jonathan Mayer and Patrick Mutchler demonstrated that phone metadata alone could reveal:
- A person calling a suicide prevention hotline at 2 a.m.
- A series of calls between a woman, a gynecologist, and an abortion clinic
- Repeated calls between a person, a criminal defense attorney, and a bail bond company
- A pattern of calls consistent with a firearm purchase
- The duration and frequency of calls that reveal the intimacy of a relationship
The metadata-versus-content distinction is, in Nissenbaum's terms, a context violation: it attempts to separate information from its context in a way that the information itself resists. The pattern of your communications is the content, in any meaningful sense. Knowing that someone called an AIDS clinic, then called their insurance company, then called a family member reveals the content of a deeply private medical situation --- without intercepting a single word.
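Mayer and Mutchler's point can be reproduced in miniature: a handful of co-occurrence rules over call records is enough to surface sensitive inferences. The records, categories, and rules below are all hypothetical.

```python
# Toy illustration of the Mayer/Mutchler finding: metadata alone
# supports sensitive inferences. All records and rules are invented.
calls = [
    {"to": "suicide_prevention_hotline", "hour": 2,  "minutes": 40},
    {"to": "gynecologist",               "hour": 10, "minutes": 15},
    {"to": "abortion_clinic",            "hour": 11, "minutes": 12},
    {"to": "defense_attorney",           "hour": 9,  "minutes": 25},
    {"to": "bail_bonds",                 "hour": 9,  "minutes": 8},
]

# Simple co-occurrence rules: a set of callees implies an inference.
RULES = {
    frozenset({"gynecologist", "abortion_clinic"}): "likely reproductive-health decision",
    frozenset({"defense_attorney", "bail_bonds"}): "likely criminal-justice involvement",
    frozenset({"suicide_prevention_hotline"}): "likely mental-health crisis",
}

callees = {c["to"] for c in calls}
for pattern, inference in RULES.items():
    if pattern <= callees:  # every number in the pattern was called
        print(inference)
```

No call content appears anywhere in this data, yet the output reads like a diary.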
"My father's generation grew up in Nigeria before and after independence," Dr. Adeyemi told the class. "He told me about the colonial postal inspectors who steamed open letters. The British government said they were only looking at the addresses on the envelopes --- the metadata. My father said: 'The address tells them who your friends are. That was always the point.'"
8.3.3 Legal Frameworks and Their Failures
The legal framework governing U.S. surveillance --- the Foreign Intelligence Surveillance Act (FISA) of 1978, the PATRIOT Act of 2001, and the FISA Amendments Act of 2008 --- was designed for an era of targeted surveillance: identifying specific individuals suspected of specific wrongdoing and obtaining specific authorization to monitor them.
Mass surveillance inverts this logic entirely. Instead of starting with a suspect and seeking their data, mass surveillance starts with everyone's data and searches for suspects within it. This inversion was enabled by a series of legal interpretations --- many of them secret, issued by the classified Foreign Intelligence Surveillance Court (FISC) --- that stretched the language of surveillance statutes beyond recognition.
Section 215 of the PATRIOT Act authorized the FBI to obtain "any tangible things" relevant to an authorized investigation. The FISC secretly interpreted this to authorize the bulk collection of all phone metadata from all American telecommunications companies --- reasoning that the entire database was "relevant" because it might contain records pertinent to future investigations. As Senator Ron Wyden observed, if everything is relevant, the word "relevant" has no meaning.
The FISC itself represents a structural accountability problem. In 2012, the FISC approved 1,856 surveillance orders and denied zero. In its first 33 years of operation (1979-2012), the FISC approved 33,942 warrant applications and denied 11. The court's approval rate --- roughly 99.97% --- suggests that it functions less as a check on executive power than as a rubber stamp for surveillance requests.
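The approval rate follows directly from the reported figures:

```python
# FISC outcomes, 1979-2012, from the figures above.
approved, denied = 33_942, 11
print(f"Approval rate: {approved / (approved + denied):.4%}")  # 99.9676%
```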
Common Pitfall: It is tempting to frame the Snowden debate as "security vs. privacy" --- as if surveillance programs are effective at preventing terrorism and the only question is whether the privacy cost is worth the security benefit. But multiple independent reviews --- including the President's Review Group on Intelligence and Communications Technologies (2013) and the Privacy and Civil Liberties Oversight Board (2014) --- found that the bulk phone metadata program had not been essential in preventing any terrorist attack. The "security vs. privacy" framing assumes an effectiveness that the evidence does not support. The more accurate framing is: mass surveillance imposes enormous privacy costs for minimal security benefit --- and the question is why it persists despite this unfavorable calculus.
8.3.4 The Global Dimension
The Snowden revelations were not only about the United States. They revealed a global surveillance network centered on the Five Eyes alliance (United States, United Kingdom, Canada, Australia, and New Zealand), with each member nation collecting data on behalf of the others --- circumventing domestic legal restrictions by outsourcing surveillance to foreign partners.
The revelations also exposed the NSA's interception of communications by allied heads of state (including German Chancellor Angela Merkel), its systematic weakening of encryption standards through the BULLRUN program, and its collaboration with technology companies through both voluntary cooperation and legal compulsion.
The geopolitical consequences were significant. The Merkel phone-tapping revelations damaged U.S.-European relations and accelerated the EU's push for stronger data sovereignty. Brazil began planning an independent undersea cable to bypass U.S. surveillance infrastructure. The legitimacy of U.S. technology companies was damaged worldwide, with estimated losses to the U.S. cloud computing industry of $22-35 billion over three years (Information Technology and Innovation Foundation, 2014).
8.4 Corporate Surveillance and Dataveillance
8.4.1 From State Watching to Platform Monitoring
The term dataveillance --- coined by Roger Clarke in 1988 --- refers to the systematic monitoring of people's actions or communications through the application of information technology to personal data. While the Snowden revelations focused attention on state surveillance, corporate dataveillance had already grown to rival and in some ways exceed government monitoring capabilities.
The distinction between state and corporate surveillance is worth articulating clearly:
| Dimension | State Surveillance | Corporate Surveillance |
|---|---|---|
| Primary purpose | National security, law enforcement, social control | Commercial profit, advertising, behavioral prediction |
| Legal authority | Statutory powers, court orders (sometimes secret) | Terms of service, consent frameworks (often fictional) |
| Coercive power | Can arrest, detain, deport | Can deny service, degrade experience, manipulate pricing |
| Scale | Massive, but constrained by legal frameworks (at least nominally) | Massive, with fewer legal constraints on data collection |
| Accountability | Oversight mechanisms exist (FISC, congressional committees) though often weak | Minimal external oversight; self-regulation predominates |
| Voluntariness | Mandatory (you cannot opt out of state jurisdiction) | Ostensibly voluntary (you can "choose" not to use the service) |
But this table understates the relationship between the two. As we will see in Section 8.6, state and corporate surveillance are not parallel systems --- they are increasingly convergent.
8.4.2 Platform Surveillance in Practice
Modern platform surveillance operates through multiple mechanisms:
Behavioral tracking. Every click, scroll, pause, hover, search query, message, purchase, and location check-in is recorded. The resulting behavioral profiles are far more granular than any government surveillance program could achieve through coercive means, because users voluntarily generate the data as a condition of using "free" services. Facebook's internal documents, disclosed by whistleblower Frances Haugen in 2021, revealed that the company tracked over 29,000 data points on each user --- including not just explicit actions (posts, likes, shares) but implicit signals (how long you paused on a post, whether you scrolled back to look at something again, what you almost typed and then deleted).
"That's the part that keeps me up at night," Mira said during office hours. "The NSA had to build secret programs and break the law. Facebook just had to build a product people loved."
Cross-platform tracking. Data brokers like Acxiom, Oracle Data Cloud, and LiveRamp merge data from dozens of sources --- online behavior, offline purchases, public records, location tracking, loyalty programs --- to build comprehensive profiles that follow individuals across contexts. A person who searches for diabetes symptoms on Google, buys sugar-free products at a grocery store using a loyalty card, and visits a medical office (tracked through location data) can be identified as a likely diabetes patient --- even if they have never disclosed this to anyone. The data broker industry generates an estimated $200 billion in annual revenue in the United States alone, yet most Americans cannot name a single data broker.
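Mechanically, broker profiles are record linkage: datasets from unrelated contexts are joined on a shared identifier such as a hashed email address or device ID. A simplified sketch with fabricated records:

```python
from collections import defaultdict

# Three "sources" that never intended their data to be combined.
search_log     = [{"email_hash": "a1f3", "query": "diabetes symptoms"}]
loyalty_card   = [{"email_hash": "a1f3", "purchase": "sugar-free cookies"}]
location_pings = [{"email_hash": "a1f3", "place": "endocrinology clinic"}]

profiles = defaultdict(dict)
for source_name, records in [("search", search_log),
                             ("retail", loyalty_card),
                             ("location", location_pings)]:
    for rec in records:
        key = rec.pop("email_hash")   # the shared join identifier
        profiles[key][source_name] = rec

# No single source says "diabetic"; the joined profile strongly implies it.
print(dict(profiles["a1f3"]))
```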
Inference engines. Beyond tracking what users do, platforms infer what users are: their sexual orientation, political beliefs, personality traits, emotional states, pregnancy status, and health conditions. Research on Facebook "likes" by Wu Youyou, Michal Kosinski, and David Stillwell (2015) demonstrated that algorithms could predict personality traits with greater accuracy than the assessments of friends, family members, and even spouses. This is the assault on Westin's right of reserve that we identified in Chapter 7 --- the right to withhold aspects of yourself is nullified when algorithms can deduce what you chose not to disclose.
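The likes-to-traits prediction is, at its core, ordinary supervised learning: binary liked/not-liked features feeding a linear classifier. A minimal sketch using scikit-learn on synthetic data (the likes matrix and the trait are fabricated; the published studies used far richer data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a likes matrix: rows are users, columns are
# pages, 1 means the user liked the page. Entirely fabricated data.
n_users, n_pages = 1_000, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Fabricated binary trait, correlated with the first five pages.
signal = likes[:, :5].sum(axis=1)
trait = (signal + rng.normal(0, 1, n_users) > 2.5).astype(int)

model = LogisticRegression(max_iter=1_000).fit(likes[:800], trait[:800])
accuracy = model.score(likes[800:], trait[800:])
print(f"Held-out accuracy from likes alone: {accuracy:.0%}")
```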
IoT expansion. Smart home devices (Amazon Echo, Google Nest, Ring doorbells), connected vehicles, fitness trackers, and smart appliances extend corporate surveillance into physical spaces that were previously private. Amazon's acquisition of Ring doorbell cameras created a nationwide surveillance network of over 10 million privately owned cameras --- data that Amazon has shared with law enforcement over 2,000 times, often without the camera owners' knowledge or the surveilled individuals' consent.
Connection: Recall from Chapter 4 the concept of behavioral surplus --- Zuboff's term for the data extracted from users beyond what is needed to improve the service they are using. Dataveillance is the mechanism through which behavioral surplus is produced. The attention economy (Chapter 4) provides the business model; dataveillance provides the implementation.
8.4.3 The VitraMed Dimension: Health Data Surveillance
The VitraMed thread takes on new significance in the surveillance context. Health data surveillance operates through mechanisms that are distinct from --- and in some ways more intimate than --- general platform surveillance.
Mira's father, Vikram, had initially pitched VitraMed as a tool for clinicians --- a way to organize electronic health records and identify patterns that individual doctors might miss. But as the platform grew, the data it collected became valuable for purposes far beyond clinical care:
- Insurance companies wanted access to VitraMed's predictive models to improve risk assessment
- Pharmaceutical companies wanted to identify patient populations for drug trials
- Employers with self-insured health plans wanted aggregate health data about their workforces
- Data brokers wanted to integrate health signals with broader consumer profiles
"Each of those requests comes with a justification," Vikram told Mira during a phone call she recounted to Dr. Adeyemi's class. "Insurers say it will lower premiums. Pharma says it will cure diseases. Employers say it will reduce costs. And they're not wrong --- these are real benefits. But what they're really asking for is surveillance. They want to watch patients --- their behaviors, their bodies, their choices --- in ways that patients never consented to when they went to see their doctor."
The health context makes the surveillance dimension particularly stark because of the relationship between the data collector and the data subject. A patient who shares symptoms with a physician is operating within a context governed by centuries of medical ethics, including the Hippocratic principle of confidentiality. When that data flows to an insurer or a data broker, the contextual integrity violation (Chapter 7, Section 7.3) is severe --- the information has moved from a context of care to a context of commerce or control.
Intuition: Health data surveillance reveals a pattern that recurs throughout this chapter: surveillance systems are often introduced with benevolent justifications (better care, safer communities, more relevant services) and then repurposed for control, profit, or exclusion. The justification and the repurposing are not separate stages --- the business model depends on the repurposing. The benevolent justification is the mechanism through which consent is manufactured.
8.5 Facial Recognition and Biometric Surveillance
8.5.1 The Technology
Facial recognition technology (FRT) uses machine learning algorithms to map the geometry of a person's face --- the distance between the eyes, the shape of the cheekbones, the contour of the jawline --- and compare the resulting "faceprint" against a database to identify or verify the person's identity.
The technology has advanced rapidly since the early 2010s. Error rates for one-to-one verification (confirming that you are who you claim to be) have dropped below 0.1% for high-quality images under controlled conditions. One-to-many identification (searching a database to determine who a person is) remains significantly less reliable, particularly for faces captured at a distance, at an angle, or in poor lighting --- conditions that characterize most real-world surveillance footage.
Critically, the technology's error rates are not distributed equally. A landmark study by Joy Buolamwini and Timnit Gebru at MIT (2018) --- the Gender Shades project --- found that commercial facial recognition systems from Microsoft, IBM, and Face++ had error rates of up to 34.7% for darker-skinned women, compared to 0.8% for lighter-skinned men. A follow-up study by the National Institute of Standards and Technology (NIST, 2019) confirmed these disparities across a broader range of algorithms, finding that many algorithms had false positive rates 10 to 100 times higher for African American and Asian faces compared to Caucasian faces.
This is not a minor calibration problem. It means that facial recognition technology is least reliable for the populations most likely to be subjected to surveillance.
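The arithmetic of one-to-many search shows why. A per-comparison false positive rate that sounds negligible multiplies across every record in the gallery, and a demographic disparity multiplies the resulting false matches again. The rates below are illustrative, not measured:

```python
# Expected false matches per probe photo in a one-to-many search.
# Rates are illustrative; the 10x factor mirrors the low end of the
# demographic disparities NIST reported.
gallery_size = 1_000_000
fpr_baseline = 1e-5                 # per-comparison false positive rate
fpr_disparate = 10 * fpr_baseline   # for a disadvantaged demographic

for label, fpr in [("baseline", fpr_baseline), ("10x group", fpr_disparate)]:
    print(f"{label}: ~{gallery_size * fpr:.0f} expected false matches")
# baseline: ~10; 10x group: ~100 -- and each false match is a
# potential wrongful arrest if human review rubber-stamps it.
```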
8.5.2 The Detroit Case
Eli's Detroit thread converges powerfully with the facial recognition debate. Detroit's police department deployed facial recognition technology through its Project Green Light program, which connected real-time surveillance cameras at gas stations, convenience stores, churches, and community centers to a centralized monitoring hub at police headquarters.
"My neighborhood became a test case," Eli told the class. "Green Light cameras went up at the corner store, at the church, at the rec center. They didn't ask. They just appeared. And then we started hearing stories --- people getting stopped by police who said the computer flagged their face."
In 2020, Robert Williams, a Black man living in a Detroit suburb, was arrested at his home in front of his wife and two young daughters. The charge was based entirely on a facial recognition match that turned out to be wrong. Williams was detained for 30 hours before the error was acknowledged. He was one of at least three Black men known to have been wrongfully arrested in the United States due to facial recognition misidentification --- the others being Michael Oliver (Detroit, 2019) and Nijeer Parks (New Jersey, 2019).
The Williams case illustrates several converging failures:
- Technical failure: The algorithm misidentified Williams. As the Gender Shades research predicted, the technology was least reliable for the demographic most subjected to its use.
- Procedural failure: Detroit police policy required human review of facial recognition matches. The reviewing detective, who received a grainy photo and a low-confidence match, issued an arrest warrant anyway --- suggesting that the algorithmic output was treated as authoritative despite its known limitations.
- Accountability failure: After the wrongful arrest was acknowledged, no officer was disciplined. No policy was changed. The facial recognition system continued operating. Williams had to file a lawsuit to receive compensation for his wrongful detention.
- Contextual failure: Facial recognition was deployed in a city with a long history of racially discriminatory policing, overlaid onto neighborhoods already subjected to disproportionate surveillance. The technology amplified existing power asymmetries rather than disrupting them.
Connection: The Power Asymmetry theme surfaces here with particular force. Facial recognition technology is deployed by institutions with power (police departments) in communities with less power (predominantly Black neighborhoods) and is least accurate for the people it is most used against. This is not coincidental --- it is the structural dynamic that Chapter 5's analysis of data power predicted.
8.5.3 Clearview AI and the Scraping Problem
A different dimension of facial recognition emerged with the 2020 exposure of Clearview AI, a startup that had scraped over 3 billion photographs from social media platforms (Facebook, Twitter, LinkedIn, Venmo) to build the world's largest facial recognition database. Clearview marketed its tool to law enforcement agencies, claiming it could identify virtually anyone from a single photograph.
The Clearview AI case raised issues distinct from the Detroit case:
- Consent: The photographs were originally posted for social purposes. Their scraping and use for law enforcement identification violated the contextual norms of every platform on which they appeared.
- Scale: With 3 billion images, Clearview's database was larger than any government-operated system. The private sector had created a surveillance tool that exceeded government capability.
- Regulation gap: While platforms' terms of service prohibited scraping, the enforcement of those terms against a company selling to law enforcement was complicated by the government's interest in the technology.
- Mission creep: Clearview initially marketed exclusively to law enforcement. Investigations revealed that access had also been provided to private companies, wealthy individuals, and foreign governments.
Several countries have taken action against Clearview AI. The UK Information Commissioner's Office fined Clearview £7.5 million in 2022. Australia's information commissioner found Clearview in breach of privacy law. France's CNIL imposed a €20 million fine. But the company continues to operate in the United States, where no equivalent federal regulation exists.
8.5.4 Biometric Surveillance Beyond Faces
Facial recognition is only one dimension of biometric surveillance --- the monitoring and identification of individuals through their physical characteristics:
- Gait analysis identifies individuals by the way they walk, even when faces are obscured. China has deployed gait recognition systems in several cities, capable of identifying individuals from up to 50 meters away.
- Voice recognition matches individuals by vocal patterns, enabling identification from phone calls and recordings.
- Iris scanning provides high-precision identification at closer ranges.
- Emotion recognition --- also called "affect detection" --- claims to identify emotional states from facial expressions, though the scientific basis for this technology has been widely criticized by psychologists including Lisa Feldman Barrett, who argues that emotions do not map reliably to specific facial configurations.
- DNA surveillance involves the collection and analysis of genetic material, sometimes without the individual's knowledge (through "surreptitious" DNA collection from discarded items).
What distinguishes biometric data from other forms of personal data is its permanence. You can change your password. You can get a new credit card number. You cannot change your face, your gait, your irises, or your DNA. A biometric database breach is irreversible --- the compromised identifiers cannot be reset. This permanence makes biometric data qualitatively different from other categories of personal data and justifies stronger regulatory protection.
Illinois's Biometric Information Privacy Act (BIPA), passed in 2008, was the first U.S. state law to require informed consent before the collection of biometric data. BIPA has generated significant litigation --- including a $650 million settlement with Facebook over its use of facial recognition on user photos without consent, and a $228 million jury verdict against BNSF Railway for scanning truck drivers' fingerprints without consent. But most U.S. states have no equivalent protection.
8.6 Surveillance Capitalism and the Surveillance State: Convergence
8.6.1 Shoshana Zuboff's Framework
We introduced Shoshana Zuboff's concept of surveillance capitalism in Chapter 4 (Section 4.3). Here we examine its relationship to traditional state surveillance and the ways in which the two have become entangled.
Zuboff defines surveillance capitalism as "a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales." The core mechanism is the extraction of behavioral surplus --- data about user behavior that exceeds what is needed to improve the service --- and its transformation into prediction products sold to business customers.
The key insight for this chapter is that surveillance capitalism doesn't just resemble state surveillance; it has created the infrastructure that makes state surveillance vastly more powerful:
- Data supply chain: Government agencies routinely purchase data from commercial data brokers rather than collecting it themselves, circumventing warrant requirements. The Department of Homeland Security, the IRS, and the FBI have all purchased location data, communications metadata, and social media monitoring from private companies. A 2022 ACLU investigation revealed that the Department of Homeland Security had purchased billions of location records drawn from domestic cell phones --- data it could not have legally collected directly.
- Technology transfer: Facial recognition algorithms developed by commercial companies (Microsoft, Amazon, NEC) are sold directly to law enforcement agencies. Palantir, originally funded by the CIA's venture capital arm In-Q-Tel, built its data analytics platform for intelligence agencies and then commercialized it for police departments, immigration enforcement, and corporate clients.
- Infrastructure sharing: Amazon's Ring doorbell network functions as a distributed surveillance system whose footage is accessible to police through partnerships with over 2,000 law enforcement agencies. Google's Sensorvault --- a database of location data from hundreds of millions of phones --- has been accessed by law enforcement through "geofence warrants" that demand data on every person in a given area during a given time period.
8.6.2 The Convergence Problem
The convergence of state and corporate surveillance creates a compound threat that is greater than either would pose alone:
Regulatory arbitrage. When government agencies face legal restrictions on data collection, they can purchase equivalent data from commercial providers who collected it under terms of service rather than legal authority. This creates a shadow surveillance system that operates outside the constitutional constraints designed to limit government power. The Fourth Amendment protects against unreasonable government searches; it does not protect against government purchases of data that private companies collected through "consent."
Accountability laundering. When harms result from surveillance systems that combine government and corporate components, accountability becomes diffuse. If a person is denied a job based on a background check that incorporates commercially purchased location data originally collected by a phone app, who is responsible? The employer? The background check company? The data broker? The app developer? The phone manufacturer? The surveillance assemblage distributes responsibility so widely that it effectively disappears.
Democratic deficit. Citizens can, in principle, hold their government accountable through democratic processes --- voting, legislation, judicial review. They have far fewer mechanisms for holding corporations accountable for surveillance practices. And when the government's surveillance operates through corporate infrastructure, democratic accountability is effectively bypassed.
"I can vote out a mayor who installs cameras in my neighborhood," Eli said. "But I can't vote out Amazon. I can't vote out the data broker who sold my location history to the police department. The convergence means that the democratic checks on surveillance --- weak as they already are --- don't apply to half the system."
Common Pitfall: Discussions of surveillance often treat state and corporate surveillance as separate issues --- "government overreach" on one side, "Big Tech privacy" on the other. This framing obscures the most important dynamic: the two systems are deeply intertwined, and addressing one without the other is insufficient. A robust privacy framework must address the convergence, not just the components.
8.6.3 The Consent Fiction in Surveillance
The convergence of state and corporate surveillance exposes the consent fiction with particular clarity. Consider the chain of "consent" involved in a typical surveillance transaction:
- You "consent" to a social media platform's terms of service (without reading them --- see Chapter 9)
- The terms authorize the platform to share data with "partners" (you have no say over who the partners are)
- A data broker purchases your data from the platform (you don't know this is happening)
- A law enforcement agency purchases your data from the data broker (no warrant required)
- Your data is used to place you at the scene of a crime, flag you for additional scrutiny, or deny you a government benefit (you have no opportunity to challenge the evidence)
At each step, some form of "consent" or "legal authority" is invoked. But the cumulative effect is that your data travels from a context of social connection to a context of law enforcement without anything that a reasonable person would recognize as meaningful consent. This is the consent fiction operating at scale --- theatrical consent that legitimizes a transfer of power.
8.7 Resistance: Encryption, Anonymity, and Legal Challenges
8.7.1 Technical Resistance: Encryption and Anonymity Tools
In response to pervasive surveillance, a range of technical tools have emerged to protect communications and anonymity:
End-to-end encryption (E2EE) ensures that only the sender and recipient can read a message --- not the platform that carries it, not a government that demands it. Signal, WhatsApp (since 2016), and iMessage use E2EE by default. The adoption of E2EE by mainstream platforms represents one of the most significant privacy advances of the past decade --- and has been fiercely contested by governments worldwide.
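The guarantee is easy to demonstrate with a modern cryptographic library. Below is a minimal sketch using PyNaCl (pip install pynacl); it shows the authenticated public-key primitive, not any particular messenger's full protocol.

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys are exchanged.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob: authenticated public-key encryption.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# The platform relaying `ciphertext` sees only random-looking bytes.
# Only Bob, holding bob_sk, can open it:
receiving_box = Box(bob_sk, alice_sk.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'
```

Real messengers layer key ratcheting and forward secrecy on top of this primitive, but the core property is the same: the carrier never holds a key that opens the message.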
Tor (The Onion Router) anonymizes internet traffic by routing it through multiple volunteer-operated servers, each of which removes one layer of encryption, making it extremely difficult to trace traffic back to its origin. Tor is used by journalists, activists, whistleblowers, and ordinary citizens seeking privacy --- as well as by criminals, a duality that complicates governance.
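The layering can be illustrated with symmetric encryption: the sender wraps the message once per relay, and each relay can peel exactly one layer. A toy sketch using the cryptography library's Fernet (real Tor negotiates per-circuit keys and uses fixed-size cells):

```python
from cryptography.fernet import Fernet

# One key per relay on the circuit. In Tor these are negotiated per
# circuit, never preshared like this; toy model only.
relay_keys = [Fernet.generate_key() for _ in range(3)]

# The sender wraps the message in reverse order: exit layer innermost.
message = b"GET https://example.org/"
for key in reversed(relay_keys):
    message = Fernet(key).encrypt(message)

# Each relay peels exactly one layer and forwards the rest; no relay
# sees both the origin and the plaintext destination.
for i, key in enumerate(relay_keys):
    message = Fernet(key).decrypt(message)
    print(f"relay {i} peeled a layer; {len(message)} bytes remain")

print(message)  # the exit relay recovers the original request
```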
VPNs (Virtual Private Networks) encrypt traffic between a user's device and a VPN server, preventing the user's internet service provider from monitoring their activity. However, VPN providers can themselves monitor traffic, creating a new point of trust (and potential surveillance).
Privacy-focused browsers and search engines --- Firefox with privacy extensions, Brave browser, DuckDuckGo search --- reduce tracking by blocking third-party cookies, fingerprinting, and data collection.
8.7.2 The Encryption Debate
Governments have consistently sought to limit the spread of strong encryption, arguing that it enables criminals and terrorists to "go dark" --- communicating beyond the reach of lawful surveillance.
The debate has a long history:
- In the 1990s, the U.S. government proposed the Clipper Chip --- a government-designed encryption chip with a built-in backdoor for law enforcement. The proposal was abandoned after intense opposition from technologists and civil liberties organizations who demonstrated that the backdoor could be exploited by adversaries.
- In 2015-2016, the FBI demanded that Apple create a tool to unlock the iPhone of the San Bernardino shooter. Apple refused, arguing that creating a backdoor for one phone would compromise the security of all iPhones. The FBI eventually accessed the phone through a third-party hacking tool, but the underlying policy question remained unresolved.
- In 2023, the UK's Online Safety Act included provisions that could require platforms to scan encrypted messages for child sexual abuse material --- effectively mandating a form of backdoor access. Signal and WhatsApp threatened to withdraw from the UK market rather than comply, and the provision has not been enforced as of 2025.
The technical consensus among cryptographers is that there is no way to create a backdoor that is accessible only to authorized parties. A vulnerability engineered for law enforcement can be exploited by criminals, foreign intelligence services, and authoritarian governments. As the cybersecurity community puts it: you cannot build a door that only good guys can walk through.
Reflection: The encryption debate forces a genuine tension between values that Chapter 6's ethical frameworks help us navigate. A utilitarian analysis must weigh the benefits of law enforcement access against the costs of weakened security for all users. A deontological analysis asks whether treating all users as potential criminals (by mandating surveillance backdoors) respects their dignity. A Rawlsian analysis asks: behind the veil of ignorance, would you accept mandatory encryption backdoors --- not knowing whether you would be a crime victim, a wrongly accused suspect, a dissident in an authoritarian state, or a domestic abuse victim whose safety depends on encrypted communication?
8.7.3 Legal and Advocacy Resistance
Beyond technical tools, surveillance has been challenged through legal action and organized advocacy:
Judicial challenges. In Carpenter v. United States (2018) --- referenced in Chapter 7's case study --- the U.S. Supreme Court held that accessing historical cell-site location data constitutes a search under the Fourth Amendment, requiring a warrant. Chief Justice Roberts wrote that "a cell phone --- almost a feature of human anatomy --- tracks nearly exactly the movements of its owner." The decision represents a significant, if limited, judicial recognition that digital surveillance requires new legal standards.
Legislative campaigns. Bans and moratoriums on government use of facial recognition have been enacted in San Francisco, Boston, Minneapolis, and other cities. The EU's AI Act (2024) categorizes real-time biometric identification in public spaces as a "prohibited" AI practice, with narrow exceptions for law enforcement.
Civil society organizations. The Electronic Frontier Foundation (EFF), the ACLU, Privacy International, Access Now, and numerous other organizations conduct surveillance litigation, publish research, provide technical tools, and advocate for policy reform. The DataRights Alliance --- where Sofia Reyes works --- has been particularly active in challenging the surveillance-policing nexus.
"Legal challenges are necessary but insufficient," Sofia told Dr. Adeyemi's class during a guest presentation. "We won Carpenter, and it mattered. But it only addressed one specific data type in one specific legal context. The surveillance infrastructure is growing faster than courts can constrain it. We need structural reform --- not just case-by-case litigation."
8.7.4 Community Resistance
Some of the most effective resistance to surveillance has come from community organizing rather than litigation or technology:
In Detroit, a coalition of community organizations --- including the Detroit Community Technology Project and the Detroit Digital Justice Coalition --- mounted a sustained campaign against Project Green Light's expansion. They attended city council meetings, conducted independent audits of the facial recognition system's accuracy, organized "know your rights" workshops, and documented cases of misidentification. Their efforts contributed to a 2021 city ordinance requiring a public hearing and city council approval before any new surveillance technology could be deployed.
"We didn't stop surveillance in Detroit," Eli acknowledged. "But we changed the terms. Before, the police department could deploy whatever technology they wanted, and we'd only find out when someone got wrongfully arrested. Now there's a public process. It's not enough, but it's a start --- and it happened because people organized."
In Oakland, California, a community-driven surveillance technology ordinance passed in 2018 --- one of the first in the nation --- requires city departments to submit impact reports and obtain city council approval before acquiring any surveillance technology. The ordinance was drafted with significant community input and has been used to block or modify several proposed surveillance acquisitions.
Connection: Community resistance to surveillance connects to Chapter 5's discussion of counter-data and sousveillance. When communities conduct their own audits of surveillance systems, they are producing counter-data --- using the tools of measurement and analysis to challenge institutional claims about the accuracy, necessity, and fairness of surveillance technologies.
8.8 The Chilling Effect: Surveillance and Self-Censorship
8.8.1 Documenting the Chill
The chilling effect --- the tendency of surveillance to suppress lawful behavior, speech, and association --- is one of the best-documented consequences of surveillance, and one of the hardest to remedy, because the harm is preventive: people refrain from doing something they would otherwise have done, and the absence of an action leaves no trace.
Empirical studies have documented chilling effects across multiple contexts:
- Wikipedia searches. A 2016 study by Jon Penney found that after the Snowden revelations, Wikipedia searches for terrorism-related articles declined by approximately 20% --- suggesting that people avoided seeking information on sensitive topics when they believed they might be watched.
- Google searches. Alex Marthews and Catherine Tucker (2017) found similar declines in Google searches for terms that users expected might be flagged by government surveillance. The effect was larger for searches that could be construed as related to terrorism or extremism, but extended to searches on other sensitive topics including health and sexuality.
- Journalism. The Committee to Protect Journalists found that the Snowden revelations led to significant changes in journalistic practice: sources became less willing to speak, journalists adopted encryption and operational security measures, and some investigative projects were abandoned because sources could not be adequately protected.
- Muslim communities. Research by Diala Shamas and the CLEAR project at CUNY documented the chilling effects of FBI surveillance and informant programs on American Muslim communities: reduced mosque attendance, self-censorship in religious discussions, reluctance to engage in charitable giving (for fear of funding accusations), and diminished political participation.
- Academic freedom. PEN America's 2013 survey of over 520 writers found that 24% had deliberately avoided certain topics in phone or email conversations due to surveillance concerns, and 16% had avoided writing or speaking about particular subjects.
8.8.2 The Democratic Harm
The chilling effect represents a distinctive form of harm that is not captured by traditional privacy frameworks focused on data breaches, unauthorized access, or identity theft. The harm is to the democratic ecosystem itself:
- When journalists cannot protect sources, government accountability weakens
- When citizens self-censor, public discourse narrows
- When communities under surveillance withdraw from civic life, democratic participation declines
- When people avoid searching for information, intellectual freedom contracts
- When political organizers worry about monitoring, collective action is suppressed
These are not speculative harms --- they are documented consequences of existing surveillance systems. And they disproportionately affect communities that are already marginalized: communities of color, immigrant communities, Muslim communities, political dissidents, LGBTQ+ individuals in hostile environments, and labor organizers.
"The irony," Dr. Adeyemi observed, "is that the communities most in need of democratic voice are the ones most silenced by surveillance. Surveillance doesn't suppress everyone equally. It suppresses the already vulnerable. And that makes the democratic harm not just a privacy issue but a justice issue."
Intuition: The chilling effect connects Foucault's panoptic theory to democratic practice. Foucault argued that the panopticon produces docile subjects --- people who internalize norms and self-regulate. The chilling effect is the democratic version of this dynamic: surveillance produces docile citizens --- people who avoid controversial speech, sensitive research, political organizing, and civic dissent. A democracy of docile citizens is not a democracy at all.
8.9 Chapter Summary
Key Concepts
- Panopticon (Bentham/Foucault): Surveillance's power lies in the possibility of being watched, which causes self-regulation. Modern surveillance extends this principle through digital systems with no clear boundaries.
- Dataveillance (Clarke): The systematic monitoring of people through the application of information technology to personal data --- now the dominant form of surveillance.
- Surveillance capitalism (Zuboff): An economic order that extracts behavioral surplus from users and converts it into prediction products --- creating the infrastructure that state surveillance leverages.
- Convergence: State and corporate surveillance are deeply intertwined through data purchases, technology transfers, and infrastructure sharing, creating compound accountability gaps.
- Chilling effect: Surveillance suppresses lawful behavior, speech, and association --- disproportionately affecting marginalized communities and weakening democratic participation.
- Facial recognition and biometric surveillance raise distinctive concerns because of their permanence, their differential accuracy across demographic groups, and their deployment in communities already subject to disproportionate policing.
Key Debates
- Is the panopticon metaphor still useful for understanding digital surveillance, or have we moved beyond it?
- Can encryption backdoors be designed to limit access to authorized parties, or is strong encryption an all-or-nothing proposition?
- Should facial recognition technology be regulated, banned, or subjected to moratoriums pending improved accuracy and governance?
- Is resistance (technical, legal, communal) sufficient to address surveillance, or does the structural dynamic require more fundamental change?
Applied Framework
To evaluate a surveillance practice:
1. Identify the watching: who is observing whom, and through what technology?
2. Identify the power asymmetry: who benefits from the watching, and who bears its costs?
3. Apply contextual integrity: does the surveillance violate the informational norms of the context in which it operates?
4. Assess the chilling effect: what lawful behavior does the surveillance suppress?
5. Evaluate accountability: who is responsible when surveillance causes harm, and what recourse do the surveilled have?
What's Next
In Chapter 9: Data Collection and Consent, we shift focus from the mechanisms of surveillance to the legitimizing frameworks that are supposed to govern it. We'll examine informed consent --- the legal and ethical concept that is supposed to give individuals control over their data --- and discover why, in practice, consent has become a fiction that legitimizes the very surveillance practices this chapter has documented.
Before moving on, complete the exercises and quiz.