Appendix B: Landmark Studies in Surveillance Research

Annotated Guide to 25 Essential Works


How to Use This Appendix

This appendix annotates twenty-five landmark studies, papers, and investigations that have shaped the field of surveillance studies. Each entry covers the research question, the method employed, the key findings, the study's significance to surveillance scholarship, its most important limitations, and where it connects to the chapters of this textbook.

These annotations are not substitutes for reading the originals. They are introductions — enough context to understand why a study matters and what to look for when you read it. For several of these works (particularly Foucault, Zuboff, and Browne), detailed reading guides are provided in Appendix C; this appendix focuses on their empirical and conceptual contributions.


Study 1: Michel Foucault, Discipline and Punish (1975)

Full Citation: Foucault, Michel. 1977. Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan. New York: Pantheon. [Originally Surveiller et punir, Gallimard, 1975.]

Research Question: How did modern forms of power and punishment develop from the Enlightenment through the nineteenth century, and what relationship do these forms bear to the surveillance and control institutions of the present?

Method: Genealogical history — the approach Foucault calls genealogy, which traces not the progressive development of ideas but the discontinuities, accidents, and power struggles through which knowledge regimes are formed. Foucault reads legal texts, architectural plans, prison reform proposals, military manuals, and educational treatises as artifacts of shifting power formations.

Key Findings: Modern punishment shifted from spectacular public torture (the execution as performance of sovereign power) to incarceration as continuous observation and normalization. The panopticon, Bentham's prison design, is not merely a building but a machine for producing subjects who internalize surveillance and discipline themselves. This same logic — of disciplinary power operating through visibility and normalization — spread from the prison to schools, hospitals, factories, and armies. The "carceral society" is one in which disciplinary mechanisms are diffused throughout social institutions.

Significance to Surveillance Studies: Discipline and Punish provided surveillance studies with its foundational theoretical vocabulary: panopticism, disciplinary power, normalization, the gaze. It directed attention to the relationship between visibility and control, and to how institutional architecture shapes human subjectivity. Every subsequent surveillance theory — including Mathiesen's synopticism, Haggerty and Ericson's surveillant assemblage, and Zuboff's surveillance capitalism — defines itself partly in relation to Foucault.

Limitations: Foucault's analysis is conducted at a level of abstraction that can make it difficult to apply to specific contemporary cases. His account tends toward totalization — surveillance appears to penetrate all social relations — which can obscure the unevenness of surveillance across social groups and the spaces of resistance that exist within disciplinary regimes. His empirical history has been challenged on specific points. His framework, developed before digital surveillance, requires extension rather than simple application.

Where Referenced in This Textbook: Chapters 2 (core analysis), 4 (historical context), 5 (theoretical frameworks), 26 (workplace surveillance).


Study 2: Stanley Milgram's Obedience Studies (1961–1963)

Full Citation: Milgram, Stanley. 1963. "Behavioral Study of Obedience." Journal of Abnormal and Social Psychology 67 (4): 371–378. [Extended in: Milgram, Stanley. 1974. Obedience to Authority. New York: Harper and Row.]

Research Question: To what extent will ordinary people comply with instructions from authority figures to harm others?

Method: Experimental. Participants were told they were in a learning study; a confederate playing "the learner" was ostensibly connected to a shock generator. An authority figure (experimenter in a lab coat) instructed participants to administer increasingly intense "shocks" when the learner gave wrong answers. In the most famous condition, approximately 65% of participants administered the maximum 450-volt shock when instructed to continue by the authority figure.

Key Findings: Ordinary people, without sadistic motivation, will comply with authority-sanctioned instructions to harm others when the authority structure is clear, when there is distance from the victim, and when responsibility is diffused to the authority. Compliance rates varied dramatically with proximity to the victim, the authority figure's presence, and group context.

Significance to Surveillance Studies: Milgram's studies are directly relevant to understanding surveillance compliance: why do workers comply with monitoring systems they find invasive? Why do citizens comply with surveillance regimes they know to be unjust? Why do platform users accept data practices they describe as troubling? Milgram's insight — that the structure of authority and the diffusion of responsibility are more powerful determinants of compliance than individual values — helps explain why surveillance systems function despite widespread misgivings. The Milgram paradigm also illuminates the behavior of surveillance system operators: compliance with instructions to monitor, store, and report on citizens or employees may reflect authority structures more than individual ethics.

Limitations: The original studies have been criticized on methodological and ethical grounds. The deception protocol caused participants distress, and subsequent archival research has raised questions about the adequacy of consent and debriefing. Some researchers argue that participants were not as thoroughly deceived as Milgram claimed, and that the high compliance rates reflect a performance of obedience for the experimenter rather than genuine obedience to authority. The transferability of laboratory findings to real-world authority contexts is also debated.

Where Referenced in This Textbook: Chapter 5 (theoretical frameworks), Chapter 26 (workplace compliance), Chapter 28 (algorithmic management).


Study 3: Facebook Emotional Contagion Study (Kramer et al., 2014)

Full Citation: Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." Proceedings of the National Academy of Sciences 111 (24): 8788–8790.

Research Question: Can emotional states be transferred between individuals through social networks — specifically, does seeing more positive or more negative content in a News Feed change users' own emotional expression?

Method: Randomized controlled experiment conducted on 689,003 Facebook users without their knowledge. Facebook manipulated users' News Feeds to show more positive or more negative emotional content and then measured the emotional content of users' own subsequent posts.

Key Findings: Users who saw fewer positive posts themselves posted less positively; users who saw fewer negative posts themselves posted less negatively. Emotional contagion was demonstrated at scale without in-person interaction.

Significance to Surveillance Studies: This study is significant less for its psychological findings than for what it reveals about platform surveillance and experimentation. Facebook used its surveillance infrastructure — the ability to monitor, categorize, and manipulate the News Feed of nearly 700,000 users — to conduct psychological experiments without consent or ethical review. The study demonstrates that platform surveillance is not merely passive monitoring but enables active behavioral manipulation. The "terms of service consent" defense — that users agreed to data use for "research" — was widely rejected as falling far short of meaningful informed consent for psychological experimentation. The outrage following the study's publication contributed to growing public awareness of the scope of platform surveillance and experimentation.

Limitations: The effect sizes were very small — the study's statistical power to detect tiny effects was a function of its enormous sample. Whether these small average effects translate into meaningful individual experiences is uncertain. The study has also been questioned for whether the manipulation was real (News Feeds are always algorithmically curated) and for its operationalization of "emotional content."
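
To see why a sample of this size makes even trivial effects statistically detectable, the back-of-the-envelope calculation below estimates the minimum detectable effect size for a simple two-group comparison. The group sizes, alpha, and power values are illustrative assumptions, not parameters reported by Kramer et al.

    # Approximate minimum detectable effect (Cohen's d) for a two-sample test,
    # using the normal-approximation formula:
    #     d_min ~= (z_{1 - alpha/2} + z_{power}) * sqrt(2 / n_per_group)
    # All numbers are illustrative, not taken from the study itself.
    from math import sqrt
    from scipy.stats import norm

    alpha, power = 0.05, 0.80
    n_per_group = 345_000  # roughly half of ~689,000 participants

    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    d_min = (z_alpha + z_power) * sqrt(2 / n_per_group)
    print(f"minimum detectable Cohen's d: {d_min:.4f}")  # about 0.007

With groups this large, effects far smaller than the conventional "small" benchmark of d = 0.2 reach statistical significance, which is why the reported effects could be both statistically real and practically negligible.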

Where Referenced in This Textbook: Chapter 13 (social media surveillance), Chapter 14 (behavioral targeting), Chapter 34 (surveillance capitalism critique).


Study 4: Gender Shades — Buolamwini and Gebru (2018)

Full Citation: Buolamwini, Joy, and Timnit Gebru. 2018. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81: 1–15.

Research Question: Do commercial facial analysis systems perform equally across demographic groups — specifically, do they show differential accuracy by gender and skin tone?

Method: Auditing study. Buolamwini and Gebru constructed a benchmark dataset (Pilot Parliaments Benchmark) of 1,270 parliamentarian photographs with balanced representation of gender and skin tone, labeled using the Fitzpatrick skin classification scale. They tested three major commercial facial analysis APIs (IBM, Microsoft, and Face++) on this benchmark and measured accuracy rates across demographic subgroups.

Key Findings: All three systems showed significant accuracy disparities by gender and skin tone. Error rates for darker-skinned women reached 34.7%, while error rates for lighter-skinned men were as low as 0.3%; on the worst-performing system, the gap between these two subgroups was 34.4 percentage points.
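
The core of the audit is a disaggregated error-rate calculation. The sketch below shows that calculation on a handful of made-up records; the column names and values are illustrative, not the Pilot Parliaments Benchmark's actual schema.

    # Hypothetical audit output: one row per test image, recording the
    # subgroup labels and whether the commercial classifier was correct.
    import pandas as pd

    audit = pd.DataFrame({
        "skin_tone": ["darker", "darker", "lighter", "lighter", "darker", "lighter"],
        "gender":    ["female", "male",   "female",  "male",    "female", "male"],
        "correct":   [False,    True,     True,      True,      False,    True],
    })

    # Error rate for each intersectional subgroup (e.g., darker-skinned women).
    error_rates = (
        audit.groupby(["skin_tone", "gender"])["correct"]
             .apply(lambda s: 1.0 - s.mean())
             .rename("error_rate")
    )
    print(error_rates)
    # The headline result is the gap between the best- and worst-served groups.
    print("largest disparity:", error_rates.max() - error_rates.min())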

Significance to Surveillance Studies: Gender Shades provided empirical proof, through rigorous auditing methodology, of racial and gender bias in commercial facial recognition. Before this study, industry claims of high accuracy could be cited without demographic disaggregation; after this study, accuracy claims without demographic breakdown were no longer defensible. The study launched a field of algorithmic auditing and directly influenced regulatory debates about facial recognition. It demonstrated the audit methodology as a tool of critical surveillance research. It also illustrated the role of training data composition: systems trained predominantly on lighter-skinned faces performed poorly on darker-skinned faces, encoding the historical underrepresentation of darker-skinned people in technology datasets.

Limitations: The Pilot Parliaments Benchmark is a relatively small and specific dataset (parliamentarians are not representative of general populations). The study was conducted at a single point in time; commercial systems have been updated since. The focus on binary gender classification does not address accuracy for nonbinary or gender-nonconforming individuals.

Where Referenced in This Textbook: Chapters 7 (biometrics), 35 (facial recognition), 36 (race and surveillance), 40 (future of surveillance).


Study 5: Princeton WebTAP — Online Tracking Measurement (Englehardt and Narayanan, 2016)

Full Citation: Englehardt, Steven, and Arvind Narayanan. 2016. "Online Tracking: A 1-Million-Site Measurement and Analysis." In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 1388–1401.

Research Question: What is the prevalence and nature of third-party tracking across the web at scale?

Method: Web measurement study using OpenWPM, an automated measurement framework. The researchers crawled the top 1 million websites as ranked by Alexa and measured: the presence of third-party scripts, cookies, and tracking technologies; the network of tracker relationships; and the deployment of advanced fingerprinting techniques.

Key Findings: Third-party tracking was pervasive across the web. Some tracking networks were each present on more than 100,000 sites, and Google's tracker ecosystem appeared on approximately 76% of top sites. A small number of large tracking companies had a presence across the majority of the web, enabling cross-site tracking at massive scale. Advanced fingerprinting techniques (canvas fingerprinting, WebRTC-based fingerprinting) were deployed on thousands of sites without users' knowledge.
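
The study's central measurement is tracker prevalence: for each third-party domain, the fraction of crawled sites on which it appears. A minimal sketch of that computation over hypothetical crawl records follows; OpenWPM's real output is a richer database, and the site and domain names here are invented.

    # Hypothetical crawl output: (first-party site, third-party request domain) pairs.
    from collections import defaultdict

    crawl_records = [
        ("example-news.com", "tracker-a.com"),
        ("example-news.com", "analytics-b.com"),
        ("example-shop.com", "tracker-a.com"),
        ("example-blog.com", "tracker-a.com"),
        ("example-blog.com", "cdn-c.com"),
    ]

    sites_per_third_party = defaultdict(set)
    for site, third_party in crawl_records:
        sites_per_third_party[third_party].add(site)

    total_sites = len({site for site, _ in crawl_records})
    prevalence = {
        tp: len(sites) / total_sites
        for tp, sites in sites_per_third_party.items()
    }
    # Sort by reach: the study's key result is that a handful of third parties
    # appear on a large fraction of all sites crawled.
    for tp, share in sorted(prevalence.items(), key=lambda kv: -kv[1]):
        print(f"{tp}: present on {share:.0%} of crawled sites")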

Significance to Surveillance Studies: This study provided the first rigorous, large-scale empirical documentation of the commercial surveillance infrastructure of the web. Prior to this kind of systematic measurement, claims about the prevalence of tracking were based on smaller studies or industry claims. The Princeton WebTAP methodology became a standard tool for ongoing surveillance of the surveillance ecosystem, enabling researchers to track changes in tracking behavior over time. The study also demonstrated the highly concentrated structure of the tracking industry — a few dominant players observe the vast majority of web activity.

Limitations: Web crawls from a research network do not replicate the experience of real users, who may receive different tracking depending on their cookies, profiles, and location. The list of "known trackers" used for identification is maintained by community efforts and may miss novel or obscure tracking technologies.

Where Referenced in This Textbook: Chapters 12 (cookies and tracking), 13 (social media), 14 (behavioral targeting).


Study 6: ProPublica Machine Bias — COMPAS Study (Larson et al., 2016)

Full Citation: Larson, Jeff, Surya Mattu, Lauren Kirchner, and Julia Angwin. 2016. "How We Analyzed the COMPAS Recidivism Algorithm." ProPublica, May 23.

Research Question: Does the COMPAS algorithm — used by courts in Broward County, Florida, and other jurisdictions to predict recidivism risk — perform equally across racial groups?

Method: Investigative data analysis. ProPublica obtained COMPAS risk scores and subsequent criminal records for 7,214 people arrested in Broward County. They analyzed false positive and false negative rates by race.

Key Findings: COMPAS was nearly twice as likely to incorrectly flag Black defendants as future criminals compared to white defendants (false positive rate: 44.9% Black, 23.5% white). COMPAS was more likely to incorrectly label white defendants as low risk when they subsequently re-offended (false negative rate: 47.7% white, 28.0% Black). The algorithm's overall predictive accuracy was modest, only somewhat better than a coin flip, and its racial disparities were substantial.
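
ProPublica's disparity analysis reduces to computing false positive and false negative rates separately for each group. A minimal sketch on synthetic records (group labels, column names, and values are invented for illustration):

    # Each record: the defendant's group, whether COMPAS labeled them high risk,
    # and whether they actually re-offended within the follow-up window.
    import pandas as pd

    df = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "high_risk":  [True, True, False, False, False, True],
        "reoffended": [False, True, True, False, True, True],
    })

    def error_rates(g):
        # False positive rate: labeled high risk among those who did NOT re-offend.
        fpr = g.loc[~g.reoffended, "high_risk"].mean()
        # False negative rate: labeled low risk among those who DID re-offend.
        fnr = (~g.loc[g.reoffended, "high_risk"]).mean()
        return pd.Series({"FPR": fpr, "FNR": fnr})

    print(df.groupby("group").apply(error_rates))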

Significance to Surveillance Studies: Machine Bias is one of the most consequential pieces of accountability journalism about algorithmic systems. It brought the concept of algorithmic bias to a wide public audience and demonstrated the gap between claimed objectivity and actual performance of risk assessment tools. It sparked a major academic debate about the definition of fairness in machine learning (the Northpointe response argued that COMPAS was "fair" by a different statistical definition — equal predictive accuracy across groups — demonstrating that different fairness definitions are mathematically incompatible). The study established the audit methodology as a form of public accountability journalism for algorithmic systems.

Limitations: The study's methodology has been contested by Northpointe (the company that produces COMPAS), which argued that the researchers' definition of fairness was incomplete. Statistical fairness experts have pointed out that the different fairness metrics invoked by ProPublica and Northpointe cannot simultaneously be satisfied when base rates differ across groups — a genuine technical constraint, not merely a disagreement about definitions. This debate has been enormously productive for machine learning ethics research.
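
The incompatibility can be made precise. For a binary risk tool applied to one group, let π be that group's base rate of re-offending, PPV the tool's positive predictive value, and FNR its false negative rate. A standard identity then fixes the false positive rate:

    \mathrm{FPR} \;=\; \frac{\pi}{1-\pi}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)

If two groups have different base rates π, the identity shows that PPV, FPR, and FNR cannot all be equalized across them at once: holding the calibration-style metric Northpointe emphasized (PPV) equal forces the error-rate metrics ProPublica emphasized (FPR and FNR) apart, and vice versa.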

Where Referenced in This Textbook: Chapters 8 (predictive policing), 29 (HR analytics), 36 (race and surveillance).


Study 7: Zuboff's Analysis of Google AdWords (from The Age of Surveillance Capitalism, 2019)

Full Citation: Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs. [Especially Part I, Chapters 2–4.]

Research Question: How did Google's discovery of behavioral surplus and its monetization in AdWords constitute a new economic logic (surveillance capitalism)?

Method: Historical case study combining document analysis, interview research, corporate history, and theoretical interpretation.

Key Findings: In the early 2000s, Google discovered that the "data exhaust" from search queries — the behavioral signals generated by user searches that went beyond what was necessary to return search results — could be used to predict what users wanted and thus to target advertising with unprecedented precision. The realization that behavioral data could be captured, processed, and sold as predictions about future behavior — without informing users — created the economic logic Zuboff calls surveillance capitalism. This logic spread from Google to Facebook, Amazon, and ultimately to commercial life generally.

Significance to Surveillance Studies: Zuboff provided surveillance studies with an economic framework that explains why commercial surveillance has the characteristics it does: not as a byproduct of technology development but as a deliberate and structured economic logic. Behavioral surplus, the rendition cycle, behavioral modification — these concepts gave analysts tools to describe what commercial surveillance is and does. The framework is also a normative argument: surveillance capitalism is not merely a business model but a threat to human autonomy and democratic governance.

Limitations: Critics have argued that Zuboff's framework is analytically imprecise (what exactly counts as "behavioral surplus" versus data necessary for service provision?), historically incomplete (she locates surveillance capitalism's origins too narrowly in Google), and politically one-sided (she focuses on platform monopolies while giving insufficient attention to governmental surveillance). Her definition of "surveillance capitalism" has been challenged as so broad as to describe most commercial data collection, or alternatively as too narrow to address non-commercial surveillance.

Where Referenced in This Textbook: Chapters 11 (data economy), 13 (social media), 14 (behavioral targeting), 34 (surveillance capitalism critique).


Study 8: ACLU Ring/Neighbors Racial Profiling Analysis (2021)

Full Citation: American Civil Liberties Union. 2021. The Surveillance Storefront: Ring and Amazon's Neighborhood Surveillance Network. ACLU Special Report.

Research Question: Do posts on Ring's Neighbors app exhibit racial profiling — do they disproportionately target individuals of color as "suspicious," independent of behavior?

Method: Content analysis of Neighbors app posts collected from a sample of US metropolitan areas, combined with demographic analysis of reporting patterns.

Key Findings: Posts on Neighbors disproportionately described Black, Latino, and Asian individuals as suspicious, even when the described behavior was entirely ordinary (walking, standing, delivering packages). White individuals were substantially underrepresented in "suspicious person" posts relative to their population share in the areas studied. The report also documented Ring's partnerships with over 2,000 police departments, the terms of which gave police access to footage upon request without subpoena.

Significance to Surveillance Studies: The ACLU analysis documented how a "neutral" distributed surveillance platform reproduces and amplifies racial bias through aggregated user-generated content. Ring's official terms prohibit discriminatory content; the analysis showed that such terms are inadequate to prevent discriminatory outcomes. The study also highlighted the quasi-public character of private surveillance networks: systems owned by individual residents, aggregated by a commercial platform, and integrated with police databases constitute a form of surveillance infrastructure with public implications that private law cannot adequately address.

Limitations: Content analysis of public posts captures only what users publish, not the full universe of suspicious activity reports. The study's sample is not fully described. Ring subsequently modified its platform and police partnership terms in response to public pressure; the study reflects conditions at a particular moment.

Where Referenced in This Textbook: Chapters 16 (Ring/home surveillance), 36 (racial surveillance).


Study 9: Carpenter v. United States, 585 U.S. 296 (2018)

Research Question (Legal): Does the government's warrantless acquisition of seven days of cell-site location information (CSLI) from a wireless carrier violate the Fourth Amendment?

Method: Supreme Court opinion (5-4), authored by Chief Justice Roberts.

Key Findings (Holding): Yes. The government needs a warrant to obtain seven or more days of CSLI. Cell-site location information is unique: it is detailed (revealing religious practice, political associations, romantic relationships), retrospective (enabling reconstruction of past movements), and generated automatically and passively (not truly "voluntarily" shared). The third-party doctrine applies with less force to this category of information.

Significance to Surveillance Studies: Carpenter is the most important US Supreme Court privacy decision in the digital age. It marked the first significant retreat from the third-party doctrine, acknowledging that the doctrine's logic does not extend to all digital data. Roberts's majority opinion explicitly addressed the "seismic shifts in digital technology" that make pre-digital privacy doctrine inadequate. The decision also prompted significant lower-court litigation about what categories of digital data require warrants — geofence warrants, tower dumps, historical email metadata — creating an evolving body of law that students should track.

Limitations as a Precedent: Carpenter is explicitly narrow — it addresses CSLI specifically and does not purport to overrule Miller or Smith. The Court declined to define how comprehensive data must be to trigger its reasoning, leaving extensive uncertainty. Carpenter has been applied inconsistently in lower courts; its scope remains contested.

Where Referenced in This Textbook: Chapters 9 (intelligence surveillance), 18 (smartphone tracking), 31 (legal frameworks).


Study 10: SafeGraph Location Data and COVID-19 (Multiple Studies, 2020–2021)

Full Citation: Multiple studies using SafeGraph mobility data for COVID-19 modeling; documented ethical concerns in: Calhoun, Emily. 2021. "The Ethical Problems with Location Data in COVID-19 Research." Harvard Data Science Review.

Research Question: Can commercial location data (from smartphone apps) be used to model COVID-19 transmission patterns and policy effects?

Method: Correlational analysis using mobility data purchased from SafeGraph, a commercial location data company, to study the relationship between movement patterns and COVID-19 spread.

Key Findings (of ethical concern): SafeGraph location data used in COVID research was collected from smartphone apps for marketing purposes, without meaningful consent for health research uses. The data was sold to researchers without adequate documentation of its collection methodology. The same data infrastructure (commercial location tracking) used for benign pandemic research was also used by the Trump administration's HHS to monitor compliance with stay-at-home orders and, as reported by Vice/Motherboard, purchased by US military and intelligence agencies.

Significance to Surveillance Studies: The SafeGraph case is a paradigmatic example of function creep at speed: data collected for advertising purposes was repurposed for public health research, then for law enforcement monitoring, then for national security purposes — within the span of months. It illustrates how surveillance capitalism's data infrastructure creates capabilities that are rapidly repurposed for state surveillance. It also illustrates a gap in US research ethics: IRB oversight did not systematically evaluate whether using commercially collected location data for research was ethically appropriate.

Limitations: The ethical analysis varies significantly by specific study; some COVID-19 research using location data was more carefully designed than others.

Where Referenced in This Textbook: Chapters 11 (data economy), 24 (epidemiological surveillance), 31 (legal frameworks).


Study 11: Amazon Workplace Monitoring (Kantor and Streitfeld, 2015; Kantor and Sundaram, 2022)

Full Citations:
- Kantor, Jodi, and David Streitfeld. 2015. "Inside Amazon: Wrestling Big Ideas in a Bruising Workplace." New York Times, August 15.
- Kantor, Jodi, and Arya Sundaram. 2022. "The Rise of the Worker Productivity Score." New York Times, August 14.

Research Question: How do algorithmic surveillance and productivity scoring systems affect workers across different industries?

Method: Investigative journalism combining interviews with current and former employees, document analysis, and worker testimony. The 2022 study involved extensive interviews with workers across Amazon warehouses, insurance companies, and other tracked workplaces.

Key Findings: Amazon's warehouse tracking measures productivity by the second; automated systems issue warnings and terminations without human supervisor review. Workers describe intense surveillance as harmful to physical and mental health. Similar tracking systems have spread to insurance claims processing, financial services, and other white-collar professions. Workers often do not know the specific metrics by which they are scored.

Significance to Surveillance Studies: The Times investigations documented algorithmic management in concrete, human terms that theoretical treatments could not — illustrating in worker testimony the precise mechanisms and effects that Chapter 28 analyzes abstractly. The 2022 study established that extreme productivity scoring was spreading beyond warehouses to office workers, contradicting assumptions that white-collar workers were exempt from the most intensive forms of workplace surveillance.

Limitations: Investigative journalism cannot achieve the systematic sampling of academic research; findings reflect the experiences of workers who agreed to be interviewed, which may not represent all affected workers. Amazon contested specific characterizations.

Where Referenced in This Textbook: Chapters 26 (performance reviews), 27 (remote work), 28 (algorithmic management).


Study 12: Shoshana Zuboff, "Big Other" (2015)

Full Citation: Zuboff, Shoshana. 2015. "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization." Journal of Information Technology 30 (1): 75–89.

Research Question: What new form of power is instantiated by the behavioral modification capabilities of surveillance capitalism?

Method: Theoretical analysis and conceptual argument, drawing on the history of capitalism and political philosophy.

Key Findings: Zuboff introduces the concept of "Big Other" — the ubiquitous, nonhuman, networked intelligence that embodies the surveillance capitalism order. Big Other is not a state actor (Big Brother) but an automated system of behavioral modification that operates through prediction and nudging. Zuboff argues that this constitutes a third logic of capitalism, distinct from industrial capitalism (which modifies nature) and managerial capitalism (which modifies human behavior through employment): surveillance capitalism modifies human behavior at scale through information asymmetries.

Significance to Surveillance Studies: This paper introduced the core conceptual vocabulary of Zuboff's larger project (elaborated in the 2019 book) and was widely cited before the book's publication. It established the framework for analyzing surveillance not just as monitoring but as behavioral modification — a framework that broadened the scope of what surveillance studies needed to address.

Where Referenced in This Textbook: Chapters 11 (data economy), 14 (behavioral targeting), 34 (surveillance capitalism).


Study 13: Simone Browne's Lantern Laws Analysis (from Dark Matters, 2015)

Full Citation: Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press. [Especially Chapter 2, "Everybody's Got a Little Light Under the Sun."]

Research Question: How does racial surveillance operate historically, and how does analysis of anti-Black surveillance illuminate contemporary biometric surveillance?

Method: Historical critical analysis of primary sources including colonial and antebellum legal texts, combined with theoretical analysis of contemporary biometrics through the lens of critical race theory.

Key Findings: New York City's lantern laws, beginning with a 1713 ordinance, required enslaved Black and Indigenous people to carry lanterns or lit candles when moving through the city after dark, making them permanently and distinctively visible to white authority. Browne reads lantern laws as a proto-biometric surveillance technology: a requirement that Black bodies be marked, lit, and legible to power as a condition of movement. This historical analysis illuminates contemporary biometrics: both share the logic of requiring particular bodies to be rendered distinctively visible and trackable to authority.

Significance to Surveillance Studies: Browne's analysis reoriented surveillance studies by demonstrating that surveillance is not a modern invention but has deep roots in the surveillance of Black bodies under slavery and colonialism. It challenged the field's dominant frameworks — most surveillance theory was developed without attention to race — and established "racializing surveillance" as a concept that names not merely racially biased surveillance but surveillance as a technology of racial formation. Dark Matters is now one of the canonical texts of surveillance studies.

Where Referenced in This Textbook: Chapters 4 (history), 36 (racial surveillance), 38 (children/historical).


Study 14: NSA PRISM Program Disclosure Documents (Snowden, 2013)

Full Citation: Greenwald, Glenn, Ewen MacAskill, and Laura Poitras. 2013. "NSA Prism Program Taps in to User Data of Apple, Google and Others." The Guardian, June 7. [Original documents available via The Guardian and other publications.]

Research Question (Political): What is the scope of the NSA's collection of internet communications from US technology companies?

Method: Investigative journalism based on classified NSA documents provided by Edward Snowden.

Key Findings: The NSA's PRISM program collected internet communications — email, chat, video, photos, stored data — from the servers of major American technology companies including Google, Apple, Facebook, Microsoft, Yahoo, Skype, YouTube, and AOL, under FISA Section 702 authority. The program had been in operation since 2007. A separate program, code-named "Upstream," collected communications as they passed through internet backbone infrastructure. Taken together, these programs represented collection at a scale and comprehensiveness that most observers — including members of Congress — did not know existed.

Significance to Surveillance Studies: The Snowden revelations were the defining event of contemporary surveillance politics. They provided concrete documentation of surveillance programs that had been discussed theoretically but not empirically established. They demonstrated the gap between official public representations of NSA activity and actual operational scope. They triggered legal challenges, congressional hearings, diplomatic crises, and sustained public debate about the relationship between security surveillance and civil liberties. Surveillance studies scholarship produced before 2013 and after 2013 differs significantly in its empirical foundation.

Limitations: The documents are a subset of a larger classified archive; claims about what was not revealed are difficult to assess. Subsequent government and company statements have disputed some characterizations of how PRISM operated.

Where Referenced in This Textbook: Chapters 9 (intelligence surveillance), 31 (legal frameworks), 30 (whistleblowing).


Study 15: EU GDPR Enforcement Tracker Analysis

Full Citations:
- Council of the European Union. 2016. Regulation (EU) 2016/679 (GDPR).
- CMS Law. GDPR Enforcement Tracker database (ongoing).
- Ryan, Johnny. 2019. Report: GDPR and behavioral re-targeting in real time bidding. Dublin: Irish Council for Civil Liberties.

Research Question: What has GDPR enforcement accomplished since the regulation took effect in May 2018?

Method: Aggregated analysis of enforcement decisions reported across EU member state data protection authorities.

Key Findings: The GDPR Enforcement Tracker database (maintained by CMS Law) documents fines issued across EU member states. Notable findings: enforcement has been highly uneven across member states; the largest fines have targeted major US technology companies (Meta, Google, Amazon, TikTok); many member state DPAs have been under-resourced and slow to act; complaints about real-time bidding received by the Irish Data Protection Commission (the lead supervisory authority for most major US platforms) took years to resolve; significant GDPR violations by the advertising industry (RTB) remain widespread despite formal prohibitions.

Significance to Surveillance Studies: The GDPR enforcement record demonstrates both the potential and the limitations of comprehensive privacy regulation. The GDPR has created compliance costs for industry, expanded consumer rights, and produced landmark fines — but enforcement has been insufficient to restructure the surveillance capitalism business model, particularly with regard to the advertising technology ecosystem. The analysis of what has worked (data breach notification requirements, rights of access) and what has not (consent mechanisms for advertising) provides important lessons for comparative regulatory analysis.

Where Referenced in This Textbook: Chapters 12 (tracking), 31 (legal frameworks), 34 (surveillance capitalism).


Study 16: Chicago Predictive Policing Studies

Full Citations:
- Perry, Walter L., et al. 2013. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation.
- Richardson, Rashida, Jason Schultz, and Kate Crawford. 2019. "Dirty Data, Bad Predictions." New York University Law Review Online 94: 192–233.

Research Question: Does predictive policing reduce crime? Does it introduce or amplify racial bias?

Method: Quasi-experimental evaluation (RAND study) and critical legal and data analysis (Richardson et al.).

Key Findings: Chicago's Strategic Subject List (SSL) and Predictive Policing program assigned risk scores to individuals. Richardson, Schultz, and Crawford found that predictive policing systems in Chicago and elsewhere were trained on "dirty data" — crime statistics that reflected prior racial disparities in policing, including those produced by unconstitutional stop-and-frisk programs. Systems trained on this data embed the biases of prior policing into future predictions, creating a feedback loop in which communities that were heavily policed continue to receive heavy policing regardless of actual crime rates.

Significance to Surveillance Studies: The "dirty data" analysis is a conceptual contribution to algorithmic bias research: it identifies a specific mechanism by which historical injustice is encoded in algorithmic systems. It is not merely that algorithms apply biased metrics; it is that the historical data used to train them was generated by biased human processes. This insight generalizes: any predictive system trained on historically biased data will reproduce those biases even with race-neutral variables.

Where Referenced in This Textbook: Chapters 8 (CCTV/public safety), 36 (racial surveillance), 40 (AI futures).


Study 17: The "Gaydar" Studies — Kosinski and Colleagues (2013, 2017)

Full Citations:
- Kosinski, Michal, David Stillwell, and Thore Graepel. 2013. "Private Traits and Attributes Are Predictable from Digital Records of Human Behavior." PNAS 110 (15): 5802–5805.
- Wang, Yilun, and Michal Kosinski. 2017. "Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images." [Preprint; subsequently published in Journal of Personality and Social Psychology, 2018.]

Research Question: Can sensitive personal attributes — sexual orientation, political views, intelligence — be inferred from behavioral data or facial images?

Method: Machine learning analysis of Facebook likes (2013) and facial images from dating profiles (2017) to predict self-reported sexual orientation and other attributes.

Key Findings: Facebook likes could predict sexual orientation, political views, and intelligence with accuracy substantially above chance. In the 2017 study, a deep neural network distinguished gay from heterosexual men correctly in 81% of cases, and women in 71% of cases, when presented with a single facial image of each, compared with 61% and 54% for human judges.

Significance to Surveillance Studies: These studies have significant implications for understanding the inferential power of behavioral and biometric surveillance data. If sexual orientation, political affiliation, and health status can be inferred from data that seems unrelated to those characteristics, then the "we only use data about X, not about Y" defense of surveillance systems is inadequate — behavioral data enables inference of sensitive attributes regardless of whether those attributes were directly collected. This has profound implications in contexts where disclosure of inferred sexual orientation or political affiliation could result in violence, discrimination, or legal jeopardy.
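
To make the inferential mechanism concrete, the sketch below trains a simple classifier to recover a sensitive binary attribute from a sparse binary "likes" matrix. Everything is synthetic, and the model is a generic stand-in rather than the authors' exact pipeline (the 2013 study used dimensionality reduction plus regression; the 2017 study used deep facial-feature extraction).

    # Synthetic illustration: predict a sensitive binary attribute from a
    # sparse binary "likes" matrix. All data here are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_users, n_pages = 5_000, 200

    attribute = rng.integers(0, 2, size=n_users)        # hidden trait
    base_rates = rng.uniform(0.02, 0.20, size=n_pages)  # page popularity
    # A small subset of pages is weakly correlated with the hidden trait.
    lift = np.where(np.arange(n_pages) < 20, 2.0, 1.0)
    probs = np.clip(base_rates * np.where(attribute[:, None] == 1, lift, 1.0), 0, 1)
    likes = rng.random((n_users, n_pages)) < probs      # binary like matrix

    X_tr, X_te, y_tr, y_te = train_test_split(likes, attribute, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"AUC from likes alone: {auc:.2f}")  # well above the 0.5 chance level

The point is not the particular model but the asymmetry it illustrates: data that looks unrelated to a sensitive attribute can nonetheless carry enough signal to infer it.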

Limitations: The studies have been strongly criticized on methodological grounds. The population studied (US Facebook users, public dating profile users) is not representative; self-report of sexual orientation does not capture the full complexity of sexual identity; the facial images used may confound sexual orientation with presentation and grooming choices rather than face geometry. Critics argue the studies potentially do more harm than good by lending scientific credibility to inference-based surveillance.

Where Referenced in This Textbook: Chapters 12 (profiling), 32 (anonymization), 40 (future AI).


Study 18: Mechanical Turk Surveillance of Crowdworkers

Full Citation: Dzieza, Josh. 2020. "How Hard Will the Robots Make Us Work?" The Verge, February 27. [See also: Irani, Lilly C., and M. Six Silberman. 2013. "Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk." CHI 2013 Proceedings.]

Research Question: How does surveillance operate in the context of platform-mediated gig labor, where workers are geographically distributed and compensated by task?

Method: Investigative journalism and participant-observer research documenting the monitoring infrastructure of Amazon Mechanical Turk and similar platforms.

Key Findings: Amazon Mechanical Turk monitors the behavior of crowdworkers continuously: keystroke patterns, time per task, accuracy rates, and "suspicious" behavior patterns are tracked. Workers who deviate from expected behavioral patterns can be banned without explanation or appeal. Workers have no visibility into the monitoring criteria. The asymmetry is extreme: workers are continuously surveilled; the platform's decision-making processes are entirely opaque.

Significance to Surveillance Studies: Crowdwork surveillance illustrates how algorithmic management operates when stripped of the physical workplace context. There is no employer-employee relationship, no physical proximity, and no human manager — only a surveillance system and an algorithm. This extreme case illuminates the logic of workplace surveillance more generally: the monitoring is designed to extract maximum productivity while minimizing accountability to workers.

Where Referenced in This Textbook: Chapters 27 (remote work), 28 (algorithmic management).


Study 19: Pasquale's Analysis of Credit Scoring as Black Box

Full Citation: Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press. [Especially Chapter 1.]

Research Question: How do algorithmic systems used in credit scoring, search rankings, and financial trading operate, and what accountability mechanisms exist?

Method: Legal analysis, case studies, and document analysis of the opacity of commercial algorithmic systems.

Key Findings: Credit scoring algorithms are proprietary, opaque, and consequential: they determine access to housing, employment, and financial services, yet affected individuals have no right to understand how their scores are calculated, what inputs are weighted, or how to contest errors beyond narrow procedures. The Fair Credit Reporting Act provides some rights of access and correction, but does not compel disclosure of the underlying algorithm.

Significance to Surveillance Studies: Pasquale's "black box" concept captures the fundamental accountability challenge of algorithmic surveillance: decisions that significantly affect people's lives are made by systems that no one outside the company — and often few people inside it — can fully explain. The gap between the consequentiality of algorithmic decisions and the opacity of algorithmic processes is a defining feature of algorithmic governance. Pasquale's framework applies not only to credit scoring but to content moderation, predictive policing, and any domain in which algorithmic decisions shape access to opportunities.

Where Referenced in This Textbook: Chapters 11 (data economy), 29 (HR analytics), 31 (legal frameworks).


Study 20: Ruha Benjamin's Analysis of Racial Bias in Healthcare Algorithms

Full Citation: Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press. [Especially the analysis of pulse oximetry and healthcare algorithms.] See also: Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations." Science 366 (6464): 447–453.

Research Question: Are healthcare algorithms racially biased, and if so, how is bias introduced?

Method: Benjamin employs critical analysis of published research and policy documents; Obermeyer et al. (2019) employed quantitative analysis of a commercial healthcare algorithm used to identify patients for care management programs.

Key Findings: A widely used commercial health algorithm systematically underestimated the health needs of Black patients by using healthcare cost as a proxy for health need. Because Black patients spend less on healthcare for any given level of health need (due to access barriers, economic inequality, and historical medical mistrust), the algorithm systematically assigned them lower risk scores. The algorithm accurately predicted spending but not health — a critical distinction. Benjamin's broader analysis documents multiple healthcare AI systems that reproduce racial health disparities.

Significance to Surveillance Studies: This study demonstrates a critical concept for algorithmic analysis: proxy discrimination. The algorithm did not use race as a variable but used a variable (healthcare spending) that is correlated with race due to structural racism in healthcare access. This proxy encoded racial inequality invisibly — no one intended to build a racist algorithm, yet a racist outcome was produced. This insight is generalizable to any predictive system: variables that seem race-neutral may be racially correlated in ways that produce discriminatory outcomes.
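
A minimal simulation of the proxy mechanism, with entirely synthetic numbers: two groups have identical distributions of health need, but one incurs lower cost for the same need because of access barriers. Ranking patients by cost, as the deployed algorithm effectively did, then under-selects that group.

    import numpy as np
    rng = np.random.default_rng(0)

    n = 10_000
    # Identical true health need in both groups (synthetic).
    need_a = rng.gamma(shape=2.0, scale=1.0, size=n)
    need_b = rng.gamma(shape=2.0, scale=1.0, size=n)

    # Group B spends less per unit of need (access barriers), so observed cost,
    # the label the commercial algorithm was trained to predict, is lower.
    cost_a = need_a * 1.0
    cost_b = need_b * 0.7

    # Select the top 10% of patients by (perfectly predicted) cost for the
    # care-management program.
    all_cost = np.concatenate([cost_a, cost_b])
    threshold = np.quantile(all_cost, 0.90)

    selected_a = (cost_a >= threshold).mean()
    selected_b = (cost_b >= threshold).mean()
    print(f"share of group A selected: {selected_a:.1%}")
    print(f"share of group B selected: {selected_b:.1%}")
    # Despite identical need, group B is selected far less often:
    # the cost proxy quietly encodes the access disparity.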

Where Referenced in This Textbook: Chapters 29 (HR analytics), 36 (racial surveillance), 40 (AI futures).


Study 21: Pew Research Center — Americans and Privacy (Ongoing Series)

Full Citation: Auxier, Brooke, Lee Rainie, Monica Anderson, Andrew Perrin, Madhu Kumar, and Erica Turner. 2019. Americans and Privacy: Concerned, Confused, and Feeling Lack of Control Over Their Personal Information. Washington: Pew Research Center.

Research Question: How do Americans understand and feel about data privacy, and what practices do they adopt?

Method: National survey of 4,272 US adults, part of an ongoing series of Pew surveillance and privacy studies.

Key Findings: Large majorities of Americans report concern about their data privacy (79% concerned about how companies use their data, 64% about government). Most report feeling they have little control over how their data is collected and used. Large majorities say the risks of data collection outweigh the benefits. At the same time, most have not taken significant actions to protect their privacy.

Significance to Surveillance Studies: The Pew privacy series provides the best long-term tracking data on American public opinion about surveillance and privacy. The studies document the "privacy paradox" at scale — widespread concern coexisting with limited protective action — and suggest that the gap reflects feelings of helplessness rather than genuine indifference. The studies also reveal demographic variation in surveillance attitudes and experiences.

Where Referenced in This Textbook: Chapters 1 (introduction), 20 (quantified self), 33 (resistance).


Study 22: Srinivasan's Analysis of Facebook's "Authentic Identity" Policies

Full Citation: Srinivasan, Ramesh. 2019. Beyond the Valley: How Innovators around the World Are Overcoming Inequality and Creating the Technologies of Tomorrow. Cambridge: MIT Press. [See also Facebook's real-name policy controversies, documented by EFF and others.]

Research Question: How do platform identity requirements affect marginalized communities, including Indigenous people, drag performers, abuse survivors, and trans individuals?

Method: Case study analysis and interviews with affected communities.

Key Findings: Facebook's "authentic identity" (real-name) policy, which requires users to use their legal name, has had disproportionate impacts on individuals for whom legal-name identity is dangerous, unavailable, or contrary to cultural practice. Native Americans with traditional names, drag performers, domestic violence survivors who use pseudonyms for safety, and trans individuals who have not legally changed their names are all disadvantaged. The policy was designed around a US cultural context and enrolled identity surveillance functions that primarily harm already-marginalized people.

Significance to Surveillance Studies: This case illustrates how surveillance functions built into platform design (identity verification, account authentication) are not neutral but encode cultural assumptions and produce differential harms. The design choice to require "authentic" identity is simultaneously a commercial decision (verified identities are more valuable to advertisers), a content moderation choice (pseudonymity enables harassment), and a surveillance decision (real-name data is more useful for behavioral profiling and law enforcement requests).

Where Referenced in This Textbook: Chapters 13 (social media), 36 (racial/marginalized surveillance).


Study 23: Eubanks — Automating Inequality Case Studies

Full Citation: Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.

Research Question: How do automated systems designed to manage government services affect low-income communities?

Method: Multi-site case study research combining interviews, ethnography, document analysis, and quantitative data analysis. Case studies focus on Indiana's automated welfare eligibility system, the Allegheny County Family Screening Tool (predictive child welfare system), and Los Angeles's coordinated entry system for homeless services.

Key Findings: Indiana's welfare automation system, contracted to IBM, significantly increased erroneous benefit denials for eligible people, disproportionately harming the vulnerable people the system was supposed to serve. The Allegheny County Family Screening Tool assigns risk scores to families, raising profound questions about pre-crime intervention in family life and the encoding of poverty as risk. Los Angeles's homeless system uses algorithmic prioritization that embeds prior surveillance-based judgments into housing allocation decisions.

Significance to Surveillance Studies: Eubanks demonstrates that surveillance and algorithmic governance are not only Silicon Valley problems — they operate with full force in public benefit systems, where the most vulnerable people have the least ability to contest adverse decisions. The "digital poorhouse" concept — the use of technology to discipline and surveil the poor under the guise of efficiency — names a specific form of discriminatory surveillance that class-blind surveillance analysis misses.

Where Referenced in This Textbook: Chapters 4 (history/administrative surveillance), 29 (HR/algorithmic decision-making), 36 (race), 38 (children).


Study 24: Citron — Intimate Privacy (2022)

Full Citation: Citron, Danielle Keats. 2022. The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age. New York: Norton.

Research Question: How do intimate privacy violations — non-consensual pornography, stalkerware, doxxing, and surveillance in romantic contexts — harm individuals, and what law is needed to address them?

Method: Legal analysis, case study, and interview research with survivors of intimate privacy violations.

Key Findings: Intimate privacy violations — particularly non-consensual intimate image distribution (NCII, or "revenge porn") — disproportionately harm women and LGBTQ individuals, causing profound harms to employment, relationships, mental health, and physical safety. Existing law was inadequate; Citron documents the legislative campaigns that produced NCII laws in most US states.

Significance to Surveillance Studies: Citron's work demonstrates the intimate dimension of surveillance — that surveillance harms operate not only at the level of politics and social control but in the intimate spaces of sexuality, romantic relationships, and bodily autonomy. Her analysis of NCII as a form of intimate surveillance extends the field's focus beyond institutional surveillance actors.

Where Referenced in This Textbook: Chapters 19 (stalkerware), 38 (children/youth).


Study 25: Solove's "Nothing to Hide" Argument Analysis

Full Citation: Solove, Daniel J. 2011. Nothing to Hide: The False Tradeoff Between Privacy and Security. New Haven: Yale University Press. [Originally published as: Solove, Daniel J. 2007. "'I've Got Nothing to Hide' and Other Misunderstandings of Privacy." San Diego Law Review 44: 745–772.]

Research Question: How should we understand and respond to the "nothing to hide" argument for accepting surveillance?

Method: Philosophical and legal analysis; rebuttal argument construction.

Key Findings: The "nothing to hide" argument rests on a narrow conception of privacy (secrecy) and a narrow conception of harm (exposure of embarrassing information). Privacy serves many functions beyond secrecy: autonomy, freedom of thought, identity development, freedom from exploitation. Surveillance harms are not limited to exposure of "hidden" wrongdoing: they include chilling of lawful activity, power asymmetries that enable abuse, aggregation harms, identity theft, and structural changes to social institutions. Solove catalogs fifteen different responses to the nothing-to-hide argument.

Significance to Surveillance Studies: Solove's analysis is the definitive academic response to the most common objection to privacy protections. Every surveillance studies student will encounter the "nothing to hide" argument; this work provides the intellectual vocabulary to respond. The analysis also illustrates a methodological point: bad arguments for surveillance (or against privacy) deserve careful philosophical engagement rather than dismissal.

Where Referenced in This Textbook: Chapters 1 (introduction), 5 (theory), 33 (resistance). Also Appendix F (FAQ).


This guide covers twenty-five landmark studies selected for their disciplinary breadth, methodological diversity, and ongoing relevance to the textbook's themes. The field of surveillance studies continues to generate significant scholarship; students should supplement these canonical works with current publications in Surveillance & Society, Big Data & Society, and related journals. For many of these works, updated or response studies exist that refine or challenge the original findings; engaging with these responses is part of reading primary sources critically.