Learning Objectives

  • Define surveillance using David Lyon's sociological definition and explain why precision matters
  • Distinguish between five major categories of surveillance: state, commercial, domestic, environmental, and self-surveillance
  • Explain the concept of visibility asymmetry and why it is the book's central organizing principle
  • Trace surveillance encounters across an ordinary day and identify their structural sources
  • Articulate why surveillance matters for questions of power, freedom, and equity
  • Locate surveillance historically, refuting the assumption that it is a purely modern phenomenon

Chapter 1: What Is Surveillance? Defining the Watcher and the Watched


Opening: One Tuesday in Jordan's Life

Jordan Ellis's alarm goes off at 6:47 a.m.

Not the alarm they set — 7:00 a.m. — but the one their phone's AI decided was optimal based on a sleep-cycle analysis of their movement patterns during the night. The phone's accelerometer registered micro-movements all night long, fed them into an algorithm Jordan never read the documentation for, and made a judgment call on their behalf.

Jordan silences it. They check their messages. Three notifications from social media platforms have accumulated while they slept, each one the product of an algorithm calculating the precise moment Jordan was most likely to be receptive to engagement. One is an ad for a brand of running shoes. Jordan mentioned running to a friend via text three days ago.

At the warehouse where Jordan works part-time — a logistics facility about twelve minutes from campus — a supervisor's tablet shows Jordan's scan rate for the previous week: 214 packages per hour, compared to the floor average of 198. Jordan's badge, clipped to their vest, communicates with sensors at every station. The facility's system logs not just what Jordan scans but where Jordan stands, how long Jordan pauses, how often Jordan moves to the restroom. This data feeds a productivity algorithm that will, at the end of the quarter, influence decisions about hours, scheduling, and contract renewal.

On the bus home from class, Jordan passes a row of three traffic cameras, one automated license-plate reader, and — though they don't see it — a Stingray cell-site simulator mounted inside an unmarked white van three blocks away, temporarily acquiring the identities of every phone in range. Jordan's face is captured by a camera inside the bus. The transit authority retains that footage for thirty days.

Back in the dorm, Jordan's laptop connects to the university's Wi-Fi. The university's network monitoring software logs every domain Jordan visits. Jordan opens a browser and searches for information about a medical condition they'd rather not discuss. The search engine records the query, timestamped, attached to a persistent identifier tied to Jordan's account. Three advertisers receive notification within milliseconds that someone with Jordan's inferred demographic profile is interested in this topic. Jordan doesn't know any of this is happening.

That night, Jordan and their roommate Marcus get into a conversation about the new smart speaker Marcus installed in their room.

"You know that thing listens all the time," Jordan says.

Marcus shrugs. "I don't care. I'm not doing anything wrong."

Jordan doesn't have a good answer yet. But something about that response — I'm not doing anything wrong — feels insufficient. Like it's answering the wrong question entirely.

This book is, among other things, an extended attempt to explain why Marcus is not wrong, exactly, but is also profoundly missing the point.


1.1 The Problem with "I Know It When I See It"

Most people, if asked to define surveillance, would gesture toward something involving cameras, spies, or government agencies. They might mention the NSA. They might picture a dark room full of monitors. They might think of a private investigator trailing someone through a parking garage.

These images are not wrong. But they are partial — dangerously partial. If surveillance is only the thing done by men in trench coats and federal agencies, then the concept fails to capture the tracking, classification, and behavioral modification that happens to nearly every person, every day, in nearly every institutional context they inhabit. A definition narrow enough to exclude Amazon's recommendation engine, your university's learning management system, or your employer's time-tracking software is a definition that leaves most surveillance unnamed — and therefore uncontested.

⚠️ Common Pitfall: One of the most persistent errors in thinking about surveillance is conflating it with intentional espionage or illegal monitoring. Most surveillance is legal, institutional, and often presented as beneficial. Focusing only on the dramatic cases makes the routine cases invisible — and routine surveillance is, precisely because of its routineness, more consequential for most people's lives.

The opposite error is equally common: defining surveillance so broadly — "everything is surveillance" — that the term loses analytical purchase. If a friend glances at your phone screen, that is not the same thing as a state agency systematically tracking your communications for months. Precision matters.

What we need is a definition that is precise enough to distinguish surveillance from ordinary social interaction, capacious enough to capture both historical and digital forms, and structurally oriented enough to illuminate questions of power.


1.2 Defining Surveillance: The Lyon Framework

The definition this textbook employs comes from David Lyon, one of the foremost scholars in the field. Lyon defines surveillance as:

"The focused, systematic, and routine attention to personal details for purposes of influence, management, protection, or direction." — David Lyon, Surveillance Society (2001)

This definition rewards careful unpacking. Each word is doing work.

1.2.1 Focused

Surveillance is not random ambient awareness. It is directed at particular persons, populations, or behaviors. A security camera aimed at a cash register is more surveillance than a camera aimed at the sky — not because the technology differs, but because the attention is structured toward specific subjects for specific purposes. Focus implies a watcher with an agenda.

1.2.2 Systematic

Surveillance is not occasional or accidental. It follows rules, protocols, and repeatable procedures. A factory time clock doesn't measure one worker's hours once; it measures every worker's hours, every day, according to a standard procedure. Systematicity is what transforms isolated observation into an institution. It is also what makes surveillance scalable: the same procedure that tracks ten workers can be extended to track ten thousand without fundamentally changing its logic.

1.2.3 Routine

This is perhaps the most politically important word in the definition. Routine surveillance is surveillance that has been normalized — absorbed into the background of daily life so thoroughly that it no longer registers as surveillance at all. Jordan's badge at the warehouse is routine. The cookie that follows you across websites is routine. The school attendance register is routine. Routine surveillance is the hardest kind to see, and therefore the hardest kind to contest.

1.2.4 Personal Details

Surveillance is about persons, not abstract phenomena. Even when the data appears impersonal — a GPS coordinate, a purchase timestamp, a browsing duration — it becomes surveillance when linked, directly or inferentially, to an individual or a categorized population. The transformation from raw data to personal information is increasingly the central technical operation of modern surveillance.
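
To make this concrete, here is a minimal sketch in Python (hypothetical data and field names, invented for illustration): a bare location ping carries no name, but it becomes personal information the moment it is joined to an account record.

```python
# Hypothetical illustration (invented tables and field names): a "raw"
# location ping carries no name, but one join against an account table
# turns it into personal information.

# An apparently impersonal record: device ID, coordinates, timestamp.
pings = [
    {"device_id": "d-4821", "lat": 40.7128, "lon": -74.0060,
     "ts": "2025-03-04T23:41:00"},
]

# A routine account table held by the same company or a data partner.
accounts = {
    "d-4821": {"name": "Jordan Ellis", "email": "jellis@example.edu"},
}

# One dictionary lookup is the entire linkage operation.
for ping in pings:
    person = accounts.get(ping["device_id"])
    if person is not None:
        print(f"{person['name']} was at ({ping['lat']}, {ping['lon']}) "
              f"at {ping['ts']}")
```

The technical step is a single lookup. The change in meaning, from coordinate to person, is the transformation the definition turns on.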

1.2.5 For Purposes of Influence, Management, Protection, or Direction

Lyon's definition is deliberately pluralistic about purpose. Surveillance is not exclusively repressive. A parent installing a baby monitor aims at protection. A teacher taking attendance aims at management. A therapist taking session notes aims at direction (guiding treatment). A government tracking citizens aims at influence (or control). The purposes vary — and importantly, they vary in moral weight — but the structural operation is similar. This breadth is not an evasion; it is a recognition that surveillance is a tool which, like most tools, can be used for different ends.

💡 Intuition: Think of surveillance like lighting. Lighting can illuminate a surgery, a stage performance, a prison yard, or a living room. The light itself is neutral. But who controls it, where it is directed, who it is for, and who pays the energy bill — those are questions of power. Lyon's definition gives us the concept of light without presupposing that all light is the same.


1.3 Dataveillance: Surveillance Through Data Traces

Roger Clarke coined the term dataveillance in 1988 to describe the surveillance of people through the collection, processing, and exchange of data about them. At the time, he was writing about credit records, mailing lists, and government databases. The concept has only grown in relevance.

Dataveillance operates differently from traditional observation in several important ways:

Scale: A single human watcher can observe perhaps a few dozen people simultaneously, at most. A data system can process records on hundreds of millions of people at once. Scale transforms surveillance from a resource-intensive activity into a nearly costless one.

Persistence: Traditional surveillance ends when the watcher leaves. Data persists. The record of a purchase made in 2019 may still be influencing credit decisions in 2031. The photograph posted to social media at eighteen may still be discoverable at forty-five. Dataveillance is surveillance that outlasts the original act of collection.

Aggregation: Individual data points may be innocuous. Your name is public. Your employer is on LinkedIn. Your neighborhood is inferrable from your IP address. Your approximate income is estimable from your spending patterns. Individually, these data points reveal little. Aggregated, they can produce a profile of remarkable intimacy — one that you yourself might not have constructed consciously.

Inference: The most powerful modern dataveillance does not merely record what you do; it predicts what you will do, inferring unobserved characteristics from observed ones. Credit scores infer future repayment behavior from past financial patterns. Recidivism algorithms infer future criminal behavior from demographic and behavioral correlates. Health insurers infer risk from purchasing patterns. What is inferred may never have been observed — and may not even be true.
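
How aggregation and inference combine can be sketched in a few lines of toy code. Every signal name, weight, and threshold below is invented for illustration; real systems learn such parameters from millions of records. The structural logic, however, is the same: many weak signals in, one consequential category out.

```python
# Toy aggregation-and-inference scorer. All signals, weights, and the
# threshold are invented; real systems learn them from training data.

# Individually innocuous observations about one hypothetical shopper.
observed = {
    "bought_unscented_lotion": True,
    "bought_cotton_balls_bulk": True,
    "bought_vitamin_supplements": True,
    "bought_beer": False,
}

# Hand-picked weights standing in for a trained model's coefficients.
weights = {
    "bought_unscented_lotion": 0.4,
    "bought_cotton_balls_bulk": 0.3,
    "bought_vitamin_supplements": 0.3,
    "bought_beer": -0.5,
}

# Aggregation: combine the weak signals into a single score.
score = sum(weights[s] for s, present in observed.items() if present)

# Inference: crossing a threshold assigns a category that was never
# directly observed, along with whatever treatment attaches to it.
if score >= 0.8:
    print(f"Inferred category: expecting parent (score={score:.2f})")
else:
    print(f"No inference (score={score:.2f})")
```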

📊 Real-World Application: In 2012, investigative reporting revealed that Target's analytics team had developed a pregnancy prediction model capable of estimating a shopper's due date based on purchasing patterns (shifts to unscented lotion, larger bags of cotton balls, and specific vitamin supplements). The system was so accurate that, in one documented case, a father discovered his teenage daughter's pregnancy through a Target mailer before she had told him. The data Target held was mundane; the inference it enabled was intimate and consequential. This is dataveillance: surveillance through the combination and processing of data that, in isolation, seemed entirely benign.


1.4 Synopticism: Reversing the Gaze

Traditional surveillance pointed down: the powerful watched the less powerful. The employer watched the worker. The state watched the citizen. The priest watched the congregant. This directionality seemed so natural that many early surveillance theorists treated it as definitive.

Thomas Mathiesen, writing in 1997, introduced the concept of synopticism as a corrective — and as a complement to Michel Foucault's panopticism (which we will examine in depth in Chapter 2). Where panopticism described the few watching the many, synopticism describes the many watching the few: the audience watching the celebrity, the public watching the politician, the internet watching the corporation, citizens filming police.

The word derives from the Greek synoptikos — "seeing altogether" — and Mathiesen used it primarily to analyze mass media. Television, he argued, created a system in which millions of viewers directed their gaze toward a handful of broadcasters. This was not emancipation; it was a different structure of visibility control. The powerful remained powerful partly by controlling who was worth watching.

Social media has dramatically complicated the synoptic picture. Anyone can now be watched by many, and many can watch anyone. This has produced genuine instances of accountability — the filming of police brutality, the public exposure of corporate misconduct — but it has also produced surveillance from below directed at ordinary people who have not sought public life: the employee whose off-duty social media post goes viral, the teenager whose private video is shared without consent, the private individual doxed by an online mob.

🎓 Advanced: Mathiesen's concept anticipates what later scholars would call "sousveillance" (Steve Mann's term) — watching from below, typically by citizens watching institutional actors. Mann argued that sousveillance could serve as a corrective to institutional panopticism. Critics note, however, that the state has consistently responded to sousveillance by criminalizing it: "ag-gag" laws prohibiting filming of factory farms, statutes penalizing citizens for filming police (subsequently struck down in most U.S. jurisdictions), and corporate NDAs that prohibit employees from discussing workplace conditions.


1.5 Visibility Asymmetry: The Book's Central Concept

If surveillance is the focused, systematic, and routine attention to personal details, then visibility asymmetry is the structural condition that makes surveillance a question of power rather than merely a technical fact.

Visibility asymmetry describes the structural imbalance between the watcher and the watched. The watcher typically knows:

  • That they are watching
  • Who they are watching
  • Why they are watching
  • What they are looking for
  • What they will do with what they find

The watched typically does not know:

  • That they are being watched (or not fully)
  • Who is watching them
  • Why they are being watched
  • What counts as relevant observation
  • What will be done with what is found

This asymmetry is not incidental or accidental. It is, in most surveillance arrangements, functional — the watcher's advantage depends on the watched being unaware or uncertain. The employer who announces exactly what the productivity-tracking algorithm measures and how it weights each metric gives up some power. The government agency that publishes the exact parameters of its watchlist gives up operational advantage. The advertiser who discloses the full extent of its behavioral profile gives up the illusion of serendipity (you found this product; the product found you).

📝 Note: Visibility asymmetry does not require malicious intent. A well-meaning parent who places a tracking app on their teenager's phone to ensure their safety creates a visibility asymmetry. The parent knows where the child is at all times; the child may know this in principle but not in the constant, operational sense that the parent does. Good intentions do not dissolve structural imbalances in who can see what.


1.6 A Taxonomy of Surveillance

The field of surveillance studies has produced numerous classification schemes. For the purposes of this textbook, we will use a five-part taxonomy that will recur across all subsequent chapters and parts. Each category refers primarily to the agent of surveillance — who is doing the watching and in what institutional context.

1.6.1 State Surveillance

State surveillance is conducted by government actors — law enforcement, intelligence agencies, the military, regulators, and the public administrative apparatus. It encompasses activities ranging from criminal investigation to border control to public health monitoring to census-taking.

State surveillance is the category most often associated with the word "surveillance" in popular usage, and it carries the highest normative charge because of the state's unique legal authority to deprive persons of liberty. A government agency that places someone under surveillance can, ultimately, arrest them, prosecute them, or — in some regimes — disappear them. This coercive backstop is what distinguishes state surveillance from other forms, even when the immediate activity looks similar.

Examples: FBI electronic surveillance under FISA warrants; NSA bulk collection programs revealed by Edward Snowden; CCTV networks operated by UK police; China's Social Credit System; ICE's use of commercial data brokers; public health contact tracing during COVID-19.

1.6.2 Commercial Surveillance

Commercial surveillance is conducted by private entities for economic purposes — primarily to sell advertising, to model consumer behavior, to assess risk, and to manage workers.

Commercial surveillance has exploded in scale with the rise of what Shoshana Zuboff calls "surveillance capitalism" — an economic logic in which human behavioral data is the raw material from which predictions are manufactured and sold. Every app that tracks your location, every website that deposits a cookie, every loyalty card that records your purchases, every streaming service that models your viewing patterns is engaged in commercial surveillance.

Importantly, commercial surveillance data frequently flows to state surveillance through purchase, legal compulsion, or informal cooperation — a process sometimes called the "public-private surveillance partnership."

Examples: Google's ad-targeting infrastructure; Facebook's behavioral profiling; insurance companies' telematics programs; employers' monitoring software; Amazon's warehouse productivity tracking; credit bureaus.

1.6.3 Domestic Surveillance

Domestic surveillance is conducted within personal and familial relationships — parents monitoring children, intimate partners tracking each other, households monitoring domestic workers, neighbors watching neighbors.

This category often occupies an ambivalent moral space. Parental monitoring of a five-year-old differs morally from spousal tracking of an adult partner. Yet the technological infrastructure is often identical (the same apps serve both purposes), and the same visibility asymmetries apply. Domestic surveillance is particularly important in understanding gendered and intimate-partner violence contexts, where tracking technology has become a tool of coercive control.

Examples: Parental control apps; Life360 family tracking; baby monitors; Ring doorbell camera networks; nanny cams; stalkerware disguised as "relationship" or "family safety" apps.

1.6.4 Environmental Surveillance

Environmental surveillance is the monitoring of physical spaces and ecological conditions — surveillance that captures persons incidentally or as a class rather than as named individuals.

Traffic cameras watching roadways, satellites imaging land use, air quality sensors logging emissions, hydrophones detecting marine life, acoustic monitoring of bird populations — all of these constitute environmental surveillance. The distinction from other categories is that the primary target is a space or condition, not a person. However, environmental surveillance frequently captures personal data as a byproduct, and the boundaries between environmental and state or commercial surveillance are routinely crossed.

Examples: Traffic cameras; satellite imagery platforms; wildlife monitoring systems; smart city sensor networks; weather stations with secondary data collection; license plate readers on highway gantries.

1.6.5 Self-Surveillance

Self-surveillance is the voluntary or semi-voluntary monitoring of oneself for purposes of self-management, self-improvement, or social presentation.

Fitness trackers, food diaries, mood logging apps, journaling, productivity timers, and the careful curation of social media profiles all involve degrees of self-surveillance. This category is philosophically interesting because it appears to dissolve the visibility asymmetry: the watcher and the watched are the same person. But this dissolution is often illusory — the data generated by self-surveillance is frequently shared with or extracted by commercial actors (a fitness app may pass your health data to insurers), and the practices of self-surveillance are often driven by internalized social expectations rather than autonomous self-knowledge.

Michel Foucault saw self-surveillance — what he called the "technologies of the self" — as one of power's most elegant achievements: subjects disciplining themselves so that external enforcement becomes unnecessary.

Examples: Fitbit and Apple Watch; MyFitnessPal food logging; social media curation; self-help journaling; academic self-assessment rubrics; confessional practices (religious and secular).

📊 Real-World Application: In 2019, a ProPublica investigation found that the mental health app Talkspace was sharing user messages — personal disclosures made to therapy platforms — with marketers for advertising targeting. Users had engaged in self-surveillance through therapeutic self-disclosure; that data was then extracted for commercial surveillance.

The five categories are not watertight compartments. They are analytic tools for identifying who is watching, why, and with what authority — with the understanding that in practice, surveillance crosses and combines categories constantly.


1.7 Why Surveillance Matters: Power, Freedom, and Equity

Jordan's roommate Marcus is articulating the most common lay defense of surveillance: I have nothing to hide, so I have nothing to fear. This formulation has a surface plausibility. If you are not doing anything wrong, the reasoning goes, you are not the target of surveillance, so its existence should not concern you.

This argument has been critiqued from several directions, and we will encounter those critiques throughout the book. Here, three foundational objections:

1.7.1 The Power Objection

Surveillance is not a neutral technology that happens to exist; it is a mechanism of power. The power to watch and classify persons is the power to manage, reward, punish, exclude, and control them. Surveillance data is not merely stored; it is used — to set insurance premiums, to determine parole conditions, to allocate loan approvals, to configure news feeds, to flag individuals for further scrutiny.

Even if you, Marcus, are doing nothing wrong, you are participating in a system that is used to do things to others. Your innocuous compliance with a surveillance regime normalizes that regime for those who have more to fear from it — not because they are doing something wrong, but because the categories of wrongness are defined by whoever controls the surveillance apparatus.

1.7.2 The Freedom Objection

The chilling effect — behavioral modification caused by the knowledge of being watched — is one of the most empirically robust findings in surveillance research. People change their behavior when they know they are being observed, and they change it in predictable directions: toward conformity, toward caution, away from transgression and experiment.

📊 Real-World Application: A 2016 study by Jon Penney, published in the Berkeley Technology Law Journal, found that visits to Wikipedia articles about topics related to terrorism, extremism, and weapons dropped significantly in the months following Edward Snowden's 2013 NSA revelations — a period during which media coverage of government surveillance was at its highest. The subjects of this behavioral change were ordinary Wikipedia readers, not terrorism suspects. The chilling effect operated across an enormous range of people who had, by any legal standard, nothing to hide.

Freedom requires the ability to explore ideas, associate with unpopular groups, express unconventional views, and make mistakes — all without permanent record and external judgment. Pervasive surveillance does not merely catch wrongdoers; it redefines the conditions under which ideas are explored and expressed.

1.7.3 The Equity Objection

Surveillance does not fall equally. Social sorting — the use of surveillance data to classify populations and treat them differently — systematically disadvantages already-marginal groups. Predictive policing concentrates surveillance in neighborhoods that are already over-policed, creating feedback loops. Facial recognition systems have demonstrated substantially higher error rates for darker-skinned women than for lighter-skinned men. Algorithmic credit scoring encodes historical discriminatory patterns. Immigration enforcement surveillance targets communities of color.

Lyon defines social sorting as the process by which surveillance data is used to create and maintain categories that structure life chances — who gets the loan, who gets the job, who gets the parole, who gets the targeted ad for predatory financial products and who gets the one for upscale investment management.

For Jordan Ellis, as a mixed-race first-generation college student working in a warehouse, these are not abstract concerns. The systems that classify, track, and manage Jordan are not neutral. They reflect and reinforce the social structures in which Jordan lives.

🌍 Global Perspective: The equity objection takes on different forms in different national contexts. In China, the Social Credit System (a constellation of government and commercial scoring systems) has been used to restrict travel and access for individuals designated as untrustworthy — a category that has included journalists, dissidents, and people who failed to pay debts. In India, the Aadhaar biometric identity system was initially presented as a tool for welfare delivery equity, but has generated significant exclusion — people who cannot be biometrically verified (often the poorest and most marginalized) lose access to state services. In the European Union, the General Data Protection Regulation (GDPR) represents an attempt to regulate commercial surveillance, but enforcement has been uneven and the structural power asymmetries persist. Surveillance inequity is global, but its specific mechanisms vary with political economy and governance structure.


1.8 Historical Continuity: Surveillance Before the Digital Age

⚠️ Common Pitfall: A significant misconception about surveillance is that it is essentially a modern, and especially a digital, phenomenon — a product of computer networks, smartphones, and government security programs post-9/11. This misconception is self-serving for both technologists (who can sell surveillance tools as solving new problems) and civil libertarians (who can frame their concerns as responses to unprecedented threats). Both miss the longer history.

Surveillance is as old as power. Ancient Egyptian pharaohs employed scribes to count populations and track agricultural yields — and therefore taxable capacity. The Roman Empire conducted regular censuses, one of which is referenced in the Gospel of Luke (the census that brought Mary and Joseph to Bethlehem). Early modern European states required parishes to maintain registers of births, deaths, and marriages — data that served administrative, fiscal, and religious authority simultaneously.

The Catholic sacrament of confession, as Foucault analyzed in The History of Sexuality, required believers to produce detailed verbal accounts of their private thoughts, desires, and actions — a form of surveillance in which the subject voluntarily disclosed to institutional authority. Colonial administrators maintained detailed records of colonized populations' identities, locations, properties, and racial categories. The plantation's account book tracked every enslaved person's labor, health, reproduction, and value.

We will explore this history in depth in Chapter 3. The point here is foundational: the technologies change, but the structural logic — the powerful systematically collecting information about the less powerful to manage them — is ancient.

What is new about digital surveillance is not the intention but the scale, speed, persistence, and aggregation capacity. These changes are significant — they are what make contemporary surveillance qualitatively different in its reach and power. But they do not make it different in its fundamental social logic.

💡 Intuition: Think of it this way: a quill pen and a printing press are both writing technologies, but they operate at vastly different scales and with vastly different political implications. The difference matters enormously for understanding power — but it does not change the fact that both are writing. Similarly, a parish register and a cloud database are both surveillance technologies. The difference in scale matters enormously. But recognizing the continuity helps us understand that surveillance is a social practice that pre-exists its technical substrates, and which new substrates serve rather than create.


1.9 Thought Experiment: The Transparent Society

🧠 Thought Experiment: The Transparent Society

The science fiction author and futurist David Brin proposed, in The Transparent Society (1998), a world in which surveillance is universal and mutual. Every person can see every other person, including those who currently wield surveillance power over them. Citizens can watch police on body cameras. Employees can monitor managers. Voters can access politicians' communications. Corporations' internal processes are visible to regulators and the public.

Brin argued that the solution to surveillance asymmetry is not less surveillance but symmetric surveillance — everyone watching everyone.

Consider the following questions:

  1. Does Brin's proposal eliminate the power problems with surveillance, or does it merely redistribute them?

  2. Who, in Brin's transparent society, has the time and resources to actually watch? Does the structural capacity to interpret and act on surveillance data equalize along with the formal right to access?

  3. What would be lost — psychologically, culturally, politically — in a world of universal transparency? What would be gained?

  4. Is there a difference between "I can watch you if I choose to" and "a system watches you continuously and stores everything"? Does optionality matter morally?

  5. Return to Jordan's roommate Marcus. Would Marcus's "nothing to hide" argument be satisfied by the transparent society? Would yours?


1.10 Introducing Key Analytical Concepts

The following concepts will recur throughout this textbook. We introduce them briefly here and will develop them in depth as they become relevant.

Panopticism (Chapter 2): Foucault's concept of self-discipline arising from the possibility — not necessarily the reality — of being observed. The panopticon's power is that you cannot tell when you are being watched, so you behave as though you always are.

Function creep: The gradual expansion of a surveillance system beyond its original stated purpose. A database collected for one purpose (say, national security) migrates to a second purpose (immigration enforcement), then a third (health data sharing), without users' awareness or explicit consent. Function creep is one of the primary mechanisms by which surveillance regimes expand.

📊 Real-World Application: The Social Security number was introduced in 1936 for a single stated purpose: tracking workers' earnings for Social Security benefits. For decades the cards themselves carried the legend "not for identification." By 2000, the number had become a nearly universal identifier used by banks, schools, hospitals, insurance companies, employers, and landlords. The original restriction was simply ignored as the utility of the identifier grew. Function creep does not require conspiracy; it requires only that a tool be useful.

Social sorting: The use of surveillance data to create and maintain social categories that determine differential treatment. This is Lyon's preferred term for what critical race theorists and others analyze through the lens of algorithmic discrimination.

Chilling effect: Behavioral modification caused by the knowledge of being watched. The key insight is that surveillance modifies behavior even when no enforcement action follows — the possibility of observation is sufficient.

Consent as fiction: A theme we will return to throughout the book. Most commercial surveillance is nominally consensual — you clicked "Agree" on the terms of service, or you use the free service whose terms disclose data collection. But consent that is uninformed, without meaningful alternatives, or buried in thousands of words of legal text is a structurally different thing from meaningful informed consent. Much of this book is an investigation of who actually consented to what.


1.11 The Book's Intellectual Project

This textbook argues for a structural analysis of surveillance — one that asks not just what the technology does, but what social arrangements it serves, who benefits, who bears the costs, and how it connects to the long history of power organizing itself through information control.

Structural analysis does not mean ignoring individual agency. Jordan Ellis, over the course of this book, will make real choices with real consequences — choices about what apps to use, what terms to accept, what data to share, and how to resist or comply with monitoring systems. Individual choices matter.

But structural analysis insists that individual choices happen within structures that shape and constrain them. When every alternative requires accepting surveillance — every employer monitors productivity, every communication platform extracts data, every service requires an account — then choosing not to be surveilled is not a simple individual option. It requires either unusual resources and technical skill or a willingness to bear real social costs.

Understanding surveillance structurally is the first step toward imagining — and working toward — different structures.

🔗 Connection: Part 2 (Chapters 6–10) examines state surveillance in depth. Part 3 (Chapters 11–15) examines surveillance capitalism and commercial surveillance. Part 4 (Chapters 16–20) turns to bodies and biometrics. Part 5 (Chapters 21–25) examines domestic and intimate surveillance. Part 6 (Chapters 26–30) examines workplace surveillance. Part 7 (Chapters 31–36) addresses resistance, ethics, and reform.


1.12 Jordan's Day Revisited: Naming the Systems

Let's return to Jordan's Tuesday, now equipped with a vocabulary.

The sleep-algorithm alarm: commercial surveillance (self-surveillance/dataveillance hybrid); the app collected biometric behavioral data, processed it with a proprietary algorithm, and used it to manage Jordan's schedule. Jordan's nominal "consent" was clicking Agree on a 47-page terms of service document.

The targeted running shoe ad: dataveillance enabled by commercial surveillance; Jordan's text message — a private communication — was processed by the messaging platform's systems, and the inferred topic was added to Jordan's advertising profile. The function creep here is stunning: a communication tool became an intelligence-gathering tool.

The warehouse badge scanner: commercial surveillance deployed in a workplace context; every movement is logged, and the data feeds algorithmic management systems that affect Jordan's employment. Jordan's consent was accepting the job.

The traffic cameras, the license plate reader, the Stingray device: state surveillance operating in public space; Jordan has no meaningful ability to opt out except by not going outside. The visibility asymmetry is total: Jordan knows cameras exist in the abstract but does not know which ones are networked, which agencies have access, or what triggers additional scrutiny.

The university network monitoring: commercial (by the institution) and potentially state (via third-party data sharing or legal process) surveillance; Jordan "consented" by enrolling and using campus infrastructure.

The medical search: dataveillance with significant social sorting implications; the medical information Jordan was seeking could, through commercial data brokers, reach insurance companies, employers, and other institutional actors.

Marcus's smart speaker: commercial surveillance enabled by domestic infrastructure; Jordan did not consent to the device in their shared space.

Every one of these systems operates through visibility asymmetry. Every one of them was, in some formal sense, "consented to." Every one of them operates as routine — background noise in an ordinary Tuesday.

This is what surveillance looks like when you name it. Not dramatic. Not exceptional. Ordinary.


1.13 Primary Source: David Lyon on Surveillance Society

📜 Primary Source Excerpt

"Surveillance, as I use the term, refers to the focused, systematic, and routine attention to personal details for purposes of influence, management, protection, or direction. Surveillance is deeply ambiguous. It is not simply a negative phenomenon. However, its power to classify, to monitor, to coordinate, and to control is growing rapidly, and in ways that demand critical attention... The growth of surveillance is intimately connected with modernity, with the emergence of nation-states and capitalist enterprises as the dominant institutions of social life."

— David Lyon, Surveillance Studies: An Overview (2007)

Discussion Questions:

  1. Lyon acknowledges that surveillance is "deeply ambiguous" and "not simply a negative phenomenon." Given the examples in this chapter, do you find this concession appropriate, or does it soften what should be a stronger critique?

  2. Lyon links surveillance to "modernity" — but this chapter has suggested that surveillance predates modernity. Is this a contradiction, or can both claims be true simultaneously?

  3. Lyon identifies "nation-states and capitalist enterprises" as the dominant surveillance institutions. Does this framework adequately account for domestic and self-surveillance? What does it illuminate, and what does it miss?


1.14 Research Study Breakdown: The Panopticon Effect in Cyberspace

📊 Research Study Breakdown

Study: Penney, Jon R. (2016). "Chilling Effects: Online Surveillance and Wikipedia Use." Berkeley Technology Law Journal, 31(1), 117–182.

Research Question: Did public awareness of NSA surveillance (following the Snowden revelations in June 2013) cause a measurable reduction in Wikipedia article traffic to articles about terrorism, extremism, and related topics?

Method: Penney conducted a time-series analysis of Wikipedia article traffic, comparing pre- and post-Snowden periods. He examined traffic to 48 articles that the Department of Homeland Security had flagged as "terrorism-related," comparing them to a control group of articles on sensitive but non-terrorism-related topics.
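
The core comparison can be sketched with synthetic numbers (these are invented figures, not Penney's data or code): compute the change in mean traffic across the revelation date for both the flagged and control groups, then take the difference between the two changes, a simple difference-in-differences.

```python
# Pre/post comparison with a control group, on synthetic monthly
# page-view counts. Numbers are invented; see Penney (2016) for the
# actual analysis.
from statistics import mean

flagged_pre  = [100, 103, 98, 101, 99, 102]  # flagged articles, pre-June 2013
flagged_post = [82, 80, 79, 81, 78, 80]      # same articles, post-revelations
control_pre  = [100, 99, 101, 100, 102, 98]  # sensitive but unflagged topics
control_post = [99, 100, 98, 101, 99, 100]

def pct_change(pre, post):
    """Percent change in mean traffic from the pre to the post period."""
    return 100 * (mean(post) - mean(pre)) / mean(pre)

flagged_change = pct_change(flagged_pre, flagged_post)   # about -20%
control_change = pct_change(control_pre, control_post)   # about 0%

# Difference-in-differences: the decline in the flagged group net of
# whatever trend the control group shared.
print(f"Flagged:  {flagged_change:+.1f}%")
print(f"Control:  {control_change:+.1f}%")
print(f"Diff-in-diffs: {flagged_change - control_change:+.1f} points")
```

A real analysis would test significance and model the underlying time trend rather than comparing raw means, but the inferential structure is the one shown here.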

Key Findings:

  • Traffic to terrorism-related Wikipedia articles fell by approximately 20% following the Snowden revelations.
  • The decline was sustained, not simply a temporary dip.
  • Traffic to the control articles did not show the same decline.
  • Survey evidence suggested that awareness of government surveillance — not other factors — drove the behavioral change.

Significance: This study provides rare causal evidence of the chilling effect in an internet context. It demonstrates that ordinary people — Wikipedia readers, not suspected terrorists — modified their information-seeking behavior in response to surveillance awareness. The chilling effect is not hypothetical or anecdotal; it is empirically measurable.

Limitations: The study relies on aggregate traffic data and cannot identify individual users' motivations. Alternative explanations (changes in media coverage of terrorism affecting interest in Wikipedia articles) cannot be entirely ruled out. Survey-based confirmation is subject to social desirability bias.

Implication for This Chapter: The study operationalizes the chilling effect in precisely the context Jordan's Tuesday illustrates — ordinary internet use in an ordinary life. The users who stopped looking up terrorism articles were, presumably, curious people rather than terrorism suspects. Surveillance made them less curious, or at least less visibly so. This is a form of harm to freedom that "nothing to hide" arguments do not account for.


1.15 Debate Framework: Nothing to Hide vs. Something to Fear

🔬 Debate Framework

Position A: "If you have nothing to hide, you have nothing to fear."

Best version of the argument: Surveillance is targeted at genuine harms — terrorism, child exploitation, financial crime. Law-abiding citizens do not engage in these activities. Therefore, surveillance of the law-abiding is merely a byproduct of surveillance aimed at actual wrongdoers, and the privacy costs to innocent people are minimal compared to the security gains.

Key evidence: Proponents cite post-9/11 intelligence that allegedly prevented terrorist attacks; crime reduction in areas with CCTV coverage; financial fraud detection enabled by bank transaction monitoring.

Position B: "The question is not what you're hiding but what the watcher can do with what they find."

Best version of the argument: What counts as "something to hide" is defined by whoever controls the surveillance apparatus, and that definition is not fixed. Legal activities become illegal (civil rights organizing was surveilled as subversive). Political affiliations become disqualifying. Health conditions become insurance risks. The "nothing to hide" argument assumes a benevolent, competent, and stable surveillance authority — assumptions that history consistently refutes.

Key evidence: FBI COINTELPRO surveillance of civil rights leaders; NSA collection of lawyer-client communications; predictive policing feedback loops; facial recognition misidentifications leading to wrongful arrests.

Questions for Discussion:

  1. Which position better accounts for Jordan's situation at the warehouse?

  2. Does your assessment change if you consider Jordan as a member of a historically over-surveilled community versus a member of a community that has historically experienced less surveillance?

  3. Is there a version of Position A that takes the structural critique seriously? Can you articulate it?

  4. Daniel Solove, a legal scholar, responds to "nothing to hide" by arguing that "the problem with surveillance is not that it exposes wrongdoers but that it changes the nature of the relationship between the individual and the institution that watches." Evaluate this response.


1.16 What's Next

You have now encountered surveillance's core definition, its major subtypes, its animating dynamics (visibility asymmetry, dataveillance, synopticism), and the stakes of taking it seriously as a social phenomenon. Jordan's Tuesday has given you a concrete anchor for abstract concepts that will develop across the rest of this textbook.

The next chapter turns to the intellectual architecture that has shaped surveillance studies' most important conceptual framework: the panopticon. Jeremy Bentham designed a prison in 1791 that he never built. Michel Foucault analyzed it in 1975 as a metaphor for modern power. Together, Bentham and Foucault gave us the conceptual tools to understand why the possibility of being watched is often sufficient to change behavior — and why surveillance has expanded from prisons to schools to offices to the open internet.

When Jordan notices, in Chapter 2, that they're checking their phone before Dr. Osei calls on them in class, they will have a name for what they're doing. They are practicing panopticism on themselves.


Chapter 1 of 36 | Part 1: Foundations of Surveillance | The Architecture of Surveillance