Answers to Selected Exercises

The Architecture of Surveillance

This appendix provides model answers and discussion frameworks for selected exercises from each chapter. Exercises marked with a star (★) in the main text are answered here. Answers are organized by chapter and exercise number.

For discussion questions, this appendix provides a framework for discussion rather than a single "correct" answer, because surveillance studies questions rarely have simple right-or-wrong resolutions. The frameworks identify key considerations, relevant concepts, and productive tensions that should animate discussion. For analytical exercises, model answers demonstrate the reasoning process, not just the conclusion.


Part 1: Foundations

Chapter 1: What Is Surveillance?

Chapter 1, Exercise 1 Define surveillance in your own words. Then compare your definition to David Lyon's definition ("the focused, systematic and routine attention to personal details for purposes of influence, management, protection, or direction"). What does your definition include or exclude that Lyon's does not?

Lyon's definition does important conceptual work by insisting on three qualifiers: surveillance must be focused (not random observation), systematic (organized, not incidental), and routine (ongoing, not occasional). These qualifiers help distinguish surveillance from other forms of observation — a passerby who glances at you is not surveilling you; a stalker who follows your movements systematically is. Notice also that Lyon's definition identifies four purposes: influence, management, protection, and direction. This plurality is important because it resists the reduction of surveillance to its most sinister forms. Workplace monitoring, medical record-keeping, and child welfare monitoring all fit Lyon's definition.

Most student-generated definitions will emphasize one purpose (usually "control" or "spying") or omit the systematicity requirement. A good self-assessment asks: does my definition include parental monitoring of children? Does it include a doctor reviewing a patient's history? Does it include Google Maps tracking your location? If not, is that omission principled or accidental? Lyon's definition would include all of these, which reveals both its breadth and its usefulness as a starting point.

Chapter 1, Exercise 3 Identify three examples of surveillance you have personally encountered in the past 24 hours. For each, identify: (a) who was watching; (b) who was watched; (c) the stated purpose; and (d) any unstated purposes you can infer.

This exercise is designed to cultivate surveillance literacy — the ability to recognize surveillance in everyday contexts that typically go unremarked. Model answer:

Example 1: Campus card access logs. (a) University administration/campus security; (b) the student; (c) stated purpose is building security and access management; (d) unstated purposes may include monitoring student attendance patterns, tracking movement for post-incident investigation, or satisfying insurance requirements. The data may be accessible to law enforcement via subpoena without the student's knowledge.

Example 2: Targeted advertisement on a social media feed. (a) The platform's advertising system and the advertiser; (b) the student; (c) stated purpose is "relevant advertising"; (d) unstated purposes include behavioral prediction modeling, political microtargeting, and the sale of the student's behavioral data profile to third parties the student has never interacted with.

Example 3: Smartphone location services. (a) The operating system provider (Apple or Google), app developers, and potentially data brokers who purchase location data; (b) the student; (c) stated purpose varies by app (navigation, weather, etc.); (d) unstated purposes include location history storage by the platform, advertising profile enrichment, and potential law enforcement access.

The pattern students should identify: surveillance is not exceptional or visible — it is structurally embedded in ordinary institutional and consumer life.

Chapter 1, Exercise 5 What is the "chilling effect" and why do civil liberties scholars consider it a harm even when no punishment results from surveillance?

The chilling effect describes how the awareness of being watched changes behavior, causing people to self-censor, avoid lawful activities, and conform to assumed expectations — even when they have done nothing wrong and face no immediate threat of punishment. The harm occurs at the level of freedom: a society in which people are afraid to read certain books, attend certain meetings, or express certain opinions has suffered a loss of liberty regardless of whether anyone is actually arrested or punished.

Civil liberties scholars consider chilling effects a harm for several interconnected reasons. First, the behavior changes are real and measurable — researchers have documented declines in searches for sensitive topics following surveillance disclosures. Second, the harm is democratic: if citizens self-censor their political speech and association, the quality of democratic deliberation deteriorates. Third, the harm falls unevenly — people with more to fear from state attention (undocumented immigrants, political dissidents, members of minority religious communities) experience more severe chilling effects, meaning surveillance amplifies existing inequalities. Fourth, and most subtly, the chilling effect operates without any observable coercion — it is the internalization of external authority, which Foucault would recognize as the panoptic mechanism working as designed.


Chapter 2: The Panopticon and Its Legacy

Chapter 2, Exercise 2 Foucault argues that the panopticon represents a generalized principle of modern power, not just a prison design. Identify a contemporary institution (school, hospital, workplace, social media platform) and analyze how panoptic principles operate within it.

Framework for Discussion: A strong response will identify three features that Foucault treats as essential to panopticism:

  1. Visibility asymmetry: those subject to power can be observed; those exercising power are relatively invisible.
  2. Uncertainty of observation: subjects cannot know when they are being watched, which drives self-regulation.
  3. Automaticity of power: once subjects internalize the possibility of observation, external enforcement becomes unnecessary.

Example analysis — social media platform: The platform's algorithm creates visibility asymmetry (the platform knows exactly when and how content is consumed; users have little visibility into how the algorithm works). The platform shapes behavior through enforcement that is unpredictable and opaque — accounts are suspended or content is removed without clear notice, creating an uncertainty-of-observation dynamic. Over time, users internalize the platform's community standards not because enforcement is certain but because the consequences of violation are potentially severe and unpredictable. Heavy users report self-censoring content not because they fear a specific violation but because they have absorbed the platform's implicit behavioral norms.

A strong response will also identify where the panopticon metaphor breaks down: unlike Bentham's panopticon, surveillance on social platforms is often horizontal (users watching users) and many-to-many rather than few-to-many. The platform architecture both extends and complicates Foucault's model.

Chapter 2, Exercise 4 What does it mean to say that the panopticon is "architectural"? Why does it matter that surveillance is built into the structure of institutions rather than enacted through individual watchers?

The architectural quality of the panopticon is precisely what makes it so analytically significant for Foucault. When surveillance is individual — one person watching another — its effects depend on the watcher's presence, attention, and consistency. An individual guard who falls asleep, who becomes friendly with an inmate, or who is susceptible to bribery undermines the surveillance function. But when surveillance is architectural — built into the arrangement of space, sight lines, and institutional procedures — it operates continuously, impersonally, and without requiring the sustained attention of any individual watcher.

This matters for contemporary surveillance analysis because most significant surveillance is now architectural in precisely this sense: not a detective following a suspect, but a system of credit bureaus, location data brokers, social media analytics, and algorithmic management that operates automatically, regardless of any individual's decision to observe. When surveillance is architectural, the question of who is "responsible" for it becomes genuinely difficult — no individual made the decision to watch you, yet you are watched. This architectural quality also makes surveillance harder to contest: there is no individual watcher to appeal to, no relationship to cultivate, no specific decision that can be reversed.


Chapter 3: Synopticism and the Society of the Spectacle

Chapter 3, Exercise 1 Explain the difference between panopticism and synopticism in your own words. Give one example of each from contemporary life.

Panopticism describes a surveillance structure in which the few watch the many: an authority (the watcher in the tower, the platform's algorithm, the state's intelligence apparatus) monitors a population of subjects. Power flows from the few to the many through the mechanism of visibility — being seen by power shapes behavior.

Synopticism, Thomas Mathiesen's term, describes the complementary structure in which the many watch the few: audiences observe celebrities, news consumers watch politicians, social media followers track influencers. In synoptic arrangements, visibility confers status and power rather than imposing discipline.

Contemporary example of panopticism: Amazon's warehouse monitoring system tracks individual workers' productivity rates, scan counts, and movements against algorithmic benchmarks. One system — Amazon's — watches thousands of workers.

Contemporary example of synopticism: Millions of followers watch a single influencer's every post, story, and appearance. The influencer is hypervisible; their behavior — what they wear, endorse, say, and do — is shaped by this constant audience observation just as surely as the prisoner's behavior is shaped by the possible gaze from the tower.

The theoretical payoff of Mathiesen's intervention is that modern societies contain both structures simultaneously. The same person may be surveilled by an employer (panoptic) while surveilling a celebrity (synoptic). Power operates through both gazes.

Chapter 3, Exercise 3 In what ways does reality television represent a synoptic form of surveillance? Does participation change the power dynamic compared to involuntary surveillance?

Framework for Discussion: This exercise invites analysis of voluntary versus involuntary surveillance, and of spectacle as a social form. Key considerations:

Reality television is synoptic in the classic sense — millions watch a few — but it differs from Mathiesen's original analysis in that the observed are voluntary participants. This voluntariness might seem to neutralize the power dynamics, but analysis should complicate this. Contestants on reality programs may consent to filming without being able to anticipate how footage will be edited, contextualized, or broadcast. The power to determine what is shown, in what order, with what narrative framing, resides with producers — not with subjects. Participants consent to being watched but not to how they will be represented.

A deeper analysis might invoke Guy Debord's Society of the Spectacle (1967): in a society organized around spectacle, what is real becomes subordinate to its representation. Reality television participants are not simply watched — they are constituted as social subjects by the watching. Identity formation in the spectacle differs from identity formation in ordinary social life. Students might consider whether influencer culture represents a new synthesis: the influencer is simultaneously surveilled by the platform (panoptic) and watched by followers (synoptic), and has voluntarily enrolled in both relationships.


Chapter 4: Pre-Modern and Industrial Surveillance

Chapter 4, Exercise 2 Define "function creep" and trace one historical example of a surveillance technology that was repurposed beyond its original stated purpose.

Function creep is the expansion of a surveillance system's use beyond its original stated scope, often gradually and without explicit policy decisions. It is a well-documented historical pattern because surveillance infrastructures, once built, create capabilities that are tempting to exploit for purposes beyond the original mandate.

Historical example — the Social Security Number (SSN): The Social Security number was created in 1936 for the narrow administrative purpose of tracking workers' earnings for Social Security benefits. At its introduction, the government assured the public that the SSN would not serve as an identifier outside the Social Security program; for decades the cards themselves carried the legend "NOT FOR IDENTIFICATION." Over subsequent decades, however, the SSN became the de facto national identification number for tax purposes, financial account opening, medical records, university enrollment, military service, and eventually as the primary identifier for credit bureau files. Today the SSN is a universal identifier for financial and governmental systems — precisely the role it was promised it would never fulfill. This transformation happened not through any single decision but through the gradual convenience of using an existing universal number.

Contemporary function creep example: Contact tracing apps developed during COVID-19 were designed for the narrow purpose of notifying individuals of potential exposure. Several jurisdictions subsequently faced public debates about whether location data from these apps could be used for law enforcement purposes.

Chapter 4, Exercise 4 How did the rise of industrial capitalism change the nature and scale of workplace surveillance? What new surveillance problems did industrialization create?

Pre-industrial labor was primarily agricultural or artisanal, performed in spatially distributed settings where direct, continuous supervision was impractical. The master craftsman could observe apprentices in a shared workshop, but the farming family dispersed across fields was largely beyond direct oversight. Pay was often by output (piece-rate) rather than time, which reduced the need for moment-to-moment observation.

Industrialization concentrated workers in factories, which created both the possibility and the perceived necessity of systematic observation. Factory managers could physically observe workers on a single floor, and time-discipline — the clock replacing the natural rhythm of agricultural work — became the new medium of labor control. E.P. Thompson's analysis of time and work-discipline is essential here: industrial capitalism required workers to subordinate their rhythms to the factory clock, and surveillance was the mechanism of this subordination.

Industrialization also created new surveillance problems. The scale of industrial enterprises exceeded the capacity of owner-supervision; managerial hierarchies emerged specifically to maintain control over large workforces. The scientific management (Taylorism) movement of the early twentieth century applied industrial methods to the supervision of work itself — breaking down tasks into measurable components, timing each component with a stopwatch, and using surveillance data to set production standards and discipline workers who fell short. Frederick Winslow Taylor's time-and-motion studies are the industrial antecedent of today's algorithmic management.


Chapter 5: Theoretical Frameworks

Chapter 5, Exercise 2 Compare and contrast Foucault's, Marx's, and feminist approaches to surveillance theory. What does each framework illuminate that the others miss?

Framework for Discussion: This exercise assesses theoretical literacy across multiple frameworks. A strong response will identify what is distinctive, useful, and limiting about each approach.

Foucault's approach illuminates the productive dimension of surveillance power — how surveillance does not merely repress but constitutes subjects, creates norms, and makes possible new forms of subjectivity. Foucault shows how surveillance operates through institutions (the prison, hospital, school, army) that are not obviously "coercive" in a simple sense. His framework is less attentive to the economic dimensions of surveillance and to the specific racial, gendered, and class-based targeting of surveillance. Critics note that Foucault's analysis tends to treat surveillance power as diffuse and multidirectional in ways that can obscure how some groups are watched far more intensively than others.

Marxist approaches emphasize surveillance's role in capitalist accumulation — as labor discipline, as commodity (data), and as an element of class power. Shoshana Zuboff's surveillance capitalism thesis is partly a neo-Marxist analysis, showing how behavioral data becomes a new raw material extracted from the population for capitalist profit. Marxist frameworks can struggle to explain how surveillance operates in non-capitalist settings, and they have sometimes treated class as the master category in ways that underweight the specific dynamics of racial and gendered surveillance.

Feminist approaches have made several crucial contributions: attention to the gendered dimensions of both surveillance and privacy (the home as a site of male surveillance of women; the differential privacy expectations of men and women in public space); analysis of how surveillance technologies embed a male gaze in their design; and analysis of stalkerware, reproductive surveillance, and intimate partner monitoring. Feminist scholars have also foregrounded the labor of care and the surveillance of domestic workers. Feminist surveillance studies increasingly intersects with critical race studies through scholars like Simone Browne.

A strong synthesis might argue that these frameworks are complementary rather than competitive: Foucault explains the mechanism of surveillance power; Marxist analysis explains the economic structure within which surveillance occurs; feminist and critical race analyses explain the differential distribution of surveillance across social categories.


Part 2: State Surveillance

Chapter 6: COINTELPRO and the History of Political Surveillance

Chapter 6, Exercise 1 What was COINTELPRO and what techniques did it use? Based on your reading, what was the primary purpose of the program — law enforcement, or political control?

COINTELPRO (Counterintelligence Program) was the FBI's covert domestic political surveillance and disruption program, operating from 1956 to 1971 under Director J. Edgar Hoover. Initially targeted at the Communist Party USA, COINTELPRO expanded to target the Civil Rights Movement, the Black Panther Party, the American Indian Movement, the Socialist Workers Party, the New Left and Students for a Democratic Society, and various other political organizations. The program was concealed from Congress, the courts, and the public.

Techniques included: physical and electronic surveillance of targets and their associates; placement of informants within organizations; creation of internal dissension through forged letters and planted rumors; anonymous threatening communications sent to targets; coordination with local police departments to facilitate arrests on pretextual charges; "black bag jobs" (unauthorized break-ins); and efforts to destroy the personal relationships and reputations of targets. The FBI sent a particularly notorious anonymous letter to Martin Luther King Jr. implying he should commit suicide.

The question of purpose — law enforcement versus political control — is crucial and the answer is clearly political control. COINTELPRO was not primarily directed at stopping specific crimes; it was directed at neutralizing political movements and their leadership. The FBI's internal documents, exposed by the Citizens' Commission to Investigate the FBI in 1971 and subsequently through congressional investigation, reveal a program explicitly designed to "expose, disrupt, misdirect, discredit, or otherwise neutralize" political organizations. The targets included Nobel Peace Prize laureates and civil rights leaders. This is political repression using surveillance as a tool, not law enforcement.

Chapter 6, Exercise 3 How do the legal reforms following the Church Committee hearings compare to the surveillance authorities that exist today? What safeguards were created and which have been eroded?

The Church Committee (formally the Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, 1975-76) documented COINTELPRO and other surveillance abuses and produced a series of reforms. These included: establishment of the Foreign Intelligence Surveillance Act (FISA, 1978) creating judicial oversight for intelligence surveillance; the Attorney General Guidelines for FBI domestic investigations; restrictions on FBI infiltration of domestic political organizations; and the creation of congressional intelligence oversight committees.

These reforms created real constraints. The FISA Court requirement represented a genuine check on intelligence surveillance targeting US persons; the Attorney General Guidelines restricted when the FBI could open investigations of political groups.

However, subsequent decades have substantially eroded these safeguards. The USA PATRIOT Act (2001) dramatically expanded FISA authorities, lowered evidentiary standards, introduced roving wiretaps under FISA, and expanded the use of National Security Letters. Section 702 of FISA, added by the FISA Amendments Act of 2008, has been interpreted to authorize sweeping collection of communications. The Snowden revelations (2013) demonstrated that NSA programs operated at a scale that no one outside the intelligence community — including most members of Congress — was aware of. The FBI's surveillance of Muslim communities post-9/11 used many of the same informant and disruption tactics documented in COINTELPRO.

A thoughtful answer will resist two opposite errors: the nostalgic view that the Church Committee fixed the problem, and the cynical view that reform never works. The reforms mattered and created real constraints; those constraints were subsequently circumvented through legal reinterpretation, new legislative authorities, and technological capabilities.


Chapter 7: Biometrics and Identity Surveillance

Chapter 7, Exercise 2 Why is biometric data particularly sensitive compared to other forms of personal data? What specific harms can result from compromise of biometric databases?

Biometric data is categorically different from other forms of personal data in a crucial respect: it cannot be changed. If your credit card number is stolen, you cancel the card and get a new one. If your password is compromised, you reset it. If your Social Security number is exposed, the harm is severe but you still have legal recourse and the number itself can be flagged. But if your fingerprints are compromised — if the digital representations of your fingerprints are stolen from a database — you cannot get new fingerprints. You are permanently compromised as a biometric subject.

This permanence creates specific harms. Database compromise in biometric systems — which has occurred (the Office of Personnel Management breach in 2015 compromised fingerprints of 5.6 million current and former federal employees) — creates permanent vulnerability for affected individuals. Every future system that relies on those fingerprints for authentication can potentially be defeated. Biometric spoofing (creating fake fingerprints from database records) is a demonstrated technique.

Beyond database compromise, biometric surveillance enables a form of tracking impossible with non-biometric identifiers: precise, reliable identity verification at scale, in public, without cooperation from the subject. A person can use a pseudonym, pay cash, or avoid providing their real name — but if their face is in a database, they cannot walk through a city without being identified. This represents a qualitative change in surveillance capability.

Chapter 7, Exercise 4 Research the deployment of facial recognition technology by police departments. What evidentiary value is claimed for facial recognition matches? What does the research on false match rates suggest about these claims?

Framework for Discussion: Law enforcement advocates argue that facial recognition provides investigative leads — identifying potential suspects whose photos appear in databases — rather than definitive identifications. This framing matters: facial recognition is typically presented as a tool that generates names for further investigation, not as proof of identity.

The research picture is more troubling. Studies, including Joy Buolamwini and Timnit Gebru's Gender Shades project (2018), have documented dramatically higher error rates for darker-skinned individuals, women, and older people compared to light-skinned men. A 2019 NIST study found that many commercial facial recognition systems had false positive rates 10-100 times higher for African American and Asian faces compared to Caucasian faces.

The evidentiary problem in practice is that "investigative lead" status does not reliably prevent false arrests. Documented cases of wrongful arrest based on facial recognition — including the arrests of Robert Williams, Michael Oliver, and Nijeer Parks — all involve Black men mistakenly identified by facial recognition systems. In each case, the facial recognition match became a driver of investigation and arrest rather than merely one initial lead. Students should examine what institutional pressures make facial recognition matches "stickier" as evidence than the technology's accuracy warrants.
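To see why "investigative lead" status is so fragile in practice, consider a back-of-envelope calculation for one-to-many database search. Every number below is an illustrative assumption, not a measurement of any deployed system:

```python
# Illustrative base-rate calculation for one-to-many face search.
# All numbers are hypothetical assumptions chosen for the example.

database_size = 1_000_000     # enrolled images searched per probe photo
false_positive_rate = 1e-5    # assumed per-comparison false match rate

# Expected number of innocent "matches" returned for a single search:
expected_false_matches = database_size * false_positive_rate
print(f"Expected false matches per search: {expected_false_matches:.0f}")  # 10

# If error rates run 10-100x higher for some demographic groups, as the
# 2019 NIST evaluation found, those groups dominate the false candidates:
for multiplier in (10, 100):
    print(f"At {multiplier}x the base error rate: "
          f"{expected_false_matches * multiplier:.0f} expected false matches")
```

Even at a one-in-a-hundred-thousand error rate, essentially every search against a large gallery surfaces innocent candidates; demographic error disparities then determine who those candidates are.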


Chapter 8: CCTV and Public Space

Chapter 8, Exercise 1 The United Kingdom has one of the highest concentrations of CCTV cameras per capita in the democratic world. Critically evaluate the evidence for and against the claim that CCTV reduces crime.

This is fundamentally an empirical question, and the evidence is more equivocal than CCTV advocates typically acknowledge.

Evidence supporting crime reduction effects: Several studies, including a meta-analysis by Welsh and Farrington (2009), found that CCTV had statistically significant effects on vehicle crime in car parks — a setting where the prospect of being identifiably recorded plausibly deters premeditated property crime. Some studies find modest overall crime reduction effects in specifically defined areas.

Evidence against or complicating the reduction claim: The crime-displacement hypothesis holds that surveillance reduces crime in monitored areas by pushing it to unmonitored adjacent areas — not reducing total crime but redistributing it. The evidence for displacement is real but also contested. Studies in high-crime urban areas frequently find no significant crime reduction effect from CCTV installation. The Home Office's own evaluations have found limited effectiveness in most deployment contexts outside car parks.

The detection versus deterrence distinction is important: CCTV may be more useful for post-hoc investigation than for deterrence. Deterrence requires potential offenders to be aware of cameras, to perceive a meaningful risk of identification and apprehension, and to weigh those costs rationally. Many crimes occur under conditions (emotional volatility, intoxication, desperation) that reduce rational deterrence calculations.

A critical evaluation should also assess distributional effects: CCTV deployment is concentrated in commercial and high-property-value areas, which may reflect commercial interests as much as crime patterns. Communities that have CCTV installed report concerns about who controls footage, how long it is retained, and what other purposes it serves.


Chapter 9: Intelligence Surveillance and PRISM

Chapter 9, Exercise 3 Explain the Third-Party Doctrine and its significance for digital privacy. How does the Supreme Court's decision in Carpenter v. United States modify the doctrine?

The Third-Party Doctrine derives from two Supreme Court cases: United States v. Miller (1976) and Smith v. Maryland (1979). The doctrine holds that when individuals voluntarily share information with third parties — banks, phone companies, service providers — they assume the risk that those third parties may share the information with the government. Because the information has been "voluntarily" shared, the government does not need a warrant to obtain it; the Fourth Amendment's protection attaches only to information kept private.

The doctrine's significance for digital privacy is enormous and problematic. In the analog world, "voluntarily sharing with third parties" was a relatively limited category: you shared information with your bank and your telephone company, but most of your communications and movements were not mediated by entities with records about them. In the digital world, virtually all communication and much movement is mediated through third parties (ISPs, cell carriers, app providers, cloud services) that maintain detailed records. If all of this information lacks Fourth Amendment protection because it has been shared with third parties, then modern life is effectively conducted without Fourth Amendment protection.

Carpenter v. United States (2018) represents a significant but carefully limited modification of the doctrine. The Court held, 5-4, that the government needs a warrant to obtain seven or more days of cell-site location information (CSLI) from a carrier. Chief Justice Roberts's majority opinion reasoned that CSLI is fundamentally different from the bank records and dialed phone numbers at issue in Miller and Smith: it is comprehensive, revealing a detailed record of an individual's physical movements; it is generated automatically and passively, not truly "voluntarily" shared; and the duration of potential collection is effectively unlimited.

Critically, the Court explicitly declined to overrule Miller and Smith, and emphasized the narrowness of its holding. Carpenter established that the third-party doctrine has limits at some point of comprehensiveness, duration, and involuntariness — but where exactly those limits lie remains an open question for future litigation.


Chapter 10: China's Social Credit System

Chapter 10, Exercise 2 What are the most significant differences between China's social credit system and the data collection and scoring practices of Western governments and corporations? Where are the differences less clear than is commonly assumed?

Framework for Discussion: This exercise pushes against both the tendency to treat China's system as categorically alien and the tendency to minimize its differences from Western practices.

Clear differences: China's government operates a unified political project explicitly designed to shape citizen behavior across social domains, with coordinated participation by state agencies, banks, courts, and internet platforms. The system involves explicit government-assigned scores that determine access to public services, travel, and employment. Political compliance is explicitly among the scoring criteria. The system is state-directed with minimal separation between commercial and governmental surveillance infrastructure.

Where differences are less clear than commonly assumed: Western governments also maintain watchlists, no-fly lists, and databases that restrict individuals' access to services and opportunities without transparent process. Credit scores in the United States function as behavioral scorecards with significant consequences for access to housing, employment, and services. The United States and European governments purchase commercial surveillance data from data brokers to evade legal restrictions on direct collection. Large US technology platforms maintain comprehensive behavioral profiles of users that are accessible to government through legal process. The "separation" between commercial and state surveillance in Western democracies is real but often overstated.

A sophisticated answer will resist the self-satisfied Western narrative that "it couldn't happen here" while also resisting false equivalence. The differences are real — legal frameworks, institutional separations, political pluralism, and press freedom all matter — while also being less robust than comfortable mythology suggests.


Part 3: Commercial Surveillance

Chapter 11: The Data Economy

Chapter 11, Exercise 1 Explain Shoshana Zuboff's concept of "behavioral surplus" in your own words. What makes it different from ordinary data collection for product improvement?

Behavioral surplus is Zuboff's term for the portion of behavioral data collected from users that goes beyond what is needed to improve the service being offered. To understand this, consider the distinction she draws between what companies need to know and what they actually collect.

A company running a navigation app needs to know where users are and where they want to go — that's the service. It might also collect data about traffic patterns and arrival times to improve the app's predictions — that's product improvement. But the company also collects data about how long users pause at certain intersections, what they do when they get to their destination, whether they look at their phone while driving, what other apps they use, and how their route choices correlate with their other behaviors. None of this improves the navigation experience. This additional layer — the data that flows beyond the service provision — is behavioral surplus.
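To make the distinction concrete, the sketch below (with invented field names for a hypothetical navigation app) separates collected data into what the service needs, what improves the product, and what is surplus:

```python
# Hypothetical telemetry event from a navigation app; field names invented.
event = {
    "origin": "40.7128,-74.0060",       # needed to provide the route
    "destination": "40.7306,-73.9866",  # needed to provide the route
    "traffic_speed": 12.4,              # plausibly product improvement
    "dwell_time_at_stops": [38, 112],   # surplus: behavior beyond the service
    "screen_interactions_while_moving": 7,        # surplus
    "co_installed_apps": ["fitness", "banking"],  # surplus
}

# The service itself requires only a narrow slice of what is collected.
SERVICE_FIELDS = {"origin", "destination"}
IMPROVEMENT_FIELDS = {"traffic_speed"}

surplus = {k: v for k, v in event.items()
           if k not in SERVICE_FIELDS | IMPROVEMENT_FIELDS}
print(sorted(surplus))  # everything here feeds prediction, not navigation
```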

Behavioral surplus is then extracted from users without their meaningful knowledge and processed into what Zuboff calls prediction products: forecasts of what users will do, buy, click on, or believe. These predictions are sold to business customers in what she terms "behavioral futures markets," most prominently online advertising markets. Users receive no compensation for the extraction of behavioral surplus; they are not partners in this transaction but rather raw material sources.

This differs from ordinary data collection for product improvement because the entire purpose of behavioral surplus collection is not to serve users but to serve the advertisers who are the actual customers. The user's experience is a means to the extraction of behavioral surplus; the surveillance of users is the actual product.

Chapter 11, Exercise 4 Identify what you consider the three most significant ethical objections to surveillance capitalism as an economic system. For each, explain why it is an ethical rather than merely a practical concern.

Strong response framework:

1. Autonomy violation: Surveillance capitalism works by predicting and nudging behavior without subjects' meaningful knowledge or consent. The "behavioral modification" goals that Zuboff documents — shaping behavior to drive desired commercial outcomes — treat persons as objects to be manipulated rather than agents capable of rational choice. This is an ethical objection rooted in Kantian respect for persons: using people as means to others' ends without their knowledge or consent violates their dignity as rational agents. It is not merely a practical concern because the harm exists even when the manipulation is mild and even when the manipulated person benefits from the specific outcome (a well-targeted ad for something they wanted to buy anyway).

2. Democratic incompatibility: If behavioral data is used to microtarget political messaging, then surveillance capitalism enables manipulation of democratic processes at scale. Individual voters receive bespoke political content calibrated to their psychological profiles. This is an ethical concern for democracy: self-governance requires that citizens encounter a common information environment in which they can deliberate, disagree, and make shared judgments. Microtargeting that operates at the individual level — invisible to others, unverifiable by journalists, inaccessible to democratic scrutiny — is incompatible with the transparency democracy requires.

3. Epistemic inequality: Surveillance capitalism creates a profound asymmetry of knowledge: companies know vast amounts about individuals; individuals know essentially nothing about what companies know, how it is used, or how decisions made on the basis of that knowledge affect their lives. This is an ethical concern because epistemic equality is a prerequisite for meaningful autonomy and democratic participation. A population that cannot know the terms of its own surveillance is a population that cannot contest, negotiate, or limit that surveillance.


Chapter 12: Cookies, Tracking, and the Advertising Machine

Chapter 12, Exercise 2 Explain how real-time bidding (RTB) works and why privacy researchers argue it structurally violates GDPR.

When a user navigates to a webpage that carries advertising, what happens in the fraction of a second before the page fully loads is a sophisticated auction. The publisher's ad server sends a bid request to an ad exchange — a real-time marketplace — containing a data package about the user: often their IP address, browser fingerprint, inferred location, browsing history profile, demographic segments, and purchase intent signals. This bid request is broadcast, within milliseconds, to potentially hundreds of demand-side platforms (DSPs) representing thousands of advertisers. Each DSP evaluates the data package, compares it to their advertiser's targeting parameters, and submits a bid price. The winning bid's advertisement loads; the entire process typically completes in 150-300 milliseconds.
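The structural features at issue are easy to see in a simplified simulation of that auction, loosely modeled on OpenRTB-style bid requests; every field, name, and value below is illustrative rather than any real exchange's API:

```python
# Simplified sketch of a real-time bidding auction. Loosely modeled on
# OpenRTB-style bid requests; all fields and values are illustrative.

bid_request = {
    "id": "auction-7f3a",
    "device": {"ip": "198.51.100.23", "ua": "Mozilla/5.0", "geo": "Brussels"},
    "user": {"id": "synced-cookie-id-123",
             "segments": ["auto-intender", "recently-moved", "parenting"]},
    "site": {"page": "https://example-news-site.example/article"},
}

def broadcast(request, dsps):
    """Send the same personal-data payload to every DSP and collect bids.
    The structural point: every DSP receives the data, win or lose."""
    bids = []
    for dsp in dsps:
        price = dsp(request)   # DSP evaluates targeting, returns a bid or None
        if price is not None:
            bids.append((price, dsp.__name__))
    return max(bids) if bids else None

def dsp_auto_brand(req):
    return 2.40 if "auto-intender" in req["user"]["segments"] else None

def dsp_retailer(req):
    return 1.10  # bids on everything, and retains the request data regardless

winner = broadcast(bid_request, [dsp_auto_brand, dsp_retailer])
print(winner)  # (2.4, 'dsp_auto_brand') -- but both DSPs now hold the data
```

The point the simulation makes is structural: losing bidders receive exactly the same personal data as the winner.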

Privacy researchers (notably Johnny Ryan of the Irish Council for Civil Liberties) argue that RTB structurally violates GDPR for several reasons. First, each bid request involves the broadcast of personal data to hundreds of entities simultaneously — entities that have not obtained consent from the data subject and cannot demonstrate a lawful processing basis. The IAB's Transparency and Consent Framework, which purports to provide a GDPR-compliant consent mechanism for RTB, was found to violate GDPR by the Belgian Data Protection Authority in 2022. Second, broadcast bid requests cannot be un-sent: even when a bid is unsuccessful, the personal data remains with every DSP that received the request. Third, GDPR's data minimization principle requires that only data necessary for the purpose be processed — but RTB packages contain far more data than necessary to serve a single advertisement.

The structural argument is that RTB cannot be made GDPR-compliant through notice-and-consent patches because its fundamental architecture — broadcasting personal data to hundreds of parties — is incompatible with the GDPR's requirements for purpose limitation, data minimization, and lawful processing basis.


Chapter 13: Social Media Surveillance

Chapter 13, Exercise 3 What is a "shadow profile" and what ethical concerns does its existence raise?

A shadow profile is a data profile maintained by a platform about an individual who has not registered for or consented to that platform. Facebook's shadow profiles were most publicly documented during congressional hearings in 2018. They are built from several sources: when registered users upload their contact lists, the phone numbers and emails of their contacts — including non-Facebook users — are ingested. Tracking pixels embedded in websites outside Facebook send data about browsing behavior to Facebook even from non-users. Data broker information is used to supplement and enrich these profiles.
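The mechanism is visible in miniature in the sketch below, which shows (with entirely invented data and field names) how contact-list uploads from registered users accumulate into a profile keyed to someone who never signed up:

```python
# Illustrative sketch of contact-list ingestion; all data is invented.
from collections import defaultdict

# Profiles keyed by phone number, whether or not that person is a user.
profiles = defaultdict(lambda: {"registered": False, "seen_in_contacts_of": []})

def ingest_contact_upload(uploader_id, contacts):
    """A registered user uploads their address book; every entry is
    ingested, including entries for people who never joined the platform."""
    for contact in contacts:
        profile = profiles[contact["phone"]]
        profile.setdefault("name_hints", set()).add(contact["name"])
        profile["seen_in_contacts_of"].append(uploader_id)

ingest_contact_upload("user-A", [{"name": "Dana R.", "phone": "+15550001"}])
ingest_contact_upload("user-B", [{"name": "Dana Reyes", "phone": "+15550001"}])

# "+15550001" never agreed to any terms of service, yet the platform now
# holds name variants and a partial social graph for that person.
print(profiles["+15550001"])
```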

The result is that Facebook (and similar platforms) maintain profiles of millions of people who have never agreed to Facebook's terms of service, have never created an account, and may actively object to Facebook's data practices. These people cannot view their profiles, cannot correct inaccuracies, cannot request deletion (with limited exceptions under GDPR), and cannot know what the profiles say.

The ethical concerns are significant. Most fundamentally, shadow profiles violate the foundational privacy principle that individuals should have some say over data collected about them. The opt-out logic of data collection (you must take action to stop collection) is extended, through shadow profiles, to people who never opted in to anything. This is data collection without even the pretense of consent.

Shadow profiles also reveal the hollowness of the "you agreed to our terms of service" defense that platforms routinely invoke. If data is collected on people who signed no terms of service, then consent frameworks are inadequate to address the full scope of surveillance capitalism's data collection.


Part 4: Domestic and Personal Surveillance

Chapter 15: The Internet of Things

Chapter 15, Exercise 2 Security researchers have documented numerous vulnerabilities in IoT devices, including baby monitors, smart locks, and medical devices. What systemic factors explain why IoT security has been so persistently poor?

IoT security failures are not primarily accidental — they reflect structural incentive problems. Several systemic factors are worth analyzing:

Market pressure for low cost: IoT devices compete primarily on price and features, not security. Security engineering is expensive; the companies that manufacture inexpensive connected devices (many of which have thin margins) face strong pressure to minimize security investment. Buyers rarely have information about device security, so manufacturers do not compete on security.

Short product cycles without long support: Consumer electronics manufacturers typically move to new products on annual cycles. Software security requires ongoing patching as vulnerabilities are discovered — often for years after a product's sale. IoT manufacturers have often not committed to multi-year patch support, leaving devices permanently vulnerable once security researchers discover (and publish) vulnerabilities.

Complexity and the attack surface: IoT devices run software stacks of considerable complexity — operating systems, network stacks, application layers, cloud APIs — each of which represents a potential attack surface. This complexity is inherent to the functionality that makes IoT devices valuable; it is not easily reducible.

Regulatory gaps: Until recently, no comprehensive security standards governed IoT devices in most markets. Unlike medical devices (where FDA oversight includes security requirements) or automobiles (where recall authorities exist), connected consumer devices faced no mandatory security baseline. The EU's Cyber Resilience Act (2024) and California's SB-327 represent early steps toward regulatory requirements.

The "move fast" culture: Many IoT products are developed by software teams with internet-services experience who are less familiar with the security challenges of long-lived embedded systems. Default passwords, hardcoded credentials, and unencrypted communications — all documented in commercial IoT products — reflect development practices more appropriate to rapidly-iterated web services than to persistent physical devices.


Chapter 16: Ring Doorbells and Neighborhood Surveillance

Chapter 16, Exercise 1 Analyze Ring doorbells as a form of "lateral surveillance." How does the Ring/Neighbors platform distribute surveillance function across a community? Who benefits and who bears the costs?

Ring doorbells distribute surveillance function in several ways that distinguish them from conventional CCTV. First, the cameras are owned by individual residents, not a central authority — creating a surveillance network without a single operator. Second, the Neighbors app aggregates camera feeds and incident reports into a neighborhood-level surveillance platform, enabling users to share footage and coordinate responses. Third, Ring's agreements with police departments (at their peak, Amazon had partnerships with over 2,000 departments) allowed police to request footage from Ring users without subpoenas, creating a mechanism by which distributed resident cameras functioned as de facto extensions of police surveillance.

This distribution is a form of lateral surveillance — neighbor watching neighbor — but with corporate infrastructure (Amazon) and institutional connections (police partnerships) that complicate the peer character of the watching. It is not simply neighbors looking out for each other; it is a commercial platform that has extracted community trust and deployed it in service of surveillance integration with law enforcement.

Who benefits: Amazon benefits commercially from the platform's growth. Homeowners with security concerns derive some benefit from deterrence (the documented effects are limited) and from assistance with post-incident investigation. Police departments benefit from expanded surveillance infrastructure at no cost.

Who bears costs: Civil liberties research — including ACLU analysis — has documented that Ring/Neighbors posts disproportionately target Black, Latino, and Asian community members as suspicious, regardless of behavior. Delivery workers, domestic workers, and people of color in predominantly white neighborhoods bear disproportionate surveillance burdens. Renters and residents without the capital to purchase Ring systems receive the surveillance attention without the surveillance capability. Civil liberties costs — chilling of movement and association in surveilled neighborhoods — fall most heavily on already-targeted communities.


Chapter 19: Stalkerware and Intimate Partner Surveillance

Chapter 19, Exercise 3 Why is stalkerware considered a domestic violence issue rather than simply a technology problem? What does this framing reveal about how we define surveillance harm?

Domestic violence advocacy organizations and academic researchers classify stalkerware as a domestic violence issue because it functions as a tool in the pattern of coercive control that defines abusive intimate partner relationships. The technology cannot be separated from the social context of its use.

Coercive control is a pattern of behavior — documented extensively by Evan Stark and others — in which an abusive partner systematically restricts the victim's freedom, autonomy, and access to resources. Coercive control predates smartphones; abusers have always used surveillance tactics (demanding to know whereabouts, intercepting mail, monitoring phone calls). Stalkerware represents a dramatic amplification of surveillance capability available to abusive partners, enabling real-time location tracking, communication monitoring, and remote camera access — all invisibly.

The domestic violence framing is correct because the harm is not primarily a technology harm (privacy violation in the abstract) but a harm within a relationship context characterized by power imbalance, fear, and coercion. The same surveillance technology deployed by a non-abusive partner (with knowledge and consent) raises different ethical concerns than stalkerware deployed in an abusive context — not because the technology differs but because the social context transforms its meaning and harm.

What this framing reveals about surveillance harm: surveillance harm is not always reducible to the data itself. The harm of stalkerware is not merely that location data is collected but that the data enables an abuser to monitor, control, and threaten a victim. Surveillance enables harm when it empowers one actor to exercise coercive power over another. This insight generalizes: understanding surveillance harm requires understanding the power relationships within which surveillance occurs.


Part 5: Environmental and Scientific Surveillance

Chapter 21: Satellite Imagery

Chapter 21, Exercise 2 What does the "democratization" of satellite imagery mean, and what are the implications of high-resolution satellite imagery being available to commercial operators and researchers as well as governments?

Framework for Discussion: "Democratization" in this context refers to the declining cost and increasing accessibility of high-resolution satellite imagery, driven by companies like Planet Labs, Maxar, and Satellogic. Where once satellite imagery was a capability held exclusively by national intelligence agencies, commercial providers now offer sub-meter resolution imagery available for purchase by journalists, researchers, NGOs, corporations, and governments.

Positive implications: Commercial satellite imagery has enabled significant accountability journalism. The New York Times's use of satellite imagery to verify atrocities in Xinjiang and Ukraine, Bellingcat's use of satellite imagery to document military movements, and Global Fishing Watch's use of satellite data to monitor illegal fishing are all examples where satellite access has served accountability and transparency goals that governmental surveillance would not have served.

Complicating implications: The same capability that enables accountability journalism also enables corporate surveillance (competitors monitoring each other's factory parking lots and shipping docks), government surveillance of populations through commercial intermediaries, and the aggregation of satellite observation with other data sources to create comprehensive tracking of individuals' movements. The "democratization" also raises the question of what previously inaccessible spaces are now visible: informal settlements, refugee camps, farms, and remote communities are now subject to regular overhead observation without their knowledge or consent.

Students should examine how "democratization" framing can obscure power dynamics: the ability to access satellite imagery correlates with economic resources, technical capacity, and institutional infrastructure. The imagery is "democratic" in the sense of being available to multiple actors; it is not democratic in the sense of being equally accessible.


Chapter 22: Birdsong Monitoring and Environmental Surveillance

Chapter 22, Exercise 1 Passive acoustic monitoring (PAM) systems designed to monitor wildlife can incidentally record human speech and activity. What does this case study reveal about surveillance concepts like "function creep" and "collateral collection"?

Passive acoustic monitoring illustrates several surveillance concepts with unusual clarity precisely because it occurs in an environmental/scientific context where surveillance of humans is explicitly not the purpose. This makes it a useful case study for examining how surveillance effects arise from systems not designed as surveillance tools.

Function creep in PAM: Systems deployed to monitor bird populations (and other wildlife) are capable of recording any sound in their range — including human conversation, vehicle sounds, and other activity. Researchers have noted that recordings from conservation monitoring stations sometimes contain evidence of illegal logging, poaching, and human encampment. The temptation to use these recordings for law enforcement purposes is an instance of function creep: the monitoring infrastructure, built for ecological purposes, begins to be repurposed for human surveillance purposes.

Collateral collection: Intelligence agencies use the term "collateral collection" to describe information gathered incidentally while targeting other subjects. PAM systems engage in a form of collateral collection — human activity data gathered while targeting bird vocalizations. The question this raises: does the incidental nature of collection change its ethical status? Most privacy frameworks focus on intentionality; PAM's incidental collection challenges this focus by demonstrating that the effects of surveillance — data about individuals existing in surveillance databases — are identical whether collection is intentional or incidental.

What this reveals: surveillance effects can arise from systems designed with entirely different purposes. This has practical implications for privacy by design: systems must account not just for their intended surveillance functions but for their incidental collection capabilities and the pressure to repurpose them.
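One such privacy-by-design pattern is data minimization at the sensor: run classification on-device and retain only derived detection events, never raw audio. Below is a minimal sketch of that pattern, with a placeholder standing in for a trained bioacoustic classifier:

```python
# Minimal data-minimization sketch for an acoustic monitoring station.
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float   # seconds since deployment
    species: str
    confidence: float

def classify_clip(audio_clip, timestamp):
    """Placeholder; a real station would run a trained bioacoustic model."""
    return [Detection(timestamp, "Turdus merula", 0.91)]

def process_and_discard(audio_clip, timestamp, threshold=0.7):
    """Classify on-device and keep only derived detection events; the raw
    audio, which may contain incidental human speech, is never stored."""
    detections = classify_clip(audio_clip, timestamp)
    del audio_clip     # drop the reference to raw audio at collection time
    return [d for d in detections if d.confidence >= threshold]

print(process_and_discard(b"\x00" * 16000, timestamp=42.0))
# Only (timestamp, species, confidence) records ever leave the device.
```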


Part 6: Workplace Surveillance

Chapter 26: Performance Reviews

Chapter 26, Exercise 3 How does the annual performance review function as a surveillance mechanism? In what ways is it panoptic, and in what ways does it depart from the panoptic model?

The annual performance review shares key features with panoptic surveillance. The reviewing manager occupies a position analogous to the panopticon's tower — with visibility into the employee's activities and the power to evaluate and sanction. The reviewed employee knows that their performance is being observed and evaluated, though they cannot see the full evaluation apparatus or know what specific behaviors are being assessed as significant. This uncertainty — what exactly is the manager noticing? which of my behaviors are being evaluated? — induces self-regulation throughout the year, not just during the review. In this sense, the annual review is not only a moment of evaluation but a continuous mechanism of behavioral shaping.

Departures from the panoptic model: First, the annual review is periodic rather than continuous — surveillance is compressed into a formal event, unlike the continuous possibility of observation in Bentham's design. Second, performance reviews involve an element of negotiation and reciprocity (most review systems include employee self-evaluation and goal-setting components) that the panopticon does not. Third, the information is not perfectly one-directional — a skilled employee may strategically manage their manager's perceptions throughout the year, cultivating particular impressions in ways that a panopticon inmate cannot.

Algorithmic performance management (discussed in Chapter 28) is more thoroughly panoptic than traditional reviews: it is continuous, automated, and less negotiable, and the "reviewing manager" function is replaced by a system that the employee cannot cultivate or appeal to interpersonally.


Chapter 28: Algorithmic Management

Chapter 28, Exercise 2 Amazon's warehouse workers have described working under systems that measure productivity by the second and issue automated discipline warnings. Analyze this system using at least two theoretical frameworks from Part 1.

Foucauldian analysis: Amazon's warehouse management system is a near-perfect realization of Taylorist scientific management transposed into a digital infrastructure. The system tracks each worker's "time off task" (TOT) — moments not spent scanning items — and generates automatic warnings and terminations based on TOT statistics. This represents the extension of the factory clock from the hour to the second, and the elimination of the human supervisor whose judgment might introduce inconsistency, sympathy, or negotiation. The panoptic mechanism operates: workers cannot know when they are being assessed (the monitoring is continuous), and the knowledge of continuous assessment drives conformity to productivity norms. The automatic discipline warning is the automated enforcement mechanism that Bentham's design promised but could not deliver: discipline that requires no human decision to activate.

Marxist analysis: The algorithmic management system represents an intensification of labor extraction. Taylor's original scientific management project sought to extract maximum productivity by breaking work into timed components and eliminating "soldiering" (deliberate slowdowns). Amazon's system extends this by making every second accountable to productivity measurement. The extraction of behavioral surplus (in Zuboff's terms) from workers — data about their every movement, position, and action — serves the accumulation of productivity data used to set ever-tighter performance benchmarks. Workers who exceed benchmarks implicitly set new norms that make it harder for all workers to avoid discipline. The system is designed to make labor more legible to capital while making the management system itself less legible to workers.
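Amazon has not published its algorithm, so the sketch below is a purely illustrative reconstruction of how a time-off-task metric could be computed from scan-event timestamps; the thresholds and rules are invented:

```python
# Purely illustrative reconstruction of a time-off-task (TOT) metric from
# scan-event timestamps. Thresholds and rules are invented, not Amazon's.

SCAN_GAP_THRESHOLD = 120   # seconds between scans before a gap counts as TOT
WARNING_THRESHOLD = 1800   # cumulative daily TOT (seconds) triggering a flag

def time_off_task(scan_timestamps):
    """Sum every inter-scan gap that exceeds the allowed threshold."""
    gaps = (b - a for a, b in zip(scan_timestamps, scan_timestamps[1:]))
    return sum(g - SCAN_GAP_THRESHOLD for g in gaps if g > SCAN_GAP_THRESHOLD)

def automated_discipline(scan_timestamps):
    """The panoptic point: the flag is generated with no human decision."""
    tot = time_off_task(scan_timestamps)
    return "WARNING" if tot > WARNING_THRESHOLD else "OK"

# A shift with two long gaps (e.g., restroom breaks) between steady scanning:
shift = [0, 60, 130, 190, 1400, 1460, 1530, 3500, 3560]
print(time_off_task(shift), automated_discipline(shift))  # 2940 WARNING
```

The design choice worth noting is that the warning requires no human judgment at any step, which is the automaticity of power discussed above.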


Chapter 30: Whistleblowing

Chapter 30, Exercise 1 Compare the legal treatment of three whistleblowers: Daniel Ellsberg, Chelsea Manning, and Edward Snowden. What does the comparison reveal about how the law treats disclosure of surveillance programs versus other government secrets?

Framework for Discussion: This comparison reveals important patterns in how the legal system responds to disclosures of different kinds of government wrongdoing.

Daniel Ellsberg (Pentagon Papers, 1971): Ellsberg leaked a classified Defense Department history of the Vietnam War to newspapers. The Nixon administration's Espionage Act prosecution collapsed when it was revealed that the government had engaged in gross misconduct, including the White House Plumbers' break-in at the office of Ellsberg's psychiatrist; the charges were dismissed and Ellsberg was never convicted. In the parallel Pentagon Papers case (New York Times Co. v. United States, 1971), the Supreme Court declined to allow prior restraint on publication.

Chelsea Manning (WikiLeaks, 2010): Manning, a US Army intelligence analyst, leaked hundreds of thousands of diplomatic cables, military incident logs, and the "Collateral Murder" video to WikiLeaks. She was court-martialed under the Espionage Act and the UCMJ, convicted, and sentenced to 35 years; President Obama commuted the sentence in January 2017, after she had served roughly seven years in custody. The disclosures included evidence of civilian casualties, the contents of diplomatic cables, and Guantanamo detention files.

Edward Snowden (NSA Programs, 2013): Snowden, an NSA contractor, disclosed classified documents about bulk surveillance programs, including PRISM and Section 215 metadata collection, to journalists. He was charged under the Espionage Act and resides in Russia, where he received temporary asylum and later permanent residency, never having returned to the US to face trial. Unlike Manning, Snowden gave the documents to journalists who curated what was published rather than to an outlet that published them in bulk; unlike Ellsberg, he remains under indictment.

What the comparison reveals about surveillance disclosures: Both Manning and Snowden disclosed information about surveillance programs and neither has been treated leniently. The Espionage Act, originally enacted in 1917, makes no distinction between disclosure to enemies and disclosure to journalists; it contains no public-interest defense. Ellsberg's escape from conviction reflects procedural luck (prosecutorial misconduct) rather than legal protection. The US legal system provides essentially no formal protection for those who disclose surveillance abuses — a significant gap in accountability infrastructure.


Part 7: Resistance, Ethics, and Futures

Chapter 31, Exercise 3 Why have US privacy laws developed in a "sectoral" rather than "comprehensive" model? What are the advantages and disadvantages of this approach compared to GDPR?

The United States has not enacted a comprehensive federal privacy law; instead, privacy is regulated through a patchwork of sector-specific laws: HIPAA for health data, FERPA for education records, COPPA for children's online data, ECPA for electronic communications, VPPA for video rental records, and the Fair Credit Reporting Act for credit information. This sectoral approach reflects several features of US political culture and institutional structure.

Political economy: Industry lobbying has consistently blocked comprehensive federal privacy legislation. Sectoral laws, which emerge from specific triggering events (the disclosure of Robert Bork's video rental history that prompted the VPPA; the HIV-related privacy concerns that shaped HIPAA), address specific identified harms without constraining industry across the board. Comprehensive legislation would threaten business models across multiple sectors simultaneously, generating unified opposition.

Federalist structure: US constitutional structure allocates significant regulatory authority to states, and the EU's comprehensive model depends on supranational harmonization mechanisms unavailable in the US constitutional design. California's CCPA represents the most significant state-level comprehensive effort.

Advantages of the sectoral approach: Sector-specific laws can be precisely tailored to the characteristics and risks of specific industries; HIPAA's technical security standards are appropriate for healthcare contexts in ways that differ from what is appropriate for retail. Sectoral laws are also easier to update in specific domains without requiring renegotiation of a comprehensive framework.

Disadvantages: The sectoral approach creates significant gaps. Many of the most significant contemporary data collection activities — by social media platforms, data brokers, commercial app developers — fall into no regulated sector. GDPR's comprehensive coverage means that any entity processing EU residents' data faces regulatory requirements; US law leaves large swaths of commercial surveillance unregulated. The patchwork also creates compliance complexity for entities operating across multiple sectors.


Chapter 33: Encryption and Technical Resistance

Chapter 33, Exercise 2 What is "obfuscation" as a privacy strategy, and what are its limitations? Is it ethically different from encryption?

Obfuscation, as theorized by Finn Brunton and Helen Nissenbaum in their 2015 book of that name, is the deliberate introduction of misleading, irrelevant, or false information into a surveillance data stream to make individual records harder to isolate, analyze, or act upon. Unlike encryption (which prevents data from being read) or anonymization (which removes identifying information), obfuscation leaves data visible but degrades its quality or interpretability.

Examples: Browser extensions that generate fake browsing traffic alongside real traffic, making behavioral profiles noisier and less reliable. Loyalty card sharing networks in which users swap cards, muddying purchase histories. Automated search queries added to real queries to obscure search patterns. Wearing "CV Dazzle" camouflage makeup designed to confuse facial recognition algorithms.
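
The decoy-query idea, implemented in tools such as TrackMeNot (co-developed by Nissenbaum), can be sketched in a few lines. Everything below is illustrative rather than a reproduction of any real tool: the topic pool, the pacing, and the issue_query stub are invented for this example.

```python
import random
import time

# Hypothetical decoy pool; a real tool draws on much larger, regularly
# refreshed sources (e.g., news feeds) so that decoys stay plausible.
DECOY_TOPICS = [
    "weather radar", "pasta recipes", "used bicycles",
    "flight status", "movie showtimes", "hiking trails",
]

def issue_query(query: str) -> None:
    # Stub: a real extension would send the query through the same
    # browser context as the user's genuine searches.
    print(f"search: {query}")

def search_with_decoys(real_query: str, n_decoys: int = 3) -> None:
    """Interleave one genuine query with decoys in random order.

    Note the limitation: the observer still sees the real query.
    Obfuscation does not hide data; it degrades the reliability of
    inferences drawn from the data.
    """
    queries = [real_query] + random.sample(DECOY_TOPICS, n_decoys)
    random.shuffle(queries)
    for q in queries:
        issue_query(q)
        time.sleep(random.uniform(0.5, 3.0))  # human-like pacing, to resist filtering

search_with_decoys("symptoms of anxiety")
```

The randomized pacing illustrates the arms race discussed below: decoys issued in mechanical bursts are exactly the kind of pattern a machine learning classifier can learn to discard.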

Limitations: Obfuscation is not security. Sophisticated adversaries with sufficient data can often filter noise from signal. Machine learning systems can sometimes identify obfuscatory behavior and discount or eliminate it. Obfuscation strategies require effort, technical knowledge, and consistency that may be beyond most users. Some obfuscation strategies (fake browsing traffic) may conflict with bandwidth constraints or terms of service.

Is obfuscation ethically different from encryption? Encryption is a positive technical protection — it makes data inaccessible. Obfuscation is a form of deception — it makes data present but misleading. Some critics argue that large-scale obfuscation degrades data infrastructure in ways that have collateral effects (noisier data leads to worse services for other users). Brunton and Nissenbaum argue that obfuscation is ethically justified as a defensive tactic available to those who cannot avoid surveillance — those who cannot opt out and cannot encrypt have a reasonable interest in degrading the quality of data collected from them. This argument treats obfuscation as an act of self-defense rather than deception in the morally simple sense.


Part 8: Capstone and Synthesis

Chapter 36: Race and Surveillance

Chapter 36, Exercise 1 What does Simone Browne mean when she calls surveillance "racializing"? How is racial surveillance different from surveillance that is merely racially disproportionate?

Browne argues in Dark Matters (2015) that surveillance practices do not merely reflect pre-existing racial categories but actively produce and enforce racial identities and hierarchies. This is the distinction between surveillance that happens to have racially disparate effects and surveillance that is a technology of racialization.

Surveillance that is merely racially disproportionate might be race-neutral in design but racially skewed in effect: consider a facial recognition system built with no racial intent that nonetheless performs worse on darker-skinned faces because its training data underrepresented them. The racially disparate outcome is a serious problem, but the system was not designed to racialize.

Racializing surveillance, in Browne's analysis, constructs race itself through surveillance practices. Her analysis of lantern laws is the key historical example: the lantern requirement applied to enslaved Black people created a category of person who was required to be perpetually legible to white authority, whose legitimate presence in public space at night was always conditional and marked. The surveillance practice did not merely monitor a pre-existing racial category; it enacted racial meaning — it made Blackness in public after dark presumptively suspicious. Similarly, contemporary predictive policing systems trained on historical arrest data encode prior racist enforcement patterns into algorithmic predictions, then generate enforcement actions that produce new data confirming the patterns. The system is not merely racially biased; it actively reproduces racial hierarchy through its operations.
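
A toy simulation makes the feedback loop visible. The model below is deliberately crude and does not represent any real predictive policing product: by construction, two districts have identical true offense rates, and patrols are simply sent wherever recorded arrests are highest.

```python
import random

# Toy model of the feedback loop: two districts with IDENTICAL true offense
# rates; district A merely starts with more recorded arrests because of
# historically biased enforcement.
TRUE_OFFENSE_RATE = 0.10
arrests = {"A": 60, "B": 40}   # the biased historical record
PATROLS_PER_WEEK = 50

random.seed(1)
for week in range(20):
    # The "prediction": send patrols wherever recorded arrests are highest.
    hot_spot = max(arrests, key=arrests.get)
    # Patrols detect offenses at the same rate they would anywhere else.
    detected = sum(random.random() < TRUE_OFFENSE_RATE
                   for _ in range(PATROLS_PER_WEEK))
    arrests[hot_spot] += detected

# Only district A's record grows -- not because behavior differs, but
# because only A was watched. The new data appears to vindicate the model.
print(arrests)
```

The point of the simulation is that the disparity is produced entirely by where attention is directed: the data never gets a chance to correct itself, because the unpatrolled district generates no records at all.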

Chapter 36, Exercise 3 How does the concept of "discriminatory surveillance" help explain why civil rights organizations consistently oppose expansions of surveillance infrastructure even when those expansions are presented as serving public safety?

Discriminatory surveillance refers to the disproportionate targeting of surveillance at communities of color, immigrant communities, Muslim communities, and other historically marginalized groups — not necessarily through explicit discriminatory intent but through the operation of systems designed in, and informed by, environments saturated with racial inequality.

Civil rights organizations' opposition to surveillance infrastructure expansion reflects a well-documented historical record: each major expansion of surveillance capability has been directed most intensively at Black, immigrant, Muslim, and politically dissident communities. COINTELPRO's targets prominently included Black civil rights and Black liberation organizations. Post-9/11 surveillance expansion was directed disproportionately at Muslim communities. Predictive policing systems have reproduced and amplified existing racial enforcement disparities. This is not coincidence; it reflects the structural relationship between surveillance capability and racial power: new surveillance tools are trained on, and deployed against, communities that existing power arrangements have already marked as suspect.

Civil rights organizations therefore resist the "public safety" framing not because they do not value public safety but because history demonstrates that "surveillance for public safety" predictably becomes surveillance of communities of color, whose mere presence is treated as grounds for suspicion. The expansion of surveillance infrastructure creates capabilities that will be used in accordance with existing power relations, not in defiance of them.


Chapter 38: Surveillance of Children

Chapter 38, Exercise 2 What is the "right to be forgotten" and how does it apply specifically to children? What are the arguments for and against applying this right to content that was voluntarily posted by children themselves?

The right to be forgotten (more precisely, the right to erasure under GDPR Article 17) allows individuals to request that certain personal data about them be deleted by data controllers, under specified conditions — including when the data is no longer necessary for its original purpose, when consent is withdrawn, or when the data has been unlawfully processed.
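
The logic of Article 17 can be expressed schematically. The sketch below models only the three grounds named above; the article lists additional grounds (such as objection under Article 21) and, in Article 17(3), exemptions (freedom of expression, legal obligations, public interest archiving) that a real data controller must also weigh.

```python
from dataclasses import dataclass

@dataclass
class ErasureRequest:
    # The three Article 17 grounds discussed above; others omitted for brevity.
    purpose_lapsed: bool        # data no longer necessary for its original purpose
    consent_withdrawn: bool     # processing rested on consent, now withdrawn
    unlawfully_processed: bool  # processing had no lawful basis

def erasure_applies(req: ErasureRequest) -> bool:
    """Schematic check: any single ground suffices to trigger the right,
    subject to the Article 17(3) exemptions not modeled here."""
    return req.purpose_lapsed or req.consent_withdrawn or req.unlawfully_processed

print(erasure_applies(ErasureRequest(False, True, False)))  # True
```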

The specific application to children is stronger than for adults: GDPR's recitals specifically reference children's data protection needs, and the UK implementation (UK GDPR and the Age Appropriate Design Code) includes particularly robust provisions for children's data. The argument for a stronger right to erasure for children rests on developmental grounds: children's capacity for understanding the long-term implications of their online disclosures is limited by cognitive development; information posted at 13 should not define opportunities at 23.

Arguments for applying erasure rights even to voluntarily posted content: Children cannot give fully informed consent to sharing information whose long-term implications they cannot foresee. Social media companies have designed platforms to maximize engagement and encourage sharing in ways that exploit developmental vulnerabilities of adolescent psychology (peer-approval seeking, present bias, risk-taking). The concept of "voluntary" posting is complicated when platforms are designed to make sharing the path of least resistance.

Arguments against, or complicating: The historical record has value; "forgetting" content creates problems for accountability. Distinguishing content an adult should be able to erase (an embarrassing teenage photo) from content in which the public has a legitimate interest (a public statement by a young person about a public matter) requires line-drawing that law has difficulty accomplishing cleanly. There is also a question of third-party rights: content posted by a teenager about interactions with others cannot be erased without affecting those others' records.


Chapter 39: Designing for Privacy

Chapter 39, Exercise 1 What is "privacy by design" and what distinguishes it from a compliance-based approach to privacy protection?

Privacy by design, developed by Ann Cavoukian in the 1990s and now reflected in GDPR Article 25's requirement of data protection by design and by default, is an approach to systems and institutional design that builds privacy protections into systems from the ground up, rather than retrofitting them as compliance measures after systems are designed and deployed.

Cavoukian articulated seven foundational principles of privacy by design: (1) proactive not reactive — anticipate and prevent privacy risks before they occur; (2) privacy as the default setting — privacy-protective settings should be what users get without taking any action; (3) privacy embedded into design — not added as an afterthought; (4) full functionality — positive-sum, not zero-sum, rejecting false trade-offs such as privacy versus security; (5) end-to-end security — protection throughout the data lifecycle; (6) visibility and transparency; and (7) respect for user privacy — keeping the system user-centric.
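
Principle 2 has a direct engineering expression: the state a user gets by doing nothing is the most protective one. A minimal sketch, with invented setting names standing in for any real product's configuration:

```python
from dataclasses import dataclass

@dataclass
class UserPrivacySettings:
    # Every data-sharing capability defaults to OFF: doing nothing leaves
    # the user in the most protective state. All names here are invented.
    analytics_tracking: bool = False
    personalized_ads: bool = False
    location_history: bool = False
    share_with_partners: bool = False
    data_retention_days: int = 30   # shortest retention period by default

    def enable(self, setting: str) -> None:
        """Opting in is an explicit, per-setting act; there is no 'accept all'."""
        if not isinstance(getattr(self, setting, None), bool):
            raise ValueError(f"unknown or non-toggle setting: {setting}")
        setattr(self, setting, True)

# A compliance-first design inverts this: collection defaults to ON and the
# user must discover and disable each setting individually.
settings = UserPrivacySettings()
settings.enable("location_history")   # deliberate, granular opt-in
```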

The compliance-based approach treats privacy as a legal risk to be managed: companies design systems for functionality and commercial objectives, then legal and compliance teams assess what regulations require and add features or limitations to achieve regulatory compliance. Privacy becomes a constraint on design rather than a design objective. This approach predictably produces systems that are technically compliant but practically privacy-invasive: cookie consent banners that are designed to maximize "accept all" clicks while formally satisfying GDPR's consent requirements are the paradigmatic example.

Privacy by design changes the design question from "what do we need to do to comply?" to "how do we design this system so that privacy is protected throughout?" This shifts responsibility from legal/compliance teams to product and engineering teams, and requires privacy expertise to be embedded in design processes from their inception.


Chapter 40: The Future of Surveillance

Chapter 40, Exercise 3 Jordan Ellis — the student character who has accompanied us through this textbook — began as someone who "naively accepted" surveillance as a background condition of modern life and ends the textbook with "critical awareness and a commitment to action." What intellectual moves constitute the transition from naïve acceptance to critical awareness? What does "action" look like at the individual, community, and structural levels?

Framework for Discussion: This capstone exercise invites synthesis across the full textbook and reflection on the relationship between knowledge and agency.

The transition from naïve acceptance to critical awareness involves several intellectual moves that the textbook has scaffolded. First, recognition: learning to see surveillance in contexts that had seemed ordinary or invisible — the workplace monitoring system, the loyalty card, the browser cookie — and naming these as surveillance practices with power implications. Second, analysis: moving from recognition to understanding how surveillance practices function, who benefits, who bears costs, and what mechanisms (legal, technical, social) sustain them. Third, evaluation: developing a framework for assessing surveillance practices — what makes some legitimate and others not, what trade-offs are acceptable, what rights claims are relevant. Fourth, situating: understanding one's own position in surveillance systems — as a subject, as someone who may participate in lateral surveillance, as a potential actor with some capacity for change.

"Action" at different levels: - Individual: Technical self-protection (encrypted communication, VPN use, privacy browser settings), informed consumer choices, opting out of data collection where possible, knowing one's legal rights, being a thoughtful participant in lateral surveillance systems (not contributing to neighborhood apps in ways that target people of color). - Community: Organizing with others to contest specific surveillance deployments (opposing facial recognition at local transit systems), supporting organizations doing surveillance accountability work (EFF, ACLU), educating community members about surveillance practices, practicing collective privacy norms. - Structural: Civic engagement on surveillance regulation — voting for candidates with thoughtful surveillance policy positions, engaging in public comment processes, supporting legislative reform efforts, participating in democratic deliberation about the appropriate scope of surveillance in public life.

A sophisticated discussion will note that individual action is necessary but insufficient. The structural conditions of surveillance capitalism cannot be addressed through individual privacy hygiene alone; they require collective political action. But individual action has value both intrinsically (exercising autonomy) and instrumentally (building the habits and knowledge that sustain collective action).


These model answers and discussion frameworks are intended as resources for students and instructors, not as definitive adjudications of complex questions. Many exercises in this textbook do not have single correct answers; the goal is rigorous analysis, not factual recall. Students are encouraged to challenge these frameworks and to develop their own well-reasoned positions.