Chapter 32 Exercises: Counter-Surveillance


Exercise 32.1 — Threat Modeling: Who Is Your Adversary?

Type: Reflective analysis | Difficulty: Beginner | Time: 20-30 minutes

Effective counter-surveillance starts with threat modeling: understanding specifically who or what you are protecting yourself from, so you can choose appropriate tools.

Instructions: Read each scenario below. For each, identify: (1) Who is the likely adversary? (2) What information are they most likely to seek? (3) Which counter-surveillance tools from this chapter would be most useful? (4) Which would be irrelevant or provide false confidence?

Scenario A: You are a college student writing a paper critical of your university's administration. You're concerned about retaliation if your research is discovered before publication.

Scenario B: You are a domestic abuse survivor who has left an abusive partner. Your partner has shown signs of tracking your location and monitoring your communications.

Scenario C: You work for a health insurance company and are considering becoming a whistleblower about discriminatory claims-denial practices you've witnessed. You want to contact a journalist without being identified.

Scenario D: You live in a country with internet censorship that blocks access to news sites, VPN services, and social media. You want to access blocked content and communicate with people outside your country.

Scenario E: You are an ordinary person who wants to reduce advertising tracking and limit how much of your behavioral data is sold to marketing companies, but you have no unusual concerns about government surveillance or physical threats.

Reflection: Which scenario most resembles your actual situation? What does your threat model reveal about which tools you should prioritize?


Exercise 32.2 — The "Nothing to Hide" Argument: Map and Refute

Type: Argument mapping | Difficulty: Intermediate | Time: 35 minutes

The "nothing to hide" argument is the most common dismissal of privacy concerns. This exercise develops your ability to identify its structure and respond to it rigorously.

Part A — Map the argument. The "nothing to hide" argument can take several forms. Identify the implicit premises in each version below, and draw an argument map showing the premises, inferences, and conclusion.

Version 1: "If you have nothing to hide, you have nothing to fear from surveillance."

Version 2: "Privacy is just about protecting criminals and people with something to hide. Law-abiding citizens benefit from surveillance because it catches criminals."

Version 3: "You already share your information with Google, Facebook, and your phone company. Why do you care if the government sees it too?"

Part B — Refute each version. For each version, write a two-paragraph refutation. Each refutation must: (a) accurately identify the argument's core premise, (b) provide a specific counterexample or empirical challenge to that premise, and (c) articulate what value the argument misses.

Part C — Steelman. Write the strongest possible one-paragraph version of the "nothing to hide" argument — the version that takes privacy concerns most seriously while still concluding that surveillance is acceptable. What does steelmanning reveal about what's genuinely contested?


Exercise 32.3 — Hands-On Privacy Audit

Type: Practical/experiential | Difficulty: Beginner | Time: 1-2 hours

Instructions: Conduct a privacy audit of your digital life. This exercise asks you to assess your current practices and identify your highest-priority improvements.

Step 1 — Browser audit:
- What browser do you currently use?
- Open your browser's extension list. Which extensions are installed? Do any collect data you're unaware of?
- Open your browser's settings and locate the cookie and tracking controls. What is currently enabled?
- Run a fingerprint test at coveryourtracks.eff.org (an EFF tool that measures how trackable your browser is). What does your browser fingerprint reveal?
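Fingerprinting works not because any single attribute is unique, but because the combination is. The following is a minimal Python sketch of that idea, using an invented eight-browser population (real fingerprints combine dozens of attributes: fonts, canvas rendering, WebGL, and more):

```python
import math
from collections import Counter

# Toy population of 8 browsers, each described by three attributes.
population = [
    ("Chrome/120",  "1920x1080", "UTC-5"),
    ("Chrome/120",  "1920x1080", "UTC-8"),
    ("Chrome/120",  "2560x1440", "UTC-5"),
    ("Firefox/121", "1920x1080", "UTC-5"),
    ("Firefox/121", "1366x768",  "UTC+0"),
    ("Safari/17",   "2560x1600", "UTC-8"),
    ("Chrome/120",  "1366x768",  "UTC+0"),
    ("Firefox/121", "1920x1080", "UTC+1"),
]

def surprisal_bits(value, values):
    """Bits of identifying information: -log2(fraction sharing this value)."""
    freq = Counter(values)[value] / len(values)
    return -math.log2(freq)

me = population[5]  # the Safari user
for i, name in enumerate(["user agent", "screen", "timezone"]):
    column = [p[i] for p in population]
    print(f"{name}: {surprisal_bits(me[i], column):.2f} bits")

# The combination is what identifies you: count exact matches.
anonymity_set = [p for p in population if p == me]
print("browsers matching my full fingerprint:", len(anonymity_set))
```

When you run the EFF test, it reports exactly this kind of per-attribute "bits of identifying information" against its visitor population.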

Step 2 — App permission audit:
- On your smartphone, go to Settings > Privacy and review which apps have access to: location, microphone, camera, contacts, photos
- Identify three apps with permissions they don't functionally need
- Revoke those permissions and note whether the apps still function normally

Step 3 — Account security audit:
- List your five most important accounts (email, banking, social media, etc.)
- For each: (a) Is your password unique? (b) Is two-factor authentication enabled?
- If you don't use a password manager, estimate how many passwords you reuse
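One way to check a password against known breaches without revealing it is the k-anonymity range query used by the Pwned Passwords service: only the first five hex characters of the password's SHA-1 hash are ever sent to the server. A sketch of the client-side half (no network call is made here; the endpoint named in the comment is the service's publicly documented one):

```python
import hashlib

def hibp_range_query_parts(password: str):
    """Split a password's SHA-1 into the 5-char prefix that would be sent to
    the Pwned Passwords API and the 35-char suffix that never leaves your
    machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("hunter2")
print("sent to server:", prefix)
print("kept locally:  ", suffix)
# You would then fetch https://api.pwnedpasswords.com/range/<prefix> and check
# whether <suffix> appears in the returned list. The server only learns that
# you asked about one of the several hundred hashes sharing that prefix --
# never which one.
```

Note the irony: the same k-anonymity idea that fails to protect "anonymized" datasets (Exercise 32.4) works well here, because the anonymity set is constructed deliberately.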

Step 4 — Communication audit:
- Which messaging apps do you currently use? Do any of them provide end-to-end encryption (E2EE)?
- Who in your life already uses Signal?

Deliverable: Write a 400-word audit report covering: what you found, your three highest-priority security/privacy improvements, and what's preventing you from making those improvements (if anything).


Exercise 32.4 — The Re-Identification Challenge

Type: Research and analysis | Difficulty: Intermediate | Time: 45 minutes

This exercise illustrates the limits of "anonymization" using public information about real re-identification cases.

Part A — Research the following cases:
1. The AOL search data release (2006) — find the original New York Times article "A Face Is Exposed for AOL Searcher No. 4417749"
2. The Netflix Prize de-anonymization (Narayanan & Shmatikov, 2008) — read the abstract or a summary of the paper
3. The New York City taxi dataset de-anonymization (2014)
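The taxi case is worth pausing on: the released medallion numbers were "anonymized" with unsalted MD5, but medallions follow short fixed patterns, so the entire space can be hashed and inverted in seconds. A toy reconstruction in Python (the digit-letter-digit-digit pattern below is one plausible format, used here purely for illustration):

```python
import hashlib
import string
from itertools import product

digits, letters = string.digits, string.ascii_uppercase

# Precompute MD5 for every candidate medallion of this pattern:
# 10 * 26 * 10 * 10 = 26,000 candidates -- trivial to enumerate.
rainbow = {}  # MD5 hex -> plaintext medallion
for d1, l, d2, d3 in product(digits, letters, digits, digits):
    plate = f"{d1}{l}{d2}{d3}"
    rainbow[hashlib.md5(plate.encode()).hexdigest()] = plate

# Pretend this hash appeared in the "anonymized" dataset:
leaked = hashlib.md5(b"5X44").hexdigest()
print("recovered medallion:", rainbow[leaked])
print("table size:", len(rainbow))
```

The lesson: hashing is only as strong as the input space. When the inputs are drawn from a small, structured set, an unsalted hash is barely an obstacle at all.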

Part B — For each case, answer:
1. What was claimed about the anonymization?
2. What information was actually required to de-anonymize the data?
3. What harm could have resulted from the de-anonymization?
4. Who bears responsibility — the data publisher, the researchers who re-identified it, or both?
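The common mechanics behind these cases can be sketched as a join on quasi-identifiers. Latanya Sweeney's classic finding is that roughly 87% of the US population is uniquely identified by ZIP code, birth date, and sex alone. A toy Python illustration, with invented records:

```python
# An "anonymized" dataset keeps quasi-identifiers (ZIP, birth date, sex)
# even after names are removed. A public dataset that carries names and
# the same quasi-identifiers lets an attacker join the two.

medical = [  # "anonymized": names removed, quasi-identifiers kept
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-01-15", "sex": "M", "diagnosis": "asthma"},
]

voter_roll = [  # public record that carries names
    {"name": "A. Resident", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "B. Resident", "zip": "02139", "dob": "1980-03-02", "sex": "M"},
]

quasi = ("zip", "dob", "sex")
index = {tuple(v[k] for k in quasi): v["name"] for v in voter_roll}

for record in medical:
    name = index.get(tuple(record[k] for k in quasi))
    if name:
        print(f"re-identified: {name} -> {record['diagnosis']}")
```

Only the first medical record is re-identified here, but note what made it possible: nothing in the "anonymized" data was secret, only the linkage was.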

Part C — Apply to your own data: Choose one type of "anonymized" data that companies claim to collect about you (location data, browsing history, purchase history, etc.). Research: what information would theoretically be required to re-identify you from that dataset? Write a 250-word assessment of the claim that your data is "anonymized."


Exercise 32.5 — Going Dark Debate: Structured Fishbowl

Type: Structured debate | Difficulty: Intermediate | Time: 60 minutes (group)

The "going dark" debate — law enforcement access to encrypted communications — is one of the central policy debates in digital rights. This exercise uses a fishbowl format to develop sophisticated positions.

Setup: Four students sit in the inner circle (fishbowl); the rest observe and take notes. After 15 minutes, rotate.

Inner Circle Positions:
- FBI Director: Argue that exceptional access to encrypted communications is necessary for law enforcement and national security. Ground your argument in specific cases where encrypted communications prevented investigation.
- Cryptographer: Argue that backdoors are technically impossible to implement securely — any weakness that allows government access will be exploited by adversaries. This is not a political position but a mathematical one.
- Civil Liberties Advocate: Argue that the "going dark" narrative is misleading and that surveillance capabilities have expanded, not contracted, in the digital age.
- Journalist: Argue from the perspective of a reporter who depends on source confidentiality — what does weakened encryption mean for press freedom?

Observers: Take structured notes on: strongest arguments made, logical fallacies identified, what evidence was cited or missing.

Debrief (full group): Is there a policy position that respects both law enforcement needs and security/privacy concerns? Or is this a genuine zero-sum trade-off?


Exercise 32.6 — Obfuscation Ethics

Type: Ethical analysis | Difficulty: Advanced | Time: 40 minutes

Helen Nissenbaum (with Finn Brunton, in their 2015 book Obfuscation) argues that obfuscation — deliberately generating misleading information to pollute surveillance datasets — is a legitimate privacy strategy when surveillance is nonconsensual and unavoidable.
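Before weighing the ethics, it helps to see the mechanism. The following is a toy simulation (not a claim about AdNauseam's actual effectiveness): genuine clicks concentrate on a few topics, while obfuscation clicks on everything, burying the signal a profiler relies on.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the sketch is reproducible

topics = ["health", "finance", "travel", "sports", "politics",
          "cooking", "autos", "fashion", "gaming", "gardening"]

# The genuine interest profile: clicks concentrated on two topics.
real_clicks = ["health"] * 12 + ["finance"] * 8

def profile(clicks):
    """What an ad profiler infers: the top topics by click count."""
    return [t for t, _ in Counter(clicks).most_common(3)]

# AdNauseam-style obfuscation: "click" ads indiscriminately,
# so uniform noise swamps the genuine signal.
noise = [random.choice(topics) for _ in range(200)]

print("without obfuscation:", profile(real_clicks))
print("with obfuscation:   ", profile(real_clicks + noise))
```

Whether 200 noise clicks actually defeat a real profiler is exactly the empirical question Position 3 below raises; the sketch only shows why noise injection is plausible as a strategy.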

Part A — Read the following positions and evaluate each:

Position 1 (Nissenbaum): "When the only options are giving up privacy or resisting through whatever means are available, obfuscation is a legitimate — perhaps the only — form of resistance available to ordinary people."

Position 2 (Critic): "Obfuscation is dishonest. It involves deliberately deceiving data collectors. Even if those collectors are doing something wrong, responding with deception is ethically problematic — it degrades the information environment for everyone, including legitimate uses."

Position 3 (Pragmatist): "The ethics of obfuscation are irrelevant compared to its effectiveness. If individual obfuscation doesn't meaningfully degrade surveillance systems, it's not worth doing. The question is empirical, not philosophical."

Part B — Apply ethical frameworks. Evaluate obfuscation (specifically, using AdNauseam) through three ethical lenses:
1. Consequentialism: What are the likely outcomes? Who benefits? Who is harmed?
2. Deontology: Are there duties or rights that support or prohibit obfuscation?
3. Virtue ethics: What character traits does obfuscation express? Is it consistent with a person of good character?

Part C: Write your own 300-word position on the ethics of obfuscation. State where you come down and why.