Exercises — Chapter 38: The Future of Surveillance


Exercise 38.1 — Scenario Analysis and Extrapolation (Individual, 60–75 minutes)

Overview: Develop one of the chapter's three 2050 scenarios in detail, then analyze its implications.

Instructions:

Select one of the three 2050 scenarios presented in the chapter (Libertarian, Authoritarian, or Democratic-Regulated). Write a 700–900 word analysis that:

  1. Extends the scenario: Add three to four specific details about what daily life would look like in this scenario — how a person would commute to work, interact with healthcare, seek employment, and move through public space
  2. Traces the path: Identify three to five specific policy decisions, technological developments, or political events (drawing from present-day trends) that would need to occur to produce this scenario by 2050
  3. Identifies resistances: What forces (political, economic, social) would resist or complicate this scenario, and what forms would that resistance take?
  4. Evaluates legitimacy: Is the surveillance system described in your scenario legitimate? By what standard? Who would accept this standard as valid?

Class discussion: After individual writing, compare scenarios across the class. Where do the paths diverge? What does the comparison reveal about the choices that matter most?


Exercise 38.2 — AI Surveillance Red Team (Group, 75–90 minutes)

Overview: Conduct a "red team" evaluation of an AI surveillance system — identify its failure modes, bias risks, and misuse potential.

Setup:

Your group is a red team hired to evaluate the deployment of an AI-powered predictive threat assessment system for a large urban transit authority. The system claims to:

  • Identify passengers who may be at risk of violent behavior using a combination of gait recognition, facial expression analysis, and behavioral anomaly detection
  • Generate real-time alerts for transit security personnel
  • Integrate with law enforcement databases to flag individuals with prior criminal records
  • Learn continuously from security staff feedback on alert accuracy

Your red team analysis should address:

  1. Accuracy: What failure modes are predictable given the capabilities and limitations of AI surveillance described in this chapter? Where will it produce false positives? False negatives?

  2. Bias: Based on the chapter's discussion of AI bias amplification, which populations are likely to be disproportionately flagged? How would the continuous learning mechanism interact with initial biases? (A simulation sketch follows this list.)

  3. Accountability: What happens when the system flags the wrong person? Who is responsible? What are the mechanisms for challenge and correction?

  4. Mission creep: Once deployed, what are the likely pressures to extend this system's use beyond its original scope? What data will it generate that is valuable for other purposes?

  5. Recommendation: Should the transit authority deploy this system? If yes, under what conditions? If no, what alternatives exist for achieving the safety goals?
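
A concrete way into question 2 is to simulate the feedback loop directly. The sketch below (Python, with entirely hypothetical parameters) gives two groups identical true incident rates but starts the system watching group B slightly more closely; because the model retrains only on incidents its own alerts surface, the initial disparity compounds rather than corrects. Running it shows the B/A scrutiny ratio climbing every round even though the two groups behave identically.

    # Feedback-loop sketch for question 2; every parameter is hypothetical.
    TRUE_RATE = 0.01                      # identical real incident rate for both groups
    alert_score = {"A": 0.50, "B": 0.55}  # assumed initial disparity in scrutiny
    GAIN = 0.3                            # strength of the retraining update

    for rnd in range(1, 9):
        for group in alert_score:
            # Confirmed incidents scale with how closely a group is watched,
            # not with how often its members actually offend: incidents the
            # system never flags are never observed, so they never correct it.
            confirmed = alert_score[group] * TRUE_RATE
            # The learner reads confirmations as evidence the group is riskier.
            alert_score[group] *= 1 + GAIN * (confirmed / TRUE_RATE)
        ratio = alert_score["B"] / alert_score["A"]
        print(f"round {rnd}: B/A scrutiny ratio = {ratio:.3f}")

The point for your report is structural, not numerical: a system that learns only from its own alerts has no channel through which its initial assumptions can be falsified.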

Deliverable: A 600–800 word red team report with specific findings and recommendations.


Exercise 38.3 — The Minority Report Ethics Debate (Group Discussion, 60 minutes)

Overview: Engage with the pre-crime ethics question using contemporary examples.

Background reading (before class):

  • Read or review the basic premise of The Minority Report (Philip K. Dick, 1956)
  • Review the chapter's discussion of predictive policing and the Chicago Strategic Subject List

Debate questions (structured discussion, 10–12 minutes each):

  1. The accuracy threshold question: At what level of predictive accuracy (if any) would it be ethical to detain or restrict someone's freedom based on a predicted future crime they have not yet committed? What factors would matter beyond raw accuracy, such as the base rate of the predicted behavior? (A worked calculation follows this list.)

  2. The response gradient question: Detention is the most extreme intervention. Are there less extreme interventions — mandatory counseling, enhanced monitoring, restricted access to certain locations — that become acceptable at lower accuracy thresholds? Where does each intervention fall on the spectrum of ethical acceptability?

  3. The asymmetry question: The chapter notes that predictive systems in practice produce racially disproportionate outcomes even when race is not an explicit input variable. Does this asymmetry make predictive intervention unethical regardless of accuracy? Or can it be addressed through technical or procedural means?
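
For question 1, it helps to separate a system's advertised accuracy from the base rate of the behavior it predicts. The worked calculation below, in Python with hypothetical numbers, shows the gap: a predictor that is 99% accurate in both directions, applied to an act committed by 1 person in 10,000, is wrong about roughly 99 of every 100 people it flags.

    # Base-rate calculation for question 1; all numbers are hypothetical.
    population  = 1_000_000
    base_rate   = 0.0001   # 1 in 10,000 would actually commit the predicted crime
    sensitivity = 0.99     # share of true future offenders the system flags
    specificity = 0.99     # share of non-offenders the system correctly clears

    offenders       = population * base_rate                        # 100 people
    true_positives  = offenders * sensitivity                       # 99 flags
    false_positives = (population - offenders) * (1 - specificity)  # 9,999 flags

    precision = true_positives / (true_positives + false_positives)
    print(f"people flagged:  {true_positives + false_positives:,.0f}")
    print(f"wrongly flagged: {false_positives:,.0f}")
    print(f"chance a flag is correct: {precision:.1%}")  # about 1.0%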

Post-discussion written reflection (individual, 300–400 words): Identify the point in the discussion where you found the ethical questions most difficult to resolve. What made it difficult? What principle or framework would you need in order to answer confidently?


Exercise 38.4 — Neural Privacy Framework (Individual, Major Essay, 700–1,000 words)

Overview: Develop a privacy framework for brain-computer interface data.

Instructions:

The chapter identifies neural surveillance as "qualitatively different" from behavioral surveillance because it accesses mental processes rather than their outputs. Your essay should:

  1. Articulate the difference: Explain why the author characterizes neural surveillance as qualitatively different. Do you agree that the distinction is morally significant, or is it a difference of degree rather than kind?

  2. Apply existing frameworks: Take two of the privacy frameworks established earlier in the book (e.g., Nissenbaum's contextual integrity, the Warren-Brandeis right to be let alone, Prosser's four privacy torts) and apply them to neural data. What do these frameworks imply about the appropriate handling of neural data?

  3. Identify gaps: Where do existing frameworks fail to capture what is distinctive about neural privacy? What new principles or concepts would a neural privacy framework need?

  4. Draft a principle: Write one to three principles that you believe should govern the collection, use, retention, and sharing of neural data from brain-computer interfaces.

  5. Anticipate the objection: The leading developers of BCIs argue that neural data enables transformative medical applications — restoring movement to paralyzed patients, treating severe depression, enabling communication for people with ALS. How does your privacy framework accommodate these beneficial applications while guarding against surveillance misuse?


Exercise 38.5 — Write Your Own 2050 Reflection (Individual, 500 words)

Overview: Write the equivalent of Jordan's 2050 essay from your own perspective.

Instructions:

Write a 500-word reflection on what you believe — or what you fear — surveillance will look like in 2050. Your reflection should:

  • Be grounded in specific technologies and trajectories discussed in this chapter (not science fiction speculation)
  • Engage with the question of whether the history of surveillance is primarily a history of technology or a history of power
  • Reflect your own position — what you are willing to accept, what you would resist, what you would demand
  • End with a question rather than a conclusion, in the spirit of Jordan's essay

Instructor note: This exercise can be collected as an informal writing assignment or used as a basis for class discussion. It is also productive as a comparison — collecting these reflections at the beginning of the course and then at this point in the semester reveals how students' understanding has evolved.


Exercise 38.6 — The Deepfake Evidentiary Crisis (Group, 45–60 minutes)

Overview: Analyze the implications of synthetic media for surveillance-based evidence in criminal proceedings.

Scenario:

A defendant in a criminal case has been charged based in part on surveillance camera footage showing them at a crime scene. Their defense attorney introduces expert testimony that the footage could have been generated using commercially available deepfake software. The prosecution maintains that the footage is authentic.

Discussion questions:

  1. How would a court evaluate the authenticity of surveillance footage in a world where deepfakes are technically feasible? What standards of evidence would be needed?

  2. How does the possibility of deepfake-based false evidence affect the reliability of surveillance systems as an evidentiary foundation for criminal prosecution?

  3. If surveillance footage becomes presumptively contestable, what are the implications for the entire architecture of surveillance-based law enforcement?

  4. Is there a technical solution to the deepfake authentication problem? What would it require, and who would control it? (A capture-time provenance sketch follows this list.)

  5. Does the existence of deepfake technology create a moral hazard for guilty parties — a "my video was a deepfake" defense that creates reasonable doubt in cases where the footage is genuine?
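
One frequently proposed technical answer to question 4 is capture-time provenance: the camera binds each frame into a cryptographic hash chain and signs the result, so any later edit, splice, or reordering fails verification. The Python sketch below is a simplified illustration of the idea, not a description of any deployed system; real proposals (such as C2PA-style content credentials) use asymmetric signatures with hardware-protected keys, and the question of who holds and certifies those keys is exactly where the "who would control it" part of the question bites.

    # Capture-time provenance sketch for question 4. Illustration only:
    # real systems use asymmetric signatures and hardware key storage;
    # HMAC is used here just to stay within Python's standard library.
    import hashlib
    import hmac

    CAMERA_KEY = b"key-held-in-tamper-resistant-camera-hardware"  # hypothetical

    def chain_digest(frames):
        """Hash-chain the frames so content and order are both bound."""
        digest = b"\x00" * 32
        for frame in frames:
            digest = hashlib.sha256(digest + hashlib.sha256(frame).digest()).digest()
        return digest

    def sign(digest):
        return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

    original = [b"frame-0001", b"frame-0002", b"frame-0003"]
    tag = sign(chain_digest(original))          # produced at capture time

    # Any later edit, splice, or reordering breaks verification.
    spliced = [b"frame-0001", b"synthetic!", b"frame-0003"]
    print(hmac.compare_digest(tag, sign(chain_digest(original))))  # True
    print(hmac.compare_digest(tag, sign(chain_digest(spliced))))   # False

Note for the discussion that provenance only authenticates footage from cooperating, signing cameras; it does nothing for the vast installed base of unsigned video, which is one reason courts cannot simply wait for a technical fix.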