Chapter 35 Exercises: Facial Recognition
Exercise 35.1 — Myth vs. Reality: Facial Recognition Claims
Type: Critical analysis | Difficulty: Beginner | Time: 30 minutes
The chapter opens with a myth vs. reality format addressing common claims about facial recognition. This exercise extends that analysis.
Instructions: For each claim below, write a 100-150 word myth-busting response. Your response should: (a) state why the claim is misleading or false, (b) state what is actually true, and (c) explain why the distinction matters for policy.
Claim 1: "Facial recognition is more accurate than human eyewitness identification, so it's an improvement over the error-prone eyewitness testimony that leads to wrongful convictions."
Claim 2: "Facial recognition just confirms what we already know from CCTV footage — it's just automating what police would have done anyway by asking people to identify faces."
Claim 3: "The companies developing facial recognition are actively working to eliminate accuracy disparities, so any current bias problems will soon be resolved by technical improvement."
Claim 4: "Real-time facial recognition in public spaces is already the norm in China, so resisting it in the US and EU is fighting the tide — it's coming regardless."
Claim 5: "People who oppose facial recognition are protecting criminals. Law enforcement needs every tool available to identify violent criminals and terrorists."
Exercise 35.2 — The Gender Shades Study: Reading Primary Research
Type: Research analysis | Difficulty: Intermediate | Time: 45 minutes
Background: The Gender Shades paper by Joy Buolamwini and Timnit Gebru (2018) is one of the most influential audits of commercial AI systems. This exercise develops your ability to read and evaluate scientific research on AI systems.
Part A — Read the study. The paper "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" is freely available through the MIT Media Lab website and through academic databases. Read the abstract, introduction, methodology section, and results (you can skim the technical details, but make sure you understand the structure of the methodology).
Part B — Answer the following:
1. What was the research question?
2. Why did Buolamwini and Gebru create a new benchmark dataset (the Pilot Parliaments Benchmark, PPB) rather than using existing benchmarks? What was wrong with existing benchmarks?
3. What were the highest and lowest accuracy rates found? Which demographic group had the worst performance? Why?
4. The study evaluated commercial systems from major tech companies. What does the fact that these systems showed large accuracy disparities tell us about how they were developed and tested?
5. After the study was published, the companies improved their systems' accuracy for underrepresented groups. What does this improvement tell us about whether the prior accuracy disparities were inevitable?
Part C — Implications (300 words): The Gender Shades study focused on gender classification, not criminal suspect identification. How do its findings apply to law enforcement facial recognition? What additional concerns arise when you move from gender classification to suspect identification?
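To make the study's methodology concrete, the following is a minimal sketch of the kind of intersectional accuracy audit Gender Shades performs: accuracy is computed per demographic subgroup rather than only in aggregate, so disparities that an overall metric hides become visible. All data, labels, and subgroup names here are invented for illustration; they are not the study's actual figures.

```python
# Hypothetical illustration of a subgroup accuracy audit in the style of
# Gender Shades. The records below are made-up example data.
from collections import defaultdict

# Each record: (true_label, predicted_label, subgroup).
# Subgroups follow the study's intersectional design (skin type x gender).
predictions = [
    ("female", "female", "lighter_female"),
    ("female", "female", "lighter_female"),
    ("male",   "male",   "lighter_male"),
    ("male",   "male",   "lighter_male"),
    ("female", "male",   "darker_female"),   # error
    ("female", "female", "darker_female"),
    ("female", "male",   "darker_female"),   # error
    ("male",   "male",   "darker_male"),
    ("male",   "female", "darker_male"),     # error
]

def subgroup_accuracy(records):
    """Return per-subgroup accuracy and the gap between best and worst."""
    hits, totals = defaultdict(int), defaultdict(int)
    for true, pred, group in records:
        totals[group] += 1
        hits[group] += (true == pred)
    acc = {g: hits[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

acc, gap = subgroup_accuracy(predictions)
for group, a in sorted(acc.items()):
    print(f"{group:16s} accuracy = {a:.0%}")
print(f"largest disparity = {gap:.0%}")
```

Note that the overall accuracy here (6 of 9 correct) looks respectable, while the per-subgroup view exposes a large gap — which is precisely the point of disaggregated evaluation.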
Exercise 35.3 — Robert Williams Case: Rights and Accountability Analysis
Type: Case analysis | Difficulty: Intermediate | Time: 40 minutes
Instructions: Read the following description of the Williams case and answer the questions.
Robert Williams was arrested at his home in Detroit in January 2020, handcuffed in front of his family, and detained overnight. The arrest was based on a facial recognition match generated by DataWorks Plus software used by the Michigan State Police. Detroit Police's own policy prohibited arrest based solely on a facial recognition match — the policy required the match to be treated as an investigative lead requiring corroboration. The policy was violated.
Williams effectively had to demonstrate his innocence: he was able to show that his employer's records placed him at work at the time of the crime. He was released, but his DNA and fingerprints had been taken and were retained.
Questions:
- The burden of proof: The presumption of innocence means the state must prove guilt; the accused does not have to prove innocence. How did the facial recognition match effectively reverse this presumption? What would have happened to someone who didn't have clear alibi records?
- Policy violation and accountability: Detroit's policy was violated. Should the officers responsible face discipline? How can policies governing surveillance technology be enforced effectively in law enforcement cultures where "the computer says so" carries weight?
- DNA and fingerprint retention: Williams's DNA and fingerprints were taken as part of the arrest process. Even though the charges were dropped, this biometric data may be retained in law enforcement databases. Is this appropriate? What remedy, if any, should Williams have?
- Systemic accountability: Williams was one documented wrongful arrest. Research suggests he is not alone — many similar cases may never be documented or litigated. What kind of accountability mechanism would identify all wrongful arrests stemming from facial recognition and ensure they are addressed?
- Design question: If law enforcement facial recognition cannot be abolished, what policy requirements would you design to prevent wrongful arrests? Be specific: what must be required before an arrest can occur? Who must review what information?
Exercise 35.4 — Clearview AI and Public Anonymity
Type: Philosophical debate | Difficulty: Advanced | Time: 50 minutes
Clearview AI argues that scraping publicly posted photos is protected First Amendment activity — collecting information that is publicly available does not violate privacy. Critics argue that aggregating photos into a searchable face database creates qualitatively different privacy harm than the individual photos alone.
Part A — Map the legal argument. Clearview's First Amendment claim rests on the following premises:
- The First Amendment protects collection and use of publicly available information
- Photos posted online are publicly available
- Therefore, collecting those photos is First Amendment-protected
Evaluate each premise. Is each premise accurate? Does the conclusion follow from the premises?
Part B — Map the contextual integrity argument. Helen Nissenbaum's "contextual integrity" framework (from Chapter 33's further reading) holds that information flows are appropriate when they match the norms of the context in which the information originates.
- A photo posted on Instagram is shared in the context of social networking (friends, followers, etc.)
- Clearview's use is in the context of law enforcement identification
- Does this flow violate contextual integrity? What would Nissenbaum say?
Part C — The aggregation argument. Warren and Brandeis (1890) recognized that combining individually innocuous information can produce a qualitative privacy harm. Apply this argument to Clearview: is the privacy harm from the aggregation of 30 billion photos greater than the sum of the harms of each individual photo being public?
Part D — Your position (300 words): Should Clearview AI be legal? If you believe it should be banned, what is the legal basis? If you believe it should be permitted, what limits should apply?
Exercise 35.5 — Designing a Facial Recognition Use Policy
Type: Applied policy design | Difficulty: Intermediate | Time: 60 minutes
Scenario: You are advising a mid-sized U.S. city that is considering whether to allow its police department to use facial recognition. The city council is divided; advocates for communities of color are strongly opposed; law enforcement argues it would help solve violent crimes; civil liberties groups have provided testimony on wrongful arrest risks.
Design a use policy covering the following dimensions. For each, state your policy provision and justify it in 75-100 words:
- Permitted uses: What specific types of cases may facial recognition be used in? (Violent felonies? All crimes? Only crimes with no other leads?)
- Prohibited uses: What uses are explicitly prohibited? (Real-time tracking? Use on protest footage? Use based on race?)
- Corroboration requirement: Before making an arrest based on a facial recognition match, what independent evidence must exist?
- Match threshold: What confidence score must a match achieve? How must the match be reviewed (by one officer? a supervisor? an independent analyst?)?
- Database restrictions: Whose images may be in the database the police search? (People with prior convictions? All residents with driver's licenses?)
- Transparency and accountability: What reporting requirements exist? Who audits compliance? How are errors documented and addressed?
- Community input: What role does community input play in the policy? Who must be consulted?
- Sunset and review: When and how is the policy reviewed? What evidence would lead to suspension or termination?
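One way to see how the corroboration and match-threshold dimensions interact is to sketch them as an explicit procedural check that must pass before any arrest decision. The following is a hypothetical illustration only: the names, the 0.95 threshold, and the field structure are all invented policy choices, not recommendations or a description of any real system.

```python
# Hypothetical sketch: encoding a use policy's match-threshold and
# corroboration rules as ordered gates. All names and values are invented.
from dataclasses import dataclass, field

@dataclass
class MatchReport:
    confidence: float                  # system-reported match score, 0-1
    reviewed_by_analyst: bool          # independent human review completed
    corroborating_evidence: list = field(default_factory=list)

def may_proceed_to_arrest(report: MatchReport,
                          min_confidence: float = 0.95) -> tuple[bool, str]:
    """Apply the policy gates in order; any failure blocks the arrest."""
    if report.confidence < min_confidence:
        return False, "match below confidence threshold; lead only"
    if not report.reviewed_by_analyst:
        return False, "independent analyst review required"
    if not report.corroborating_evidence:
        return False, "no independent corroboration; lead only"
    return True, "policy gates satisfied (match is still only one input)"

# A high-confidence, analyst-reviewed match with no corroboration
# is still blocked under these rules:
blocked, reason = may_proceed_to_arrest(
    MatchReport(confidence=0.99, reviewed_by_analyst=True))
print(blocked, "-", reason)
```

The point of the sketch is that a policy written as ordered, auditable gates makes violations like the one in the Williams case detectable: each blocked or permitted decision leaves a stated reason that can be logged and reviewed.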
Final assessment: Having designed this policy, do you believe facial recognition in law enforcement can be adequately governed through use restrictions? Or does the history of policy violations (as in the Williams case) suggest that no use policy can adequately address the problem?
Exercise 35.6 — The EU AI Act and U.S. Comparison
Type: Comparative regulatory analysis | Difficulty: Advanced | Time: 45 minutes
The EU AI Act prohibits real-time remote biometric identification in public spaces for law enforcement purposes (with narrow exceptions). The United States has no comparable federal law.
Part A — Understand the EU provision. Research: (a) What does the EU AI Act's prohibition on real-time biometric identification actually cover? (b) What are the exceptions? (c) What enforcement mechanism applies?
Part B — Compare to U.S. law. What U.S. federal law, if any, restricts law enforcement facial recognition in public spaces? What state and local restrictions exist?
Part C — Evaluate. Which approach do you find more protective of privacy? Which is more practical to implement and enforce? The EU approach is a categorical prohibition with narrow exceptions; the U.S. approach (where it exists at all) tends to rely on use restrictions. What are the tradeoffs between these approaches?
Part D — Draft a proposal. In 300 words, propose a U.S. federal approach to facial recognition in public spaces. Should the U.S. adopt something like the EU's categorical prohibition? A use-restriction approach? A moratorium pending further study? Justify your choice.