Chapter 35 Quiz: Facial Recognition
1. The four stages of the facial recognition pipeline, in order, are:
a) Detection → Alignment → Feature extraction → Matching
b) Capture → Analysis → Identification → Verification
c) Alignment → Detection → Matching → Verification
d) Feature extraction → Detection → Comparison → Identification
Answer: a — The pipeline: (1) face detection (finding faces in an image), (2) alignment (normalizing the face image), (3) feature extraction (producing a mathematical representation of the face), and (4) matching (comparing the representation against a database). Each stage has distinct error modes.
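A minimal sketch of the four stages in code may make the decomposition concrete. Everything here is an illustrative stand-in (a trivial detector, crop-only alignment, a fake hash-based embedding), not a real library API; production systems use trained neural models at every stage:

```python
import numpy as np

# Minimal stand-ins for the four stages; real systems use trained models.
def detect_faces(image):
    """Stage 1, detection: return face bounding boxes (here: the whole image)."""
    h, w = image.shape[:2]
    return [(0, 0, w, h)]

def align_face(image, box):
    """Stage 2, alignment: crop the face region (real aligners also rotate
    and scale the face using detected landmarks)."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def extract_features(face, dim=128):
    """Stage 3, feature extraction: face image to unit-length vector.
    Stand-in: a hash-seeded random projection; real systems use a CNN."""
    rng = np.random.default_rng(abs(hash(face.tobytes())) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def match(embedding, database, threshold=0.6):
    """Stage 4, matching: nearest stored embedding above a similarity threshold."""
    best_id, best_score = None, -1.0
    for person_id, stored in database.items():
        score = float(embedding @ stored)  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```

Each stage fails differently in practice: detection can miss faces entirely, alignment degrades under extreme pose, the extracted features inherit whatever biases the training data carried, and the matching threshold trades false positives against false negatives.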
2. What distinguishes "1:N identification" from "1:1 verification" in facial recognition?
a) 1:N identification compares faces at 1 meter distance; 1:1 verification compares at any distance
b) 1:1 verification compares one face against one stored face; 1:N identification compares one face against N stored faces in a database
c) 1:N identification is more accurate than 1:1 verification for all demographic groups
d) 1:1 verification is used for law enforcement; 1:N identification is used for commercial applications
Answer: b — 1:1 verification answers "is this the same person as [specific person]?" — used for phone unlock, passport verification. 1:N identification answers "who among N people is this?" — used for law enforcement databases, watchlist screening. The 1:N case is more error-prone at scale and is the type used in documented wrongful arrests.
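A sketch of the operational difference, assuming unit-length embedding vectors like those in the pipeline sketch above (function names and the 0.6 threshold are illustrative):

```python
import numpy as np

def verify(probe, enrolled, threshold=0.6):
    """1:1 verification: 'is this the claimed person?' A single comparison."""
    return float(probe @ enrolled) >= threshold

def identify(probe, database, threshold=0.6):
    """1:N identification: 'who is this, if anyone?' One comparison per
    enrolled person, so N separate chances for a false match."""
    if not database:
        return None
    best_id = max(database, key=lambda pid: float(probe @ database[pid]))
    return best_id if float(probe @ database[best_id]) >= threshold else None

# Toy usage with random unit vectors standing in for face embeddings.
def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
db = {"alice": unit(rng.standard_normal(128)),
      "bob": unit(rng.standard_normal(128))}
print(verify(db["alice"], db["alice"]))   # True: identical vectors
print(identify(db["alice"], db))          # "alice"
```

Every entry in `database` is an extra opportunity for `identify` to cross the threshold on the wrong person, which is the scale problem question 7 returns to.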
3. Clearview AI's database contains over 30 billion images. How were these images collected?
a) Clearview purchased images from social media platforms under commercial licensing agreements
b) Clearview obtained images through law enforcement agencies' existing biometric databases
c) Clearview scraped images from social media platforms, news websites, and other publicly accessible online sources without the knowledge or consent of the people depicted
d) Clearview collected images through its own network of cameras installed in public spaces
Answer: c — Clearview AI scraped images from publicly accessible websites — without the consent of platforms or people depicted — to build its database. This practice has been challenged under BIPA and other laws; Clearview has settled some suits but continues to operate under contested legal status.
4. The Gender Shades study found accuracy disparities in commercial facial analysis systems. Which demographic group had the highest error rates?
a) Lighter-skinned men
b) Lighter-skinned women
c) Darker-skinned men
d) Darker-skinned women
Answer: d — Buolamwini and Gebru found that the worst performance occurred at the intersection of darker skin tone and female gender — with error rates up to 34.7% for darker-skinned women, compared to less than 1% for lighter-skinned men in some systems. This intersectional pattern reflects training data composition.
5. Why does using historical crime data in facial recognition databases create racially disparate false arrest risk, even if the algorithm itself is "race-neutral"?
a) Because the algorithm has a racial bias built into its matching weights
b) Because the criminal record databases that feed facial recognition searches over-represent people of color due to racially biased policing and prosecution, and because facial recognition is less accurate for darker-skinned individuals
c) Because darker-skinned people have fewer publicly available photos, so matching is less accurate
d) Because police departments in communities of color are better funded and use facial recognition more effectively
Answer: b — Two factors compound: (1) facial recognition accuracy is lower for darker-skinned people (Gender Shades finding), and (2) databases used in law enforcement facial recognition (arrest records, mugshot databases) over-represent people of color due to racially biased criminal justice practices. The technology amplifies existing structural bias, even without explicit racial variables in the algorithm.
6. Robert Williams was arrested in Detroit in 2020 based on a facial recognition match. What was the significant policy violation in his case?
a) Police used Clearview AI without a warrant, violating FISA requirements
b) Detroit's own policy required facial recognition to be an investigative lead requiring corroboration — not the basis for arrest — but Williams was arrested based solely on the match
c) The facial recognition system used had not been approved by the Department of Justice
d) Williams was a minor at the time and facial recognition on minors is prohibited under federal law
Answer: b — Detroit Police had a policy specifically prohibiting arrest based solely on a facial recognition match. That policy was violated in Williams's case. This illustrates that policy existence does not ensure compliance, particularly in law enforcement cultures where algorithmic outputs carry disproportionate weight.
7. What is the "false positive at scale" problem with large-scale 1:N facial recognition databases?
a) Larger databases require faster computers, which introduces errors in processing
b) Even a highly accurate system produces false matches proportional to database size — a 99.9% accurate system searching a 10 million person database produces approximately 10,000 false matches
c) Large databases contain more corrupted images, reducing overall accuracy
d) The false positive problem only affects facial recognition on databases larger than 1 billion images
Answer: b — The scale problem is mathematical: the error rate applies per comparison, and a single 1:N search runs one comparison against every database entry. A system with a 0.1% false match rate per comparison ("99.9% accurate") searching a 10 million person database makes 10 million comparisons per probe, yielding an expected ~10,000 false candidate matches. This is why large-scale 1:N identification for law enforcement is inherently error-prone.
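The arithmetic can be checked directly; the 0.1% per-comparison false match rate is the hypothetical figure from the question, not a measured value:

```python
false_match_rate = 0.001      # 0.1% chance of a false match per comparison
database_size = 10_000_000    # 10 million enrolled faces

# Each 1:N search makes one comparison per database entry, so the expected
# number of false candidate matches per search scales with database size.
expected_false_matches = false_match_rate * database_size
print(expected_false_matches)         # 10000.0

# Probability that a single search returns at least one false match:
p_any = 1 - (1 - false_match_rate) ** database_size
print(f"{p_any:.6f}")                 # ~1.000000, effectively certain
```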
8. Illinois's Biometric Information Privacy Act (BIPA) is considered the strongest state facial recognition protection in the United States. What feature of BIPA makes it particularly effective?
a) It includes criminal penalties for violations
b) It requires federal government agencies to comply with Illinois biometric standards
c) It has a private right of action — individuals can sue companies that violate it without waiting for government enforcement
d) It applies to all U.S. residents who are photographed in Illinois, regardless of their home state
Answer: c — BIPA's private right of action allows individuals to sue directly for violations, with statutory damages (per violation, per person), without requiring the Illinois Attorney General or FTC to act first. This creates a direct accountability mechanism that has produced significant settlements — Facebook ($650M), TikTok ($92M), Clearview AI (multiple suits).
9. What does Simmel's concept of the "freedom of anonymity" in cities mean, and why is facial recognition a threat to it?
a) Simmel argued that anonymity in cities produces alienation; facial recognition restores community by identifying strangers
b) Simmel identified the freedom to move through public space without being identified as a feature of urban modernity; comprehensive facial recognition databases threaten this freedom by enabling identification of any person in any surveilled public space
c) Simmel's concept refers to legal anonymity rights codified in early 20th century European law
d) The freedom of anonymity means the right to assemble without government identification — a First Amendment principle
Answer: b — Simmel (1903) argued that the city's gift was freedom through anonymity — the ability to move among strangers without being known. A world where any face in any camera can be matched to a comprehensive database ends this freedom: you cannot be anonymous in a city where your face is in a 30-billion-image database searched in real time.
10. The EU AI Act's treatment of facial recognition in public spaces represents which approach?
a) A risk-based framework that requires facial recognition to be licensed before deployment
b) A prohibition on real-time remote biometric identification in public spaces for law enforcement purposes, with narrow exceptions
c) A requirement that facial recognition systems pass accuracy audits before deployment
d) A mandatory opt-in consent framework for any use of facial recognition in public spaces
Answer: b — The EU AI Act prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrowly drawn exceptions (such as targeted searches for victims of serious crimes and prevention of imminent terrorist threats). This is stronger than use-restriction frameworks: it bans the most dangerous application category by default rather than merely regulating how it is used.
11. Why is the "opt out" option for facial recognition fundamentally different from opt-out options for other forms of data collection?
a) Opt-out forms for facial recognition are more complex and require more personal information
b) Facial recognition is always collected in real time, making opt-out technically impossible
c) You cannot change your face — unlike a username, email address, or phone number, the biometric identifier is involuntary and permanent
d) Facial recognition opt-out is governed by HIPAA, which makes it administratively difficult
Answer: c — The core distinction: other forms of data collection involve identifiers you can change (names, email addresses, device IDs) or voluntarily provided (search queries, social media posts). Your face is involuntary — you cannot choose a different face to present to surveillance cameras. This makes the "opt out" concept fundamentally inapplicable.
12. What does "feature extraction" in the facial recognition pipeline produce?
a) A photographic copy of the face stored in the database
b) A mathematical vector (set of numerical values) representing distinctive characteristics of the face
c) A list of facial features (eye color, nose shape, cheekbone structure) as text descriptors
d) A comparison score between the input face and all faces in the database
Answer: b — Feature extraction transforms the aligned face image into a compact mathematical representation, typically a vector of a few hundred numbers (128- and 512-dimensional embeddings are common in modern systems) that encodes the distinctive characteristics of the face. This vector, often called an embedding or template, is what is actually stored and compared, not the image itself (though many systems store the original image as well).
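A sketch of what that stored vector looks like and how two of them are compared (the 512-dimension choice, the simulated noise level, and the printed values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# An enrolled "template" is just numbers: here a 512-dimensional vector,
# normalized so that cosine similarity reduces to a dot product.
template = rng.standard_normal(512)
template /= np.linalg.norm(template)

# A new photo of the same person yields a nearby (not identical) vector.
probe = template + 0.01 * rng.standard_normal(512)  # simulated same-face probe
probe /= np.linalg.norm(probe)
print(f"{float(template @ probe):.3f}")    # close to 1.0 (roughly 0.97 here)

# An unrelated face yields a near-orthogonal vector: similarity near 0.
stranger = rng.standard_normal(512)
stranger /= np.linalg.norm(stranger)
print(f"{float(template @ stranger):.3f}")  # near 0.0
```

Where the decision threshold sits between those two similarity regimes is a policy choice, not a property of the mathematics.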
13. The Nijeer Parks case (New Jersey, 2019) involved:
a) An innocent Black man charged with shoplifting and assault based on a facial recognition match, who spent approximately 10 days in jail before charges were dropped
b) A correct facial recognition identification of a robbery suspect who was later acquitted at trial
c) A facial recognition false match involving a white man, challenging the claim that false arrests disproportionately affect Black people
d) A case where facial recognition was used without a warrant and subsequently suppressed under the Fourth Amendment
Answer: a — Nijeer Parks was wrongly accused of shoplifting and assault based on a facial recognition match. Despite having a cash receipt showing he was 30 miles away at the time, he spent approximately 10 days in jail before the charges were dropped. He filed a civil rights lawsuit that was settled in 2024.
14. Which of the following best explains why Jordan's scenario (being matched by a Clearview AI search) is a structural problem rather than just a technical error?
a) The algorithm made a mistake, which better training data would have prevented
b) The combination of: less accurate systems for Black people + disproportionate deployment in higher-surveillance communities + lack of corroboration requirements + burden shift to the accused creates systemic racial disparity, not just individual error
c) Clearview AI violated BIPA by searching Jordan's face without consent
d) The grocery store's loss prevention system was not properly calibrated for the surveillance camera's resolution
Answer: b — The structural analysis: accuracy disparities + disproportionate deployment + weak accountability requirements produce a system in which Black and Brown people are systematically more likely to face false accusations. Framing this as "the algorithm made a mistake" treats a structural pattern as an individual technical failure and suggests the solution is accuracy improvement rather than structural reform.
15. Amazon's Rekognition system was tested by ACLU researchers using members of Congress as a database. What did the test find?
a) The system was unable to match any members of Congress to their official government photos
b) The system incorrectly identified 28 members of Congress as having arrest records, with African American members misidentified at a higher rate
c) The system was 100% accurate for all demographic groups, contradicting the Gender Shades findings
d) The system could not run on the ACLU's database because it lacked sufficient computing power
Answer: b — The ACLU's 2018 test of Amazon Rekognition, using a database of 25,000 mugshot photos and photos of members of Congress, produced 28 incorrect matches — falsely identifying members of Congress as people with arrest records. African American members were misidentified at a higher rate, consistent with Gender Shades findings.
16. The concept of "real-time tracking" in facial recognition refers to:
a) Tracking when a facial recognition match was made, for audit purposes
b) Continuously processing video frames to identify and track specific individuals through a scene or across multiple cameras in real time
c) Updating the facial recognition database in real time as new photos become available
d) Tracking the real-time accuracy of facial recognition systems in deployed environments
Answer: b — Real-time tracking (also called "live" or "on-the-move" biometric identification) involves continuously analyzing video streams to identify individuals as they move through a scene. This is the most powerful and most dangerous form of facial recognition — enabling the surveillance of entire populations in public spaces, as in some Chinese applications.
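Computationally, real-time tracking is just the standard pipeline run in a loop over video frames. This sketch reuses the hypothetical stage functions from the question 1 sketch; a deployed system would run it against live camera streams, typically many cameras at once:

```python
def track_stream(frames, database, threshold=0.6):
    """Live biometric identification: run the full pipeline on every face
    in every frame, yielding (frame_index, box, person_id) for each match.
    Relies on the stub detect_faces/align_face/extract_features/match
    functions defined in the question 1 sketch."""
    for t, frame in enumerate(frames):
        for box in detect_faces(frame):
            face = align_face(frame, box)
            embedding = extract_features(face)
            person = match(embedding, database, threshold)
            if person is not None:
                # Each event records who was where, when. Aggregated across
                # cameras, this becomes a movement history for every match.
                yield (t, box, person)
```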
17. After the Gender Shades publication, the commercial facial recognition companies tested improved their accuracy for underrepresented groups. What is the significance of this improvement for the policy debate?
a) It proves that facial recognition bias is temporary and will be eliminated through better technology, so regulation is unnecessary
b) It demonstrates that the prior accuracy disparities were choices, not technical inevitabilities — companies could have achieved better accuracy across demographic groups but didn't until publicly shamed into it
c) It shows that academic research is more effective than legislation for improving facial recognition
d) It confirms that facial recognition is now sufficiently accurate for law enforcement use without racial bias
Answer: b — The fact that companies improved accuracy after public disclosure reveals that the prior disparities reflected priorities, not impossibilities. The companies were optimizing for overall accuracy on datasets that over-represented lighter-skinned individuals. When they prioritized accuracy for underrepresented groups, they could achieve it. This is important because it means the disparities are not inherent to the technology — they reflect whose interests the technology was built to serve.
18. Why does Jordan's case illustrate the limits of the "nothing to hide" argument in the facial recognition context?
a) Jordan was hiding information about the shoplifting, so the "nothing to hide" argument doesn't apply
b) Jordan had nothing to hide and still was falsely accused — demonstrating that innocent people face significant risk from facial recognition errors, and that privacy protections matter regardless of guilt
c) The "nothing to hide" argument is only relevant to content surveillance (messages, search queries) and doesn't apply to biometric systems
d) Jordan's case involves algorithmic error, not surveillance, so the "nothing to hide" argument is irrelevant
Answer: b — Jordan's case is a concrete example of the "nothing to hide" argument's failure: they were innocent, had nothing to hide, and still faced false accusation, potential arrest, and the burden of proving innocence. The "nothing to hide" argument assumes that surveillance only harms the guilty; Jordan's case demonstrates that surveillance harms innocent people who are misidentified, particularly those from demographics where the technology is least accurate.
Score interpretation: 16-18 correct — Excellent mastery of facial recognition technology and policy | 13-15 — Good understanding with gaps | 10-12 — Review sections 35.1–35.8 | Below 10 — Revisit the full chapter