Chapter 26: Quiz — Biometrics and Facial Recognition Ethics
20 questions. Mix of multiple choice, true/false, and short answer.
Multiple Choice
1. Which of the following is the most accurate characterization of the "permanence problem" in biometrics?
a) Biometric systems are more permanent than other AI systems and therefore harder to update
b) Biometric identifiers cannot be revoked or replaced if compromised, creating lifetime residual risk
c) Biometric data is stored permanently in government databases without expiration
d) Facial recognition databases are permanent once assembled because deletion is technically impossible
Answer: b
2. In NIST's Face Recognition Vendor Test (FRVT) 2019 demographic effects report, what was the general finding about false positive rates for African American faces compared to white faces across most tested algorithms?
a) False positive rates were approximately equal across demographic groups
b) African American faces had false positive rates 2–3 times higher than white faces
c) African American faces had false positive rates 10–100 times higher than white faces
d) White faces had higher false positive rates than African American faces
Answer: c
3. What is the critical distinction between 1:1 verification and 1:many identification in facial recognition?
a) 1:1 verification is used for law enforcement while 1:many identification is used commercially
b) 1:1 verification compares a face against a single template; 1:many searches a database of many faces, with higher error accumulation
c) 1:1 verification uses neural networks while 1:many uses traditional pattern matching
d) 1:many identification is more accurate than 1:1 verification because of the larger comparison set
Answer: b
4. The Gender Shades study by Buolamwini and Gebru (2018) found that the highest error rates in commercial gender classification systems occurred for which demographic group?
a) Lighter-skinned males
b) Darker-skinned males
c) Lighter-skinned females
d) Darker-skinned females
Answer: d
5. Under the EU AI Act, what is the default regulatory treatment of real-time remote biometric identification in public spaces for law enforcement purposes?
a) It is a high-risk AI system subject to conformity assessment and transparency requirements
b) It is prohibited, with narrow judicial-authorization exceptions for serious crimes, terrorism, and finding victims
c) It is permitted if accuracy rates exceed 95% across all demographic groups
d) It requires registration in the EU AI Act database but is otherwise unrestricted
Answer: b
6. What distinguishes Illinois's Biometric Information Privacy Act (BIPA) from most other US biometric privacy laws?
a) BIPA applies to government agencies as well as private entities
b) BIPA prohibits all biometric collection regardless of purpose
c) BIPA creates a private right of action allowing individuals to sue without showing specific harm
d) BIPA requires annual independent audits of all covered biometric systems
Answer: c
7. Which of the following best describes Clearview AI's database assembly method?
a) Purchasing licensed images from social media platforms under commercial agreements
b) Collecting images voluntarily submitted by law enforcement partner agencies
c) Scraping billions of publicly accessible images from social media without platform consent or individual consent
d) Accessing government-issued identity document databases through federal law enforcement agreements
Answer: c
8. The "base rate problem" or "rarity problem" in 1:many facial recognition refers to which phenomenon?
a) The difficulty of detecting rare or unusual facial features that distinguish individuals
b) The increasing probability of false matches as database size grows
c) The tendency of algorithms trained on majority groups to perform poorly on minority groups
d) The scarcity of diverse training data for developing accurate facial recognition models
Answer: b
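The compounding effect behind answer (b) can be sketched numerically. A short illustration (not from the chapter; the per-comparison false positive rate below is hypothetical, and treating comparisons as independent is a simplifying assumption):

```python
# Base rate problem in 1:many identification: even a tiny per-comparison
# false positive rate compounds as the searched gallery grows.
def p_false_match(fpr: float, gallery_size: int) -> float:
    """Probability of at least one false match in a 1:many search,
    assuming independent comparisons (a simplification)."""
    return 1 - (1 - fpr) ** gallery_size

fpr = 1e-5  # hypothetical per-comparison false positive rate
for n in (1, 1_000, 100_000, 10_000_000):
    print(f"gallery of {n:>10,}: P(at least one false match) = "
          f"{p_false_match(fpr, n):.4f}")
```

At a gallery of one (pure 1:1 verification) the risk equals the per-comparison rate, but against a multi-million-face database a false match becomes nearly certain, which is why a "match" from a large 1:many search carries far less evidentiary weight than the same score in a 1:1 check.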
9. Madison Square Garden drew scrutiny for using facial recognition to exclude which group of patrons from its venues?
a) Attorneys with pending litigation against MSG entities
b) Individuals with prior criminal records
c) Journalists who had published negative coverage of MSG
d) Ticket scalpers and unauthorized resellers
Answer: a
10. In the Robert Williams wrongful arrest case, what was the significance of the photo array conducted after the facial recognition match?
a) The photo array was administered correctly and provides an example of best practice
b) The photo array was the independent corroboration required to establish probable cause, but it was not actually independent since it was constructed around the facial recognition candidate
c) The photo array replaced the facial recognition match as the primary evidence in the case
d) The photo array violated Williams's Fourth Amendment rights and was the primary legal claim in the ACLU lawsuit
Answer: b
True/False
11. Under GDPR, biometric data used for unique identification is classified as ordinary personal data and processed under the same rules as other data.
Answer: False. GDPR Article 9 classifies biometric data used for unique identification as a "special category" of personal data, subject to a general prohibition on processing with specific exceptions. Processing biometric data requires a legal basis from Article 9(2), not merely the standard Article 6 legal bases.
12. NIST's FRVT testing found that algorithms developed by Asian companies showed the same demographic error patterns as algorithms developed by US companies.
Answer: False. NIST found that algorithms developed by Asian companies showed different demographic patterns — performing better on Asian faces and relatively worse on Caucasian faces compared to US-developed algorithms — suggesting that training data composition is a primary driver of differential performance.
13. Facial recognition categorization systems, which infer attributes like apparent age or gender from facial images, are legally distinct from identification systems and are generally not subject to the same regulatory concerns.
Answer: False. Categorization systems raise their own significant legal and ethical concerns: the EU AI Act prohibits biometric categorization to infer sensitive characteristics including race, political opinions, and sexual orientation. The scientific basis for inferring internal states (emotion, intent) from facial appearance is contested, and categorization can encode demographic stereotypes.
14. The 2022 settlement in ACLU v. Clearview AI, reached under Illinois's BIPA, required the company to delete its entire database of scraped facial images.
Answer: False. The settlement did not require deletion of the existing database. It permanently barred Clearview from selling database access to most private businesses nationwide (law enforcement access outside Illinois was preserved) and provided an opt-out mechanism for Illinois residents — but the fundamental biometric database, assembled from scraped social media images, was allowed to remain.
15. San Francisco's 2019 facial recognition ban applies to both government agencies and private businesses operating within the city.
Answer: False. San Francisco's Stop Secret Surveillance Ordinance applies to city agencies. Private businesses deploying facial recognition within San Francisco are not covered by the city ban, though they may be subject to CCPA/CPRA and other state law requirements.
16. Voice authentication systems used in telephone banking are potentially vulnerable to voice cloning attacks because voice cloning technology can generate convincing reproductions from a small audio sample.
Answer: True. Advances in voice cloning have substantially challenged the security model of voice authentication. Documented fraud cases include a CEO being deceived into transferring funds via a voice-cloned impersonation. Voice authentication systems increasingly incorporate liveness detection to address this vulnerability.
17. Investigative genetic genealogy (forensic genealogy) can identify individuals who have never submitted a DNA sample to any database by using the genetic data of their relatives who have submitted samples.
Answer: True. Because family members share genetic material, a crime scene DNA profile can be matched against relatives in genealogy databases, allowing investigators to construct a family tree and identify suspects who have no direct database entry. This technique identified the Golden State Killer and has been used in hundreds of subsequent cases.
Short Answer
18. Explain the "CCTV-to-recognition transition" problem in two to three sentences. Why does adding facial recognition to existing CCTV infrastructure change the social contract of public surveillance, even if no new cameras are installed?
Model Answer: CCTV cameras have been present in many public spaces under an implicit social accommodation that footage is primarily used for post-incident review and that passive recording does not constitute systematic tracking. Facial recognition changes this accommodation by converting passive observation into active, real-time identification — enabling automated tracking of individuals across an entire camera network. The cameras are the same; the social meaning of their existence changes fundamentally because the practical constraints (human review of vast footage archives) that previously limited their surveillance potential are eliminated.
19. The NIST FRVT report found that differential performance patterns across demographic groups varied depending on whether the algorithm was developed by a US-based or Asian-based organization. What does this finding imply about the root cause of differential accuracy, and what does it suggest about the claim that facial recognition bias is an inevitable property of the technology?
Model Answer: The finding that demographic error patterns correlate with the geographic origin of the developer — US-developed algorithms performing worse on darker-skinned faces; Asian-developed algorithms performing relatively better on Asian faces — strongly implies that training data composition is a primary driver of differential performance. Algorithms learn to distinguish faces based on the variation present in their training data; if training data over-represents certain demographic groups, the algorithm learns the distinctions within those groups more finely. This means differential accuracy is not an inherent property of facial recognition as a technology — it is a product of specific design choices, specifically the composition of training data. Bias can, in principle, be reduced through intentional dataset curation, though market incentives have not consistently driven this outcome.
20. A retail company argues that posting a small sign at the entrance of each store disclosing facial recognition use constitutes adequate consent for scanning customers' faces. Evaluate this argument using the three components of meaningful consent: informed, freely given, and revocable.
Model Answer: This argument fails on all three dimensions of meaningful consent. First, "informed": a small entrance sign cannot convey who operates the system, what database faces are matched against, the false positive rate, how long data is retained, or how it is shared — the information a subject would need to make a genuinely informed decision. Second, "freely given": if the store is one of limited grocery options in the area, or if the customer needs medication from its pharmacy, the "choice" to enter and be scanned is not freely given — it is conditioned on access to essential goods. Under GDPR guidance, consent given as a condition of accessing a service is generally not freely given. Third, "revocable": once inside the store, the customer cannot prevent scanning — their face is continuously visible. There is no practical mechanism to revoke. Notice at the entrance does not transform what follows into consented surveillance; it merely provides after-the-fact notice of surveillance the individual cannot avoid.
Chapter 26 | AI Ethics for Business Professionals