Further Reading: Chapter 35 — Facial Recognition


1. Buolamwini, Joy and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency (2018): 77–91.

The foundational paper on algorithmic accuracy disparities in commercial facial analysis systems. Freely available through the MIT Media Lab website and the PMLR conference proceedings. Reading the full paper is essential for understanding both the methodology (why a new benchmark was needed, how it was constructed) and the findings (which systems, which subgroups, what error rates). The paper is accessible to non-specialists; the appendix contains the full technical details. This is the primary source for Chapter 35's Gender Shades discussion.


2. Raji, Inioluwa Deborah and Joy Buolamwini. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019): 429–435.

The follow-up paper documenting how Microsoft, IBM, and Face++ improved their accuracy after the Gender Shades publication. It provides empirical evidence that the original accuracy disparities were choices, not technical inevitabilities, and evaluates whether public disclosure functions as an effective accountability mechanism for AI systems. The paper also formalizes the audit methodology introduced in Gender Shades and has been influential in the emerging field of AI auditing: independent evaluation of commercial AI systems for accuracy, bias, and other performance characteristics. Essential for the policy discussion in Section 35.8.


3. Hill, Kashmir. Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It. New York: Random House, 2023.

Kashmir Hill has been the most important journalist covering Clearview AI, and this book provides the definitive account of the company's founding, its practices, and its implications. Highly readable and deeply researched, the book traces Clearview's origin story, the mechanics of its web scraping, its marketing to law enforcement agencies across the United States, and the implications of a world in which public anonymity is ended by a privately held database. This is essential reading for the Section 35.9 discussion of the "database problem."


4. Hartzog, Woodrow and Evan Selinger. "Facial Recognition Is the Perfect Tool for Oppression." Medium, August 2018.

A concise, influential argument for a moratorium on facial recognition, grounded in the technology's specific characteristics: its potential for ubiquity, its covertness, its permanence, and its inevitable function creep. Hartzog and Selinger argue that facial recognition's specific combination of properties makes it uniquely dangerous as a surveillance tool — unlike most technologies, it cannot be meaningfully opted out of, it works without subjects' knowledge, and it enables comprehensive tracking through public space. This is the strongest version of the abolitionist argument for this specific technology.


5. Garvie, Clare, Alvaro Bedoya, and Jonathan Frankle. "The Perpetual Line-Up: Unregulated Police Face Recognition in America." Georgetown Law Center on Privacy and Technology, October 2016.

The Georgetown study is the most comprehensive available analysis of law enforcement facial recognition deployment in the United States. It documented the extent of facial recognition use by police departments, the lack of formal policies and oversight, the databases used (including driver's license photos, meaning law-abiding citizens appear in law enforcement face recognition databases), and the specific risks of error and abuse. Published before the Williams and Parks wrongful arrests became public, the report anticipated the accountability failures those cases documented.


6. Buolamwini, Joy. "AI, Ain't I A Woman?" Medium/Vimeo, 2018.

Buolamwini's spoken-word video performance confronting facial recognition systems with images of historic Black women whom the systems fail to correctly identify. Available on Vimeo and through the Algorithmic Justice League's website. Watching the performance is a valuable complement to reading the academic papers: it supplies the affective register that research papers cannot. The title references Sojourner Truth's 1851 "Ain't I a Woman?" speech, explicitly connecting the technological invisibility of Black women in AI to their historical invisibility in social and political recognition.


7. American Civil Liberties Union. "Unregulated and Unaccountable: How Facial Recognition Technology Is Used by Local Police." ACLU Report, 2022.

The ACLU's comprehensive update on law enforcement facial recognition, documenting policies (or the lack thereof), database access, error incidents, and accountability gaps across U.S. police departments. The report provides the most current empirical snapshot of law enforcement use available from a reliable civil liberties source, documenting both the scale of use and a consistent pattern of inadequate accountability mechanisms.


8. Keyes, Os. "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition." Proceedings of the ACM on Human-Computer Interaction 2, no. CSCW (2018): Article 88.

This paper examines how automatic gender recognition systems, which are closely related to facial recognition, harm transgender and non-binary individuals who are systematically misclassified. It connects the Gender Shades analysis to a broader critique of binary-category AI systems that force complex human characteristics into categories that do not reflect lived reality. Essential for understanding that the accuracy disparity problem extends beyond race and has specific implications for gender-variant individuals.


9. European Data Protection Board. "Guidelines 05/2022 on the Use of Facial Recognition Technology in the Area of Law Enforcement." EDPB, 2023.

The official guidance from the EU's top data protection authority on facial recognition in law enforcement. The document articulates the legal framework under the Law Enforcement Directive and anticipates the AI Act's provisions. Reading official regulatory guidance alongside advocacy documents and academic research gives a complete picture of the regulatory landscape. Available free from edpb.europa.eu.


10. Stark, Luke. "Facial Recognition Is the Plutonium of AI." XRDS: Crossroads, The ACM Magazine for Students 25, no. 3 (2019): 50–55.

Stark's provocative comparison of facial recognition to plutonium, extremely dangerous with no safe use case for most applications, makes the abolitionist argument in accessible form. He argues that unlike other AI technologies with mixed risks and benefits, facial recognition's specific combination of properties (covertness, potential ubiquity, permanence, function creep) makes it too dangerous to deploy in most contexts regardless of accuracy improvements. A useful primary source for the abolition-versus-reform debate about this specific technology.


For primary data on facial recognition systems: Buolamwini's Algorithmic Justice League (ajl.org) provides the most current auditing research. The Georgetown Center on Privacy and Technology (law.georgetown.edu/privacy-technology-center) publishes ongoing research on law enforcement biometrics. NIST (nist.gov) publishes Face Recognition Vendor Testing (FRVT) reports with technical accuracy evaluations across demographic groups.