Chapter 26: Further Reading — Biometrics and Facial Recognition Ethics
A curated selection of 18 sources for readers who wish to go deeper. Organized by category.
Foundational Research
1. Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. NIST Interagency/Internal Report 8280. The authoritative technical source on demographic disparities in facial recognition accuracy. NIST evaluated 189 algorithms from 99 developers and found that, in algorithms developed by US organizations, false positive rates for African American and Asian faces were 10 to 100 times higher than for white faces. Essential reading for anyone evaluating facial recognition systems. Available free from NIST.gov.
2. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*). The landmark paper demonstrating intersectional demographic disparities in commercial gender classification systems: darker-skinned women faced error rates as high as 34.7%, while lighter-skinned men were classified with near-perfect accuracy. A foundational paper in algorithmic fairness research with direct policy implications.
3. Klare, B., Burge, M., Klontz, J., Vorder Bruegge, R., & Jain, A. (2012). Face Recognition Performance: Role of Demographic Information. IEEE Transactions on Information Forensics and Security. An earlier academic paper documenting demographic variation in facial recognition performance, providing context for the subsequent NIST and Gender Shades findings. Demonstrates that awareness of the problem predates widespread deployment.
Investigative Journalism
4. Hill, K. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. The investigation that exposed Clearview AI to the public. Required reading for understanding the Clearview case. Hill documented the company's database assembly method, its law enforcement clients, and the implications for privacy. She has continued to report extensively on Clearview and facial recognition for the Times; her subsequent book is listed below.
5. Hill, K. (2023). Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It. Random House. Hill's book-length treatment of the Clearview story and the broader facial recognition industry. Provides extended context for the reporting summarized in the case study, including detailed accounts of specific investigations where Clearview was used, interactions with company founders, and the regulatory responses across multiple countries.
6. Harwell, D. (2022, November 14). Madison Square Garden Is Using Facial Recognition to Ban Its Owner's Legal Enemies. The Washington Post. The investigation that revealed MSG's use of facial recognition to exclude attorneys with pending litigation against MSG entities. Illustrates the potential for commercial facial recognition to be weaponized against specific individuals rather than serving security purposes.
Legal and Policy Analysis
7. Garvie, C., Bedoya, A., & Frankle, J. (2016). The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law Center on Privacy and Technology. The Georgetown Law study that first systematically documented the use of facial recognition by law enforcement in the United States. Found that more than half of American adults had their images in a facial recognition database searchable by at least one law enforcement agency, and that existing systems lacked accuracy standards, audit requirements, and oversight mechanisms. Essential policy context.
8. American Civil Liberties Union (2018). Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots. The ACLU's documentation of its experiment matching members of Congress against Amazon Rekognition using arrest photo databases. Found disproportionate false matches among members of color. Triggered significant controversy about threshold settings and accountability for commercial facial recognition products.
9. Office of the Privacy Commissioner of Canada, et al. (2021). Joint Investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d'accès à l'information du Québec, the Information and Privacy Commissioner for British Columbia, and the Information and Privacy Commissioner of Alberta. The multi-jurisdictional Canadian investigation of Clearview AI, finding mass surveillance incompatible with Canadian privacy law. A well-reasoned analysis of consent and legitimate interest arguments that repays reading by legal and compliance professionals.
Academic Books and Monographs
10. Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. A sociological analysis of how algorithmic systems, including facial recognition, encode and perpetuate racial hierarchies. Benjamin argues that technological systems embed social assumptions that reflect existing power structures, and that neutrality claims mask discriminatory designs.
11. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. An examination of how algorithmic systems disproportionately affect low-income and marginalized communities. While not exclusively focused on facial recognition, Eubanks provides essential context for understanding why automated surveillance technologies create distributional justice concerns beyond technical accuracy metrics.
12. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. The comprehensive analysis of the surveillance capitalism business model, of which facial recognition is one expression. Zuboff's theoretical framework for understanding how behavioral data collection constitutes a new form of economic power provides context for the commercial facial recognition cases.
Primary Legal and Regulatory Sources
13. Illinois Biometric Information Privacy Act, 740 ILCS 14/1 et seq. (2008). The text of BIPA, available through the Illinois General Assembly. A practitioner must read the actual statute, not merely summaries of it. The Act is remarkably brief (a few pages) and readable even for a non-legal audience. Understanding its core requirements (informed written consent, a written retention policy, a prohibition on profiting from biometric data, and a private right of action) is essential for compliance work.
14. European Parliament and Council. (2024). Regulation (EU) 2024/1689 (EU AI Act), Articles 3, 5, and 10. The AI Act's definitions of biometric identification systems (Article 3), its prohibition on real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (Article 5), and its data governance requirements (Article 10). Article 5 contains the prohibited AI practices provisions; the narrow exceptions to the real-time biometric identification prohibition are enumerated in Article 5(1)(h)(i)–(iii). Essential for organizations operating in or with EU markets.
15. ACLU v. Clearview AI, Inc. (2022). Settlement and consent order, Circuit Court of Cook County, Illinois. The text of the settlement resolving the ACLU's BIPA suit against Clearview AI, under which Clearview agreed to a nationwide restriction on selling its faceprint database to most private entities. Practitioners should read consent decrees and settlements directly: the specifics of what is prohibited, what is required, and what is not addressed often diverge significantly from journalistic summaries.
Documentary and Multimedia
16. Coded Bias. (2020). Documentary film directed by Shalini Kantayya. 7th Empire Media. A documentary following Joy Buolamwini's research and subsequent advocacy work on facial recognition bias. Includes interviews with researchers, policymakers, and individuals affected by facial recognition surveillance. Accessible to general business audiences and effective in conveying the human stakes of the technical issues.
Reports and Institutional Publications
17. Facial Recognition Technology in the Criminal Justice System: Report to the Senate Judiciary Committee. (2023). United States Government Accountability Office. The GAO's assessment of federal law enforcement use of facial recognition technology. Documents the agencies using the technology, the governance frameworks in place (or absent), and recommendations for oversight improvements. Provides empirical grounding for policy discussions about federal use.
18. AI Now Institute. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute, New York University. A broader survey of discrimination in AI systems, including facial recognition, with policy recommendations. The AI Now Institute has produced multiple annual reports relevant to algorithmic accountability that repay regular review.
Note on Currency
This field evolves rapidly. Regulatory developments — new enforcement actions, legislative enactments, court rulings — occur frequently. Readers are encouraged to monitor:
- The NIST Face Recognition Vendor Test (FRVT) program (nist.gov/programs-projects/face-recognition-vendor-testing-frvt)
- The Algorithmic Justice League (ajl.org)
- The ACLU's work on surveillance technologies (aclu.org)
- The International Association of Privacy Professionals' regulatory updates (iapp.org)
- Electronic Frontier Foundation surveillance resources (eff.org/issues/face-recognition)
Chapter 26 | AI Ethics for Business Professionals