Further Reading — Chapter 10: Bias in Hiring and HR Systems
Part 2: Bias and Fairness
The sources below are organized thematically. All are real and verifiable as of the chapter's publication date. Each annotation explains why the source is valuable, what it adds beyond the chapter, and who will find it most useful.
Foundational Research on Hiring Discrimination
1. Bertrand, M., & Mullainathan, S. (2004). "Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination." American Economic Review, 94(4), 991–1013.
The foundational audit study on name-based hiring discrimination. Researchers sent matched résumés with White-sounding and Black-sounding names to real job postings and measured callback rates. White-sounding names received 50% more callbacks. The methodology — sending matched pairs of fake résumés — has since been replicated in numerous countries and contexts. Essential reading for understanding the empirical basis of name-based bias claims and the audit methodology that can be applied to AI systems. Freely available through NBER Working Paper 9873.
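The audit logic behind this study is simple enough to sketch: compare callback rates between matched résumé groups and test whether the observed gap could plausibly be chance. A minimal Python sketch using a pooled two-proportion z-test; the counts below are hypothetical, not the study's actual data:

```python
import math

def callback_gap(calls_a, n_a, calls_b, n_b):
    """Compare callback rates between two matched resume groups,
    using a pooled two-proportion z-test for the difference."""
    p_a, p_b = calls_a / n_a, calls_b / n_b
    pooled = (calls_a + calls_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation
    return p_a, p_b, p_a / p_b, p_value

# Hypothetical counts, not the study's data:
rate_a, rate_b, ratio, p_value = callback_gap(158, 1600, 105, 1600)
print(f"rates {rate_a:.3f} vs {rate_b:.3f}, ratio {ratio:.2f}, p = {p_value:.4f}")
```

The same comparison, applied to an AI screening tool's pass-through rates rather than human callbacks, is the core of an algorithmic audit.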
2. Quillian, L., Pager, D., Hexel, O., & Midtbøen, A. H. (2017). "Meta-analysis of field experiments shows no change in racial discrimination in hiring over time." Proceedings of the National Academy of Sciences, 114(41), 10870–10875.
A meta-analysis of 28 field experiments on racial discrimination in hiring conducted between 1989 and 2015, finding no decline in discrimination against Black candidates over that period despite substantial social change, and at most a modest decline against Hispanic candidates. Provides the larger context for understanding why AI systems trained on historical data inherit a persistent pattern rather than a historical artifact. Important for executives who might assume discrimination is primarily a problem of the past.
3. Kline, P., Rose, E., & Walters, C. (2022). "Systemic Discrimination Among Large U.S. Employers." The Quarterly Journal of Economics, 137(4), 1963–2036.
A large-scale audit study of Fortune 500 companies, using callback rates from fictitious applications to measure racial discrimination at the employer level. Finds significant variation across companies — some show substantial discrimination, others do not — and documents that discrimination is systemic rather than confined to individual bad actors. Valuable for executives who want to understand the evidence base for employer-level accountability.
Legal Framework
4. Equal Employment Opportunity Commission. (2023). "Artificial Intelligence and Algorithmic Fairness: What You Should Know." EEOC Technical Assistance Document.
The EEOC's authoritative guidance document on AI and employment discrimination, released May 2023. Affirms that existing anti-discrimination law applies fully to AI hiring tools, that employers bear liability for discriminatory vendor tools, and that AI assessment tools must comply with ADA accommodation requirements. Available free at eeoc.gov. Required reading for any HR professional or legal counsel involved in AI hiring tool decisions.
5. Equal Employment Opportunity Commission. (1978). "Uniform Guidelines on Employee Selection Procedures." 29 C.F.R. Part 1607.
The regulatory document establishing the four-fifths rule and validity requirements for employee selection procedures. Though pre-AI, these guidelines remain the operative legal standard for evaluating AI hiring tool validity and adverse impact. Available at eCFR.gov. Important for understanding the legal standard that AI tool vendors must meet and that employers must apply.
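The four-fifths rule these guidelines establish is a simple selection-rate comparison and can be sketched in a few lines of Python. Group names and counts below are hypothetical:

```python
def adverse_impact_ratios(selected, applied):
    """Selection-rate ratios under the four-fifths rule of the
    Uniform Guidelines (29 C.F.R. 1607.4(D)): each group's selection
    rate divided by the highest group's selection rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes, not data from any cited source:
ratios = adverse_impact_ratios(
    selected={"group_a": 120, "group_b": 72},
    applied={"group_a": 400, "group_b": 400},
)
flagged = [g for g, r in ratios.items() if r < 0.8]  # below four-fifths
```

A ratio below 0.8 is treated as evidence of adverse impact under the guidelines; it is a screening threshold, not a safe harbor, and courts also consider statistical significance.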
6. New York City Local Law 144 of 2021 (Int 1894-A). Automated Employment Decision Tools.
The text of New York City's landmark AI hiring regulation, in force since January 1, 2023, with enforcement beginning July 5, 2023. Requires annual independent bias audits, public disclosure of audit results, and candidate notification for automated employment decision tools used in NYC hiring and promotion. Available at legistar.council.nyc.gov. Essential for any organization hiring in New York City; also the best available US regulatory template for practitioners in other jurisdictions building voluntary compliance programs.
7. Office of Personnel Management & Department of Labor. (2023). "Reducing the Risks of Artificial Intelligence in Federal Hiring." Guidance Memorandum.
Federal guidance on reducing AI-related discrimination risks in federal hiring, applicable to federal agencies and federal contractors. Covers validation requirements, adverse impact testing, and human oversight obligations. Provides insight into how the executive branch is interpreting existing law in the AI hiring context. Available at OPM.gov.
AI Hiring Technology: Validity and Research
8. Hickman, L., Saef, R., & Ryan, A. M. (2021). "Do Hiring Algorithm Scores Predict Job Performance? The Case of Game-Based Assessment." Journal of Applied Psychology, 106(5), 664–679.
A peer-reviewed study examining whether game-based hiring assessments predict job performance — one of the few independent validation studies not funded by a vendor. Finds mixed results: some game-based measures show criterion validity, but adverse impact reduction claims are not uniformly supported. Valuable as an example of the type of independent validation evidence that practitioners should require from vendors.
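The criterion validity this kind of study examines is, at its core, a correlation between assessment scores and a later measure of job performance. A minimal sketch of the validity coefficient, with toy data (the variable values are illustrative only):

```python
import math

def criterion_validity(scores, performance):
    """Pearson correlation between assessment scores and a later
    job-performance criterion (the validity coefficient)."""
    n = len(scores)
    mx, my = sum(scores) / n, sum(performance) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(scores, performance))
    sx = math.sqrt(sum((x - mx) ** 2 for x in scores))
    sy = math.sqrt(sum((y - my) ** 2 for y in performance))
    return cov / (sx * sy)

# Toy data, for illustration only:
r = criterion_validity([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
```

Independent validation asks whether this coefficient, computed on data the vendor did not choose, holds up; a vendor claim with no reported coefficient or sample is not validity evidence.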
9. Narayanan, A. (2022). "The limits of the quantitative approach to discrimination." Keynote address at the FAccT Conference.
A careful critique of how quantitative fairness metrics can mask rather than reveal discrimination — arguing that the focus on metrics obscures qualitative harms that numbers cannot capture. Particularly relevant to the audit-as-compliance-theater concern raised in the chapter. Available at cs.princeton.edu/~arvindn/.
10. Köchling, A., & Wehner, M. C. (2020). "Discriminated by an Algorithm: A Systematic Review of Discrimination and Fairness by Algorithmic Decision-Making in the Context of HR Recruitment and HR Development." Business Research, 13(3), 795–848.
A systematic review of the academic literature on discrimination and fairness in algorithmic HR decision-making. Covers résumé screening, video interviews, personality assessment, and other AI applications. Provides a comprehensive overview of what the research shows across tool types, with particular attention to bias mechanisms and mitigation approaches. Good bridge between academic literature and practitioner applications.
Disability, Accommodation, and AI Assessment
11. Wheatley, D., & Saddiqui, S. (2020). "Technology-mediated recruitment and selection: Disability and the digital divide." Work, Employment and Society, 34(1), 68–85.
Research on how technology-mediated recruitment disadvantages candidates with disabilities, including analysis of specific tools and specific disability categories. Provides empirical grounding for the ADA accommodation analysis in the chapter. Valuable for HR professionals designing accommodation programs.
12. Harpur, P., & Loudoun, R. (2023). "The ADA and AI Hiring: Disability Discrimination in the Age of Algorithms." Disability Studies Quarterly, 43(1).
A legal and disability studies analysis of how existing ADA frameworks apply to AI hiring tools — including video interviews, cognitive testing, and personality assessment — and where current law leaves gaps in candidate protection. Identifies specific tool types and specific ADA provisions. Valuable for legal and compliance teams.
Facial Recognition Bias: The Technical Foundation
13. Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st ACM Conference on Fairness, Accountability and Transparency, 77–91.
The foundational Gender Shades research demonstrating accuracy disparities of up to 34 percentage points in commercial facial analysis systems across demographic groups, with the largest gaps for darker-skinned women. The methodological foundation for understanding why AI video interview tools that rely on facial analysis will not perform equally across demographic groups. Available at gendershades.org.
14. National Institute of Standards and Technology (NIST). (2019, updated 2022). "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects." NISTIR 8280.
NIST's comprehensive evaluation of 189 commercial facial recognition algorithms across demographic groups, finding consistent and substantial accuracy disparities affecting darker-skinned individuals, women, and older individuals. The authoritative technical source for accuracy claims about commercial facial analysis. Available free at nvlpubs.nist.gov.
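Both studies rest on disaggregated evaluation: compute error rates per demographic subgroup and report the worst-case gap rather than a single overall accuracy number. A minimal sketch; the subgroup labels and counts below are illustrative, not the studies' data:

```python
def disaggregated_error_rates(records):
    """Per-subgroup error rates plus the worst-case gap between
    subgroups -- the disaggregated-evaluation approach used by
    Gender Shades and the NIST FRVT demographic-effects report."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (not correct)
    rates = {g: errors[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Illustrative labels and counts, not the studies' data:
records = ([("lighter_male", True)] * 99 + [("lighter_male", False)] * 1
           + [("darker_female", True)] * 66 + [("darker_female", False)] * 34)
rates, gap = disaggregated_error_rates(records)
```

An overall accuracy of roughly 83% on the data above would hide a 33-percentage-point gap between the two subgroups, which is why both reports insist on subgroup-level reporting.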
Workplace Surveillance and Flight Risk AI
15. Ajunwa, I., Crawford, K., & Schultz, J. (2017). "Limitless Worker Surveillance." California Law Review, 105(3), 735–776.
A comprehensive legal analysis of workplace surveillance technologies — including productivity monitoring, biometric tracking, and behavioral analytics — and the limits that existing law places on employer surveillance. Directly relevant to the flight risk prediction and performance monitoring sections of this chapter. Establishes the legal framework within which workplace AI monitoring must operate.
Assessment Science and Standards
16. Society for Industrial and Organizational Psychology (SIOP). (2018). Principles for the Validation and Use of Personnel Selection Procedures (5th ed.). SIOP.
The authoritative professional standards document for personnel selection procedures, covering validation methodology, adverse impact analysis, and evidence standards for new and emerging assessment technologies. Essential reference for HR professionals evaluating vendor validation claims. Sets the professional benchmark that Chapter 10's validity discussion references. Available at siop.org.
17. Chamorro-Premuzic, T., & Furnham, A. (2010). The Psychology of Personnel Selection. Cambridge University Press.
A comprehensive academic treatment of the science underlying personnel selection — criterion validity, assessment design, adverse impact, and the relationships between psychological constructs and job performance. Provides the psychometric foundation for evaluating vendor claims about assessment tools. More technically oriented than most chapter readings; recommended for HR professionals with quantitative backgrounds or those managing I-O psychology teams.
International and Comparative Perspectives
18. European Parliament. (2024). "Regulation (EU) 2024/1689 of the European Parliament and of the Council: Artificial Intelligence Act." Official Journal of the European Union.
The text of the EU AI Act, available at eur-lex.europa.eu. Annex III, point 4 classifies AI systems used for recruitment, CV filtering, evaluating candidates in the course of job interviews, and monitoring and evaluating performance and behavior as high-risk applications. Annex IV sets out the technical documentation requirements for high-risk systems. Essential for organizations operating in European markets or hiring EU candidates.
19. AlgorithmWatch. (2023). "AI Hiring Tools in Europe: A Mapping Study."
An empirical study by the European algorithmic accountability nonprofit AlgorithmWatch documenting the AI hiring tools in use across European companies, their compliance status under emerging EU regulation, and case studies of adverse impact incidents. Provides a useful comparative perspective on the EU landscape versus the US landscape. Available at algorithmwatch.org.
20. Center for Democracy and Technology (CDT). (2022). "Hidden in Plain Sight: How the Use of Artificial Intelligence in Hiring Perpetuates Discrimination." CDT Report.
A civil society analysis of AI hiring tool practices, vendor claims, regulatory gaps, and policy recommendations. Draws on vendor documentation, legal analysis, and interviews with affected workers. Provides a useful counterpoint to vendor perspectives and a civil liberties framing that complements the legal compliance framing of this chapter. Available at cdt.org.
For additional resources on measuring fairness metrics as applied to hiring data, see the Python code resources in Chapter 9 (Measuring Fairness) and Appendix: Python Reference for AI Fairness Auditing. For international regulatory context, see Chapter 33 (Regulation and Compliance: GDPR, EU AI Act, and Beyond). For accountability frameworks applicable to AI hiring harms, see Chapter 18 (Who Is Responsible When AI Fails?) and Chapter 19 (Auditing AI Systems).