Further Reading: Chapter 29 — HR Analytics and Predictive Hiring
Foundational Research
Rivera, Lauren A. Pedigree: How Elite Students Get Elite Jobs. Princeton University Press, 2015. Rivera's sociological study of hiring at elite investment banks, law firms, and consulting firms provides the empirical foundation for the chapter's analysis of "culture fit." Based on extensive interviews with hiring professionals and observations of hiring processes, Rivera demonstrates that "culture fit" assessments consistently measure class and leisure-activity similarity rather than values or performance-relevant qualities. Essential reading for understanding how algorithmic culture fit scoring amplifies existing social stratification.
Bertrand, Marianne, and Sendhil Mullainathan. "Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination." American Economic Review 94, no. 4 (2004): 991–1013. The foundational field experiment on name-based discrimination in hiring: the authors sent otherwise equivalent resumes bearing names that signal different racial identities and found significantly lower callback rates for names associated with Black applicants. While the study predates AI resume screening, the mechanism it documents, name-based discrimination, is precisely what the Python simulation in this chapter demonstrates in algorithmic form.
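For readers who want to see the mechanism before turning to the chapter's code, the following is a minimal illustrative sketch, not the chapter's actual simulation: a toy screener learns scores from historically biased callback data, and identical resumes then receive different scores depending only on the name. The names, skills, and callback rates here are assumptions chosen for illustration.

```python
# Illustrative sketch (not the chapter's simulation): a toy resume screener
# trained on historically biased callback data reproduces name-based disparity.
import random
from collections import defaultdict

random.seed(0)

WHITE_NAMES = ["Emily", "Greg", "Anne", "Brad"]
BLACK_NAMES = ["Lakisha", "Jamal", "Aisha", "Darnell"]
SKILLS = ["python", "sql", "excel", "management", "sales"]

def make_resume(name):
    # A resume is just a bag of tokens: the applicant's name plus three skills.
    return [name.lower()] + random.sample(SKILLS, 3)

# Simulated history: skills are distributed identically across groups, but past
# reviewers called back Black-associated names at a lower rate, as Bertrand and
# Mullainathan documented.
history = []
for _ in range(5000):
    if random.random() < 0.5:
        name, callback_rate = random.choice(WHITE_NAMES), 0.10
    else:
        name, callback_rate = random.choice(BLACK_NAMES), 0.065
    history.append((make_resume(name), random.random() < callback_rate))

# "Training": score each token by its observed callback rate in the history.
token_calls, token_seen = defaultdict(int), defaultdict(int)
for tokens, called in history:
    for t in tokens:
        token_seen[t] += 1
        token_calls[t] += called

def score(tokens):
    return sum(token_calls[t] / token_seen[t] for t in tokens if token_seen[t])

# Evaluation: identical skills, only the name differs.
fixed_skills = ["python", "sql", "management"]
white_mean = sum(score([n.lower()] + fixed_skills) for n in WHITE_NAMES) / len(WHITE_NAMES)
black_mean = sum(score([n.lower()] + fixed_skills) for n in BLACK_NAMES) / len(BLACK_NAMES)
print(f"mean screener score, White-associated names: {white_mean:.3f}")
print(f"mean screener score, Black-associated names: {black_mean:.3f}")
```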
Research Specific to AI Hiring
Raghavan, Manish, Solon Barocas, Jon Kleinberg, and Karen Levy. "Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices." ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2020. The most rigorous academic evaluation of bias mitigation claims made by algorithmic hiring vendors, including Pymetrics and other platforms. The paper systematically evaluates the methodological quality of vendor bias auditing and finds significant limitations. Essential for evaluating vendor claims of bias reduction.
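One concrete point of reference when reading vendor audit claims is the adverse impact ratio, the EEOC's "four-fifths rule" heuristic that vendors most commonly report. The sketch below shows the calculation with invented numbers; as the paper's findings suggest, passing the threshold is not evidence that a system is free of bias.

```python
# The "four-fifths rule": compare each group's selection rate to the highest
# group's rate; a ratio below 0.8 is conventionally treated as evidence of
# adverse impact. All figures below are invented for illustration.
def selection_rate(selected, applicants):
    return selected / applicants

rate_men = selection_rate(selected=120, applicants=400)    # 0.300
rate_women = selection_rate(selected=90, applicants=400)   # 0.225

ratio = rate_women / rate_men                              # 0.75
print(f"adverse impact ratio: {ratio:.2f}")
print("flags adverse impact" if ratio < 0.8 else "passes the four-fifths rule")
```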
Barrett, Lisa Feldman. How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt, 2017. Barrett's theory of constructed emotion provides the scientific framework for the chapter's critique of HireVue's facial expression analysis. The book argues that emotions are not hardwired readouts of facial muscle movements but context-dependent constructions — undermining the scientific basis for AI emotion recognition from faces. Accessible to general readers.
Wang, Angela, et al. "Measuring Algorithmic Fairness in Predictive Hiring Assessments." Proceedings of the AAAI Symposium on Artificial Intelligence and Human-Machine Systems, 2022. An academic examination of how fairness is defined and measured in algorithmic hiring assessments, demonstrating that different definitions of fairness (demographic parity, equal opportunity, individual fairness) are often mutually incompatible — meaning that optimizing for one form of fairness can violate others.
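A toy calculation makes the incompatibility concrete. The sketch below assumes two applicant pools with different base rates of the "qualified" label; a screener with identical error rates for both groups then satisfies equal opportunity while violating demographic parity, and equalizing the selection rates would require group-specific thresholds that break the equal true-positive rates. All numbers are assumptions for illustration.

```python
# Illustrative arithmetic: equal opportunity vs. demographic parity.
def group_outcomes(n, base_rate, tpr, fpr):
    """Selection rate and true-positive rate for one applicant pool."""
    qualified = n * base_rate
    unqualified = n - qualified
    selected = qualified * tpr + unqualified * fpr
    return selected / n, tpr

# The same screener (TPR 0.8, FPR 0.1) applied to pools with different base
# rates of the "qualified" label; the label itself may encode historical bias.
sel_a, tpr_a = group_outcomes(n=1000, base_rate=0.6, tpr=0.8, fpr=0.1)
sel_b, tpr_b = group_outcomes(n=1000, base_rate=0.3, tpr=0.8, fpr=0.1)

print(f"Group A: selection rate {sel_a:.2f}, TPR {tpr_a:.2f}")  # 0.52, 0.80
print(f"Group B: selection rate {sel_b:.2f}, TPR {tpr_b:.2f}")  # 0.31, 0.80
# Equal opportunity holds (equal TPRs) while demographic parity fails
# (selection rates 0.52 vs. 0.31).
```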
Journalism and Investigative Reporting
Harwell, Drew. "A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job." Washington Post, November 6, 2019. The most comprehensive early journalism on HireVue and AI video interviewing, including worker and applicant perspectives, expert critique, and company responses. Published shortly before the Illinois AIVIA took effect, it helped drive subsequent regulatory attention to AI video interviewing.
Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, October 10, 2018. The primary source for the Amazon resume algorithm bias story. Documents how Amazon discovered and ultimately abandoned its ML-based resume screening tool. Brief but essential.
Policy and Legal Resources
Equal Employment Opportunity Commission. "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees." EEOC Technical Assistance Document, 2022. Available at eeoc.gov. The EEOC's guidance applying ADA requirements to AI hiring tools. Confirms that AI assessment tools can violate the ADA when they screen out applicants because of disability or fail to provide reasonable accommodations, and provides practical guidance for employers and applicants.
Illinois Department of Labor. "Artificial Intelligence Video Interview Act — Overview and Compliance Guidance." Available at labor.illinois.gov. Official state guidance on AIVIA compliance requirements. Useful as a model for what state-level disclosure requirements look like in practice.
Electronic Privacy Information Center (EPIC). "Screened and Scored: A Report on Automated Hiring Tools." 2022. Available at epic.org. EPIC's comprehensive report on automated hiring tools, documenting the range of products in the market, their capabilities, and the regulatory gaps that allow discriminatory systems to operate without accountability. Accessible to non-specialist readers.
Books for Deeper Engagement
O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. O'Neil's accessible critique of algorithmic systems includes analysis of hiring algorithms as a case study in what she calls "weapons of math destruction" — systems that are opaque, scale rapidly, and systematically harm the people they most affect. The book was published before the AI hiring wave fully accelerated, but its analysis remains directly applicable.
Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity, 2019. Benjamin's analysis of how technology encodes racism — her concept of the "New Jim Code" — provides the critical race theory framework for understanding how resume screening algorithms and AI hiring tools perpetuate racial inequality under the guise of neutral optimization. Essential for the political economy of algorithmic hiring.