Chapter 29 Quiz: HR Analytics and Predictive Hiring

Answer all questions. Multiple choice questions have one best answer. Short answer questions should be 2–4 sentences.


Part I: Multiple Choice

1. "People analytics" refers to:

a) Surveys that organizations give to employees about their managers
b) The systematic use of data and quantitative analysis to inform human resource decisions, including hiring, development, and separation
c) Social media research conducted by individual employees
d) Performance review conversations between managers and employees


2. The primary mechanism by which resume screening algorithms reproduce historical discrimination is:

a) The algorithms are programmed to discriminate against specific groups
b) Algorithms trained on historical hiring data learn patterns from hiring decisions that reflected documented discrimination, and perpetuate those patterns at scale
c) Employers deliberately exclude certain schools from screening
d) The algorithms have errors in their programming


3. Amazon's abandoned machine learning resume screening system was found to:

a) Be too slow to process large application volumes
b) Systematically downgrade applications from women, because it was trained on historical hiring data reflecting a male-dominated tech industry
c) Be too expensive to maintain
d) Produce results identical to human reviewer decisions


4. The "disparate impact" doctrine in employment discrimination law establishes that:

a) Only intentionally discriminatory practices are unlawful
b) Facially neutral employment practices can constitute illegal discrimination if they produce statistically significant disparate effects on protected groups without justification by business necessity
c) Statistical evidence alone is never sufficient to prove discrimination
d) Disparate impact only applies to public sector employers


5. The key scientific problem with HireVue's facial expression analysis is:

a) The technology is too expensive to be widely used
b) Facial expressions do not have universal emotional meanings — they vary by culture, neurological profile, and context — making AI classification of emotions from faces scientifically contested and prone to systematic error
c) Facial analysis technology is not yet capable of analyzing video
d) Hiring managers prefer in-person interviews


6. The OCEAN model describes:

a) An environmental monitoring framework used in HR analytics
b) The "Big Five" personality dimensions — Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism — the most scientifically supported personality framework in organizational psychology
c) A screening algorithm used by online job platforms
d) A compliance framework for HR data privacy


7. "Culture fit" algorithms are criticized primarily because:

a) They are too complex to administer
b) Research shows that "culture fit" assessments often measure demographic and class similarity rather than values alignment that actually predicts job performance
c) They do not consider candidates' technical qualifications
d) Employees resent being evaluated on cultural dimensions


8. "Flight risk" prediction models raise privacy concerns because:

a) They are inaccurate and fire people who weren't planning to leave
b) They extend employer surveillance into employees' legitimate career management activities — like updating a LinkedIn profile — treating normal professional behavior as a signal of disloyalty
c) They are only used for executives
d) They cannot be calculated without salary data


9. GDPR Article 22 is relevant to hiring because:

a) It requires all European employers to use AI in hiring
b) It gives individuals the right not to be subject to solely automated decisions producing significant effects, including hiring decisions — requiring human review to be available for automated hiring assessments
c) It prohibits background checks in EU countries
d) It requires employers to share all candidate data with applicants


10. New York City's Local Law 144 (in effect since 2023) requires:

a) All employers to stop using AI in hiring
b) Employers using AI tools in hiring for NYC positions to conduct and publish bias audits, and to notify applicants that AI tools are being used
c) All AI hiring decisions to be reviewed by the NYC Department of Labor
d) Employers to hire candidates rejected by AI tools


11. Social media screening in hiring is particularly problematic because:

a) Social media profiles are often inaccurate
b) Reviewing social media can expose protected characteristics (race, religion, disability status, sexual orientation) that employers are legally prohibited from considering in hiring decisions, with no way to prevent this knowledge from influencing the decision
c) Employers don't know how to use social media
d) Social media screening is prohibited in all states


12. The Python simulation in this chapter demonstrates bias by showing:

a) That algorithms always choose the wrong candidate
b) That Aaliyah Washington — the most qualified applicant by actual qualification score and highest GPA — is sorted below the screening threshold because of her HBCU education and NSBE membership, while a less qualified elite-school applicant advances
c) That Jordan Ellis performs worse than other applicants
d) That Python algorithms are inherently biased


13. Pre-employment credit checks are criticized primarily because:

a) They take too long to process
b) Poor credit is often the result of adverse life events (medical debt, divorce, job loss) rather than dishonesty, and has racial disparate impact reflecting structural inequalities — creating a poverty trap that harms people already disadvantaged by unemployment
c) Credit reports are frequently inaccurate
d) Employers don't know how to interpret credit reports


14. The "ban the box" movement advocates for:

a) Prohibiting employers from using computers in hiring
b) Requiring employers to remove criminal conviction questions from initial job applications, allowing applicants to be assessed on qualifications before criminal history is considered
c) Banning background check companies
d) Requiring background checks for all applicants


15. Jordan's experience with the HireVue interview represents which recurring theme most directly?

a) Historical continuity — AI interviewing has existed for decades
b) Visibility asymmetry — Jordan's face, voice, and words were analyzed in detail; Jordan received no information about the assessment or why they were rejected
c) Consent as fiction — Jordan was not told the interview was happening
d) Normalization of monitoring — everyone knows about video interviews


16. When companies claim that AI hiring tools are "more objective" than human reviewers, what is the most accurate critical response?

a) Human reviewers are actually more objective
b) AI tools encode the values, assumptions, and biases of their designers and training data; objectivity requires transparent methodology and consistent application, not automation per se
c) Objectivity is impossible in any hiring process
d) AI tools are objective when properly trained


Part II: Short Answer

17. Explain how the Python simulation's treatment of extracurricular activities demonstrates "disparate impact without discriminatory intent." Specifically, how can penalizing NSBE membership (which no one at the company decided to discriminate against) produce racial discrimination?

Your answer:


18. Jordan Ellis could potentially seek a remedy for the HireVue rejection under the ADA (if vocal pattern analysis disadvantages anxiety disorders), Title VII (if facial analysis produces racial disparate impact), or GDPR Article 22 (if Jordan were in the EU). What is the practical obstacle to pursuing each of these remedies, and what does the existence of these obstacles reveal about the relationship between legal rights and actual accountability?

Your answer:


Answer Key (Instructor Version)

  1. b
  2. b
  3. b
  4. b
  5. b
  6. b
  7. b
  8. b
  9. b
  10. b
  11. b
  12. b
  13. b
  14. b
  15. b
  16. b

17. Disparate impact without intent: The algorithm penalizes NSBE membership not because any programmer decided "reject Black candidates" but because the algorithm was trained on historical hiring data in which NSBE-affiliated candidates were underrepresented among hires. The algorithm learned that NSBE membership correlates (historically) with not being hired — a correlation that reflects past discrimination, not actual underqualification. The algorithm faithfully reproduces this historical pattern, producing racially disparate outcomes without anyone intending discrimination. This is the structure of disparate impact: the neutral-seeming rule (penalize activities correlated with non-hire) produces discriminatory outcomes because the correlation itself reflects historical injustice.
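A minimal sketch of this mechanism may help instructors illustrate the point. The data, names, and scoring rule below are hypothetical (this is not the chapter's actual simulation): a naive screener derives a feature weight from historical hire rates, and that weight alone, with no discriminatory rule ever written, penalizes a stronger candidate.

```python
# Disparate impact without discriminatory intent: a naive screener
# "learns" a negative weight for NSBE membership purely from biased
# historical hire rates. All data below is synthetic and illustrative.

historical = [
    # (gpa, nsbe_member, hired) -- past human decisions, biased
    (3.9, True,  False), (3.8, True,  False), (3.7, True,  True),
    (3.9, False, True),  (3.5, False, True),  (3.2, False, True),
    (3.1, False, False), (3.6, False, True),  (3.8, True,  False),
]

def hire_rate(rows):
    """Fraction of candidates in rows who were hired."""
    return sum(hired for *_, hired in rows) / len(rows)

# The learned membership weight is just the hire-rate gap: it encodes
# past discrimination, not actual qualification.
members     = [r for r in historical if r[1]]
non_members = [r for r in historical if not r[1]]
nsbe_weight = hire_rate(members) - hire_rate(non_members)  # negative

def screen_score(gpa, nsbe_member):
    """Naive screener: normalized GPA plus the learned weight."""
    return gpa / 4.0 + (nsbe_weight if nsbe_member else 0.0)

# The more qualified NSBE member ranks below a weaker non-member.
print(screen_score(3.9, True))    # ~0.43: strong candidate, penalized
print(screen_score(3.3, False))   # ~0.83: weaker candidate, advances
```

The structural point: the negative weight is derived entirely from historical outcomes, so the discriminatory result survives even if every person who built the screener acted in good faith.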

18. Practical obstacles: (1) ADA: Jordan would need to prove that the vocal pattern analysis specifically disadvantaged them because of an anxiety disorder, and that HireVue's system produced disparate impact on people with that disability. This requires access to HireVue's algorithmic data and methodology, which are proprietary. (2) Title VII: Jordan would need statistical evidence of racial disparate impact across HireVue's applicant pool, data Jordan cannot access and that HireVue is not required to disclose. (3) GDPR Article 22: Jordan is not in the EU, so this protection does not apply. The pattern of obstacles reveals a structural gap: legal rights exist in theory, but exercising them requires information (about the algorithm's methodology and its aggregate effects) that is systematically withheld. Rights without information become unenforceable.