Chapter 29 Exercises: HR Analytics and Predictive Hiring


Exercise 29.1 — Personal Audit: Mapping Your Pre-Employment Surveillance Footprint

Type: Individual reflective research Time: 45–60 minutes

Instructions

Every job or internship application generates a data footprint that feeds into screening systems. This exercise helps you map your own surveillance exposure.

Part A: Audit your existing footprint (30 minutes)

  1. Search your name. Google yourself and examine the first page of results. What information about you is publicly available? What images appear? Are there any results you would not want an employer to see?

  2. Review your social media. For each platform you use (Instagram, TikTok, Twitter/X, LinkedIn, Facebook), examine your public-facing content. What protected characteristics are visible (religion, political views, health conditions, relationships)? What could be inferred? Are your privacy settings configured to limit public visibility?

  3. Check your credit. Request your free annual credit report from annualcreditreport.com. Are there any errors? Do you have a credit history that could be checked by an employer in a state that permits credit checks?

  4. Identify your criminal record exposure. Are there any arrest records, charges, or convictions in your history that would appear in a background check? In your state, how long do records remain visible?

Part B: Assessment (200–300 words)

Based on your audit, what aspects of your pre-employment surveillance footprint do you consider most vulnerable? What would you change if you were applying for your ideal job tomorrow? What cannot be changed regardless of your effort?

Part C: Reflection (150–200 words)

The chapter argues that pre-employment surveillance disadvantages first-generation students and people from marginalized communities in specific, structural ways. Does your audit reveal any ways in which your own background might interact with algorithmic screening systems? Are there aspects of your history or identity that would be penalized by the systems described in this chapter?


Exercise 29.2 — Python Lab: Debiasing the Resume Scorer

Type: Technical/analytical coding exercise Time: 60–90 minutes Prerequisites: Basic Python; Chapter 29 Python code

Instructions

The chapter demonstrates a biased resume scoring algorithm. This exercise asks you to attempt to "debias" it and analyze what trade-offs emerge.

Setup: Copy the chapter's Python code into a new file.

Debiasing Task 1: Remove the institution multiplier

The most explicit bias in the current scorer is the institution multiplier that advantages Ivy League schools over HBCUs and regional schools. Remove this multiplier so that all institutions are treated equally.

After removing it:

  1. Re-run the simulation. Who advances now?
  2. Does removing the institution multiplier fully debias the algorithm? What biases remain?
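The chapter's code isn't reproduced here, but the change might look like the following sketch, where `INSTITUTION_MULTIPLIER`, `score_education`, and all the tier names and values are hypothetical stand-ins for however the chapter names these pieces:

```python
# Hypothetical stand-in for the chapter's institution table; the real
# names, tiers, and multiplier values may differ.
INSTITUTION_MULTIPLIER = {
    "ivy_league": 1.4,
    "flagship_state": 1.1,
    "regional": 0.9,
    "hbcu": 0.85,
}

def score_education(base_score: float, institution_tier: str) -> float:
    # Before the change: base_score * INSTITUTION_MULTIPLIER[institution_tier]
    # After the change: every institution is weighted identically.
    return base_score * 1.0

print(score_education(80.0, "hbcu"))        # 80.0
print(score_education(80.0, "ivy_league"))  # 80.0
```

With the multiplier gone, identical credentials earn identical education scores regardless of institution tier. Whether equal treatment of this one feature produces equal outcomes overall is exactly what question 2 asks you to examine.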

Debiasing Task 2: Neutralize the activity penalties

Remove the PENALIZED_ACTIVITIES list so that NSBE membership, diversity fellowships, and community service are neither penalized nor specially rewarded. Keep the BONUS_ACTIVITIES rewards.

After this change:

  1. Re-run the simulation. Who advances now?
  2. Is this enough? What about applicants whose activities — though no longer penalized — still lack the bonus signals associated with privileged backgrounds?
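One way to implement the change is sketched below. Only the list names (`BONUS_ACTIVITIES`, `PENALIZED_ACTIVITIES`) come from the chapter; the entries, point value, and function name are illustrative:

```python
# Entries and the point value are illustrative; only the list names
# come from the chapter's code.
BONUS_ACTIVITIES = {"varsity_athletics", "debate_team", "unpaid_internship"}

def score_activities(activities: list[str]) -> float:
    score = 0.0
    for activity in activities:
        if activity in BONUS_ACTIVITIES:
            score += 5.0
        # The PENALIZED_ACTIVITIES branch has been deleted, so NSBE
        # membership, diversity fellowships, and community service now
        # contribute zero instead of subtracting points.
    return score

print(score_activities(["community_service", "nsbe"]))  # 0.0
print(score_activities(["unpaid_internship"]))          # 5.0
```

Note the asymmetry this leaves behind: an applicant listing community service scores 0, while one listing an unpaid internship scores 5. That gap is what question 2 points at.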

Debiasing Task 3: Redesign the scoring system

Now try to design a scoring system that would advance the most qualified applicants (highest actual_qualification_score) regardless of their background. You have access to the actual_qualification_score field in the Applicant class — but remember: in a real system, you can't see this field. It represents ground truth that the algorithm can't directly observe.
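One way to evaluate a redesigned scorer, sketched with a simplified stand-in for the chapter's Applicant class, is to measure how much of the algorithm's top k overlaps with the ground-truth top k. The `algorithm_score` field and all data values here are hypothetical; remember that `actual_qualification_score` may be used only for evaluation, never as a scoring input:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    # Simplified stand-in for the chapter's class; only the fields this
    # check needs are included, and algorithm_score is hypothetical.
    name: str
    algorithm_score: float
    actual_qualification_score: float  # ground truth: evaluation only

def top_k_overlap(pool: list[Applicant], k: int) -> float:
    """Fraction of the algorithm's top k who are also in the true top k."""
    by_algo = sorted(pool, key=lambda a: a.algorithm_score, reverse=True)[:k]
    by_truth = sorted(pool, key=lambda a: a.actual_qualification_score,
                      reverse=True)[:k]
    truth_names = {a.name for a in by_truth}
    return sum(a.name in truth_names for a in by_algo) / k

pool = [
    Applicant("A", 90, 50),
    Applicant("B", 60, 95),
    Applicant("C", 70, 80),
    Applicant("D", 40, 20),
]
print(top_k_overlap(pool, 2))  # 0.5: only one of the algorithm's top 2 is truly top 2
```

An overlap of 1.0 would mean the scorer advances exactly the most qualified applicants; anything lower quantifies how far your redesign still falls short of ground truth.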

Written Analysis (400–500 words):

  1. Were you able to fully debias the algorithm? What prevented complete debiasing?
  2. What does this exercise reveal about the relationship between the available data and the underlying quality you want to predict?
  3. In what ways is the debiasing problem a technical problem? In what ways is it a social/historical problem that technical solutions cannot fix?
  4. What would it actually take — technically, organizationally, and legally — to make algorithmic resume screening non-discriminatory?


Exercise 29.3 — The Video Interview Experiment

Type: Empirical observation exercise Time: 45–60 minutes

Instructions

Jordan's video interview was evaluated by AI analyzing facial expressions, vocal patterns, and word choice. This exercise asks you to examine what variables these systems might pick up that are unrelated to job qualifications.

Part A: Record yourself (15 minutes)

Set up a video recording of yourself (phone or laptop) and answer these two questions in the style of a job interview:

  1. "Tell me about a time you solved a complex problem."
  2. "Where do you see yourself professionally in five years?"

Record two versions:

  - Version 1: Natural, authentic self
  - Version 2: Deliberately "professional" — more formal posture, slower speech, more standard vocabulary

Part B: Analysis (20–30 minutes)

Watch both recordings and note:

  1. What differences in facial expression, vocal pattern, and word choice exist between versions?
  2. Which version would you expect an AI system trained on "professional" norms to score higher?
  3. Are there aspects of Version 1 that reflect genuine strengths, personality, or qualifications that would be missed or penalized by AI analysis?
  4. What would someone from a different cultural background, with a disability, or with a different gender presentation notice about what this exercise requires?

Part C: Written Reflection (250–350 words)

What does this exercise reveal about what AI video interview assessment actually measures? Is "performing professionalism" for an AI camera the same as demonstrating job qualifications? Whose definition of "professionalism" is being enforced by these systems?


Exercise 29.4 — EEOC Complaint Simulation

Type: Applied legal analysis (individual or pairs) Time: 45–55 minutes

Instructions

The EEOC enforces federal employment discrimination law. This exercise asks you to apply the disparate impact framework to an algorithmic hiring scenario.

Scenario:

Meridian Tech uses an AI hiring platform that analyzes video interviews and generates scores. A civil rights organization analyzes hiring data and finds: Black applicants to Meridian Tech are advancing past the initial video screening at a rate of 32%, while white applicants advance at a rate of 61%. The two groups have similar credential profiles (GPA, degree field, experience years).

You are preparing an EEOC complaint on behalf of Jordan Ellis, who applied, completed a video interview, and was rejected.

Draft the complaint by addressing:

  1. Parties: Who is filing? Against whom?

  2. Factual allegations: What happened to Jordan? What do the aggregate statistics show?

  3. Legal theory — disparate impact:
     - What protected characteristic is allegedly being discriminated against?
     - What facially neutral employment practice produces the disparate impact?
     - Do you need to prove discriminatory intent? Why or why not?
     - What statistical standard applies? (Research the "4/5ths rule" used by the EEOC.)

  4. Business necessity defense: What argument would Meridian Tech make that its AI screening is justified by business necessity? How would you rebut this argument?

  5. Remedy requested: What would Jordan be entitled to if the complaint succeeds?

Note: You do not need to produce a formal legal document. This is an analytical exercise that should demonstrate your understanding of how disparate impact doctrine applies to algorithmic hiring.
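The "4/5ths rule" calculation in item 3 is simple enough to verify directly against the scenario's numbers:

```python
def adverse_impact_ratio(disadvantaged_rate: float, advantaged_rate: float) -> float:
    # EEOC rule of thumb: divide the selection rate of the group with the
    # lowest rate by the rate of the group with the highest rate; a ratio
    # below 0.8 (four-fifths) is treated as evidence of adverse impact.
    return disadvantaged_rate / advantaged_rate

ratio = adverse_impact_ratio(0.32, 0.61)  # scenario: 32% vs. 61% advancement
print(f"{ratio:.2f}")  # 0.52, well below the 0.8 threshold
```

On these facts the ratio is roughly 0.52, far under the four-fifths threshold, which is the statistical core of the complaint you are drafting.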


Exercise 29.5 — The Culture Fit Interview

Type: Observation and critical analysis exercise Time: 30–40 minutes (plus optional research component)

Instructions

Lauren Rivera's research found that "culture fit" in elite hiring often means demographic and class similarity rather than values alignment. This exercise asks you to examine how "culture fit" manifests in hiring practices.

Part A: Review 3–5 job postings (20–25 minutes)

Find 3–5 actual job postings (on LinkedIn, Indeed, or company websites) that mention "culture fit," "cultural fit," or similar terms (e.g., "culture add," "values alignment," "team fit").

For each posting, identify:

  1. How is "culture" described or implied?
  2. What specific values, work styles, or personality traits are associated with the company's culture?
  3. What does the culture description suggest about who is likely to "fit"?
  4. Are there any signals in the job posting that suggest what demographic groups the culture has historically selected for?

Part B: Written Analysis (250–350 words)

Based on your review, answer: 1. Is "culture fit" a meaningless or meaningful criterion in the postings you reviewed? 2. For the most explicit "culture fit" descriptions, who is most likely to "fit" based on the criteria described? Who is least likely to fit? 3. If you were designing a job posting that valued diversity while also caring about team cohesion, how would you describe what you're looking for differently?