Case Study: Fairness in College Admissions Algorithms
"We want a class that looks like America. We also want a class that will succeed. These goals are not always compatible — and an algorithm forces you to confront that tension." — University admissions administrator (anonymous), quoted in The Atlantic, 2022
Overview
College admissions is one of the most contested sites for fairness debates in American society. For decades, the question has been addressed through affirmative action, holistic review, and litigation culminating in the Supreme Court's 2023 decision in Students for Fair Admissions v. Harvard, which effectively ended race-conscious admissions at U.S. universities. As universities search for race-neutral alternatives, algorithmic admissions tools are increasingly discussed — and deployed.
This case study examines how the fairness definitions from Chapter 15 apply to college admissions. It uses a hypothetical but realistic university scenario to demonstrate that the tensions between demographic parity, equalized odds, calibration, and individual fairness are not merely academic — they have direct consequences for who gets admitted, who gets excluded, and what "fairness" means in one of the most consequential sorting processes in American life.
Skills Applied:

- Applying multiple fairness definitions to a specific institutional context
- Evaluating how structural inequality complicates fairness analysis
- Analyzing the impossibility theorem in a non-criminal-justice setting
- Connecting algorithmic fairness to broader questions of educational equity
The Scenario
Oakridge State University
Oakridge State University (fictional) is a public research university with an enrollment of 30,000 undergraduates. It receives approximately 45,000 applications per year for 7,500 first-year slots — an acceptance rate of approximately 17%.
Facing budget pressures and a desire for more consistent admissions decisions, Oakridge's administration proposes supplementing its holistic review process with an algorithmic "Academic Success Predictor" (ASP) — a machine learning model trained on the records of previous Oakridge students to predict which applicants are most likely to "succeed."
The Algorithm: Academic Success Predictor
The ASP uses the following features:
- High school GPA (weighted for course difficulty)
- SAT/ACT scores
- Number of AP/IB courses completed
- High school ranking (if available)
- Extracurricular involvement score (number and intensity of activities)
- Zip code (used to estimate socioeconomic context)
- First-generation college student status
- Written essay score (generated by NLP analysis)
The target variable — "success" — is defined as: the student earns at least a 2.5 GPA in their first year and returns for their second year (retention). The model is trained on 10 years of Oakridge student records.
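The label construction is mechanical once the definition is fixed. A minimal sketch in Python (the field names and the four example records are invented for illustration, not Oakridge data):

```python
# Label construction for the training target -- a sketch; field names are assumptions
students = [
    {"first_year_gpa": 3.1, "returned_year_2": True},
    {"first_year_gpa": 2.2, "returned_year_2": True},
    {"first_year_gpa": 2.7, "returned_year__2" if False else "returned_year_2": False},
    {"first_year_gpa": 1.9, "returned_year_2": False},
]

def success(record):
    """'Success' as Oakridge defines it: at least a 2.5 first-year GPA and retention."""
    return record["first_year_gpa"] >= 2.5 and record["returned_year_2"]

labels = [success(s) for s in students]
print(labels)  # [True, False, False, False]
```

Note that the second student returned but fell below the GPA cutoff, and the third cleared the GPA but did not return; both count as failures under this conjunctive definition.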
The Data
Analysis reveals the following patterns in the training data:
| Metric | White Students | Black Students | Hispanic Students |
|---|---|---|---|
| Base rate ("success" as defined) | 82% | 68% | 71% |
| Average high school GPA | 3.62 | 3.31 | 3.38 |
| Average SAT score | 1280 | 1140 | 1170 |
| AP courses completed | 5.2 | 2.8 | 3.1 |
| First-generation % | 18% | 42% | 47% |
The base rate differences are substantial, and they reflect the structural inequalities that shape educational opportunity in the United States: school funding disparities, access to AP courses, test preparation resources, family wealth, and the challenges of navigating college as a first-generation student.
Applying the Fairness Definitions
Demographic Parity: Equal Admission Rates
If Oakridge applies demographic parity to the ASP, it would require that the algorithm admit the same proportion of applicants from each racial group. If 20% of white applicants are admitted, then 20% of Black and Hispanic applicants must also be admitted.
Strengths in this context:

- Produces a diverse student body that "looks like" the applicant pool
- Addresses the historical exclusion of underrepresented groups
- Aligns with the educational mission of exposure to diverse perspectives
Tensions in this context:

- If average academic preparation differs across groups (due to school quality, not ability), admitting equal proportions may mean admitting some students who are less prepared — potentially setting them up for difficulty if the university does not invest in support programs
- The base rate data shows that "success" rates differ, which means that equal admission rates will produce unequal success rates — unless the university provides differential support
- Critics will argue that demographic parity in admissions is a "quota" — a term with significant legal and political baggage since Regents of the University of California v. Bakke (1978)
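The mechanics of a parity rule are simple enough to sketch. In the minimal Python illustration below, the group names, score distributions, and 20% admission rate are all invented for illustration; each group's top 20% by ASP score is admitted, so admission rates match across groups by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical applicant pools with different ASP score distributions
# (group labels and parameters are assumptions, not Oakridge data)
applicants = {
    "group_a": rng.normal(70, 10, 5000),
    "group_b": rng.normal(62, 10, 3000),
    "group_c": rng.normal(64, 10, 2000),
}

def admit_demographic_parity(pools, rate=0.20):
    """Admit the same fraction of each group: the top `rate` of each pool by score."""
    admitted = {}
    for group, scores in pools.items():
        cutoff = np.quantile(scores, 1 - rate)   # per-group cutoff, not a global one
        admitted[group] = scores[scores >= cutoff]
    return admitted

admitted = admit_demographic_parity(applicants)
for group, scores in admitted.items():
    print(group, len(scores), round(scores.mean(), 1))
```

The design choice to note is the per-group cutoff: a single global cutoff would admit the groups at different rates whenever their score distributions differ.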
Equalized Odds: Equal Accuracy Across Groups
If Oakridge applies equalized odds, it would require that the algorithm be equally accurate across racial groups — specifically, that the true positive rate (the proportion of students who would succeed who are correctly admitted) and the false positive rate (the proportion of students who would not succeed who are incorrectly admitted) be the same across groups.
Strengths in this context:

- Ensures that a qualified Black applicant has the same chance of admission as a qualified white applicant
- Ensures that an unqualified applicant has the same chance of rejection regardless of race
- Focuses on the fairness of the process, not the distribution of outcomes
Tensions in this context:

- Requires defining "qualified" and "unqualified," which depends on the target variable ("success"). If "success" is defined as first-year GPA > 2.5, the definition itself may be biased — first-generation students and students from under-resourced high schools may need more time to adjust
- If base rates differ, achieving equalized odds will produce different selection rates — fewer Black and Hispanic students will be admitted because a smaller proportion are predicted to "succeed" (as defined)
- The "ground truth" (actual success) is contaminated by the university's own support systems — students who succeed do so partly because of institutional support, and students who fail may do so partly because of institutional failure
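Checking equalized odds after the fact takes only a few lines: given each group's admission decisions and observed "success" labels, compare true and false positive rates across groups. A minimal sketch with made-up decisions and outcomes (not Oakridge data):

```python
import numpy as np

def group_rates(admitted, succeeded):
    """TPR = share of eventual successes who were admitted;
    FPR = share of eventual non-successes who were admitted."""
    admitted = np.asarray(admitted, dtype=bool)
    succeeded = np.asarray(succeeded, dtype=bool)
    return admitted[succeeded].mean(), admitted[~succeeded].mean()

# Toy decisions (1 = admitted) and outcomes (1 = succeeded) for two groups
adm_a = [1, 1, 1, 0, 0, 1, 0, 0]
suc_a = [1, 1, 0, 1, 0, 1, 0, 0]
adm_b = [1, 0, 1, 0, 0, 1, 0, 0]
suc_b = [1, 1, 0, 1, 0, 1, 0, 0]

tpr_a, fpr_a = group_rates(adm_a, suc_a)
tpr_b, fpr_b = group_rates(adm_b, suc_b)
print(tpr_a, fpr_a)  # 0.75 0.25
print(tpr_b, fpr_b)  # 0.5 0.25
```

In this toy example the false positive rates match but the true positive rates do not, so equalized odds fails: a "qualified" applicant in group B has a lower chance of admission than one in group A.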
Calibration: Equal Predictive Accuracy
If Oakridge applies calibration, it would require that, among admitted students with the same ASP score, the actual success rate be equal across racial groups. A student with a score of 80 should have the same probability of success regardless of race.
Strengths in this context:

- Admissions officers can trust the algorithm's scores: a "strong admit" is a strong admit regardless of the applicant's race
- The system does not systematically mislead decision-makers
- Individual predictions are accurate within groups
Tensions in this context:

- A calibrated system with different base rates will admit different proportions of each group — producing a less diverse class
- Calibration treats the differential base rates as given facts rather than products of structural inequality
- If the university's goal is not merely prediction but correction — actively working to increase diversity and opportunity — calibration is insufficient
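A calibration audit amounts to binning applicants by score and comparing observed success rates per bin across groups. A minimal sketch (the score bins, toy scores, and outcomes are all assumptions for illustration):

```python
import numpy as np

def success_rate_by_bin(scores, succeeded, bins=(0, 60, 80, 101)):
    """Observed success rate within each ASP score bin (bin edges are assumptions)."""
    scores = np.asarray(scores)
    succeeded = np.asarray(succeeded, dtype=float)
    rates = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (scores >= lo) & (scores < hi)
        if mask.any():
            rates[f"{lo}-{hi - 1}"] = round(succeeded[mask].mean(), 2)
    return rates

# Toy scores and outcomes for two groups (illustrative, not Oakridge data)
scores_a = [55, 65, 72, 85, 90, 91]
suc_a    = [0,  1,  1,  1,  1,  0]
scores_b = [58, 66, 70, 84, 88, 95]
suc_b    = [0,  0,  1,  1,  1,  1]

print(success_rate_by_bin(scores_a, suc_a))
print(success_rate_by_bin(scores_b, suc_b))
```

Here the 60-79 bin succeeds at different rates for the two groups, so an "mid-range" score does not mean the same thing for both — a calibration failure that would mislead any admissions officer relying on the raw score.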
Individual Fairness: Similar Applicants, Similar Decisions
If Oakridge applies individual fairness, it would require that applicants with similar qualifications receive similar admissions decisions regardless of race.
Strengths in this context:

- Aligns with the intuition that admissions should be based on merit
- Avoids explicit group-based adjustments that may be legally prohibited
Tensions in this context: - "Similar qualifications" is a contested concept. Two students with the same GPA may have achieved that GPA under radically different conditions — one at a well-resourced suburban high school, the other at an under-resourced urban school. Are they "similar"? - Using GPA and SAT scores as the similarity metric effectively encodes structural inequality, because access to educational resources is unequally distributed - If the similarity metric is defined to include context (school quality, family income), it begins to approximate a group-based adjustment — blurring the line between individual and group fairness
The Impossibility in Admissions
The base rate differences in the Oakridge data (82% vs. 68% vs. 71%) trigger the impossibility theorem. The university cannot simultaneously:
- Admit equal proportions of each racial group (demographic parity)
- Achieve equal accuracy across groups (equalized odds)
- Ensure that the same score means the same thing for every group (calibration)
Any admissions policy that uses the ASP must prioritize one definition over others. This is not a limitation of the algorithm — it is a mathematical reality given the structural conditions in which the algorithm operates.
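One way to see the arithmetic directly is through a confusion-matrix identity popularized by Chouldechova (2017): FPR = p/(1−p) · (1−PPV)/PPV · TPR, where p is the group's base rate and PPV is the positive predictive value. Holding PPV and TPR equal across groups (a calibration-style condition and half of equalized odds; the 0.90 and 0.80 values below are arbitrary assumptions), the Oakridge base rates force unequal false positive rates:

```python
def implied_fpr(base_rate, ppv, tpr):
    """Confusion-matrix identity: FPR = p/(1-p) * (1-PPV)/PPV * TPR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

# Same PPV and same TPR for both groups -- only the base rates differ
ppv, tpr = 0.90, 0.80
fpr_white = implied_fpr(0.82, ppv, tpr)   # base rate from the Oakridge table
fpr_black = implied_fpr(0.68, ppv, tpr)
print(round(fpr_white, 3), round(fpr_black, 3))  # 0.405 0.189
```

With equal PPV and TPR, the higher-base-rate group necessarily has the higher FPR, so equalized odds fails; pushing the FPRs together instead would break the equal-PPV condition. The trade-off is baked into the arithmetic, not into any particular model.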
The Deeper Question: What Is the Algorithm Predicting?
The most important question about the ASP may not be about fairness metrics at all. It may be about the target variable: what does "success" mean?
The ASP predicts whether a student will earn a 2.5 GPA and return for their second year. But is that what the university should be trying to predict? Consider alternatives:
- Four-year graduation: A longer time horizon might reveal different patterns — students who struggle initially but graduate eventually
- Post-graduation outcomes: Career satisfaction, income, community engagement, graduate school enrollment
- Learning growth: How much a student's skills and knowledge grow during their time at the university, regardless of starting point
- Contribution to the university community: Leadership, diversity of perspective, cultural enrichment
Each definition of "success" produces a different algorithm, which produces different admissions decisions, which produces a different student body. The choice of target variable is, like the choice of fairness metric, a value-laden decision — perhaps the most consequential design choice in the entire system.
The Post-Affirmative Action Landscape
The 2023 Supreme Court decision in Students for Fair Admissions v. Harvard effectively prohibited race-conscious admissions at U.S. universities. In this new legal landscape, some institutions have turned to algorithmic tools as a way to maintain diversity without explicitly considering race.
This approach faces a fundamental tension. If the algorithm uses features correlated with race (zip code, family income, first-generation status, school quality), it may produce a more diverse class — but critics will argue that these features are proxies for race, effectively achieving race-conscious admissions through indirect means. If the algorithm uses only "race-neutral" features that are not correlated with race, it will not produce diversity — because educational opportunity in the United States is profoundly shaped by race.
The algorithm does not resolve the political debate about affirmative action. It displaces it into a different register — from explicit policy to implicit design choices embedded in feature selection, target variable definition, and fairness metric selection.
Discussion Questions
- The target variable question. If you were designing the ASP, how would you define "success"? Why? How would your definition change who gets admitted? Consider at least three alternative definitions and their implications.
- The preparation gap. If the base rate differences in the training data reflect differences in academic preparation (due to unequal school quality) rather than differences in ability, what obligation does the university have? Should the algorithm predict success given current preparation or success given the student's potential with adequate support? What would each choice require?
- Race-neutral proxies. After the 2023 Supreme Court decision, universities are exploring "race-neutral" alternatives to affirmative action. Evaluate the use of zip code and first-generation status as admissions features. Are these genuine race-neutral alternatives, or are they proxies for race? Does it matter legally? Does it matter ethically?
- The impossibility in practice. Given the impossibility theorem, which fairness definition should Oakridge prioritize? Does the answer depend on the university's mission? Its legal obligations? Its student body's needs? Propose a specific fairness framework for Oakridge and defend it.
Your Turn: Mini-Project
Option A: Admissions Simulation. Create a simplified simulation (in Python or on paper) with 1,000 applicants from three groups, using the base rates and average scores from the Oakridge scenario. Apply three different admissions policies: one optimized for demographic parity, one for calibration, and one for equalized odds. Compare the resulting classes: How many students from each group are admitted? What is the predicted success rate for each class? Write a one-page analysis.
Option B: Fairness Audit. Research the admissions process at your own university (or a university you are familiar with). Identify which features are used, what the effective target variable is, and what fairness definition — if any — the process implicitly prioritizes. Write a two-page analysis applying the framework from this chapter.
Option C: Post-SFFA Policy Design. You are an admissions officer at a selective university after the Students for Fair Admissions v. Harvard decision. Design a holistic admissions process that does not explicitly consider race but promotes diversity. Explain which features you would use, what target variable you would predict, and which fairness definition you would prioritize. Address the tension between legal constraints and diversity goals.
References
- Students for Fair Admissions, Inc. v. President and Fellows of Harvard College, 600 U.S. 181 (2023).
- Regents of the University of California v. Bakke, 438 U.S. 265 (1978).
- Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. "Inherent Trade-Offs in the Fair Determination of Risk Scores." Proceedings of Innovations in Theoretical Computer Science (ITCS), 2017.
- Reardon, Sean F. "School Segregation and Racial Academic Achievement Gaps." Russell Sage Foundation Journal of the Social Sciences 2, no. 5 (2016): 34-57.
- Dynarski, Susan, C.J. Libassi, Katherine Michelmore, and Stephanie Owen. "Closing the Gap: The Effect of Reducing Complexity and Uncertainty in College Pricing on the Choices of Low-Income Students." American Economic Review 111, no. 6 (2021): 1721-1756.
- Narayanan, Arvind. "21 Fairness Definitions and Their Politics." Tutorial at the Conference on Fairness, Accountability, and Transparency (FAT*), February 2018.
- Chetty, Raj, John N. Friedman, Emmanuel Saez, Nicholas Turner, and Danny Yagan. "Mobility Report Cards: The Role of Colleges in Intergenerational Mobility." NBER Working Paper No. 23618, 2017.