Chapter 30: Further Reading — AI in Criminal Justice Systems
Foundational Investigations
1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). "Machine Bias." ProPublica, May 23, 2016. The foundational investigative journalism piece that documented COMPAS's racial disparities and catalyzed the algorithmic fairness debate. Available at propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Essential reading.
2. Dressel, J., & Farid, H. (2018). "The Accuracy, Fairness, and Limits of Predicting Recidivism." Science Advances, 4(1). Established that COMPAS's predictive accuracy is matched by simple two-variable models and by untrained human raters given brief case descriptions — questioning whether the algorithmic sophistication adds predictive value.
3. Richardson, R., Schultz, J., & Crawford, K. (2019). "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice." New York University Law Review Online, 94, 192. Essential analysis of how police misconduct data feeds into predictive policing systems, undermining claims of data-driven objectivity. Introduces the "dirty data" concept systematically.
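Dressel and Farid's two-variable finding is easy to illustrate in spirit: a logistic model over nothing but age and number of prior convictions. The sketch below uses made-up weights and cases, not their fitted coefficients or their data; the point is how little machinery such a model needs.

```python
import math

def two_variable_risk(age, priors, w_age=-0.05, w_priors=0.25, bias=0.0):
    """Logistic recidivism score from two features only.

    The weights are illustrative placeholders, not the coefficients
    Dressel and Farid fitted; the point is the model's simplicity.
    """
    z = bias + w_age * age + w_priors * priors
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: maps z to (0, 1)

# A younger defendant with more priors scores higher than an older
# defendant with none; that ordering is all such a model can express.
print(two_variable_risk(age=22, priors=5))
print(two_variable_risk(age=45, priors=0))
```

That a commercial tool with over a hundred inputs performs no better than this kind of model is the paper's central provocation.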
The Fairness Mathematics
4. Chouldechova, A. (2017). "Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments." Big Data, 5(2), 153–163. The mathematical proof that calibration and balanced error rates cannot all be satisfied simultaneously when base rates differ across groups. Essential for understanding the fundamental nature of the fairness debate.
5. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). "Inherent Trade-offs in the Fair Determination of Risk Scores." ITCS Conference Proceedings. A parallel impossibility result establishing different fundamental constraints on algorithmic fairness, complementing Chouldechova's analysis.
6. Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press. (Available free online at fairmlbook.org) The standard academic reference on algorithmic fairness, covering mathematical definitions, impossibility results, and practical approaches. Accessible to readers with some technical background.
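Chouldechova's impossibility result can be checked numerically. Her paper derives the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), where p is the base rate: if two groups share the same positive predictive value and false negative rate but differ in base rate, their false positive rates must differ. A minimal sketch, using hypothetical numbers rather than empirical COMPAS values:

```python
def fpr(p, ppv, fnr):
    """False positive rate implied by base rate p, positive predictive
    value, and false negative rate, via Chouldechova's identity."""
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical numbers, not empirical COMPAS values:
ppv, fnr = 0.6, 0.35        # held equal across both groups
p_a, p_b = 0.5, 0.3         # base rates differ between the groups

fpr_a = fpr(p_a, ppv, fnr)
fpr_b = fpr(p_b, ppv, fnr)
print(f"FPR, higher-base-rate group: {fpr_a:.3f}")
print(f"FPR, lower-base-rate group:  {fpr_b:.3f}")
assert fpr_a > fpr_b  # error-rate balance fails despite equal PPV and FNR
```

The higher-base-rate group necessarily bears the higher false positive rate, which is exactly the pattern ProPublica reported for COMPAS.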
Risk Assessment Tools — Academic Analysis
7. Skeem, J. L., & Lowenkamp, C. T. (2016). "Risk, Race, and Recidivism: Predictive Bias and Disparate Impact." Criminology, 54(4), 680–712. The major academic defense of calibration as the appropriate fairness criterion, finding that the federal Post Conviction Risk Assessment (PCRA) is calibrated across races while acknowledging differential risk levels attributable to structural inequalities.
8. Harcourt, B. E. (2007). Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. University of Chicago Press. The foundational abolitionist argument against actuarial criminal justice, contending that profiling and prediction are inherently discriminatory regardless of technical accuracy. Essential for understanding the strongest critique of the entire enterprise.
9. Monahan, J., & Skeem, J. L. (2016). "Risk Assessment in Criminal Sentencing." Annual Review of Clinical Psychology, 12, 489–513. Academic overview of risk assessment in criminal sentencing — methodology, evidence, and policy implications — from two of the leading researchers in the field.
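Calibration, the criterion Skeem and Lowenkamp defend, is simple to state operationally: within each risk-score bin, the observed reoffense rate should be the same for every group. A minimal sketch of such a check, on hypothetical records (the field layout and data here are invented for illustration):

```python
from collections import defaultdict

def calibration_by_group(records):
    """records: iterable of (group, score_bin, reoffended) tuples.

    Returns the observed reoffense rate for each (group, score_bin)
    pair; a calibrated tool shows similar rates across groups within
    the same bin.
    """
    counts = defaultdict(lambda: [0, 0])  # (group, bin) -> [hits, total]
    for group, score_bin, reoffended in records:
        counts[(group, score_bin)][0] += int(reoffended)
        counts[(group, score_bin)][1] += 1
    return {key: hits / total for key, (hits, total) in counts.items()}

# Hypothetical data: both groups reoffend at the same rate within the
# "high" bin, i.e. the tool is calibrated across groups in that bin.
data = [("A", "high", True), ("A", "high", True), ("A", "high", False),
        ("B", "high", True), ("B", "high", True), ("B", "high", False)]
print(calibration_by_group(data))
```

A tool can pass this check and still produce unequal false positive rates, which is the crux of the disagreement between the calibration and error-rate camps.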
Predictive Policing
10. Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood, J. S. (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation. RAND's comprehensive assessment of predictive policing methodology, the evidence of its effectiveness, and the policy implications for law enforcement agencies.
11. Lum, K., & Isaac, W. (2016). "To Predict and Serve?" Significance, 13(5), 14–19. Important statistical analysis showing how predictive policing trained on biased crime data will replicate and intensify racial disparities in enforcement. Highly accessible and rigorous.
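The feedback loop Lum and Isaac describe can be sketched as a toy simulation. This is not their model (they replayed PredPol's algorithm on Oakland drug-crime data), and every number below is hypothetical: two neighborhoods with identical true crime rates, patrols allocated in proportion to past recorded crime, and crime entering the data only where patrols are present to record it.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true crime rate, but a historical
# record that over-represents neighborhood B (hypothetical numbers).
true_rate = {"A": 0.1, "B": 0.1}
recorded = {"A": 10, "B": 30}

for _ in range(20):
    total = recorded["A"] + recorded["B"]
    # "Prediction": allocate 100 patrols in proportion to recorded crime.
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}
    # Crime is recorded only where patrols are present to observe it.
    for n in recorded:
        recorded[n] += sum(random.random() < true_rate[n]
                           for _ in range(patrols[n]))

share_b = recorded["B"] / (recorded["A"] + recorded["B"])
# B started with 75% of the recorded crime and keeps a similar share,
# even though the underlying rates are identical.
print(f"share of recorded crime in B: {share_b:.2f}")
```

The biased starting data perpetuates itself: the model's predictions generate the very observations that confirm them, which is Lum and Isaac's core statistical point.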
Case Law and Legal Analysis
12. State v. Loomis, 881 N.W.2d 749 (Wis. 2016). The full court opinion in the case discussed throughout this chapter. Essential primary source for understanding the due process analysis applied to algorithmic sentencing.
13. Cyphert, A. M. (2017). "A Human Being Wrote This Law Review Article: Legal Issues Surrounding Artificial Intelligence." Drexel Law Review, 10(1). Overview of legal issues around AI in legal proceedings, including evidence admissibility, due process, and liability.
14. Citron, D. K. (2008). "Technological Due Process." Washington University Law Review, 85(6), 1249–1314. Foundational law review article arguing for due process requirements in automated administrative and criminal justice decisions, prescient groundwork for the due process analysis later applied to COMPAS.
Surveillance Technologies
15. MacArthur Justice Center. (2021). "Chicago's ShotSpotter Problem." MacArthur Justice Center. The investigation finding that 89% of ShotSpotter alerts led to no evidence of a gun crime. Essential primary source.
16. Garvie, C., Bedoya, A., & Frankle, J. (2016). "The Perpetual Lineup: Unregulated Police Face Recognition in America." Center on Privacy & Technology, Georgetown Law. Comprehensive investigation of law enforcement facial recognition use in the United States — coverage, accuracy, accountability gaps, and reform recommendations. The standard reference for facial recognition in law enforcement.
17. ACLU. (2021). "The Strategic Subject List: An Assessment." ACLU of Illinois. Investigation of Chicago's "heat list" — the Strategic Subject List — documenting error rates, racial concentration, and use for enforcement contact.
International Perspectives
18. European Union Agency for Fundamental Rights. (2020). "Bias in Algorithms — Artificial Intelligence and Discrimination." FRA. The EU's comprehensive review of algorithmic discrimination across multiple domains including criminal justice. Valuable for comparative perspective.
19. Oswald, M., Grace, J., Urwin, S., & Barnes, G. C. (2018). "Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and 'Experimentalist Governance'." Information & Communications Technology Law, 27(2), 223–250. Academic analysis of the UK's Durham HART predictive policing model — one of the first neural-network-based criminal justice AI tools — and its governance implications.
Policy and Reform
20. Stevenson, M. T., & Doleac, J. L. (2022). "Algorithmic Risk Assessment in the Hands of Humans." Journal of Law and Economics, 65(3). Empirical analysis of how risk assessment tools are actually used by human decision-makers — finding that judges respond differently to algorithmic recommendations based on race and other factors, and that human discretion can compound algorithmic bias.