Chapter 30: Key Takeaways — AI in Criminal Justice Systems
Core Concepts
1. AI Is Present Across the Full Criminal Justice Pipeline
AI is not limited to one stage of criminal justice — it operates from pre-crime prediction through post-release supervision. Bias and error at any stage compound bias and error at subsequent stages through feedback loops, creating systemic rather than isolated effects.

2. Predictive Policing Contains a Self-Fulfilling Feedback Loop
Predictive policing systems trained on historical crime detection data will systematically predict more crime where police previously deployed more attention — because detection correlates with enforcement intensity, not necessarily with underlying crime occurrence. This creates a self-reinforcing cycle that validates the model's predictions while potentially intensifying differential policing of minority communities.
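The feedback loop can be sketched in a few lines of Python. The crime rates, the initial patrol split, and the linear detection model are illustrative assumptions, not empirical values:

```python
# Minimal sketch of the predictive-policing feedback loop (illustrative
# numbers only). Two districts have the SAME underlying crime rate, but
# patrols start slightly skewed toward district 0. Detections scale with
# patrol presence, and the next round of patrols is allocated in
# proportion to past detections.

true_crime_rate = [10.0, 10.0]   # identical underlying crime in both districts
patrol_share = [0.6, 0.4]        # initial deployment slightly skewed

for step in range(5):
    # Detected crime reflects enforcement intensity, not just crime itself.
    detections = [rate * share for rate, share in zip(true_crime_rate, patrol_share)]
    total = sum(detections)
    # "Predictive" reallocation: send patrols where crime was detected before.
    patrol_share = [d / total for d in detections]
    print(f"step {step}: patrol share = {[round(s, 3) for s in patrol_share]}")
    # prints [0.6, 0.4] at every step
```

The skew never corrects: because detections are generated partly by patrol presence, the model's training data keeps "confirming" that district 0 has more crime, even though the true rates are identical. The initial deployment choice becomes a permanent prediction.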
3. The Chouldechova Impossibility Result Changes the Fairness Question
Mathematical proof establishes that when two groups have different base rates of the outcome being predicted, no risk assessment tool can simultaneously satisfy all fairness criteria (equal false positive rates, equal false negative rates, and calibration). This means the fairness debate cannot be resolved technically — it requires explicit value choices about which criterion to prioritize, made democratically rather than embedded invisibly in algorithmic design.

4. ProPublica's Findings and Northpointe's Rebuttal Were Both Correct
COMPAS simultaneously has higher false positive rates for Black defendants (ProPublica's finding) and calibrated scores that mean the same thing across races (Northpointe's finding). These are not contradictory — they measure different properties of the same system. Understanding this is essential for sophisticated analysis of AI fairness claims.
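The compatibility of the two findings can be demonstrated with a toy cohort. Every number below is an illustrative construction, not real COMPAS data: two groups receive the same perfectly calibrated scores, but because their base rates differ, their false positive rates diverge sharply.

```python
# Toy demonstration of the Chouldechova result: a score can be perfectly
# calibrated within both groups yet yield very different false positive
# rates when the groups' base rates differ. Illustrative numbers only.

def group(n_low, n_high):
    """Cohort with n_low people scored 0.2 and n_high scored 0.8.
    Calibration: exactly 20% of the 0.2 bin and 80% of the 0.8 bin reoffend."""
    people = []
    people += [(0.2, True)] * int(n_low * 0.2) + [(0.2, False)] * int(n_low * 0.8)
    people += [(0.8, True)] * int(n_high * 0.8) + [(0.8, False)] * int(n_high * 0.2)
    return people

def bin_rate(people, score):
    """Observed reoffense rate among people given this score."""
    outcomes = [reoffended for s, reoffended in people if s == score]
    return sum(outcomes) / len(outcomes)

def false_positive_rate(people, threshold=0.5):
    """Share of non-reoffenders labeled high risk."""
    fp = sum(1 for s, reoffended in people if s > threshold and not reoffended)
    negatives = sum(1 for _, reoffended in people if not reoffended)
    return fp / negatives

group_a = group(n_low=80, n_high=20)   # base rate 32%
group_b = group(n_low=20, n_high=80)   # base rate 68%

# Calibration holds identically in both groups:
print(bin_rate(group_a, 0.8), bin_rate(group_b, 0.8))   # prints 0.8 0.8
# ...yet false positive rates differ by nearly an order of magnitude:
print(f"{false_positive_rate(group_a):.2f}")            # prints 0.06
print(f"{false_positive_rate(group_b):.2f}")            # prints 0.50
```

A score of 0.8 means the same thing in both groups (Northpointe's calibration claim), while non-reoffenders in the higher-base-rate group are flagged high risk far more often (ProPublica's disparity claim). Both are true of the same scores.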
5. Trade Secrecy in Criminal Justice AI Is Ethically Untenable
The ability to challenge evidence used against oneself is foundational to fair legal proceedings. Allowing a private company to deploy a proprietary formula influencing criminal sentences while protecting it from disclosure through trade secrecy doctrine creates a fundamental contradiction between commercial intellectual property law and constitutional due process.

6. Facial Recognition Wrongful Arrests Follow a Documented Pattern
Documented wrongful arrests based on facial recognition misidentification — Robert Williams, Michael Oliver, Porcha Woodruff, Nijeer Parks, and others — overwhelmingly affect Black individuals, consistent with the higher error rates most commercial facial recognition systems exhibit on darker-skinned faces. This is not a random technology failure; it is a predictable consequence of biased training data deployed in a high-stakes context.

7. ShotSpotter's 89% False Positive Rate Exemplifies Evidence-Free Procurement
The MacArthur Justice Center's finding that 89% of ShotSpotter alerts in Chicago led to no evidence of a gun crime represents a documented failure of public safety AI procurement: a technology deployed at massive scale and public expense, generating extensive police-community contact in minority neighborhoods, without independent evidence that it reduces the violence it is deployed to address.

8. Algorithmic Due Process Is an Unsolved Problem
The Wisconsin Supreme Court's Loomis ruling permits the use of proprietary risk assessments in sentencing as long as they are not given "exclusive or determinative weight." This standard does not adequately protect defendants' ability to challenge algorithmic evidence that influences their freedom. What constitutes a meaningful challenge to an algorithm — and what disclosure it requires — remains legally and practically unresolved.

9. International Frameworks Take a More Restrictive Approach
The EU AI Act's categorical prohibition on certain criminal justice AI applications — including AI that profiles individuals for criminality risk — reflects a rights-based framework that goes significantly further than US approaches. The Dutch SyRI ruling and the EU AI Act together represent a European model that treats algorithmic transparency and rights protection as non-negotiable, rather than as factors to be balanced against efficiency.

10. The Accountability Gap Is Structural, Not Incidental
The combination of judicial immunity, prosecutorial immunity, qualified immunity for law enforcement, sovereign immunity limitations, trade secrecy for vendors, and Section 1983's limitations creates near-comprehensive insulation from accountability for harms caused by criminal justice AI. Closing this gap requires legislative and regulatory action — it cannot be resolved through existing legal mechanisms alone.
The COMPAS Case in Summary
The COMPAS controversy produced three lasting contributions to thinking about criminal justice AI:
- The ProPublica methodology — showing how to analyze AI bias using outcome data matched to algorithmic predictions, establishing a template for investigative AI accountability journalism
- The Chouldechova impossibility result — proving mathematically that all fairness criteria cannot be simultaneously satisfied when base rates differ, transforming the technical debate into a values debate
- The Loomis litigation — establishing (inadequate) constitutional minimums for algorithmic due process in criminal sentencing
Together, these contributions did not resolve the problem of biased criminal justice AI — COMPAS remains in use — but they established the intellectual framework for all subsequent analysis.
Questions Business Professionals Should Ask
When evaluating any AI system used in high-stakes decisions affecting individuals:
- What is the documented accuracy of this system, measured by whom?
- What are the false positive and false negative rates, separately by demographic group?
- What fairness criterion does the system prioritize, and is that criterion appropriate for this context?
- Can affected individuals access the specific inputs and outputs relevant to their case and challenge them effectively?
- What accountability exists for harms caused by the system's errors?
- Is the evidence of effectiveness independent of the vendor?