Case Study 36-1: The Chicago Heat List — Predictive Surveillance and the Presumption of Dangerousness
Background
In 2013, the Chicago Police Department, in partnership with the Illinois Institute of Technology, developed what it called the "Strategic Subject List" — quickly dubbed by journalists and critics the "Heat List." The system assigned a numerical "risk score" to individuals in Chicago, ranking them by their predicted likelihood of being involved in gun violence, either as a perpetrator or a victim.
At its peak, the Heat List contained approximately 400,000 names. The highest-ranked individuals — those assigned scores indicating the greatest predicted risk — were approached by police officers and social workers as part of an "intervention" program that notified them they were on the list and warned them of the consequences they would face if they were later found to be involved in violence.
The program was presented as a data-driven alternative to traditional reactive policing: rather than waiting for violence to occur and then investigating, Chicago would use predictive analytics to identify people at risk and intervene before violence happened. It was framed as both a public safety initiative and, in some formulations, a social service — a way of identifying people who needed intervention and connecting them with support.
How the Score Was Calculated
The Strategic Subject List assigned scores based on a combination of factors, including:
- Prior arrests (own history)
- Gang affiliation
- Prior victimization (whether the subject had previously been a victim of gun violence)
- Social network factors: the criminal histories of known associates
The algorithm did not directly incorporate race as a variable. But the inputs to the algorithm were themselves products of racially biased systems. Prior arrest records reflect where Chicago police had historically concentrated enforcement. Gang affiliation designations were maintained by CPD and reflected designations that had been made, in many cases, on the basis of race, neighborhood, and association rather than actual gang membership. And the city's history of segregation meant that victimization — being previously shot or assaulted — was highly concentrated in specific Black and Latino neighborhoods on Chicago's South and West Sides.
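The structure of such a score can be illustrated with a minimal sketch. The function below is hypothetical: CPD never published its model or weights, so the factor names and weights here are illustrative assumptions, not the actual algorithm.

```python
def risk_score(subject):
    """Hypothetical SSL-style score over the publicly reported factor
    categories. Race is never an input -- but every input is itself the
    product of prior policing decisions or of where violence concentrates."""
    weights = {                    # illustrative weights, not CPD's values
        "prior_arrests": 2.0,      # own arrest history
        "gang_affiliated": 3.0,    # CPD gang-database flag (0 or 1)
        "times_shot": 2.5,         # prior victimization
        "associate_arrests": 1.5,  # arrests among known associates
    }
    return sum(w * subject.get(factor, 0) for factor, w in weights.items())

# A shooting survivor with no arrest record of his own can outscore a
# person with several arrests, purely via victimization and association:
survivor = {"times_shot": 2, "associate_arrests": 3}   # score 9.5
arrestee = {"prior_arrests": 3}                        # score 6.0
```

The sketch makes the later sections' point concrete: under any weighting of these inputs, having been shot, or knowing people with arrest records, raises a person's "dangerousness" score without the person having done anything.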
The result was a score that was algorithmically colorblind but functionally racial. Of the approximately 1,400 highest-risk individuals when the program launched, the overwhelming majority were young Black men living on Chicago's South and West Sides.
The Problems
Punishing Victims
One of the most striking findings about the Heat List was that prior victimization — having already been shot — increased a person's score. The algorithm interpreted victimization as a risk factor for future involvement in gun violence. This meant that young men who had survived shootings, who were in many cases themselves victims of neighborhood violence, found themselves on a list that the police department characterized as a registry of dangerous individuals.
Interviews that journalists conducted with Heat List subjects revealed people who had been shot, survived traumatic injury, and then received visits from police officers informing them that they were considered high-risk. The visits themselves were experienced as threatening — as surveillance events that sent the message: "We are watching you. We consider you dangerous." For people who had been victimized, this was a compounding harm.
The Intervention That Wasn't
The theoretical framework of the program held that high-risk individuals would receive social services — help with employment, housing, mental health support. In practice, the social service component of the program was dramatically underfunded relative to the police contact component. Many people on the list received visits from police officers who delivered warnings; far fewer received meaningful social support.
Research by the RAND Corporation and others found that assignment to the Strategic Subject List had no statistically significant effect on the likelihood that a person would be shot or would shoot someone. The program did not work as advertised. It did produce extensive surveillance of Black and Latino men, generate police contacts that were experienced as threatening by subjects, and create a digital record of "high risk" designation for hundreds of thousands of Chicago residents.
The Disclosure Problem
Subjects of the Strategic Subject List were not notified of their designation unless they were among the highest-risk tier and received a police visit. The vast majority of the 400,000 people on the list did not know they were on it. The list was not subject to appeal — there was no process by which a person could challenge their score or request correction of the factors that generated it.
The scoring factors that elevated a person's risk score included information about their associates. This meant that a person could have a high risk score not because of anything they themselves had done but because of who their brother, their neighbor, or their childhood friend was. The algorithm punished association, not action.
Racial Composition and the Feedback Dynamic
Analysis of the Strategic Subject List consistently found that it was disproportionately populated by young Black men. A Chicago Tribune investigation found that Black residents, roughly 30 percent of Chicago's population, constituted approximately 70 percent of the highest-tier Heat List subjects. Latino residents were also over-represented relative to their population share.
This disproportion was not random. It was the product of the feedback dynamic that Chapter 36 describes: the Heat List was trained on arrest data from a police department that had historically concentrated enforcement in Black and Latino neighborhoods on Chicago's South and West Sides. The algorithm flagged the people who had accumulated arrest records in those neighborhoods. Those records accumulated in those neighborhoods in part because that is where police concentrated their activity. The algorithm then directed additional police attention to those same individuals and neighborhoods, generating more arrests, which would feed future versions of the model.
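The loop described above can be made concrete with a toy simulation. All numbers below are assumptions for illustration, not estimates from Chicago data: two neighborhoods have an identical underlying offense rate, but patrol effort is allocated in proportion to past arrests, so an inherited disparity reproduces itself indefinitely.

```python
# Toy feedback-loop model: both neighborhoods have the SAME offense rate;
# only the inherited arrest totals differ. All values are illustrative.
TRUE_OFFENSE_RATE = 0.05                 # identical in both neighborhoods
PATROLS_PER_ROUND = 1000
arrests = {"south_side": 100, "north_side": 20}   # inherited historical skew

for _ in range(10):
    total = sum(arrests.values())        # freeze totals for this round
    for hood in arrests:
        # "Data-driven" allocation: patrols follow past arrest counts.
        patrols = PATROLS_PER_ROUND * arrests[hood] / total
        # New arrests scale with patrol presence, not with offending.
        arrests[hood] += patrols * TRUE_OFFENSE_RATE

share = arrests["south_side"] / sum(arrests.values())
# share stays pinned at the inherited 100/120 (about 0.83): the data
# "confirm" a disparity that the allocation rule itself perpetuates.
```

Under these assumptions the disparity never shrinks even though behavior is identical in both neighborhoods; real systems add noise and additional inputs, but the structural point, that allocation driven by past enforcement data reproduces past enforcement patterns, survives.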
The Strategic Subject List did not create the racial disproportion in Chicago's criminal justice system. It inherited it, encoded it, and extended it into algorithmic form — and in doing so, made it appear to be a product of objective science rather than of political choices about where to police.
Discussion Questions
- The Strategic Subject List did not use race as a direct input variable. Does that make it racially neutral? Justify your answer using the concepts of structural racism and disparate impact analysis from Chapter 36.
- The program classified prior victimization — being previously shot — as a risk factor. What does this classification reveal about the underlying theory of who is dangerous in the program's framework? Who does the program protect, and from what?
- Heat List subjects had no mechanism to challenge their designation or correct their score. Using the book's framework of consent as fiction, analyze the surveillance relationship between the CPD and Heat List subjects.
- The program was framed as both a public safety initiative and a social service. Evaluate this framing. What purposes does it serve? What does it obscure?
- If you were advising the Chicago City Council on whether to continue, reform, or abolish a predictive risk-scoring system for gun violence, what would you recommend, and on what grounds?
Extension Research
- Review the 2016 RAND Corporation evaluation of Chicago's predictive policing programs
- Examine the 2020 decision by the Chicago Police Department to shut down the Strategic Subject List
- Compare Chicago's approach with Santa Cruz, California, which became the first U.S. city to ban predictive policing in 2020
- Read the work of legal scholar Bernard Harcourt, particularly Against Prediction, on actuarial risk scoring in criminal justice