Case Study 8.1: Chicago's Surveillance Infrastructure and the Predictive Policing Controversy

Overview

Chicago operates one of the most extensive urban surveillance networks in the United States — approximately 32,000 cameras across the city, integrated with license plate readers, gunshot detection systems, and, controversially, predictive analytics tools. This case study examines that infrastructure as an example of how urban camera networks combine with data analytics to produce what critics call "prediction-based policing": the use of algorithmic assessment to identify people as future risks before they have committed any crime. The case raises fundamental questions about the relationship between surveillance, prediction, race, and the presumption of innocence.


The Infrastructure: How Chicago Built Its Network

Chicago's surveillance network developed in stages over two decades.

Operation Virtual Shield (mid-2000s). The city created a central network of CPD-operated cameras and integrated them with cameras from the Chicago Transit Authority and Chicago Housing Authority. A central Police Operations Center monitored feeds from across the network.

The Private Sector Camera Program. Chicago allowed businesses and other private entities to connect their security cameras to the CPD network, providing police with feeds from thousands of cameras they did not operate. By the 2010s, this program had connected cameras from hospitals, schools, parks, and private businesses to CPD monitoring systems.

The ShotSpotter Network. Chicago deployed ShotSpotter — an acoustic sensor network designed to detect and locate gunshots — across large portions of the city. The sensors are mounted on existing infrastructure throughout covered areas and continuously monitor ambient sound. When an algorithm identifies a likely gunshot, it alerts a CPD dispatcher who can review the audio and deploy resources.

License Plate Readers. CPD deployed fixed and mobile LPR systems throughout the city, generating a large database of vehicle location records. These records are shared with federal agencies and, through data broker arrangements, with agencies nationwide.

Predictive Analytics Integration. The most controversial layer: Chicago experimented with several algorithmic tools designed to use data from police databases to identify individuals "at risk" of involvement in shootings — either as shooters or as victims. This became the focus of intense public controversy.


The Strategic Subject List: Predictive Policing in Practice

Between approximately 2012 and 2020, the Chicago Police Department operated a "Strategic Subject List" (SSL) — also called the "heat list" — that used a predictive algorithm to score people in the CPD's criminal database on a scale of 0–500, based on factors drawn from their arrest and criminal history records.

The algorithm was developed by a professor at the Illinois Institute of Technology and updated over subsequent years. Factors in the score included the number of prior arrests, the types of those arrests, age at most recent arrest, the number of times the person had been a victim of a shooting, and gang affiliations identified by police.
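The factor list above can be made concrete with a deliberately simplified sketch. The actual SSL model and its weights were never made public, so every weight, the youth adjustment, and the clamping logic below are invented for illustration; only the factor names come from the case study. What the sketch does show accurately is that every input is a policing artifact (arrests, database entries) rather than a direct measure of violence.

```python
from dataclasses import dataclass


@dataclass
class SubjectRecord:
    """Inputs drawn from police databases (factor names from the case study)."""
    prior_arrests: int
    violent_arrests: int      # stand-in for "types of prior arrests"
    age_at_last_arrest: int
    times_shot: int           # times the person was a shooting victim
    gang_affiliations: int    # entries from the (error-prone) gang database


def ssl_style_score(r: SubjectRecord) -> int:
    """Hypothetical weighted score clamped to the SSL's 0-500 range.

    The weights are invented for illustration; they are NOT the real
    model. Youth at last arrest raises the score, as reported of the SSL.
    """
    raw = (
        20 * r.prior_arrests
        + 40 * r.violent_arrests
        + 60 * r.times_shot
        + 30 * r.gang_affiliations
        + max(0, 25 - r.age_at_last_arrest) * 5
    )
    return max(0, min(500, raw))
```

Even in this toy form, the sketch makes the RAND finding plausible: because arrest counts dominate the inputs, ranking by this score and ranking by prior arrests alone will tend to order people similarly.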

People with high scores were flagged for outreach by a CPD "Violence Reduction Team" — officers who would visit flagged individuals' homes, in some cases with social service representatives, to warn them that they were on police radar and offer resources. The visits were called "custom notifications."

The Problems

The SSL generated substantial controversy on several fronts:

Accuracy. An independent evaluation by RAND Corporation found that the SSL's predictive accuracy was not meaningfully better than could be achieved by simply ranking everyone by their number of prior arrests. The algorithmic complexity added limited predictive value while creating the appearance of sophisticated objective analysis.

Racial composition. The SSL was disproportionately composed of Black men. This was partly a function of CPD's historical patterns of arrest — more intensive policing of Black neighborhoods produced more arrest records for Black residents, which fed into the algorithm's inputs. The algorithm reproduced and amplified historical arrest disparities without independently measuring actual violence.

The gang database problem. One input in the SSL was gang affiliation identified by police — an entry in the CPD's "gang database." Chicago's gang database contained records on approximately 134,000 individuals. Researchers and civil liberties advocates found the database rife with errors: people listed who had died, children under 14 listed, people added based on a single officer's assessment without any due process or independent verification. The database was not public, was not subject to regular auditing, and people could not easily challenge their inclusion. Data from this error-prone database fed into the algorithm.

The visit as surveillance escalation. The "custom notification" visits were presented as offering resources and warnings. But visiting someone's home based on their SSL score — without any pending criminal case — communicated to the individual that they were under surveillance, and communicated to their neighbors and family members that police considered them a threat. The visits themselves functioned as a form of surveillance and stigmatization.

The feedback loop. People who received custom notification visits were now on police radar in a new way — their visits might be documented, their responses noted, their associations observed. Any subsequent police contact with them would be interpreted through the lens of their high SSL score. The algorithm created a surveillance feedback loop: a high score triggered attention, which generated more documented police contact, which could increase future scores.
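The feedback loop described above can be sketched as a toy simulation. The assumption, which is hypothetical but mirrors the dynamic the critics describe, is that each visit triggered by a high score is itself logged as a documented police contact that feeds into the next scoring cycle; all numbers are illustrative.

```python
def simulate_feedback(contacts: int, threshold: int = 5, rounds: int = 3) -> list[int]:
    """Toy model of the surveillance feedback loop.

    The "score" is simply the documented-contact count (a stand-in for
    any contact-driven risk score). If the score crosses the visit
    threshold, the visit itself is logged as a new contact, raising the
    next round's score regardless of the person's behavior.
    """
    history = [contacts]
    for _ in range(rounds):
        if contacts >= threshold:  # high score triggers a visit...
            contacts += 1          # ...and the visit is itself documented
        history.append(contacts)
    return history
```

Running this with a starting count at the threshold yields a record that grows every round, while a count just below it stays flat: once flagged, a person's file accumulates contacts without any new conduct on their part.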


The Abolition and Aftermath

After years of criticism from civil liberties organizations, academic researchers, and community groups, CPD announced in 2020 that it was discontinuing the Strategic Subject List. The announcement followed a report by the city's Inspector General finding significant problems with the gang database — which was also subsequently reformed.

But the architectural decisions that produced the SSL have not been undone:

  • The CPD data infrastructure — arrest records, gang databases, surveillance camera feeds — remains in place
  • The algorithmic disposition toward data-driven "risk assessment" in policing has not been formally rejected as an approach
  • The data from the SSL period was not deleted; it remains in police systems
  • Several successor programs to the SSL have been piloted or proposed

The discontinuation of a specific tool does not dismantle the surveillance infrastructure that enabled it. This is a recurring pattern in surveillance history: controversy produces the cancellation of a specific program while the underlying architecture, data, and institutional commitment to data-driven targeting continue.


ShotSpotter: A Parallel Controversy

While the SSL controversy unfolded, Chicago's use of ShotSpotter also came under scrutiny.

The technology. ShotSpotter deploys acoustic sensors across neighborhoods and uses an algorithm to classify sounds as likely gunshots. When the algorithm classifies a sound as a likely gunshot, it sends an alert to a police dispatcher who can listen to a sample of audio and decide whether to dispatch officers.
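The alert flow just described can be sketched minimally. ShotSpotter's actual classifier is proprietary and is not modeled here; the confidence field, the threshold value, and the routing labels are all invented for illustration. The sketch only mirrors the stated policy: the algorithm flags likely gunshots, and only flagged events reach a human dispatcher for audio review.

```python
from typing import NamedTuple


class AcousticEvent(NamedTuple):
    sensor_id: str
    gunshot_confidence: float  # output of a proprietary classifier (not modeled)


def route_event(event: AcousticEvent, alert_threshold: float = 0.8) -> str:
    """Route an acoustic event per the stated alert flow.

    Flagged events go to a dispatcher, who reviews the audio and may
    deploy officers; unflagged events are, per stated retention policy,
    discarded. The threshold is a hypothetical value.
    """
    if event.gunshot_confidence >= alert_threshold:
        return "alert_dispatcher"
    return "discard"
```

The civil liberties concern discussed below arises precisely at the gap between this stated flow and practice: the sensors record continuously, so audio outside the flagged window exists, at least briefly, to be retrieved.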

The civil liberties concerns. ShotSpotter's sensors continuously record audio in the neighborhoods where they are deployed. The company has stated that audio is only retained when the algorithm flags an event. But investigative reporting revealed cases in which ShotSpotter audio recordings were used in criminal prosecutions for events other than the specific gunshot alert — the recordings captured conversations, arguments, and other audio that was later used as evidence.

In one publicized case in Chicago, prosecutors used ShotSpotter audio to argue that gunshot sounds at a crime scene had been preceded by conversations that implicated the defendant. Defense attorneys raised concerns about whether the audio had been selectively retrieved and whether the ShotSpotter system's continuous recording created an undisclosed ambient surveillance capability.

The deployment geography. ShotSpotter is deployed in predominantly Black and Latino neighborhoods in Chicago. The company's promotional materials note that cities can expand coverage to reduce "gaps in detection." The geographic deployment decisions mean that the acoustic surveillance network — continuous ambient audio monitoring, whatever ShotSpotter says about retention — is not distributed equally across the city.


The Intersection with CCTV

Chicago's camera network, the LPR system, the ShotSpotter network, and the predictive analytics tools do not operate independently. They have been progressively integrated through the Police Operations Center, creating a surveillance system in which a gunshot detection alert can trigger camera review, camera footage can identify a vehicle whose plate is then searched in the LPR database, and the vehicle's owner's SSL score (or successor tool score) shapes the intensity of the subsequent police response.
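The integration chain described above can be traced as a hypothetical pipeline. Every data value, lookup table, field name, and the priority cutoff below are invented for illustration; only the sequence of systems (gunshot alert, camera review, LPR lookup, risk score, response intensity) comes from the case study.

```python
# Each dict stands in for a separate CPD system: the camera index,
# the LPR database, and a risk-score database. All entries are fake.
CAMERAS = {"Loop-03": {"plate": "ABC1234"}}
LPR_DB = {"ABC1234": "owner_117"}
RISK_SCORES = {"owner_117": 460}


def triage_alert(location: str, priority_cutoff: int = 400) -> str:
    """Trace one gunshot alert through the integrated stack.

    Alert location -> nearby camera footage -> plate read -> LPR owner
    lookup -> owner's risk score -> response intensity. A missing link
    at any stage falls through to a standard response.
    """
    footage = CAMERAS.get(location, {})
    owner = LPR_DB.get(footage.get("plate", ""))
    score = RISK_SCORES.get(owner, 0)
    return "priority_response" if score >= priority_cutoff else "standard_response"
```

The sketch makes the stakes of integration visible: a single acoustic alert can escalate into a high-intensity response on the strength of a risk score whose inputs, as the SSL discussion showed, may encode historical arrest disparities.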

This integration is the "smart city" vision operationalized in a policing context: multiple data streams converging in real time to shape police decision-making. The question that the chapter and this case study raise is not whether integration produces operational benefits — it likely does in some cases — but whether those benefits are proportionate to the costs, whether the costs are distributed equitably, and whether meaningful accountability exists.


Discussion Questions

  1. The RAND evaluation found that the SSL's predictive accuracy was not significantly better than simply ranking people by number of prior arrests. If the algorithm adds no accuracy, what might explain why CPD continued to use it? What does the persistence of algorithmic tools with limited demonstrated value tell us about why surveillance systems are adopted and maintained?

  2. The SSL's inputs included arrest records and gang database entries — data that reflects historical patterns of racially unequal policing rather than actual rates of violence. This creates what researchers call a "feedback loop" in predictive policing. Explain this feedback loop in your own words. Can you design an algorithm that predicts violence risk without encoding historical policing disparities? If so, what data would you use? If not, what does that impossibility tell us about predictive policing as an approach?

  3. The "custom notification" visits — where officers came to the homes of high-SSL individuals to warn them that they were on police radar — were presented as a social service intervention. Evaluate this framing. What are the genuine service dimensions of such visits? What are the surveillance and stigmatization dimensions? Is it possible to separate them?

  4. ShotSpotter's audio recordings were used as evidence in criminal prosecutions for events beyond the specific gunshot alerts they were designed to detect. From the chapter's framework, identify which surveillance concept this most closely illustrates. What does this episode suggest about the relationship between stated and actual uses of ambient monitoring technology?

  5. Chicago's surveillance infrastructure was built primarily in Black and Latino neighborhoods. Community organizations in those neighborhoods have argued that they receive less policing of the crimes that most harm them (property crime investigations, responses to calls for service) and more surveillance-based policing that generates arrests for minor offenses and flags residents as future risks. Evaluate this argument. What does the distribution of surveillance intensity and policing resources tell us about the relationship between surveillance and community protection?

  6. The SSL was discontinued, but the infrastructure that enabled it — the arrest database, the gang database, the camera network — remains in place. Apply the chapter's concept of function creep to anticipate what new uses of this existing infrastructure are likely in the next decade. What governance mechanisms would you propose to prevent the re-emergence of predictive risk scoring through different programmatic vehicles?


Case Study 8.1 | Chapter 8: CCTV and the Surveilled City | Part 2: State Surveillance