Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
The PredPol Story — Who Had Power, Who Had Voice, Who Was Harmed
Overview
Between 2011 and 2020, dozens of American cities — including Los Angeles, Santa Cruz, Chicago, Atlanta, and Plainfield, New Jersey — deployed a predictive policing software system called PredPol (later rebranded as Geolitica). The system used historical crime data to generate 500-foot-by-500-foot "boxes" on a city map, predicting where crimes were likely to occur in the next 12 hours. Police departments used these predictions to direct patrol resources.
PredPol's story is one of the most thoroughly documented cases of AI deployment in the public sector. It is also one of the clearest illustrations of what happens when powerful stakeholders — law enforcement agencies and technology vendors — make decisions that impose costs on less powerful ones — communities of color — without meaningful engagement, oversight, or consent. The story's arc, from confident deployment to city bans and company rebranding, provides a complete narrative of stakeholder conflict in AI: who wins in the short run, who pushes back, and what it takes to change the outcome.
1. What PredPol Claimed to Do
PredPol grew out of research by UCLA anthropologist Jeff Brantingham and a team of applied mathematicians; early departmental trials began in 2011, and the company was incorporated in 2012. The system used a seismic aftershock model — originally developed to predict where earthquakes would occur following a major seismic event — to predict the spatiotemporal clustering of crime events. The founding premise was that crime, like seismic activity, clusters in ways that are statistically predictable: a burglary in a given location increases the probability of subsequent burglaries nearby.
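The aftershock analogy refers to the family of self-exciting point-process models, in which every event temporarily raises the expected rate of further events. A minimal one-dimensional sketch of that general model class follows; the parameter values are illustrative, and PredPol's actual implementation is proprietary and not reproduced here.

```python
import math

def intensity(t, events, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity of a 1-D self-exciting (Hawkes) process:
    a constant background rate mu plus an exponentially decaying
    'aftershock' contribution from each past event."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# A burglary at t=0 raises the expected event rate shortly afterward...
print(intensity(0.1, [0.0]))   # background rate plus a fresh aftershock term
# ...and the boost decays back toward the background rate over time.
print(intensity(5.0, [0.0]))   # close to the background rate mu
```

The operational claim was that elevated short-term intensity identifies where the next event is likely, which is why the predictions took the form of small boxes valid for a single patrol shift.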
The company marketed PredPol to police departments with claims of scientific rigor and impressive efficacy. Marketing materials cited studies showing that PredPol outperformed other predictive methods and that departments using the system had reduced crime rates. The company emphasized the system's objectivity: unlike human judgment, the argument went, an algorithm is not racist and does not profile. "Our system only uses three variables: crime type, crime location, and crime date and time," CEO Brian Macdonald told reporters. "No demographic information of any kind."
This framing — objectivity through demographic blindness — was PredPol's core ethical claim and, as subsequent research demonstrated, its most fundamental flaw.
2. The Police Department's Perspective: Efficiency and Resource Allocation
From the perspective of police department leadership, PredPol offered a compelling value proposition. Police departments operate under constant pressure to demonstrate crime reduction while managing constrained budgets. A system that promised to direct officers to the right places at the right times — increasing the efficiency of an existing patrol force without requiring additional hires — addressed both imperatives simultaneously.
For the Los Angeles Police Department, which became one of PredPol's largest and most prominent clients beginning in 2011, the system fit within a longer trajectory of data-driven policing. The LAPD's CompStat program, which had used crime data to direct departmental resources since the 1990s, had already established a culture of quantitative performance management. PredPol was presented as an evolution of that culture: the same data-driven approach, now with machine learning.
Operational officers had more varied responses. Some found the prediction boxes useful, particularly in high-density districts where allocating patrol resources was genuinely difficult. Others found them arbitrary — the 500-foot box could include blocks with very different crime profiles, and officers knew their beats better than the algorithm did. The union-level politics of algorithmic management also created friction: officers subject to performance metrics tied to their presence in prediction boxes sometimes felt that the system constrained their professional discretion in ways that did not improve public safety.
At the department leadership level, the overwhelming incentive was to declare success. Crime data is highly sensitive to how it is collected, what offenses are reported, and how those reports are coded. A department that wanted to demonstrate that PredPol was working had considerable discretion over how to measure and report outcomes.
3. The Technology Vendor's Perspective: Growth, Revenue, and Validation
PredPol's business interests were straightforward: sign contracts with police departments, demonstrate efficacy to drive expansion, and build toward a national and eventually international market. The company's pricing model — typically several hundred thousand dollars per year for mid-sized departments — meant that each municipal contract was a significant revenue line. Expansion to the largest police departments (LAPD, NYPD, Chicago PD) would represent transformational scale.
The vendor's interest in validation created a particular dynamic. Early PredPol deployments were accompanied by research partnerships with academic institutions — principally UCLA — that produced publications measuring the system's efficacy. These publications generated credibility and were widely cited in the company's marketing. The independence of this research — funded by the vendor or conducted by the vendor's academic founders — was a conflict of interest that went largely unexamined in early media coverage.
PredPol also had strong incentives to resist scrutiny of what the algorithm was actually learning from its data. An algorithm that claims demographic neutrality because it takes no explicit demographic inputs is not necessarily neutral, because the spatial and temporal patterns of historical arrests are themselves products of prior policing decisions that were racially patterned. If the LAPD had historically concentrated patrols in Black and Latino neighborhoods — and it had — then historical arrest data would reflect that concentration, and a model trained to predict future crime from historical crime location data would learn to predict intensified policing in those same neighborhoods. The model's predictions were, to a significant degree, predictions about where police would be rather than where crime would occur.
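The location-as-proxy argument can be made concrete with synthetic data. In the sketch below, every map cell has the same true crime rate, but historical patrol intensity is doubled in minority-majority cells; a "demographically blind" ranking of cells by recorded arrests then flags almost exclusively minority-majority cells. All numbers are invented for illustration and are not drawn from any real deployment.

```python
import random

random.seed(0)

# Synthetic city of 1,000 map cells. True crime is drawn from the SAME
# distribution for every cell; only historical patrol intensity differs.
cells = []
for _ in range(1000):
    minority_majority = random.random() < 0.4
    true_crime = random.gauss(10, 2)               # identical across groups
    patrol = 2.0 if minority_majority else 1.0     # historically biased patrol
    recorded_arrests = true_crime * patrol         # the data measures policing
    cells.append((minority_majority, recorded_arrests))

# A "demographically blind" predictor: rank cells by recorded arrests
# alone and flag the top 20% as future prediction boxes.
cells.sort(key=lambda c: c[1], reverse=True)
flagged = cells[:200]
share = sum(1 for m, _ in flagged if m) / len(flagged)
print(f"Minority-majority share of flagged cells: {share:.0%}")
# Nearly all flagged cells are minority-majority, even though true crime
# rates are identical: location inherits the bias that explicit
# demographic variables would have carried.
```

No demographic variable appears anywhere in the predictor, yet the output is almost perfectly demographically sorted, which is the substance of the critique PredPol's "three variables" defense did not answer.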
This critique was available in the statistical and criminological literature for anyone who chose to engage with it. PredPol's leadership was aware of it. Their public response was to insist that the algorithm was predicting crime, not policing, and that demographic blindness guaranteed objectivity. This was a position of advocacy, not of scientific rigor.
4. The Community's Perspective: Disproportionate Surveillance
For residents of Los Angeles neighborhoods flagged as high-risk by the PredPol algorithm — primarily Black and Latino communities in South LA, East LA, and parts of the San Fernando Valley — the system's deployment was experienced not as data-driven objectivity but as intensified, algorithmically legitimated over-policing.
The lived experience of residents in these neighborhoods was consistent with the model's structural flaw. Intensified patrol presence in prediction boxes led to more stops, more searches, and more arrests — not necessarily because crime was higher but because policing was more intensive. Those arrests then fed back into the training data, reinforcing the model's assessment that those locations were high-crime and warranting continued intensive patrol. Critics called this a "feedback loop" or "dirty data" problem: the model was being trained on data that reflected prior discriminatory policing, and its outputs intensified those patterns.
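The feedback loop described above can be sketched as a deliberately simple two-district simulation. Both districts have identical true crime, but one starts with more recorded arrests, and each cycle the model sends patrols wherever the data points. The numbers are toy values, not real data.

```python
# Two districts with IDENTICAL true crime; district B starts with more
# recorded arrests because of historically heavier patrol.
history = {"A": 100.0, "B": 120.0}     # recorded arrests (biased start)
true_crime = {"A": 100.0, "B": 100.0}  # the ground truth nobody observes

for step in range(10):
    # Each cycle, the model flags whichever district has more recorded
    # arrests, and patrol concentrates there (an 80/20 split).
    top = max(history, key=history.get)
    for d in history:
        patrol_share = 0.8 if d == top else 0.2
        # Recorded arrests scale with patrol presence, not with crime,
        # and feed straight back into the next cycle's training data.
        history[d] += true_crime[d] * patrol_share

share_b = history["B"] / (history["A"] + history["B"])
print(f"District B's share of recorded arrests: {share_b:.0%}")
# The initial gap widens every cycle: the model's output manufactures
# the evidence for its own next prediction.
```

After ten cycles, district B accounts for roughly three quarters of all recorded arrests despite committing exactly half the crime, which is the "dirty data" dynamic in miniature.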
Community organizations in affected neighborhoods documented the effects through intake of community complaints, know-your-rights training, and direct outreach to city council members. The reports they compiled described the same basic experience across multiple communities: increased frequency of police encounters, heightened anxiety about policing, and a deepening sense that the neighborhood was being treated as a problem to be managed rather than a community to be served.
The community perspective is crucial to the ethical analysis not only because it documents harm but because it identifies a form of expertise that the technical analysis could not provide. Residents of over-policed neighborhoods knew from lived experience that the model was doing something other than what the vendor claimed. Their testimony was available but was not sought — no community engagement process preceded the LAPD's adoption of PredPol, and no mechanism existed for community input into the ongoing evaluation of the system's effects.
5. The City Council's Limited Role
The Los Angeles City Council's engagement with PredPol was, by most accounts, limited. The LAPD adopted PredPol through an administrative process that did not require council approval for initial deployment; the contract was eventually subject to council budget oversight, but the substantive policy questions — whether predictive policing was an appropriate use of public resources, whether the system's methodology was sound, what communities bore its costs — received limited council attention for most of the deployment period.
This is a structural failure of democratic accountability that is not unique to Los Angeles. Police departments in most American cities have significant administrative autonomy that insulates their technology adoption decisions from direct political oversight. Procurement processes that route through departmental budgets rather than requiring legislative authorization effectively exclude elected officials — and through them the public — from consequential technology decisions.
The contrast between what city council oversight of a predictive policing deployment would ideally look like and what it actually looked like in Los Angeles illuminates a gap in democratic governance of public-sector AI. An idealized oversight process would include: independent technical review of the system's methodology; structured community input from neighborhoods likely to be affected; assessment of constitutional and civil rights implications; ongoing monitoring with regular public reporting; and clear criteria for discontinuation if harms exceeded benefits. None of these elements were systematically in place.
6. The Academic Researchers Who Documented Harms
The most rigorous external scrutiny of PredPol came from academic researchers who applied independent methodological tools to the question of what the algorithm was actually learning and predicting.
A 2019 study by Rashida Richardson, Jason Schultz, and Kate Crawford documented what they called "dirty data" — training datasets for predictive policing algorithms in numerous cities that were demonstrably contaminated by prior discriminatory practices, including racially biased patrol patterns, stop-and-frisk programs that had been found unconstitutional, and arrest data from investigative units later found to have fabricated evidence. The analysis showed that predictive policing systems trained on such data would systematically reproduce and amplify the discriminatory patterns embedded in it.
A separate investigation by the Stop LAPD Spying Coalition and a research team including researchers at UCLA published findings in 2021 that analyzed PredPol's deployment in Los Angeles specifically, documenting the geographic overlap between LAPD prediction boxes and historically redlined neighborhoods, the concentration of prediction box activity in Black and Latino communities, and the feedback loop dynamic through which patrol concentration in prediction boxes generated arrests that reinforced the model's assessments.
This academic research performed a function that no market mechanism, regulatory process, or internal review had performed: independent, rigorous, methodologically transparent analysis of whether the system was doing what it claimed to do and at what cost to which communities. It provided the evidentiary foundation for subsequent advocacy and litigation.
7. Civil Liberties Organizations: Litigation and Advocacy
The American Civil Liberties Union of California and the Stop LAPD Spying Coalition were the primary civil society organizations engaged on PredPol in Los Angeles. Their approach combined public advocacy, community organizing, legal analysis, and direct political engagement.
The ACLU's work on predictive policing nationally, and in Los Angeles specifically, included: publication of reports documenting civil liberties concerns; testimony before legislative bodies considering predictive policing policy; litigation in cases where algorithmic policing led to specific constitutional violations; and public education campaigns designed to inform affected communities of their rights.
The Stop LAPD Spying Coalition pursued a community-centered approach: intensive organizing in affected neighborhoods, documentation of community member experiences with predictive policing, and direct engagement with elected officials. Their campaign against PredPol and related surveillance technologies was one of the most sustained and ultimately successful community-led AI accountability campaigns in the United States.
These organizations did not prevail quickly. Years elapsed between the initial deployment of PredPol in Los Angeles and the LAPD's eventual discontinuation of the system. During that time, community members continued to bear the costs of intensified policing. The slowness of accountability mechanisms in AI governance is itself an ethical problem: by the time successful advocacy produces changes, substantial harm has already been done.
8. Santa Cruz: The First US City to Ban Predictive Policing
In June 2020, the city of Santa Cruz, California, became the first city in the United States to ban predictive policing software outright. The Santa Cruz City Council voted unanimously to prohibit city agencies from acquiring, using, retaining, or disclosing data or assessments produced by any predictive policing system.
The Santa Cruz decision was the product of organizing by a coalition that included local activists, national civil liberties organizations, and academic experts — and it benefited from a particular local political context. Santa Cruz is a small, progressive university city, and the political conditions that enabled a unanimous council vote exist in relatively few jurisdictions. The decision nonetheless established a precedent that other cities have since followed: Oakland, California; Portland, Oregon; and King County, Washington have all enacted restrictions on predictive policing software.
The Santa Cruz ban was important not only as a policy outcome but as a proof of concept: community organizing, sustained over years and coordinated with civil society organizations, academic researchers, media coverage, and political advocacy, can produce accountability for AI systems that cause harm. The path from deployment decision to accountability was long and costly in human terms — years of community members bearing the costs of over-policing — but it was navigated successfully.
9. Stakeholder Map: Power, Interest, and Voice
Applying the stakeholder matrix framework from Chapter 4:
High Power, High Interest:
- LAPD leadership — had authority to deploy and continue the system; directly measured by crime statistics shaped by deployment
- PredPol/Geolitica — controlled the technology; had revenue stake in continued deployment
- City of Los Angeles administration — political accountability for public safety outcomes

High Power, Lower Interest (initially):
- City Council — had budget authority but limited engagement with technical details
- State-level elected officials — could create regulatory requirements but had other priorities
- Federal courts — would have authority if constitutional challenges were brought and won

High Interest, Low Power:
- Residents of Black and Latino neighborhoods targeted by prediction boxes — highest stake, minimal formal voice
- Community organizations representing those residents — high interest, limited resources and formal authority
- Individual police officers — significant interest in how the system affected their work; union voice but not decisive on technology adoption

Variable Power, High Interest:
- Academic researchers — limited power but high interest and some ability to shape public discourse and policy through publications
- ACLU and civil liberties organizations — limited formal power but ability to litigate, advocate, and generate media coverage
- Investigative journalists — significant power to shape public perception and political pressure

Data Subjects:
- People whose prior arrests formed the training data — had no voice in whether their data was used, no ability to correct erroneous records, and no awareness of their role in the system

Invisible Affected Parties:
- All residents of LAPD prediction-box neighborhoods who experienced intensified policing — affected by the system without any formal relationship to it
10. What Good Stakeholder Engagement Would Have Looked Like
A deployment process that genuinely engaged the full stakeholder ecosystem before deploying PredPol in Los Angeles would have looked fundamentally different from what actually occurred.
Before deployment:
- Independent methodological review of PredPol's algorithm by criminologists, statisticians, and legal scholars without financial relationships with the vendor, examining whether the "demographic neutrality" claim was methodologically valid given the nature of the training data.
- Structured community input from neighborhoods likely to be targeted by prediction boxes, using established community engagement methods including town halls, focus groups, and partnerships with trusted community organizations.
- Civil rights legal analysis assessing Fourth and Fourteenth Amendment implications of prediction-box policing, with findings disclosed to city council before the deployment decision.
- Transparency about the system's methodology sufficient to allow independent technical evaluation.
- Clear, publicly stated criteria for assessing the system's success — including equitable impact across demographic groups, not only aggregate crime statistics.
- A sunset clause: a defined period after which the system would be independently evaluated and re-approved or discontinued based on evidence, not momentum.

During deployment:
- Ongoing community liaison function with genuine power to surface community concerns and require departmental response.
- Public reporting of prediction-box locations and police activity within them, enabling academic and civil society scrutiny.
- Regular, independent audit of the system's accuracy claims and demographic impact.

Accountability mechanism:
- City council review of the system at defined intervals, with community input processes preceding each review, and clear authority to discontinue the contract if review findings warranted it.
None of these mechanisms would have guaranteed a better outcome. But they would have created the conditions under which the problems that eventually came to light could have been identified earlier, and in which affected communities had a meaningful voice in decisions that profoundly affected their lives.
11. Discussion Questions
- PredPol claimed that using only location, date, and crime type — not demographic data — made its algorithm objective and race-neutral. Evaluate this claim. What does it reveal about the assumptions underlying "demographic blindness" as an approach to algorithmic fairness?
- The LAPD adopted PredPol through an administrative process that did not require city council approval. Should AI systems with civil liberties implications require legislative approval rather than executive or departmental authorization? What are the practical arguments for and against requiring this?
- The communities most affected by PredPol — predominantly Black and Latino residents of Los Angeles — had the highest stake in the deployment decision and the least formal voice in it. Design a specific stakeholder engagement process that would have given these communities a genuine (not merely consultative) voice before deployment. What institutional mechanisms would be required to make that process meaningful rather than performative?
- The Santa Cruz ban came nearly a decade after PredPol began operating in California. What does this timeline reveal about the speed of AI accountability mechanisms? What reforms to legal, regulatory, or governance structures could accelerate accountability when AI systems cause harm?
- PredPol has since rebranded as Geolitica and continues to operate, framing itself as a tool for understanding crime patterns rather than predicting them. Does rebranding solve the fundamental methodological problems identified in this case? What would you require the company to demonstrate before a city could responsibly deploy its software?
This case study should be read alongside Section 4.5 (The Invisible Stakeholders) and Section 4.6 (Stakeholder Analysis in Practice) in the chapter text.