Case Study: Facial Recognition in Law Enforcement: The Detroit Case
"I was humiliated. My daughters were crying. And the whole thing happened because some computer got it wrong." — Robert Williams, wrongfully arrested based on a faulty facial recognition match, January 2020
Overview
On January 9, 2020, Robert Williams — a 42-year-old Black man — was arrested in the driveway of his home in Farmington Hills, Michigan, a suburb of Detroit. His wife watched from the doorway. His two young daughters watched from the window. Williams was handcuffed, placed in a police car, and taken to a Detroit Police Department detention facility, where he was held for thirty hours and interrogated about a shoplifting incident at a Shinola watch store that he did not commit.
The arrest was based on a facial recognition match. A surveillance camera at the store had captured a grainy image of the suspect. Detroit police ran the image through a facial recognition system, which returned a list of potential matches from a law enforcement photo database. Robert Williams's driver's license photo was among the results. An investigator placed Williams's photo in a six-person photo lineup and showed it to a loss prevention employee, who identified him — a process later scrutinized as circular and suggestive, since the witness was essentially being asked to confirm the algorithm's output.
Williams was the first person in the United States known to have been wrongfully arrested on the basis of a flawed facial recognition match. His was not the last such case to come to light. Two other Black men, Michael Oliver in Detroit and Nijeer Parks in New Jersey, were soon revealed to have been similarly misidentified by facial recognition technology, arrested, and later cleared. All three men were Black. This was not a coincidence.
This case study examines the deployment of facial recognition in Detroit law enforcement, the technical and institutional failures that produced wrongful arrests, and the broader questions about facial recognition, racial bias, and democratic governance that Eli's Detroit thread in Chapter 8 raises.
Skills Applied:
- Analyzing the intersection of technical bias and institutional decision-making
- Evaluating surveillance technology deployment through a racial justice lens
- Assessing governance mechanisms for emerging law enforcement technologies
- Connecting a specific case to broader theories of surveillance and power
The Situation
Project Green Light: Background
Detroit's engagement with surveillance technology did not begin with facial recognition. In 2016, the Detroit Police Department (DPD) launched Project Green Light, a public-private partnership in which local businesses — gas stations, convenience stores, restaurants, party stores, and eventually schools and churches — installed high-definition surveillance cameras that streamed live feeds directly to the DPD's Real Time Crime Center (RTCC). Participating businesses displayed a flashing green light to signal their membership in the program. By 2020, over 700 locations had enrolled, creating a network of thousands of cameras feeding continuous video to police.
Project Green Light was presented as a community safety initiative. The DPD argued that visible surveillance deterred crime and that real-time video feeds enabled faster police response. The program received support from some community members and business owners, particularly those in high-crime areas. But it also drew criticism from civil rights organizations, community organizers, and residents who argued that the program:
- Concentrated surveillance in Black neighborhoods. The vast majority of Project Green Light locations were in predominantly Black communities on Detroit's east and west sides. Wealthier, whiter areas of the city had few or no participating locations.
- Operated without meaningful community input. The program was designed by the police department and the mayor's office without public hearings, community votes, or participatory governance processes.
- Normalized police monitoring of everyday commercial activity. By placing cameras with live police feeds at gas stations, grocery stores, and restaurants, the program transformed routine daily errands into surveilled activities.
- Created infrastructure for more invasive technologies. Civil liberties advocates warned that a network of high-definition cameras connected to a centralized command center was precisely the infrastructure needed for facial recognition deployment.
That warning proved prescient.
Facial Recognition in Detroit: The Technology
In 2017, the DPD began using facial recognition technology, powered initially by DataWorks Plus software that incorporated algorithms from NEC Corporation and later from Rank One Computing. The system worked by extracting a "faceprint" — a mathematical representation of facial geometry — from a probe image (typically a surveillance photo of an unknown suspect) and comparing it against a gallery database of known faces (driver's license photos, mugshot databases, or other law enforcement records).
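Under the hood, such a pipeline reduces to two steps: embed faces as vectors, then rank gallery entries by similarity. The sketch below is a minimal illustration of that structure, not the DataWorks Plus or NEC implementation; the embeddings are random stand-ins, and the names (`rank_candidates`, `EMBED_DIM`) are hypothetical.

```python
import numpy as np

EMBED_DIM = 128  # real systems use a deep network producing roughly 128-512 dims

def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize a vector so that a dot product equals cosine similarity."""
    return v / np.linalg.norm(v)

def rank_candidates(probe: np.ndarray, gallery: dict, top_k: int = 5):
    """Compare one probe faceprint against every gallery faceprint and
    return the top-k person IDs ranked by similarity ("confidence score")."""
    scores = {pid: float(probe @ emb) for pid, emb in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy demo: random stand-in embeddings for a 10,000-person gallery and one probe.
rng = np.random.default_rng(0)
gallery = {f"ID-{i:05d}": normalize(rng.standard_normal(EMBED_DIM))
           for i in range(10_000)}
probe = normalize(rng.standard_normal(EMBED_DIM))

for person_id, score in rank_candidates(probe, gallery):
    print(person_id, round(score, 3))
# The ranked list is an investigative lead only: the search always returns
# its top candidates, however weak they are, and nothing in these scores
# guarantees that the true suspect is in the gallery at all.
```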
The system returned a list of potential matches ranked by confidence score, along with the caveat — stated clearly in the software's user manual and in DPD policy — that a facial recognition match was not a positive identification. It was an investigative lead that required independent verification. DPD policy stated that facial recognition results "shall not be used as the sole basis for an arrest."
The gap between policy and practice proved to be enormous.
The Robert Williams Case
On October 10, 2018, a theft was reported at a Shinola store in Midtown Detroit. Surveillance video showed a heavyset Black man in a red St. Louis Cardinals cap taking five watches worth approximately $3,800. The video quality was poor — grainy, captured at a distance, with partial facial visibility.
In March 2019, a detective submitted a still frame from the surveillance footage to the Michigan State Police facial recognition system, which searched it against a database of 49 million driver's license and state ID photos. The system returned a list of candidates. Robert Williams appeared as a possible match.
What happened next is critical. The detective placed Williams's driver's license photo into a six-person photo lineup and showed it to a Shinola loss prevention employee — the same employee who had reviewed the original surveillance footage. The employee identified Williams. But this identification was heavily influenced by the process itself: the employee had already studied the surveillance image, and the photo lineup was constructed based on the algorithm's output. The investigative process was circular rather than independent.
On January 9, 2020, police arrived at Williams's home with an arrest warrant. Williams — who had never been to the Shinola store, had no criminal record, and bore only a general resemblance to the actual suspect — was handcuffed in front of his family and transported to a detention center. During interrogation, a detective showed Williams a blurry printout of the surveillance image and asked, "Is this you?" Williams held the image next to his face and replied: "I hope you don't think all Black people look alike."
The detective, according to Williams's account, paused and said: "The computer says it's you." Williams was held for thirty hours before being released on bond. The case was eventually dismissed after the Wayne County Prosecutor's Office concluded that the evidence was insufficient.
Subsequent Cases
Williams's case was not an isolated incident:
Michael Oliver was arrested in 2019 for allegedly throwing a phone at a car in downtown Detroit, based on a facial recognition match. Oliver was held for two days. Charges were dropped after a second investigation found that the actual suspect did not resemble Oliver — beyond both being Black men.
Nijeer Parks, a Black man in New Jersey, was arrested in 2019 based on a facial recognition match tied to a shoplifting and assault case at a Hampton Inn in Woodbridge. Parks had never been to Woodbridge. He spent ten days in jail and nearly a year fighting the charges before they were dismissed.
In all three cases, the individuals were Black men. In all three cases, the facial recognition match was inaccurate. In all three cases, the investigative process treated the algorithmic output as more reliable than it was.
The Technical Problem: Bias in Facial Recognition
The Accuracy Gap
The wrongful arrests in Detroit were not random errors. They occurred against the backdrop of well-documented accuracy disparities in facial recognition technology.
In December 2019 — weeks before Williams's arrest — the National Institute of Standards and Technology (NIST) published the most comprehensive evaluation of facial recognition accuracy to date. Testing 189 algorithms from 99 developers, NIST found:
- False positive rates, the rate at which the system incorrectly matches two different people, were 10 to 100 times higher for Black and Asian faces than for white faces in many algorithms (the sketch following this list shows how such group-level rates are measured).
- Women of color experienced the highest error rates across nearly all systems tested.
- Accuracy varied dramatically by algorithm. Some algorithms showed minimal demographic disparities; others showed massive ones. But the average consumer or police department would not know which category their system fell into without independent testing.
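To make the NIST finding concrete, a demographic false positive audit works roughly like this: score a large set of impostor pairs (images of two different people), apply the single match threshold the deployed system would use, and compute the false positive rate separately for each group. The sketch below runs that computation on synthetic scores, with one group's impostor scores deliberately shifted upward to mimic the measured disparity; none of these numbers come from NIST's data.

```python
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.5  # one global match threshold, as deployed systems typically use

# Synthetic impostor-pair similarity scores (two *different* people compared).
# Group B's scores are shifted upward to mimic the disparity; in a real audit
# these would come from running the algorithm on labeled image pairs.
impostor_scores = {
    "group_A": rng.normal(loc=0.10, scale=0.12, size=100_000),
    "group_B": rng.normal(loc=0.28, scale=0.12, size=100_000),
}

for group, scores in impostor_scores.items():
    fpr = float(np.mean(scores > THRESHOLD))
    print(f"{group}: false positive rate = {fpr:.5f}")
# With one shared threshold, the group whose impostor scores run higher is
# falsely "matched" far more often: the kind of 10-100x gap NIST measured.
```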
Earlier, Joy Buolamwini and Timnit Gebru's "Gender Shades" study (2018) had tested commercial gender classification systems (a closely related facial analysis task) from Microsoft, IBM, and Face++ and found:
- Error rates for lighter-skinned males: no higher than 0.8%
- Error rates for darker-skinned females: as high as 34.7%
The disparity was a factor of more than 40. While some companies subsequently improved their models, the study demonstrated that facial analysis systems trained predominantly on datasets that overrepresent lighter-skinned faces systematically perform worse on the very people such systems are most frequently deployed against in policing contexts.
Why the Bias Exists
Facial recognition algorithms learn to recognize faces from training data — large datasets of facial images labeled with identities. If the training data overrepresents certain demographic groups (lighter-skinned males) and underrepresents others (darker-skinned women), the algorithm develops stronger pattern recognition for the overrepresented group. This is not a conspiracy; it is a predictable consequence of biased training data combined with the absence of systematic testing for demographic fairness.
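The mechanism is easy to reproduce on toy data. In the hypothetical sketch below, one classifier is trained on two synthetic "groups" whose underlying patterns differ; because group A supplies 95% of the training data, the model learns group A's pattern and performs near chance on group B. This illustrates the imbalance effect only; it is not a model of any real face recognition system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n: int, flip: bool):
    """A binary task whose true pattern differs by group: the label depends
    on feature 0 for one group and on feature 1 for the other."""
    X = rng.standard_normal((n, 2))
    y = (X[:, 1 if flip else 0] > 0).astype(int)
    return X, y

# Imbalanced training set: 95% group A, 5% group B.
Xa, ya = make_group(9_500, flip=False)
Xb, yb = make_group(500, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

for name, flip in [("group A", False), ("group B", True)]:
    X_test, y_test = make_group(5_000, flip)
    print(f"{name} test accuracy: {model.score(X_test, y_test):.2f}")
# Typical result: well above 0.9 for group A, close to chance for group B.
# The model never saw enough group-B examples to learn group B's pattern.
```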
Additional technical factors compound the problem:
- Image quality. Surveillance cameras in low-resource settings (gas stations, convenience stores) often produce lower-quality images than the controlled, well-lit photos in training datasets. Poor lighting disproportionately affects the visibility of darker-skinned faces.
- Environmental conditions. Outdoor cameras, nighttime footage, and cameras positioned at unusual angles all degrade accuracy, and these degraded conditions are more common in the high-crime, lower-income areas where surveillance is concentrated.
- Gallery database composition. If the reference database contains disproportionate numbers of mugshot photos of Black individuals, reflecting racial disparities in arrests and policing, then the system's pool of potential matches is already skewed; the short calculation after this list shows why that skew matters in a one-to-many search.
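The gallery composition point also has a simple arithmetic side. In a one-to-many search, every gallery entry resembling the probe is another opportunity for a false match; under the simplifying assumption of a fixed, independent per-comparison false match rate f, the chance of at least one false match across n risky comparisons is 1 - (1 - f)^n. The rate below is illustrative, not a measured value.

```python
# Probability of at least one false match in a one-to-many search, assuming
# a fixed, independent per-comparison false match rate (an illustration only).
f = 1e-5  # hypothetical per-comparison false match rate

for n in (10_000, 100_000, 1_000_000):
    p_any = 1 - (1 - f) ** n
    print(f"{n:>9,} comparisons -> P(at least one false match) = {p_any:.1%}")
# 10,000 -> 9.5%; 100,000 -> 63.2%; 1,000,000 -> ~100%. A gallery skewed
# toward one demographic group subjects members of that group to many more
# of these risky comparisons every time a probe image is searched.
```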
The Governance Failure
Policy vs. Practice
The DPD's own policy stated that facial recognition results "shall not be used as the sole basis for an arrest." This policy was violated — or, more precisely, circumvented. The detective who investigated Williams's case did not arrest him based solely on the algorithm. He constructed a photo lineup and obtained a witness identification. But the lineup was built on the algorithm's output, and the witness was the same person who had already reviewed the surveillance footage. The "independent" verification was, in practice, confirmation of the algorithm's suggestion.
This gap between policy and practice is a recurring pattern in surveillance governance. Formal policies can state anything; what matters is whether the incentives, training, and institutional culture ensure compliance.
Community Voice
At no point in the deployment of facial recognition in Detroit were residents asked whether they consented to the technology. There were no public hearings, no ballot measures, no community advisory boards with authority over the technology's use. The decision to deploy facial recognition was made by the police department and the mayor's office.
Eli's experience in the chapter's narrative reflects this reality. When he asks who approved the surveillance cameras in his neighborhood, the answer is: the city government, the police department, and the corporate vendors. The community whose daily life is most affected by the technology had no role in the decision.
The ACLU's Challenge
In 2019 and 2020, the American Civil Liberties Union (ACLU) of Michigan launched a sustained campaign against Detroit's use of facial recognition, combining legal advocacy, public education, and community organizing:
- The ACLU filed a complaint with the Detroit Board of Police Commissioners arguing that the DPD's facial recognition practices violated due process and equal protection.
- They published a detailed report documenting the technology's racial bias and the absence of governance safeguards.
- They organized community members — particularly from neighborhoods with heavy surveillance — to testify at city council and police commission hearings.
- They called for a moratorium on police use of facial recognition until adequate governance was established.
In 2020, the Detroit Board of Police Commissioners acknowledged the need for reform and adopted new restrictions, including a prohibition on using facial recognition to identify suspects during protests and a requirement for supervisory approval before conducting a facial recognition search. The ACLU argued these reforms were insufficient because they left the fundamental technology in place and did not address the accuracy disparities.
Broader Context: The National Debate
Detroit's experience was part of a broader national reckoning with facial recognition in law enforcement:
- San Francisco (2019), Boston (2020), Minneapolis (2020), and several other cities enacted bans or moratoriums on government use of facial recognition.
- King County, Washington (2021) prohibited county agencies from using facial recognition entirely.
- Amazon, IBM, and Microsoft announced pauses or restrictions on selling facial recognition technology to police departments, citing accuracy concerns and the absence of federal regulation.
- In Congress, multiple bills were introduced to regulate or ban federal use of facial recognition, though none had passed into law at the time of writing.
- The European Union's AI Act (2024) classified real-time biometric identification in public spaces as a "prohibited" AI practice, with narrow exceptions for law enforcement.
Discussion Questions
- The technology vs. the system. Some critics argue that the problem is the technology itself: facial recognition is too inaccurate, especially across racial groups, to be used in law enforcement. Others argue that the problem is how the technology is implemented, with inadequate training, biased databases, and insufficient safeguards. Which position is stronger? Is it possible to fix the implementation, or does the technology carry inherent risks that no governance framework can adequately mitigate?
- Community consent and policing. Eli argues that his community was never asked whether it wanted to be surveilled. Proponents of Project Green Light argue that many business owners voluntarily joined the program because they wanted safer streets. How should these competing perspectives be weighed? Is business owner participation sufficient to constitute "community consent," or does democratic governance require broader input? What mechanisms could ensure that the communities most affected by surveillance have a genuine voice in decisions about its deployment?
- The photo lineup problem. In the Williams case, the detective used the facial recognition output to construct a photo lineup, which was then shown to a witness who had already viewed the surveillance footage. Analyze this process through the lens of confirmation bias. How did the algorithm's output shape the subsequent investigation? What procedural changes would be needed to ensure that human judgment operates independently of algorithmic suggestion?
- Proportionality and consequences. Robert Williams was arrested for a property crime, the theft of five watches. He was handcuffed in front of his family, detained for thirty hours, and subjected to the trauma and stigma of a wrongful arrest. Evaluate whether the deployment of facial recognition technology is proportionate for property crime investigations. Does the calculus change for violent crimes? Where should the line be drawn, and who should draw it?
Your Turn: Mini-Project
Option A: Policy Analysis. Research the current status of facial recognition regulation in your city or state. Has your jurisdiction enacted any restrictions, bans, or moratoriums? If so, what do they cover? If not, what proposals have been made? Write a two-page analysis that evaluates your jurisdiction's approach (or lack of approach) against the Detroit case. Include at least three specific governance recommendations.
Option B: Accuracy Audit Design. Design a framework for auditing the accuracy of a facial recognition system before it is deployed in a law enforcement setting. Your framework should address: (a) what demographic groups should be tested, (b) what accuracy thresholds should be required, (c) how image quality conditions should be varied in testing, (d) how the audit results should be disclosed, and (e) who should conduct the audit (the vendor, the police department, or an independent third party). Present your framework as a one-page policy document with clear, actionable provisions.
Option C: Community Governance Model. Design a community governance structure for surveillance technology in a city like Detroit. Your model should include: (a) a decision-making body (who sits on it, how they are selected), (b) a process for evaluating proposed surveillance technologies before deployment, (c) ongoing oversight mechanisms, (d) a complaint and redress process for individuals harmed by surveillance errors, and (e) criteria for discontinuing a surveillance program. Present your model in two pages, referencing both the Detroit case and the governance concepts from Chapter 8.
References
- Hill, Kashmir. "Wrongfully Accused by an Algorithm." The New York Times, June 24, 2020.
- Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*), PMLR 81 (2018): 77-91.
- Grother, Patrick, Mei Ngan, and Kayee Hanaoka. "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects." NIST Interagency Report 8280, December 2019.
- ACLU of Michigan. "Detroit Police Department's Use of Facial Recognition Technology." Policy Report, 2020.
- Green, Ben. The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. Cambridge, MA: MIT Press, 2019.
- Detroit Police Department. "Facial Recognition Policy: Directive 307.5." Revised 2020.
- Garvie, Clare, Alvaro Bedoya, and Jonathan Frankle. "The Perpetual Line-Up: Unregulated Police Face Recognition in America." Georgetown Law Center on Privacy and Technology, October 2016.
- Ferguson, Andrew Guthrie. The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York: New York University Press, 2017.
- Koenig, Alexa. "Facial Recognition, Policing, and Race." Journal of Law and the Biosciences 7, no. 1 (2020).
- European Parliament and Council. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)." Official Journal of the European Union, 2024.