Case Study: A Model DPIA: Assessing a Facial Recognition Deployment

"A DPIA that acknowledges risks without genuinely addressing them is performative, not protective." -- Summary of the UK Court of Appeal's reasoning in R (Bridges) v. South Wales Police

Overview

Live facial recognition (LFR) technology represents one of the most consequential data processing activities of the twenty-first century. It operates in public spaces, processes biometric data from thousands of non-suspects to identify a few individuals on a watchlist, and raises fundamental questions about the right to anonymity in public life. This case study examines how the Metropolitan Police and South Wales Police conducted DPIAs for their LFR deployments, what those assessments got right and wrong, and what the resulting UK Court of Appeal ruling established about the legal standard for DPIAs.

Skills Applied:

  • Evaluating a real-world DPIA against the standards from Section 28.2
  • Applying the necessity and proportionality tests to a high-stakes deployment
  • Analyzing the relationship between risk acknowledgment and genuine mitigation
  • Understanding how courts evaluate the adequacy of impact assessments


Background

The Technology

Live facial recognition works by comparing faces captured by cameras against a watchlist of individuals sought by police. A camera captures an image. The system extracts a biometric template (a mathematical representation of facial features). The template is compared against templates on the watchlist. If the system identifies a potential match, an alert is generated and a human operator is supposed to verify the match before any police action is taken.

The process generates three categories of data (see the sketch following this list):

  1. Probe images: Every face captured by the camera -- the vast majority belonging to people who are not on the watchlist and are not suspects.
  2. Biometric templates: Mathematical representations extracted from the probe images.
  3. Alert records: Cases where the system identifies a potential watchlist match, including the probe image, the watchlist image, and the confidence score.
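
The pipeline and these three data categories can be made concrete with a short sketch. The code below is a minimal illustration under stated assumptions, not any force's actual system: the dataclass fields, the hash-based extract_template stand-in, and the 0.6 alert threshold are all hypothetical.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProbeImage:                  # category 1: every face the camera captures
    image_bytes: bytes
    captured_at: datetime

@dataclass
class BiometricTemplate:           # category 2: mathematical representation of a face
    vector: list[float]

@dataclass
class AlertRecord:                 # category 3: created only on a potential match
    probe: ProbeImage
    watchlist_entry_id: str
    confidence: float
    operator_confirmed: bool = False   # human verification precedes any police action

ALERT_THRESHOLD = 0.6  # hypothetical similarity cut-off

def extract_template(probe: ProbeImage) -> BiometricTemplate:
    # Stand-in for a real face-embedding model; derives a deterministic
    # toy vector from the image bytes so the sketch runs end to end.
    digest = hashlib.sha256(probe.image_bytes).digest()
    return BiometricTemplate([b / 255 for b in digest[:8]])

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5
    return dot / norms if norms else 0.0

def process_frame(probe: ProbeImage,
                  watchlist: dict[str, BiometricTemplate]) -> AlertRecord | None:
    """Compare one captured face against the watchlist; alert on a potential match."""
    template = extract_template(probe)
    best_id, best_score = None, 0.0
    for entry_id, wl_template in watchlist.items():
        score = cosine_similarity(template.vector, wl_template.vector)
        if score > best_score:
            best_id, best_score = entry_id, score
    if best_id is not None and best_score >= ALERT_THRESHOLD:
        return AlertRecord(probe, best_id, best_score)  # queued for human review
    return None  # no match: the probe data should now be deleted (see retention policy)
```

In a real system the embedding model, the threshold, and the quality of the watchlist images jointly determine the false-alert behaviour examined below.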

Under the GDPR, biometric data processed for the purpose of uniquely identifying a natural person is a "special category" of data under Article 9 (mirrored for UK policing as "sensitive processing" under Part 3 of the Data Protection Act 2018). This makes LFR an inherently high-risk processing activity and a mandatory DPIA trigger.

UK Deployment History

Beginning in 2016, several UK police forces piloted LFR at public events:

  • South Wales Police deployed LFR at major events including the 2017 UEFA Champions League Final in Cardiff, where approximately 170,000 people were processed. The system generated 2,470 potential matches, of which 2,297 (approximately 93%) were false positives; the short calculation after this list puts those figures in operational terms.

  • The Metropolitan Police (Met) conducted LFR trials at various London locations between 2016 and 2020, including the Notting Hill Carnival, central London shopping areas, and transport hubs.
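
The operational meaning of the Cardiff figures is easier to see as a quick calculation. The alert and false-positive counts are those reported above; the precision and per-attendee alert rate are derived from them, not separately reported.

```python
alerts = 2_470            # potential matches generated by the system
false_positives = 2_297   # alerts not corresponding to the watchlisted person
faces_scanned = 170_000   # approximate number of people processed

true_positives = alerts - false_positives       # 173 genuine matches
false_alert_share = false_positives / alerts    # ~0.93: 93% of alerts were wrong
precision = true_positives / alerts             # ~0.07: only 7% of alerts were genuine
alert_rate = alerts / faces_scanned             # ~0.015: ~1.5% of attendees flagged

print(f"{false_alert_share:.0%} of alerts false; "
      f"precision {precision:.0%}; "
      f"{alert_rate:.1%} of scanned faces triggered an alert")
```

Note the distinction: the 93% figure is the share of alerts that proved false (the false discovery rate), not the false positive rate over all 170,000 faces scanned, which works out to roughly 1.4%.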

Both forces conducted DPIAs for their deployments. The quality of these assessments became a central legal issue.


The DPIAs: What They Covered

South Wales Police DPIA

The South Wales Police DPIA addressed:

Purpose and legal basis: Prevention and detection of crime. Legal basis: law enforcement purposes under Part 3 of the Data Protection Act 2018 (the UK's implementation of the EU Law Enforcement Directive).

Data subjects: Anyone whose face was captured by the cameras, regardless of whether they were on the watchlist. At the Champions League Final, this included approximately 170,000 people.

Necessity: The DPIA argued that LFR enabled faster identification of wanted individuals compared to manual methods. Officers could not memorize the faces of hundreds of wanted persons; the technology supplemented human capability.

Risk identification:

Risk | Assessment
False positive matches leading to wrongful stops | Acknowledged; human verification required
Processing biometric data of thousands of non-suspects | Acknowledged; data deleted unless a match is flagged
Bias in facial recognition accuracy across demographics | Acknowledged; "operator training" cited as mitigation
Chilling effect on public assembly | Acknowledged; deployment at specific events rather than permanent
Function creep (expansion beyond original scope) | Acknowledged; policy limiting use to specified purposes

Metropolitan Police DPIA

The Met's DPIA was more detailed and was made partially public in 2019. It addressed similar categories but also included:

Equality analysis: An assessment of the system's impact on different demographic groups. The Met acknowledged published research showing differential accuracy rates -- specifically, higher error rates for women and for people with darker skin tones (Buolamwini and Gebru, 2018).

Proportionality assessment: The Met argued that the proportionality of processing was justified by the seriousness of the offenses on the watchlist and the time-limited nature of deployments.

Retention policy: Probe images (faces of non-suspects) were to be deleted immediately after the real-time comparison was complete. Alert records were retained for investigation purposes.
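
Continuing the earlier sketch, this retention rule reduces to a single decision point after the real-time comparison. The code is an illustration of the policy as stated, not the Met's actual implementation; secure_delete and the in-memory store are stand-ins.

```python
retained_alerts: list[AlertRecord] = []   # stand-in for an audited evidence store

def secure_delete(obj) -> None:
    # Stand-in for certified deletion; a real system must make the biometric
    # data unrecoverable, not merely drop a reference to it.
    del obj

def handle_comparison_result(probe: ProbeImage,
                             template: BiometricTemplate,
                             alert: AlertRecord | None) -> AlertRecord | None:
    """Apply the stated rule: retain data only when an alert was raised."""
    if alert is None:
        # Non-match: probe image and biometric template are deleted
        # immediately after the real-time comparison completes.
        secure_delete(template)
        secure_delete(probe)
        return None
    # Match: the alert record (probe image, watchlist image reference,
    # confidence score) is retained for investigation purposes.
    retained_alerts.append(alert)
    return alert
```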


The Court Challenge

R (Bridges) v. South Wales Police (2020)

Ed Bridges, a civil liberties campaigner from Cardiff, challenged South Wales Police's use of LFR after learning his face had been scanned while he was shopping. The case reached the UK Court of Appeal in 2020.

Key findings of the Court of Appeal:

1. The legal framework was inadequate. The court found that the existing legal framework did not provide sufficiently clear guidance on when and how LFR could be used. The police had too much discretion in determining who was placed on watchlists, where and when LFR was deployed, and how the technology was configured. This discretion, unguided by clear rules, violated the requirement for processing to be "in accordance with the law."

2. The DPIA was insufficiently rigorous. The court found specific deficiencies:

The necessity test was not genuinely applied. The DPIA assumed LFR was necessary without rigorously evaluating whether the same objectives could be achieved through less intrusive means. Could additional officers, conventional CCTV review, intelligence-led policing, or community cooperation achieve the same identification goals? The DPIA did not meaningfully engage with these alternatives.

Bias was acknowledged but not addressed. The DPIA noted that facial recognition performed less accurately on darker-skinned faces -- a finding confirmed by substantial research (Buolamwini and Gebru, 2018; NIST FRVT report, 2019). But the proposed mitigation was "operator training" -- teaching human operators to exercise extra caution when reviewing potential matches for certain demographic groups. The court and privacy advocates considered this mitigation inadequate for a systemic technological bias.

Community consultation was insufficient. The DPIA referenced public engagement activities, but critics argued that meaningful consultation with affected communities -- particularly Black communities disproportionately impacted by policing decisions -- was absent. General public opinion surveys are not a substitute for targeted engagement with communities bearing disproportionate risk.

3. The equality impact assessment was deficient. The court found that South Wales Police had not fulfilled its obligations under the Public Sector Equality Duty (Equality Act 2010, Section 149). The force was required to have "due regard" to the need to eliminate discrimination and advance equality. The court found that while the force had considered equality issues, it had not done so with sufficient rigor -- particularly regarding the known disparity in accuracy for different ethnic groups.

The Ruling

The Court of Appeal ruled that South Wales Police's use of LFR was unlawful on three grounds:

  1. The legal framework was too discretionary.
  2. The DPIA did not adequately assess the risk of bias.
  3. The equality impact assessment was insufficiently rigorous.

The ruling did not ban facial recognition outright. It established that LFR could be lawful -- but only within a clearer legal framework, with a genuinely rigorous DPIA, and with adequate safeguards against bias.


Analysis: What Makes a DPIA "Genuine"?

The Bridges ruling establishes several principles for what constitutes a genuine, legally sufficient DPIA:

Necessity Must Be Demonstrated, Not Assumed

A DPIA cannot simply state that the processing is necessary and proceed to risk analysis. It must rigorously evaluate alternatives -- including doing nothing, using less intrusive methods, or achieving the same goal through non-technological means. The necessity test is comparative: the question is not merely whether the method serves the stated goal, but whether it is necessary relative to those alternatives.

Risk Mitigation Must Be Proportionate to Risk

Acknowledging a risk in a DPIA and proposing an inadequate mitigation is not sufficient. The court effectively established that mitigations must be commensurate with the identified risks. "Operator training" is not a proportionate mitigation for a systemic technological bias that produces measurably higher error rates for specific demographic groups. Proportionate mitigations might include: not deploying until the technology achieves comparable accuracy across groups, implementing different confidence thresholds for different demographic categories, or excluding demographic groups for which the technology performs below an acceptable standard.
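
The first of those mitigations -- not deploying until accuracy is comparable across groups -- can be operationalized as a pre-deployment gate that refuses to run when measured error rates diverge beyond a tolerance. A minimal sketch, in which the tolerance and the per-group error figures are hypothetical inputs that a real DPIA would have to justify:

```python
def deployment_gate(fpr_by_group: dict[str, float],
                    fnr_by_group: dict[str, float],
                    max_disparity: float = 0.01) -> bool:
    """Allow deployment only if error rates are comparable across groups.

    max_disparity (an illustrative 1 percentage point here) is a policy
    choice that belongs in the DPIA's proportionality analysis, not with
    the engineering team.
    """
    for rates in (fpr_by_group, fnr_by_group):
        if max(rates.values()) - min(rates.values()) > max_disparity:
            return False
    return True

# Hypothetical evaluation figures, in the spirit of the disparities
# documented by NIST's FRVT demographic testing:
fpr = {"group_a": 0.002, "group_b": 0.015}   # false positive rate per group
fnr = {"group_a": 0.030, "group_b": 0.034}   # false negative rate per group

print(deployment_gate(fpr, fnr))  # False: the 1.3pp FPR gap exceeds the 1pp tolerance
```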

Community Consultation Must Be Meaningful

A DPIA's stakeholder consultation section cannot be satisfied by general public opinion surveys or passive notice. For processing that disproportionately affects specific communities, meaningful consultation requires targeted engagement with those communities, genuine incorporation of their concerns into the assessment, and documented responses to objections raised.

Equality Impacts Must Be Actively Assessed

For public bodies, the obligation goes beyond identifying potential disparities. The DPIA must demonstrate "due regard" to equality -- which means actively assessing how the processing might affect different demographic groups, considering whether adjustments could reduce disparate impact, and documenting the reasoning for proceeding despite identified disparities.


Broader Implications

For Law Enforcement

The Bridges ruling has influenced LFR policy across the UK and beyond. Several police forces paused or abandoned LFR trials following the ruling. Those that continued (including the Met) implemented revised frameworks with clearer authorization processes, narrower watchlist criteria, and more rigorous DPIAs.

For DPIA Practice

The ruling established that courts will scrutinize the substance of DPIAs, not just their existence. A DPIA that checks the procedural boxes but lacks genuine analysis -- that identifies risks without proportionate mitigations, that assumes necessity without evaluation, that claims consultation without meaningful engagement -- can be found legally insufficient.

For Corporate Data Practices

While the Bridges case involved law enforcement, the principles apply to any DPIA conducted under GDPR. The necessity test, the proportionality requirement, the need for genuine mitigation, and the obligation of meaningful consultation are all GDPR Article 35 requirements that apply to private sector organizations as well.


Discussion Questions

  1. The false positive problem. At the Champions League Final, 93% of LFR alerts were false positives. Is any false positive rate acceptable when the consequence is a police stop? What false positive rate would you consider proportionate, and how would you justify that threshold?

  2. The operator training mitigation. Why did the court find "operator training" inadequate as a mitigation for algorithmic bias? Design an alternative mitigation strategy that would be more proportionate.

  3. The consent gap. The 170,000 people processed at the Champions League Final did not consent to biometric processing. Under what legal or ethical framework, if any, is mass biometric processing in public spaces justifiable without individual consent?

  4. The future of facial recognition DPIAs. What would a "model" facial recognition DPIA look like -- one that addressed all the deficiencies identified in the Bridges ruling? Outline its key elements.

  5. Connecting themes. How does the LFR DPIA case connect to the power asymmetry theme? Who has the power to deploy surveillance, who bears the consequences of misidentification, and how does the DPIA process address (or fail to address) this asymmetry?


Your Turn: Mini-Project

Option A: DPIA Redesign. Write a revised DPIA for the South Wales Police LFR deployment that addresses the three grounds on which the court found the original insufficient. Focus on necessity evaluation, bias mitigation, and community consultation.

Option B: Corporate Application. A major retailer wants to deploy facial recognition in its stores to identify known shoplifters. Using the Bridges ruling principles, conduct a DPIA for this corporate deployment. How do the principles translate from the law enforcement context to the retail context?

Option C: Comparative Analysis. Research facial recognition regulation in two jurisdictions beyond the UK (e.g., the EU AI Act's classification, San Francisco's ban, Illinois BIPA). Compare how each jurisdiction addresses the concerns raised in the Bridges case.


References

  • Court of Appeal of England and Wales. R (Bridges) v. Chief Constable of South Wales Police [2020] EWCA Civ 1058.

  • Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*), 77-91. 2018.

  • National Institute of Standards and Technology. "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects." NISTIR 8280. December 2019.

  • Big Brother Watch. "Face Off: The Lawless Growth of Facial Recognition in UK Policing." Big Brother Watch Report, May 2018.

  • UK Information Commissioner's Office. "ICO Investigation into the Use of Live Facial Recognition Technology in Public Places by Law Enforcement." ICO, October 2019.

  • Fussey, Pete, and Daragh Murray. "Independent Report on the London Metropolitan Police Service's Trial of Live Facial Recognition Technology." Human Rights Centre, University of Essex, July 2019.

  • European Data Protection Board. "Guidelines on Facial Recognition Technology in the Area of Law Enforcement." EDPB, May 2022.