Case Study 8.2: London's Facial Recognition Trials and the Future of the Surveilled City
Overview
Since 2016, the Metropolitan Police Service (MPS) in London has conducted a series of trials of live facial recognition (LFR) technology, deploying cameras linked to facial recognition software at specific public locations and events and comparing the faces of passersby in real time against a database of individuals the police wish to locate. The trials have been controversial: they have been contested by civil liberties organizations and subjected to multiple independent audits. They represent the clearest current case study of what happens when CCTV's static recording capability is augmented with real-time identification, and of the implications for public space as a zone of anonymity or surveillance.
From Recording to Recognition: The Technical Shift
Traditional CCTV records what happens and allows review after an event. Investigators can examine footage to identify suspects after a crime has occurred. The individual walking through a camera's field of view is recorded but not immediately identified; their footage may never be reviewed if no relevant event occurs nearby.
Live facial recognition changes this relationship fundamentally. Every face that passes within range of an LFR camera is immediately compared against a watchlist — a database of individuals the police wish to locate. If a match is found, an alert is generated and officers nearby can approach the individual. If no match is found, the biometric data is (in the MPS's description of their system) discarded within seconds.
The difference in surveillance impact is significant. Traditional CCTV creates a potential surveillance record that is reviewed retrospectively; LFR creates a real-time identification system in which every pedestrian's face is checked against a government database in the moment of passing. The practical anonymity of moving through public space — knowing that while you might be recorded, you are unlikely to be individually identified as you walk past a camera — is eliminated.
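The shift from recording to recognition can be sketched as a matching loop. The sketch below is illustrative only: the function name, the embedding representation, and the similarity threshold are assumptions for exposition, not details of the MPS's actual system.

```python
import numpy as np

# Illustrative threshold; real deployments tune this operationally.
SIMILARITY_THRESHOLD = 0.7

def scan_face(face_embedding, watchlist):
    """Compare one passerby's face embedding against every watchlist entry.

    Returns the matched identity, or None. Per the MPS's description of its
    system, a None result means the biometric data is discarded within seconds.
    """
    best_id, best_score = None, 0.0
    for identity, reference in watchlist.items():
        # Cosine similarity between the live embedding and the stored reference.
        score = float(np.dot(face_embedding, reference) /
                      (np.linalg.norm(face_embedding) *
                       np.linalg.norm(reference) + 1e-9))
        if score > best_score:
            best_id, best_score = identity, score
    # A match raises an alert so nearby officers can approach the person.
    return best_id if best_score >= SIMILARITY_THRESHOLD else None
```

Every pedestrian in range passes through a check of this kind; the qualitative change from traditional CCTV is that the comparison happens at the moment of passing rather than in later review of footage.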
The Metropolitan Police Trials: Timeline and Methodology
2016–2019: Pilot Trials. The MPS conducted eight LFR deployments between 2016 and 2019, primarily at public events including the Notting Hill Carnival and the Remembrance Sunday commemorations. The trials were acknowledged to be experimental: the technology was being tested for accuracy and operational practicality.
Independent audits. In 2019, the MPS commissioned Professor Peter Fussey of the University of Essex to conduct an independent evaluation of the trials. His report, published in 2019, made headlines:
- In six of the eight deployments, the system did not generate a single correct identification
- False positive rates were very high: of the 42 "matches" generated across the trials, 37 were false positives, cases in which the system flagged innocent people as individuals on the watchlist
- Overall, more than 81% of the match alerts the system generated were incorrect
- The watchlist methodology was not clearly defined: it was unclear who had been added to the database, on what basis, and with what oversight
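The headline error rate follows directly from the figures reported above; a quick check of the arithmetic:

```python
matches_generated = 42  # alerts raised across the trials
false_positives = 37    # innocent people flagged as watchlist matches

false_discovery_rate = false_positives / matches_generated
# 37/42 is roughly 0.881, consistent with the finding that the system
# was incorrect "more than 81%" of the time it generated a match alert.
print(f"{false_discovery_rate:.1%}")
```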
2020–present: Live Rollout. Despite the 2019 audit's damning accuracy findings, the MPS announced in January 2020 that it was moving beyond trials to operational deployment of LFR. The technology used in the operational deployment differed from the trial systems — the MPS argued it was more accurate — and the deployment framework had been updated.
The Legal Challenge: R (Bridges) v. South Wales Police
The most significant legal challenge to live facial recognition was brought not against the MPS but against South Wales Police (SWP), which had conducted its own LFR trials at public events. Edward Bridges, a civil liberties activist, challenged the deployment he had encountered at a public event and near a Christmas market.
The Court of Appeal ruled in August 2020 that South Wales Police's use of LFR had been unlawful. The court found three specific legal failures:
- The legal framework was not "in accordance with the law" as required by Article 8 of the European Convention on Human Rights. The guidance under which SWP operated gave individual officers too much discretion about where to deploy LFR and whom to put on the watchlist.
- The Data Protection Impact Assessment (DPIA) conducted by SWP was inadequate: it did not properly consider the interference with Article 8 rights.
- The public sector equality duty had not been complied with: SWP had not considered whether the system had differential impacts across racial groups.
The Bridges decision is significant as a legal precedent: it did not ban live facial recognition but established that the existing legal framework governing its use was inadequate. It required clearer legal authority, more robust data protection assessment, and engagement with racial equity impacts.
The MPS's operational deployment — governed by a different framework than SWP's — has continued to be challenged on similar grounds and continues to be contested by civil liberties organizations.
The Accuracy Problem: Race and Facial Recognition at Scale
Chapter 7 introduced the NIST findings on demographic accuracy disparities in facial recognition algorithms. The MPS trials provide a real-world application of those findings.
The 2019 Fussey report did not provide full demographic breakdowns of false positive rates, partly because the MPS did not provide sufficient data to enable this analysis. But a subsequent analysis by Big Brother Watch found that, in the trials where ethnic background of incorrect matches could be identified, Black people accounted for a disproportionate share of false positives.
The Notting Hill Carnival deployments raise particular concerns. The Carnival is an event that celebrates Caribbean heritage and is attended predominantly by Black Londoners and visitors. Deploying LFR at the Carnival — with higher false positive rates for Black faces — means that the people most likely to be incorrectly identified, approached by police, and required to prove they are not on a watchlist are people attending an event that is specifically culturally Black.
The deployment site choice interacts with the accuracy disparity to produce a racially unequal burden on innocent people who happen to resemble someone on the police watchlist.
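This compounding effect can be illustrated with a back-of-envelope expected-value calculation. The crowd sizes and per-scan false positive rates below are hypothetical, chosen only to show the mechanism; they are not measured figures from the trials.

```python
# Hypothetical deployment where the group facing the higher per-scan false
# positive rate also makes up most of the crowd, as at a culturally Black
# event scanned by a system that is less accurate on Black faces.
crowd_size = {"higher_fpr_group": 80_000, "lower_fpr_group": 20_000}
false_positive_rate = {"higher_fpr_group": 0.0008, "lower_fpr_group": 0.0002}

expected_false_alerts = {
    group: round(crowd_size[group] * false_positive_rate[group], 1)
    for group in crowd_size
}
# The higher-FPR group absorbs 64 of the 68 expected false stops.
print(expected_false_alerts)
```

Even modest accuracy disparities, multiplied across tens of thousands of scans at a site where the disadvantaged group predominates, concentrate nearly all of the false-stop burden on that group.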
The "Discarded Data" Claim
The MPS consistently states that if a face does not match anyone on the watchlist, the biometric data captured is discarded "within moments." This claim is central to the MPS's argument that LFR does not create a general surveillance database of everyone who passes by — only the data of people who match the watchlist is retained.
Civil liberties organizations have raised several concerns about this claim:
Verifiability. The public cannot independently verify that data is discarded as described. The MPS's word is the only assurance available; there are no third-party audits of the data deletion process.
The pipeline problem. Even if individual face images are discarded, metadata about who was scanned, when, and where may be retained in system logs. The absence of an image does not necessarily mean the absence of a record that a scan occurred.
The watchlist as the real privacy concern. The MPS's focus on data deletion for non-matches deflects attention from the more fundamental question: who is on the watchlist? The Fussey report found that the criteria for adding individuals to the watchlist were not clearly defined, that oversight of watchlist composition was limited, and that there was no systematic process for removing individuals whose inclusion was no longer justified.
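The pipeline problem can be made concrete with a small sketch: even if the biometric image is deleted on a non-match, a separate operational log may still record that the scan took place. The log schema below is an assumption for illustration, not a description of any actual MPS system.

```python
import datetime

biometric_store = []  # face data: deleted for non-matches, per the MPS's claim
scan_log = []         # operational system log: may persist independently

def process_scan(camera_id, face_data, matched):
    if matched:
        biometric_store.append(face_data)  # retained only for watchlist matches
    # The event record is written regardless of the match outcome.
    scan_log.append({
        "camera": camera_id,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "matched": matched,
    })

process_scan("cam_01", face_data=b"...", matched=False)
# The face data is gone, but a record that a scan occurred remains.
```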
Public Response: Avoidance and Defiance
The MPS's LFR deployments have generated two documented forms of public response.
Avoidance. Civil liberties organizations and researchers observed that significant numbers of people, upon seeing signage indicating LFR was in use, changed their behavior — pulling up hoods, turning away from cameras, changing route. This is a direct behavioral chilling effect: people modified their use of public space in response to visible surveillance.
Defiance and resistance. Several individuals who had been warned of a deployment approached and challenged the system directly — some shielding their faces, some demanding to know on what basis they had been included in surveillance. In one documented case, a man pulled his hood up and walked past a deployment. Police officers stopped him and issued a fine for disorderly conduct. The stop itself — on the basis of apparent evasion of surveillance — generated a legal and political controversy about whether attempting to avoid being scanned is an actionable offense.
This controversy crystallized the stakes: in a public space with LFR deployed, does an individual have the right to avoid being scanned? If the state can require individuals to submit their faces to identification checks when entering public space, is the right to anonymous movement in public effectively extinguished?
Discussion Questions
- The Court of Appeal in Bridges found South Wales Police's use of LFR unlawful not because facial recognition is inherently impermissible but because the specific legal framework governing its use was inadequate. What does this ruling suggest about the path to lawful LFR deployment? What legal framework would satisfy the Court's concerns about legal authority, data protection, and racial equity? And do you believe such a framework can adequately protect against the concerns raised in this case study?
- The MPS deployed LFR at the Notting Hill Carnival, an event with specific cultural significance for Black Londoners, with a system that has documented higher false positive rates for Black faces. Evaluate the MPS's decision to deploy at this location. What considerations should govern the selection of deployment sites for LFR? Does the intersection of deployment location and accuracy disparity create a specific racial equity concern beyond what would exist in other deployment settings?
- The "discarded data" claim is central to the MPS's defense of LFR as non-invasive to innocent passersby. Evaluate this argument. What would need to be true for the claim to fully address the surveillance concern? What is not addressed by the claim even if it is fully accurate?
- The chapter describes the behavioral responses to LFR deployments: avoidance (changing route, pulling up hoods) and defiance (directly challenging the system). What do these responses tell us about the relationship between surveillance and public space? Should people have the right to decline facial recognition scanning in public? What does answering this question require us to decide about the nature of public space?
- Traditional CCTV creates a potential surveillance record reviewed retrospectively. LFR creates real-time identification of every person who passes within range. Is this difference a matter of degree or a qualitative change in the nature of surveillance? Draw on Chapter 2's panopticon analysis to support your argument.
- Jordan reads about the Notting Hill Carnival deployments and says to Yara: "Imagine going to Carnival — a celebration of Black culture — and having your face checked against a police database just for being there." Yara responds: "That's the point. The surveillance never lets you just be in public as a Black person — it marks you as a potential suspect." Evaluate Yara's claim. Is she overstating the surveillance impact? Does the analysis in this case study support or complicate her argument?
Case Study 8.2 | Chapter 8: CCTV and the Surveilled City | Part 2: State Surveillance