

Chapter 35: Facial Recognition: The Face as a Data Point

"Facial recognition is a technology that quite literally reduces the human person to a number. It is one of the most invasive surveillance technologies that has ever existed."

— Evan Greer, Fight for the Future

"No person should be arrested, prosecuted, or incarcerated for a crime they did not commit simply because a flawed algorithm pointed to them."

— ACLU, in litigation challenging facial recognition-based arrests


Myth vs. Reality: Facial Recognition Claims

Before examining how facial recognition works, let's address common claims circulating in both promotional and critical contexts.


MYTH: Facial recognition is "nearly perfect" — it identifies faces with 99%+ accuracy. REALITY: Accuracy claims in vendor marketing typically derive from benchmark tests using high-quality, well-lit photographs of people in demographic groups well-represented in training data. Real-world accuracy degrades substantially with lower image quality, varied lighting, non-frontal angles, and — critically — for people with darker skin tones. The Gender Shades study (Section 35.5) found error rates up to 34.7% for darker-skinned women in commercially deployed systems.


MYTH: Facial recognition just identifies faces — it's like a fingerprint database. REALITY: Unlike fingerprint databases (which require a sample, usually taken at arrest), facial recognition can operate on images collected without the subject's knowledge or consent — surveillance camera footage, social media photos, images scraped from the internet. Clearview AI has built a database of over 30 billion images scraped from social media and public websites. The "just a fingerprint database" framing ignores the distinction between biometrics collected with consent and biometrics collected covertly.


MYTH: If you're innocent, you have nothing to fear from facial recognition. REALITY: Robert Williams (Detroit, 2020) and Nijeer Parks (New Jersey, 2019) were both innocent people arrested and jailed based on incorrect facial recognition matches. Williams was arrested in front of his family; Parks spent 10 days in jail before charges were dropped. Facial recognition errors are not equally distributed — they are more frequent for darker-skinned people, women, and older individuals, meaning innocent people who look like what algorithms predict criminals look like bear disproportionate risk of false arrest.


MYTH: Facial recognition in airports is just for identifying terrorists. REALITY: Facial recognition at U.S. airports is operated by the Department of Homeland Security and used for: identity verification of travelers, identifying people on watchlists, and — in commercial contexts — connecting travelers to loyalty programs. The infrastructure built for one purpose (security screening) enables others (commercial profiling) through function creep.


MYTH: You can opt out of facial recognition. REALITY: You can opt out of some commercial uses of facial recognition (Delta's facial boarding opt-out, for example). You cannot opt out of surveillance camera systems, law enforcement databases, government systems, or scraped-photo databases like Clearview AI. And you cannot change your face.


Opening: Jordan's Scenario

Jordan got the call on a Thursday afternoon. A security manager at a grocery store chain had filed a report with local police: surveillance cameras had captured footage of a shoplifting incident. A facial recognition system had returned a match — Jordan's name, Jordan's address, Jordan's photo from a driver's license database.

The officer was polite. They weren't arresting Jordan — not yet. They wanted Jordan to come in and explain the footage.

Jordan had been at work when the shoplifting occurred. They had clocked in and clocked out, and the warehouse tracking system had recorded every movement. The alibi was digital and irrefutable.

But the alibi required proving it. The innocent person had to demonstrate their innocence. The algorithm's accusation had shifted the burden.

Jordan called Yara, who called a lawyer she knew from the civil liberties organization. The lawyer was unsurprised. "It's happening more and more," she said. "The algorithm names someone, the police investigate, the person proves they weren't there. Then the same algorithm names someone else."

"So it just keeps going?" Jordan asked. "Bad match after bad match?"

"Until departments stop using it, or until enough people sue, or until the legislature acts. So far: mostly it keeps going."

Jordan thought about what they looked like. Mixed-race, Black and white. They knew, from readings for Dr. Osei's class, that facial recognition systems were less accurate for people with darker skin tones. They thought about who else had received calls like this one. Who didn't have a digital alibi. Who didn't have Yara with a lawyer's number on her phone.


35.1 How Facial Recognition Works: The Technical Architecture

Facial recognition is not one technology but a pipeline of several distinct processes. Understanding the pipeline helps clarify both how errors occur and where interventions are possible.

Stage 1: Face Detection

Before recognition can occur, the system must detect that an image contains a face. Face detection algorithms are trained on millions of face images to identify the visual patterns associated with human faces: two eyes above a nose above a mouth, in specific geometric relationships, across a range of sizes and orientations.

Face detection is a prerequisite for all subsequent steps. Errors at this stage mean the system either misses faces (false negatives — not detecting a face that is present) or identifies non-faces as faces (false positives).

Stage 2: Face Alignment

Once a face region is detected, the image is normalized: aligned to a standard orientation, scaled to a standard size, and adjusted for lighting where possible. This alignment process is necessary because the recognition algorithm was trained on aligned images; misaligned inputs produce degraded performance.

Alignment quality degrades with: extreme angles (profile views, heavily upward or downward angles), occlusions (hats, scarves, sunglasses, masks), unusual lighting (very bright backlight, very low light), and low image resolution.

These are precisely the conditions that characterize much real-world surveillance footage: CCTV is often captured from fixed angles, under variable lighting, and at distances that leave individual faces at low resolution.

Stage 3: Feature Extraction

The aligned face image is processed by a neural network to extract a compact mathematical representation — often called a "face embedding" or "feature vector" — that encodes the distinctive characteristics of the face. This representation is typically a vector of several hundred to a few thousand numerical values.

The neural network learns through training on millions of labeled face images to extract features that are: invariant across different images of the same person (the same person under different conditions produces similar vectors) and discriminative across different people (different people produce dissimilar vectors).

The quality of the feature extraction depends critically on the training data: if the training set over-represents certain demographics (young, lighter-skinned, frequently photographed individuals), the network will be better calibrated for those demographics than for underrepresented ones.
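
The invariance and discrimination properties can be illustrated with a toy comparison. The sketch below fabricates embedding vectors (a real system produces them with a trained network; these are random stand-ins, and all names and dimensions are invented for illustration) and compares them with cosine similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 512-dimensional embeddings, fabricated for illustration.
rng = np.random.default_rng(0)
person_a_photo1 = rng.normal(size=512)
# Same person under new conditions: the original vector plus small noise.
person_a_photo2 = person_a_photo1 + rng.normal(scale=0.3, size=512)
# A different person: an unrelated vector.
person_b_photo = rng.normal(size=512)

same = cosine_similarity(person_a_photo1, person_a_photo2)
diff = cosine_similarity(person_a_photo1, person_b_photo)
print(f"same person: {same:.2f}, different people: {diff:.2f}")
```

A well-trained extractor keeps same-person scores high and different-person scores near zero; the gap between those two scores is what the matching stage exploits.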

Stage 4: Matching

The extracted feature vector is compared against a database of known face vectors. There are three types of matching:

1:1 Verification: Compare one face image against one stored face to determine if they're the same person. Used for: phone unlock (is this the registered owner?), border crossing (does this face match the passport photo?).

1:N Identification: Compare one face image against a database of N stored faces to find the best match. Used for: law enforcement databases (who is this person in surveillance footage?), watchlist screening (is this person on a list of known individuals?), Clearview AI-style searches.

Real-time tracking: Continuously process video frames to identify and track specific individuals through a scene. Used for: some Chinese mass surveillance systems, sports venue security, experimental law enforcement applications.
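
The 1:N mode reduces to a nearest-neighbor search over enrolled embeddings. The following is a minimal sketch with fabricated vectors and a hypothetical `best_match` helper; the threshold value is arbitrary, chosen only for illustration:

```python
import numpy as np

def best_match(probe: np.ndarray, gallery: dict, threshold: float = 0.7):
    """1:N identification: return the gallery identity whose embedding is most
    similar to the probe, or None if no score clears the threshold."""
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: sim(probe, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

# Hypothetical enrolled database of three identities (fabricated vectors).
rng = np.random.default_rng(1)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

# Probe: a noisy re-capture of "bob" -- the top score belongs to him.
probe = gallery["bob"] + rng.normal(scale=0.2, size=128)
print(best_match(probe, gallery))

# Probe of someone not enrolled: all scores stay low, so no match.
stranger = rng.normal(size=128)
print(best_match(stranger, gallery))
```

Note that `best_match` always has a top-scoring candidate; only the threshold separates "no match" from "match," which is why the threshold choice carries so much weight.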

The 1:N identification case — used in law enforcement — generates the most documented harms. The larger the database, the higher the chance of a false positive match: even a very accurate system produces false matches at scale. If a system produces a false match in 0.1% of comparisons (99.9% accuracy per comparison) and searches a database of 10 million people, it can be expected to return roughly 10,000 false candidate matches per query.
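
The scaling arithmetic can be made explicit as a back-of-envelope calculation (assuming, simplistically, an independent false-match probability per comparison):

```python
# Back-of-envelope model: expected false matches in one 1:N search,
# assuming an independent 0.1% false-match rate per comparison.
false_match_rate = 0.001      # a "99.9% accurate" comparison
database_size = 10_000_000    # 10 million enrolled faces

expected_false_matches = false_match_rate * database_size
print(int(expected_false_matches))  # -> 10000 candidate false matches per query
```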

📊 Real-World Application: This is why threshold matters. Facial recognition returns not a binary yes/no but a similarity score — how close is this face to that face. The decision about what score threshold constitutes a "match" involves a trade-off: lower threshold catches more true positives but also more false positives (more false arrests). Higher threshold misses more true positives but produces fewer false positives. Law enforcement agencies using facial recognition as an investigative lead — not final determination — should treat even high-confidence matches as requiring substantial independent corroboration. Evidence suggests they often don't.
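
The threshold trade-off described above can be simulated with synthetic score distributions (the means, spreads, and thresholds below are invented for illustration, not taken from any deployed system):

```python
import numpy as np

rng = np.random.default_rng(2)
# Fabricated similarity-score distributions: genuine pairs (same person)
# cluster high, impostor pairs (different people) cluster lower, with overlap.
genuine = rng.normal(loc=0.80, scale=0.08, size=10_000)
impostor = rng.normal(loc=0.45, scale=0.10, size=10_000)

for threshold in (0.55, 0.65, 0.75):
    false_positives = int((impostor >= threshold).sum())  # wrong person "matched"
    false_negatives = int((genuine < threshold).sum())    # right person missed
    print(f"threshold {threshold:.2f}: "
          f"{false_positives:5d} false positives, "
          f"{false_negatives:5d} false negatives")
```

Raising the threshold drains one error column and fills the other; no threshold eliminates both, which is why corroboration outside the system is essential.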


35.2 The Major Vendors: Clearview AI, Amazon, NEC, and More

Clearview AI: The Scraping Problem

Clearview AI, founded in 2017 by Hoan Ton-That and Richard Schwartz, represents the most radical manifestation of facial recognition's surveillance potential.

Clearview's database contains over 30 billion images scraped from social media platforms, news sites, government databases, and virtually any website with publicly accessible images. The images were collected without the knowledge or consent of the people depicted or the platforms hosting them. Clearview's core product allows customers — law enforcement agencies, in its original pitch — to upload a face image and receive results showing where that face appears online, along with links to the source pages.

The practical effect: Clearview collapsed the separation between private life and law enforcement scrutiny. A person who posted a photo on Instagram in 2012, long before they knew Clearview existed, may appear in Clearview's database. A law enforcement search for a face would potentially return that decade-old Instagram post, providing name, social network, and connections — without the person having ever volunteered their information to law enforcement.

Clearview's legal status has been contested:

  • Multiple states (Illinois, California, New York) have sent cease-and-desist letters
  • The ACLU sued Clearview under the Illinois Biometric Information Privacy Act (BIPA)
  • In 2022, a settlement committed Clearview to stop selling to most private businesses
  • Clearview continues to sell to law enforcement agencies, arguing its scraping is protected by the First Amendment as collection of public information

The Clearview case crystallizes a key issue: if information is publicly visible online, can it be aggregated into a comprehensive surveillance database? The public availability of each individual photograph does not obviously imply consent to their compilation into a searchable face identification system. This is the aggregation problem — individually innocuous public information becomes surveillance when systematically compiled.

🎓 Advanced Note: The First Amendment argument Clearview makes — that scraping publicly available information is protected speech — has not been definitively resolved by courts. The Supreme Court's Van Buren v. United States (2021) decision narrowed the Computer Fraud and Abuse Act's application to unauthorized computer access but did not directly address the legality of large-scale scraping. The legal status of Clearview-style aggregation remains contested.

Amazon Rekognition

Amazon's Rekognition facial recognition service is a cloud-based API that allows organizations to add facial recognition capabilities to their own systems. It has been sold to law enforcement agencies, including ICE (Immigration and Customs Enforcement) and Orlando Police Department.

Amazon Rekognition was tested by researchers at the ACLU in 2018: using a database of 25,000 "mugshot" photos and photographs of members of Congress, the system incorrectly identified 28 members of Congress as people with arrest records. African American members of Congress were misidentified at a higher rate.

Amazon subsequently challenged the ACLU's methodology and updated its accuracy claims. The fundamental issue — that the system's error rates are higher for people of color — persists across multiple commercial facial recognition systems.

NEC, Idemia, and the Government Market

NEC (Japanese electronics company) and Idemia (French security company) are major vendors to government agencies, including for border control, national ID systems, and law enforcement. These companies generally provide more controlled, higher-accuracy systems than Clearview's web-scraping approach, but their deployment in law enforcement and immigration contexts raises analogous accountability concerns.

NEC's NeoFace system is used at U.S. Customs and Border Protection, by several international border agencies, and by some law enforcement agencies. Idemia (which grew through acquisitions including the fingerprint biometrics company MorphoTrust) provides biometric verification for the TSA PreCheck program and multiple international identification systems.


35.3 Government Use Cases: Law Enforcement, Border, and Military

Law Enforcement

Facial recognition in law enforcement is typically used as an investigative lead: police capture surveillance footage of an unknown suspect, run it through a facial recognition system, and receive potential matches that investigators then pursue through traditional means. The process varies significantly by agency:

  • Some agencies have formal policies governing when facial recognition can be used, what threshold constitutes an actionable match, and what corroboration is required before arrest
  • Many agencies lack formal policies, using the technology as investigators see fit
  • The FBI, DHS, and various state law enforcement agencies operate or have access to facial recognition systems with databases including driver's licenses, passport photos, and arrest records
  • Local police often access these systems through contracts with vendors or partnerships with federal agencies

The investigative-lead-only model — facial recognition as one clue among many, requiring independent corroboration — is defensible in theory. In practice, documented wrongful arrests suggest that corroboration requirements are not always applied.

Border Control

The Department of Homeland Security's "Biometric Entry-Exit" system uses facial recognition at major airports to verify traveler identities. The program:

  • Matches travelers' faces against U.S. passport or visa photos
  • Applies to both U.S. citizens and non-citizens, though with different legal implications
  • Has been expanded to include verification at more airports and checkpoints
  • Has been extended into commercial partnerships (airlines using DHS photos for boarding verification)

The border context raises distinct legal issues: the Fourth Amendment applies differently at the border (courts have upheld extensive warrantless searches in border contexts). International travelers, including non-citizens with sharply diminished Fourth Amendment protections, are systematically biometrically processed.

Military Applications

Military use of facial recognition — for targeting in conflict zones, for identifying persons of interest in occupied territories, for analyzing drone footage — represents the highest-stakes deployment of the technology. Chapter 34's discussion of Project Maven (drone footage analysis using AI) is one context; broader military applications include:

  • Matching images from battlefield surveillance against databases of known persons of interest
  • Biometric enrollment of local populations in conflict zones (documented from Afghanistan and Iraq)
  • Border control in military contexts (processing populations crossing conflict zones)

The international humanitarian law implications of AI-assisted targeting are contested and largely unresolved.


35.4 Commercial Use Cases: Airports, Stadiums, Retail

Beyond law enforcement and government identification, facial recognition has entered commercial contexts:

Airports: Delta, British Airways, and other airlines have implemented optional facial boarding — passengers can board planes by having their face compared against a passport photo database rather than presenting a paper boarding pass. Opt-out is theoretically available but requires knowing to request it.

Stadiums and venues: Multiple sports venues and entertainment arenas have deployed facial recognition for security screening: identifying people on ban lists, season ticketholders, and persons of interest.

Retail: Loss prevention applications use facial recognition to identify people who have previously shoplifted in the same chain's stores. This is the context of Jordan's scenario: a system that identified Jordan as a potential suspect based on face matching.

Banks: Some financial institutions use facial recognition for ATM access and account verification.

Workplace: Employee timekeeping and building access systems increasingly use facial recognition as an alternative to badge-based access.

In most commercial contexts, disclosure and consent are variable. Some venues post notices; many do not. Some provide opt-out options; most do not. The legal framework for commercial facial recognition is underdeveloped — consumer protection law, state biometric privacy laws (like Illinois BIPA), and scattered FTC oversight provide inconsistent protection.


35.5 The Gender Shades Study: Documented Accuracy Disparities

The most significant empirical study of facial recognition accuracy disparities is the Gender Shades project, conducted by Joy Buolamwini (then a graduate student at MIT Media Lab) and Timnit Gebru (then a Microsoft researcher, later fired by Google in a separate controversy involving AI ethics research).

The Research

Published in 2018 in a paper titled "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," the study evaluated three commercial facial analysis systems — Microsoft Azure Face API, Face++ (Chinese AI company), and IBM Watson Visual Recognition — on a benchmark dataset Buolamwini and Gebru developed specifically to address demographic balance.

Prior benchmark datasets had been dominated by lighter-skinned individuals — reflecting the demographics of the sources (stock photos, academic datasets, entertainment industry images) rather than the demographics of the world. Buolamwini and Gebru created the Pilot Parliaments Benchmark (PPB), using photos of elected officials from several African and European countries to ensure demographic balance.

The Findings

The systems showed significant accuracy disparities across the intersections of gender and skin tone (measured on the Fitzpatrick scale):

  • Overall accuracy: Systems achieved high accuracy overall — 88%+ for some systems
  • Lighter-skinned males: Very high accuracy — up to 99.2%
  • Darker-skinned females: Much lower accuracy — error rates as high as 34.7%
  • Error rate gap: Up to 34.4 percentage points between best- and worst-performing subgroup

The intersectional pattern — the worst performance at the intersection of darker skin and female gender — reflects the composition of training data: AI systems trained primarily on lighter-skinned images of predominantly male subjects perform worse on darker-skinned women, the demographic least represented in typical training datasets.
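
An audit of this kind reduces to grouping predictions by subgroup and comparing error rates. The sketch below uses fabricated counts (not the study's data) to show the computation:

```python
# Sketch of an intersectional accuracy audit in the style of Gender Shades.
# Counts are fabricated for illustration -- not the study's actual data.
# Each entry: (subgroup, number of test images, number misclassified).
results = [
    ("lighter-skinned male",   385, 3),
    ("lighter-skinned female", 270, 19),
    ("darker-skinned male",    318, 38),
    ("darker-skinned female",  271, 94),
]

error_rates = {group: errors / n for group, n, errors in results}
for group, rate in sorted(error_rates.items(), key=lambda kv: kv[1]):
    print(f"{group:24s} error rate: {rate:6.1%}")

gap = max(error_rates.values()) - min(error_rates.values())
print(f"gap between best and worst subgroup: {gap:.1%}")
```

Aggregate accuracy over all four groups would look respectable here, which is exactly why disaggregated, intersectional reporting is the point of the audit.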

Why This Matters

The accuracy disparities documented by Gender Shades have direct implications for law enforcement use of facial recognition:

  1. Racial bias in false arrest risk: If facial recognition is less accurate for darker-skinned people, darker-skinned individuals are more likely to be falsely identified as crime suspects. This creates a racially discriminatory effect, regardless of whether the algorithm was designed with discriminatory intent.

  2. Compounding existing disparities: Facial recognition in law enforcement is disproportionately used in communities with heavier police presence — communities that are disproportionately communities of color. Less accurate technology, applied in higher-surveillance contexts, compounds racial disparities in criminal justice.

  3. The audit gap: Many law enforcement agencies using facial recognition do not conduct accuracy audits. They are using systems without knowing how accurate those systems are for the populations they are policing.

The Gender Shades research was followed by "Actionable Auditing" (Raji and Buolamwini, 2019), which found that Microsoft, IBM, and Face++ all improved their systems' accuracy for darker-skinned women after the Gender Shades publication — demonstrating that disclosure and accountability can produce improvement, and that the prior disparities reflected choices about what to prioritize, not technical impossibility.

🌍 Global Perspective: The accuracy disparities documented in Gender Shades are amplified in contexts where facial recognition is deployed against populations that are most dissimilar from Western training datasets. Surveillance systems deployed by Western technology companies in African, Middle Eastern, or South Asian contexts may perform particularly poorly for local populations — raising serious human rights concerns about technology exports to contexts where the accountability mechanisms of the vendor's home country don't apply.


35.6 The Wrongful Arrests: Robert Williams and Nijeer Parks

The accuracy disparities documented in Gender Shades are not abstract. They have produced documented wrongful arrests.

Robert Williams (Detroit, 2020)

Robert Williams, a Black man living in Detroit with his wife and two daughters, was arrested in January 2020 after a facial recognition search, run by the Michigan State Police against a driver's license photo database, returned him as a match for a suspect in a watch store robbery. Williams was arrested at his home, handcuffed in front of his daughters, and detained overnight.

When presented with the surveillance footage and asked if the man in the video was him, Williams held the image next to his face for the detective and said, "No, this is not me." The detective reportedly responded, "The computer says it's you."

The actual surveillance footage — poor quality, captured from a distance — showed a heavyset man in a cap. Williams, who had been at work when the robbery occurred, looked somewhat similar to the suspect but was clearly not the same person on close examination.

Williams was released but spent 30 hours in detention. Detroit Police subsequently acknowledged the arrest was based solely on the facial recognition match — which the department's own policy prohibited. The policy required that facial recognition be used as an investigative lead, not as the basis for arrest. The policy was violated.

Williams subsequently sued the city of Detroit and the investigating officer. His case has been cited in litigation and legislative debates about facial recognition.

Nijeer Parks (New Jersey, 2019)

Nijeer Parks, a Black man from New Jersey, was charged with shoplifting and assault with a deadly weapon after a facial recognition match identified him as a suspect. Parks had been at a hotel 30 miles away at the time of the alleged crime. Despite having a cash receipt showing his presence elsewhere, Parks spent approximately 10 days in jail before the charges were dropped.

Parks filed a civil rights lawsuit against the police department and the city. His case was settled in 2024.

What These Cases Reveal

Both cases involve: an innocent Black man, a facial recognition match from low-quality surveillance footage, a failure to require corroboration before arrest, and significant harm — incarceration, family trauma, legal costs.

They also reveal systematic problems:

Policy violations are common: Detroit's policy prohibited arrest based solely on facial recognition match. The policy was violated. A policy's existence does not ensure compliance, particularly in law enforcement cultures where "the computer said so" carries significant weight.

Burden shift to the accused: Both Williams and Parks had to prove they were elsewhere. The facial recognition match shifted the presumption of innocence. Proving innocence is possible — but it requires resources, connections, and the practical ability to access and present evidence. Not everyone has Williams's ability to immediately identify the problem (pointing out his face next to the footage) or Parks's documented alibi.

Racial targeting is embedded in deployment context: Facial recognition is used more frequently in communities with heavier police presence — which are disproportionately Black and Brown communities. Less accurate technology, deployed in higher-surveillance contexts, means Black and Brown people disproportionately face false identification.

⚠️ Common Pitfall: The wrongful arrest stories might suggest that the problem is "bad implementation" — if police just followed their own policies, required corroboration, and didn't rely solely on facial recognition, the technology would be fine. This is the reform framing. The abolitionist response: if the technology systematically produces more errors for Black people, and if it is systematically deployed most heavily in Black communities, then "better implementation" cannot address the structural racial disparity. The technology is not neutral; its deployment is not neutral; the combination is inherently biased.


35.7 The "Opt Out" Impossibility

What distinguishes facial recognition from other surveillance technologies, in biometric privacy terms, is the face's involuntary nature as an identifier.

You can change your username. You can change your phone number. You can change your email address. You can use a different credit card. You cannot change your face.

This involuntary quality of the face as biometric identifier means that:

You cannot opt out of a comprehensive face database once your image is in it. Clearview AI has 30+ billion images; you cannot opt out of being identified if your face appears in its database (though you can request deletion in some jurisdictions, the data may be retained in other forms or by other systems).

You cannot protect yourself from passive biometric collection. Walking through a public space, attending a public event, appearing in someone else's photograph — all of these actions can result in your face being captured and processed without your knowledge or consent.

The camera can follow you without your knowing. Physical surveillance systems in prior eras required physical presence by a surveillance officer. Automated facial recognition allows the camera's information to be retroactively searched. You don't know you're being recognized; you have no opportunity to object.

Aggregation creates a comprehensive spatial record. Multiple facial recognition systems in different locations — retail stores, transit stations, office buildings, street cameras — can create a location history analogous to cell site location information, without ever requiring access to your phone.

This is what the chapter's opening myth-busting identified: you cannot "opt out" of the face you have.


35.8 Regulatory Responses: EU AI Act, City Bans, and Legislative Battles

EU AI Act (2024)

The EU's AI Act, the world's first comprehensive AI regulation, classifies biometric identification systems used in public spaces for law enforcement purposes as "high-risk" AI systems, subject to strict requirements. More significantly, it:

  • Prohibits real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions for certain serious crimes)
  • Restricts "post" (retrospectiveive) remote biometric identification in criminal justice, subjecting it to judicial or administrative authorization requirements
  • Prohibits biometric categorization systems that infer sensitive attributes like race, ethnicity, religion, or political opinion

This is the most stringent facial recognition regulation in the world. Its enforcement and interpretation are still developing; practical compliance by member states is ongoing. The prohibition on real-time public biometric surveillance — subject to exceptions — represents a significant constraint on the Chinese-style mass facial surveillance that critics of facial recognition fear.

U.S. City and State Bans

In the absence of comprehensive federal legislation, a growing number of U.S. cities and states have restricted or banned facial recognition by government agencies:

  • San Francisco (2019): First major city to ban city agencies (including police) from using facial recognition
  • Oakland (2019): Banned city agency use
  • Boston (2020): Banned city agency use
  • Portland (2020): Banned both city agency and private commercial use — the most sweeping municipal ban
  • Massachusetts: Passed restrictions on police use of facial recognition (statewide)
  • Illinois: BIPA (Biometric Information Privacy Act, 2008) predates the facial recognition debate but provides significant protection — requires consent for biometric data collection, private right of action, and statutory damages. BIPA has been used to challenge Clearview AI and many corporate facial recognition applications.

The city bans have been criticized for being symbolic — they don't prevent federal agencies (FBI, DHS, ICE) from using facial recognition in those cities. But they have practical effects: they prevent local police from using the technology as an investigative tool, and they establish political precedents for state and federal legislation.

Federal Legislative Efforts

Congress has introduced multiple facial recognition bills — the Commercial Facial Recognition Privacy Act, the Facial Recognition and Biometric Technology Moratorium Act — but as of 2026, no comprehensive federal legislation has passed. The pattern echoes the broader federal privacy legislation failure examined in Chapter 31.

📝 Note: Illinois's BIPA is the most consequential U.S. facial recognition law because it has a private right of action — individuals can sue companies that violate it without waiting for government enforcement. BIPA lawsuits have resulted in significant settlements: Facebook paid $650 million to Illinois users for its facial recognition "Tag Suggestions" feature; TikTok paid $92 million; Clearview AI faced multiple BIPA suits. The private right of action is a central feature of effective enforcement.


35.9 The Database Problem: Clearview and the End of Public Anonymity

The most fundamental challenge posed by facial recognition is not its use in specific contexts — airports, law enforcement, retail — but the potential for a comprehensive face database that effectively ends public anonymity.

The aspiration to move freely through public space without being identified — to be a stranger in a crowd — has been a feature of urban life since the rise of modern cities. The sociologist Georg Simmel wrote in 1903 that the city's gift was the freedom of anonymity: the ability to be surrounded by people who did not know you, and to benefit from that anonymity as a form of social liberty.

Clearview AI's database represents the potential end of this freedom. In any city where Clearview's clients (law enforcement agencies) can search, any face visible in public can be identified — not by a human who happened to know you, but by a machine search of 30+ billion images. Your face at a protest, at a doctor's office, at a church, in a neighborhood you don't usually frequent — all of these are potentially searchable.

The Clearview scenario is not yet fully realized: not all law enforcement agencies use Clearview; not all public spaces are covered by cameras; the system makes errors. But the trajectory is toward comprehensive coverage. Each additional camera, each additional image in the database, each improvement in accuracy moves the system toward the elimination of public anonymity.

🔗 Connection: This is the convergence of CCTV (Chapter 8), biometrics (Chapter 7), and the data economy (Chapter 11) into a unified surveillance system. The architecture of surveillance is becoming more integrated: the camera that captures your face, the database that stores your face, the algorithm that matches your face, and the commercial or governmental system that acts on the match are increasingly connected.


35.10 Jordan's Case: Resolution and Aftermath

Jordan's case was resolved in three days. The warehouse tracking logs showed Jordan's movements at the time of the alleged shoplifting. The lawyer submitted the records to the police department. The case was closed.

But Jordan felt something that didn't resolve with the case: the experience of being named by an algorithm, of having to prove your innocence against a machine's accusation. The lawyer had explained what happened: the grocery chain's loss prevention system used Clearview AI. Jordan's face had been in Clearview's database from an old social media photo. The algorithm had matched it, with "high confidence," to the store's surveillance footage.

Jordan asked the lawyer: "Can I get my face removed from Clearview's database?"

"You can submit a request," the lawyer said. "Under Illinois law, if you're a resident, you have rights. Under some other state laws. Federally, there's nothing."

Jordan was not a resident of Illinois.

"The request process exists," the lawyer continued, "but it's not guaranteed to work. And even if they remove your image from their main database, they may retain data derived from it. And there are other companies with similar databases."

Jordan thought about Adam Harvey's CV Dazzle work — the geometric face camouflage from Dr. Osei's lecture on surveillance art. "What if I wore makeup that defeated the algorithm?"

The lawyer smiled. "That would probably draw more attention from the actual human security guards."

The paradox of facial recognition resistance: the most effective countermeasures are the ones that make you maximally visible to human observers while making you invisible to machines. Evading the machine and evading human attention pull in opposite directions.

Jordan wrote in their journal that night: The algorithm named me. I was lucky I could prove it was wrong. What about the people who can't prove it? What about the people who have fewer resources, less documentation, no Yara on speed dial? The algorithm doesn't know who I am. It just knows what I look like. And what I look like was enough to accuse me.

They thought about the Gender Shades findings. They thought about Robert Williams and Nijeer Parks. They thought about every person who looked like them and didn't have a timestamped record proving they were somewhere else.

Privacy, they understood now, was not just about keeping secrets. It was about moving through the world without being reduced to your face. Without being accused by a machine that made more mistakes for people who looked like you.


35.11 Chapter Summary

Facial recognition is a pipeline technology — face detection, alignment, feature extraction, matching — whose accuracy varies significantly across deployment conditions and demographic groups.

How it works: The pipeline transforms face images into numerical vectors, which are compared against databases using similarity thresholds. Errors occur at each stage; real-world accuracy is significantly lower than vendor benchmark claims.
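The final matching stage can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a feature extractor has already converted each face image into a numeric embedding vector, and the function name `match_face` and the 0.6 threshold are hypothetical choices for the sketch.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, gallery, threshold=0.6):
    """Compare a probe embedding against a gallery of known embeddings.

    Returns the identity whose embedding is most similar to the probe,
    or None if no similarity clears the threshold (an 'open-set' search,
    as in one-to-many law enforcement queries).
    """
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```

The threshold is where policy hides inside engineering: lowering it returns more candidate matches (more "hits" for investigators) at the cost of more false matches, which is one mechanism by which benchmark accuracy and real-world error rates diverge.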

Clearview AI: By scraping 30+ billion images from social media and websites, Clearview has created a database that potentially ends public anonymity — allowing any face visible in public to be identified without consent. Its legal status is contested.

Other vendors: Amazon Rekognition, NEC, Idemia, and others serve law enforcement, border control, and commercial markets with varying levels of accuracy and accountability.

Accuracy disparities (Gender Shades): Error rates for darker-skinned women were up to 34.7% in commercial systems — a 34-percentage-point gap from lighter-skinned men. These disparities are not inherent to the technology but reflect training data composition.

Wrongful arrests: Robert Williams and Nijeer Parks — both Black men — were arrested based on incorrect facial recognition matches. Their cases document that accuracy disparities create racially discriminatory false arrest risk.

The opt-out impossibility: You cannot change your face. You cannot withdraw your image from databases you don't know exist. Facial recognition is the only surveillance technology from which there is genuinely no exit.

Regulatory responses: The EU AI Act prohibits real-time public biometric identification for law enforcement. City bans in San Francisco, Oakland, Boston, Portland, and elsewhere limit local government use. Illinois BIPA provides the strongest U.S. protection with private right of action. Federal legislation has not passed.

The database problem: Clearview's database and similar systems point toward the potential elimination of public anonymity — a transformation of urban life whose significance extends beyond any specific law enforcement application.


Next: Chapter 36 examines the racial dimensions of surveillance — how surveillance systems have historically targeted, harassed, and controlled racialized communities, and how facial recognition is the latest manifestation of a long history of racially organized watching.