Case Study: Should Clearview AI Exist? An Ethical Analysis

"The greatest danger to our rights today is not from government acting against the wishes of the people but from government and business acting in concert, driven by information technology." — Bruce Schneier, Data and Goliath

Overview

In January 2020, a New York Times investigation revealed that a small startup called Clearview AI had scraped more than three billion photographs from public websites — Facebook, YouTube, Instagram, LinkedIn, Venmo, and millions of other sites — to build a facial recognition tool that could identify almost anyone from a single photograph. The company sold access to this tool primarily to law enforcement agencies across the United States and, eventually, around the world. No one whose photograph was scraped consented to inclusion in the database. No one was notified. The tool worked.

This case study applies all five ethical frameworks from Chapter 6 to a question that lies at the intersection of technology, privacy, public safety, and fundamental rights: Should Clearview AI exist?

Skills Applied:

  • Applying five ethical frameworks to a real-world technology policy question
  • Analyzing the gap between technical capability and ethical permissibility
  • Evaluating arguments from multiple stakeholder perspectives
  • Constructing a reasoned ethical judgment on a contested issue


The Situation

What Clearview AI Built

Clearview AI was founded in 2017 by Hoan Ton-That, an Australian entrepreneur, and Richard Schwartz, a former aide to New York Mayor Rudy Giuliani. The company's technology operates on a simple but powerful principle:

  1. Data collection. Clearview scraped publicly available photographs from across the internet — social media profiles, news articles, employment directories, personal blogs, and video thumbnails. By early 2020, the database contained over three billion images. By 2022, it had grown to an estimated 20 billion.

  2. Biometric processing. Each scraped photograph was processed by a facial recognition algorithm that extracted a unique biometric "faceprint" — a mathematical representation of facial geometry. These faceprints were indexed and stored.

  3. Search functionality. A user (typically a law enforcement officer) uploads a photograph of an unknown person — from surveillance footage, a crime scene image, or a social media screenshot. Clearview's algorithm generates a faceprint from the uploaded photo and searches its database for matches. Within seconds, the system returns matching photographs along with links to the public websites where the images were found — revealing the person's name, social media profiles, employment history, and other identifying information.

  4. Scale. The company claimed that its tool had been used by more than 600 law enforcement agencies in the United States, including the FBI, the Department of Homeland Security, and hundreds of local police departments. International clients included agencies in Canada, the United Kingdom, Australia, and several other countries.

What Clearview AI Did Not Do

It is important to be precise about Clearview's operations:

  • Clearview did not hack into private accounts. It scraped publicly available images — photographs that individuals had posted publicly, or that appeared in news articles, employment directories, and other public-facing websites.
  • Clearview did not collect images from government databases (driver's license photos, passport photos). Its database was built entirely from internet sources.
  • Clearview did not make its tool available to the general public (at least not officially). Access was restricted primarily to law enforcement and, in some cases, to select corporate clients.

Why It Matters

The Clearview AI case crystallizes several tensions that run through Chapter 6:

The legal-ethical gap. At the time of the New York Times investigation, much of what Clearview did was arguably legal. No U.S. federal law prohibited scraping publicly available photographs, and no comprehensive facial recognition law existed. Clearview operated in the legal vacuum Dr. Adeyemi describes in Section 6.1.1 — the space where regulation has not caught up with technology.

The consent fiction. The people behind the three billion scraped photographs did not consent to having their biometric data extracted, indexed, and sold to law enforcement. But their photographs were "public." The question of whether posting a photograph online constitutes consent to unlimited biometric processing is one that no existing consent framework was designed to address.

The power asymmetry. Clearview possessed comprehensive biometric data on billions of people. Those people had no knowledge of the database, no ability to opt out, and no recourse. The asymmetry was total and structural.


Analysis Through Five Ethical Frameworks

Framework 1: Utilitarian Analysis

Central question: Does the existence of Clearview AI produce the greatest overall good?

The case for (aggregate benefits):

Law enforcement agencies reported that Clearview's tool helped solve crimes that were otherwise intractable. Specific claims include:

  • The Indiana State Police used the tool to identify a suspect in a shooting within twenty minutes, a case that would have taken days or weeks using traditional methods.
  • The New York Police Department reportedly used it in cases involving child sexual exploitation, identifying victims and perpetrators from images.
  • Multiple agencies reported using the tool to identify suspects in robbery, assault, and fraud cases.

If Clearview's tool genuinely accelerates criminal investigations, reduces violent crime, identifies child exploitation victims, and brings perpetrators to justice, the aggregate benefit is substantial. Public safety is a genuine good that affects millions.

The case against (aggregate costs):

  • Chilling effects on public expression. If anyone can be identified from any photograph, participation in public life — attending protests, political rallies, religious gatherings, or simply walking down the street — becomes a potentially surveilled activity. The chilling effect on First Amendment activity (freedom of assembly, freedom of expression) affects hundreds of millions of Americans.
  • Disproportionate error rates. Facial recognition systems have been repeatedly shown to have higher error rates for darker-skinned individuals, women, and elderly people. A 2019 NIST study found that many commercial facial recognition algorithms were 10 to 100 times more likely to misidentify Black and Asian faces compared to white faces. False identifications in law enforcement contexts carry devastating consequences: wrongful arrests, coerced confessions, imprisonment of innocent people. Robert Williams, a Black man in Detroit, was wrongfully arrested in 2020 based on a faulty facial recognition match — held in a detention cell for thirty hours for a crime he did not commit.
  • Erosion of anonymity. The ability to move through public spaces without being identified is a feature of democratic life, not a bug. Clearview eliminates this anonymity for anyone whose photograph appears anywhere on the internet — which, in practice, means nearly everyone. The long-term social cost of normalizing pervasive biometric surveillance is difficult to quantify but potentially enormous.
  • Mission creep. Once a surveillance tool exists, its uses tend to expand. Clearview initially marketed to law enforcement but also provided access to private companies and wealthy individuals during its early growth phase. The tool could be used for stalking, harassment, political targeting, immigration enforcement, or authoritarian repression if deployed by governments with fewer democratic constraints.
  • Precedent effects. If Clearview's model is accepted, it establishes the principle that any publicly visible data can be scraped, biometrically processed, and sold — without consent, notification, or regulatory oversight. This precedent extends far beyond facial recognition to any biometric or behavioral data that is technically accessible.

Utilitarian assessment: The utilitarian calculation is genuinely contested. The benefits to law enforcement are real but concentrated (helping solve specific cases). The costs — chilling effects, discriminatory error rates, loss of anonymity, mission creep, and precedent effects — are diffuse but potentially civilization-altering. A careful utilitarian analysis would need to weigh not just the direct benefits and harms but the systemic and long-term effects of normalizing this technology. When those systemic effects are included, the utilitarian case for Clearview AI weakens significantly.

Framework 2: Deontological Analysis

Central question: Does Clearview AI respect the rights and dignity of the people in its database?

The universalizability test. Kant asks: Can the maxim "it is permissible to scrape any publicly available photograph, extract biometric data, and sell the resulting database to anyone willing to pay" be universalized? If everyone did this — if every company, government, and individual scraped every publicly available image for biometric indexing — the result would be a world in which posting any photograph online makes you permanently identifiable and trackable by anyone. Most people would not endorse this as a universal principle. The maxim fails the universalizability test.

The humanity-as-an-end test. Clearview takes photographs that individuals posted for specific purposes — sharing memories with friends, maintaining a professional profile, participating in public discourse — and repurposes them for an entirely different function: biometric surveillance. The individuals in the photographs are treated purely as data subjects. They receive no benefit. They are not consulted. They cannot opt out. They are treated, in Kant's language, "merely as a means" — as raw material for a commercial product that serves law enforcement and generates profit for Clearview's investors.

This is not a marginal case. Clearview converts a person's face — arguably the most intimate and identifying feature of their physical body — into a commercial product without their knowledge or consent. The deontological prohibition is clear and strong.

The consent analysis. Deontological ethics takes consent seriously as a condition for respecting autonomy. Clearview's response to the consent objection — that the photographs were "public" — conflates accessibility with consent. The fact that someone's photograph is publicly accessible does not mean they consented to having it biometrically processed, indexed, and sold to police departments. As Section 6.3.2 establishes, consent that does not reflect genuine understanding of what is being agreed to is not morally valid. No one who posted a photograph on Facebook in 2015 understood that they were consenting to inclusion in a law enforcement facial recognition database.

Deontological conclusion: Clearview AI fails both formulations of the categorical imperative. It treats billions of people merely as means, it relies on a conception of "consent" that does not satisfy the conditions for genuine autonomy, and its operating maxim cannot be coherently universalized. A deontological analysis provides a strong prohibition.

Framework 3: Virtue Ethics Analysis

Central question: Does Clearview AI reflect the character traits that technology developers should cultivate?

The virtuous data practitioner table from Section 6.4.2 provides a useful framework:

Justice. Clearview's technology disproportionately affects communities that are already over-policed — particularly Black and brown communities, which face both higher rates of police contact and higher rates of algorithmic misidentification. A just technology developer would consider not just whether a tool works but whether it works equitably. Clearview did not.

Honesty. Clearview operated in secrecy for years before the New York Times investigation. Its clients — law enforcement agencies — were not required to disclose their use of the tool. Individuals in the database were never informed. Transparency was not just absent; it was actively avoided. The company understood that public awareness would generate opposition, and it chose concealment over openness. This fails the virtue of honesty.

Temperance. Clearview scraped everything — billions of photographs from across the internet, far beyond what any stated purpose would require. There was no data minimization, no proportionality assessment, no effort to limit collection to what was genuinely needed. The approach was maximalist: collect everything because you can. This fails the virtue of temperance.

Courage. Ray Zhao, the CDO in the chapter's guest lecture, argues that the "hard cases" require the courage to say "we could build this, but should we?" Clearview's founders never asked this question — or if they did, they answered it by proceeding without engagement with the people affected. The courage to refuse a lucrative but harmful technology is precisely the virtue that was absent.

Compassion. Did Clearview's founders consider the human impact of their technology? The person wrongfully arrested based on a false match? The domestic violence survivor whose abuser uses facial recognition to locate her? The political dissident whose participation in a protest is identified and catalogued? Compassion requires consideration of the humans behind the data points. There is no evidence that this consideration played a role in Clearview's development.

Virtue ethics conclusion: Clearview AI embodies the vices of excess in nearly every dimension — excessive collection (intemperance), concealment (dishonesty), indifference to distributional effects (injustice), refusal to question one's own creation (cowardice masked as innovation), and detachment from human consequences (lack of compassion). A person of practical wisdom would not have built Clearview AI as it exists.

Framework 4: Care Ethics Analysis

Central question: What relationships are at stake, and what does responsible care require?

The relationship of exposure. Every person whose photograph is in Clearview's database is in a relationship with the system — not a relationship they chose, but one imposed on them. Care ethics takes this seriously: the fact that a relationship is involuntary does not diminish the responsibilities it creates. If anything, involuntary relationships — where one party has power and the other has none — create heightened responsibilities for the party with power.

Vulnerability mapping. Care ethics asks: Who is most vulnerable? In the Clearview case:

  • Undocumented immigrants. Facial recognition combined with law enforcement databases creates a tool of extraordinary power for immigration enforcement. People living in the United States without documentation — many of whom have families, children in school, and years of community ties — are uniquely vulnerable to identification and deportation.
  • Domestic violence survivors. Facial recognition can be used to locate individuals who have changed their identity or location to escape abusers. Even if Clearview restricts access to law enforcement, data breaches, insider misuse, or unauthorized access could put survivors at risk.
  • Protest participants. People who exercise their right to peaceful assembly — at racial justice protests, labor rallies, environmental actions — become identifiable in ways that enable retaliation, harassment, or government surveillance.
  • Minors. Children and teenagers whose parents posted photographs of them online — or who posted their own photos — are in the database without any capacity for consent. Their biometric data is being processed before they are old enough to understand what biometric data is.
  • Communities of color. As noted above, higher error rates for darker-skinned individuals mean that Black and brown communities bear a disproportionate burden of false identifications — a form of harm that falls on those who already face systemic over-policing.

What responsible care requires. Care ethics would insist that technology developers cannot create a system that affects billions of people — particularly the most vulnerable — without attending to their needs, listening to their concerns, and taking responsibility for the harms the system creates. Clearview did none of these things. It built a tool that affects the most vulnerable and made no effort to understand or mitigate the specific harms they face.

Care ethics conclusion: Clearview AI represents a fundamental failure of care. It creates involuntary relationships with billions of people, concentrates vulnerability on those who are already most exposed, and takes no responsibility for the specific harms it enables. The care ethics assessment is unambiguous: a technology built without attention to the vulnerability it creates is not ethically defensible.

Framework 5: Justice Theory (Rawlsian) Analysis

Central question: Behind the veil of ignorance, would you accept a world in which Clearview AI exists?

The Rawlsian test is especially powerful here because of the scale of the technology: it affects virtually everyone. Behind the veil of ignorance, you do not know:

  • Whether you will be a law enforcement officer who benefits from the tool
  • Whether you will be a crime victim whose attacker is identified using the tool
  • Whether you will be a Black man wrongfully arrested because of a false facial recognition match
  • Whether you will be an undocumented immigrant identified and deported
  • Whether you will be a domestic violence survivor whose location is exposed
  • Whether you will be a teenager whose childhood photos are in the database
  • Whether you will be a political dissident in a country that purchases the technology
  • Whether you will be Clearview's CEO, profiting from the database

The equal basic liberties principle. Rawls's first principle requires equal basic liberties for all, including freedom of expression, assembly, and privacy. Clearview AI undermines these liberties by creating a system in which public participation — posting a photograph, attending a protest, walking down a street — becomes an act of biometric self-exposure. Behind the veil, you would likely insist on a right to move through public life without being biometrically catalogued.

The difference principle. Are the inequalities created by Clearview AI structured to benefit the least advantaged? The benefits flow primarily to law enforcement agencies and, through them, to public safety. But the burdens — false identifications, surveillance chilling effects, loss of anonymity — fall disproportionately on communities of color, immigrants, political dissidents, and other marginalized groups. These are precisely the least advantaged members of society. The difference principle would reject a system whose burdens are concentrated on the least advantaged and whose benefits primarily serve institutions that already hold power.

Rawlsian conclusion: Behind the veil of ignorance, rational persons would not accept Clearview AI as currently constituted. They would not accept a system that eliminates anonymity for everyone, that imposes disproportionate burdens on the least advantaged, and that provides benefits primarily to institutions of power. If facial recognition technology were to be permitted at all, the veil of ignorance would demand strict regulation: democratic oversight, anti-discrimination safeguards, proportionality requirements, independent auditing, and meaningful prohibitions on uses that target vulnerable populations.


Convergences and Divergences

Convergences

The five frameworks converge with unusual strength in this case:

  1. Consent failure. All five frameworks identify Clearview's consent model — treating public availability as consent to biometric processing — as ethically inadequate. This convergence across frameworks of very different types provides exceptionally strong ethical ground.

  2. Disproportionate burden on the vulnerable. Care ethics, justice theory, and virtue ethics all identify the concentration of harms on marginalized communities as a critical ethical failure. Even utilitarianism, when it accounts for the distribution of harms (not just their aggregate), reaches the same concern.

  3. Secrecy as a moral failure. The concealment of the system from those it affects is condemned by every framework: dishonesty (virtue ethics), failure to respect autonomy (deontology), failure of attentiveness (care ethics), failure to allow democratic scrutiny (justice theory), and distortion of the consent calculus (utilitarianism).

  4. The precedent problem. Multiple frameworks — particularly the deontological universalizability test and the utilitarian long-term consequences analysis — identify the precedent set by Clearview as potentially more harmful than the specific tool itself.

Divergences

The frameworks diverge primarily on degree and remedy, not on fundamental assessment:

  1. Deontology provides the clearest prohibition. The categorical imperative analysis suggests that Clearview's core practice — scraping and biometrically processing people's images without consent — is inherently wrong, not merely harmful. The other frameworks are more open to the possibility that a heavily regulated version of the technology might be permissible.

  2. Utilitarianism is the most sympathetic to the law enforcement use case. If the public safety benefits are sufficiently large and the error rates can be reduced, a utilitarian analysis might support a regulated version of facial recognition for serious crimes. The other frameworks place greater weight on rights, relationships, and distributive fairness — values that resist trade-off.

  3. The "should it exist?" question. Deontology and care ethics are most willing to answer "no" categorically. Utilitarianism and justice theory are more inclined toward "not in its current form, but perhaps with sufficient safeguards." Virtue ethics occupies an interesting middle position: it does not prohibit the technology per se but insists that the character required to deploy it responsibly was entirely absent from Clearview's development.


Reasoned Judgment

The convergence of five ethical frameworks — each approaching the question from different foundational principles — produces an unusually clear verdict: Clearview AI, as designed and deployed, is ethically indefensible. It fails every framework's test, albeit for different reasons.

The more difficult question is whether any form of facial recognition technology could be ethically permissible. Here the frameworks diverge, but several conditions emerge from the analysis:

  1. Democratic authorization. Any use of facial recognition by government agencies should require explicit legislative authorization, not administrative adoption. The decision to deploy biometric surveillance must be made through democratic processes, not by individual police departments purchasing commercial products.

  2. Strict proportionality. Facial recognition should be limited to investigations of serious crimes (violent felonies, child exploitation), not used for routine policing, immigration enforcement, or monitoring of lawful activity.

  3. Anti-discrimination safeguards. No facial recognition system should be deployed until its error rates are equalized across demographic groups. Deployment in communities already subject to over-policing should require heightened justification and independent oversight.

  4. Consent-based data collection. No biometric database should be built by scraping publicly available photographs without consent. If a facial recognition database is to be constructed, it should use images collected with specific, informed consent or images from government-issued identification documents under strict regulatory constraints.

  5. Transparency and auditing. Any facial recognition system used by government agencies should be subject to public disclosure, independent auditing, and regular reporting on usage patterns, error rates, and demographic impacts.

  6. Individual rights. Individuals should have the right to know whether their biometric data is in a facial recognition database, the right to request removal, and the right to challenge any identification or decision based on facial recognition.

These conditions represent what a pluralist ethical analysis demands: not a blanket prohibition on a technology, but a set of governance requirements that respect rights, protect the vulnerable, distribute burdens fairly, and ensure democratic accountability. Clearview AI meets none of these conditions. Whether a future system could meet them remains an open question — and one that the subsequent chapters of this book will address.


Discussion Questions

  1. The "public information" argument. Clearview argues that it only scraped publicly available photographs. If information is public, does the person who made it public retain any ethical claim over how it is used? What is the difference between a photograph being accessible and it being available for any purpose? How does the concept of contextual integrity (which we will explore in Chapter 7) apply to this distinction?

  2. The law enforcement trade-off. Suppose Clearview's technology genuinely helps solve 500 violent crimes per year that would otherwise go unsolved. Does this change your ethical analysis? How many solved crimes would be necessary to outweigh the costs identified in the case study? Is this even the right question to ask — or does framing the issue as a trade-off already concede too much?

  3. The international dimension. Clearview has sold its technology to government agencies in multiple countries, including some with poor human rights records. How should the ethical analysis change when the technology is deployed in contexts where democratic oversight is absent? Is the company ethically responsible for how its clients use the tool?

  4. Eli's perspective. How would Eli — a Black man from Detroit, the same city where Robert Williams was wrongfully arrested based on facial recognition — evaluate Clearview AI? How does his lived experience shape the ethical analysis in ways that abstract philosophical reasoning might miss? What does this suggest about whose voices should be included in technology governance decisions?


Your Turn: Mini-Project

Option A: Framework Comparison Matrix. Create a detailed matrix that applies all five frameworks to Clearview AI. For each framework, document: (1) the central question, (2) the key considerations, (3) the conclusion, (4) the strongest argument it generates, and (5) what it misses that another framework catches. Write a 500-word synthesis explaining where the matrix reveals the strongest consensus and the deepest disagreement.

Option B: Regulatory Proposal. Draft a one-page model regulation for facial recognition technology that reflects the ethical principles identified in this case study. For each provision of your regulation, identify which ethical framework(s) it operationalizes and explain why you included it. Your regulation should address: permitted and prohibited uses, data collection standards, anti-discrimination requirements, transparency obligations, individual rights, and enforcement mechanisms.

Option C: Comparative Case Analysis. Research one other real-world facial recognition case (possibilities include: the EU's proposed ban on public facial recognition in the AI Act, San Francisco's facial recognition ban, India's Aadhaar biometric system, or China's social credit system). Write a 1,000-word analysis comparing the case you chose to Clearview AI. Where do the ethical issues overlap? Where do they differ? Does the same five-framework analysis apply, or does the different context require different weightings?


References

  • Hill, Kashmir. "The Secretive Company That Might End Privacy as We Know It." The New York Times, January 18, 2020.

  • Hill, Kashmir. Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It. New York: Random House, 2023.

  • Grother, Patrick, Mei Ngan, and Kayee Hanaoka. "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects." NIST Interagency Report 8280. National Institute of Standards and Technology, December 2019.

  • Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81 (2018): 1-15.

  • Kant, Immanuel. Groundwork of the Metaphysics of Morals. 1785. Translated by Mary Gregor. Cambridge: Cambridge University Press, 1998.

  • Mill, John Stuart. Utilitarianism. 1863. Edited by Roger Crisp. Oxford: Oxford University Press, 1998.

  • Rawls, John. A Theory of Justice. Rev. ed. Cambridge, MA: Harvard University Press, 1999.

  • Schneier, Bruce. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. New York: W.W. Norton, 2015.

  • Tronto, Joan C. Moral Boundaries: A Political Argument for an Ethic of Care. New York: Routledge, 1993.

  • Zuboff, Shoshana. The Age of Surveillance Capitalism. New York: PublicAffairs, 2019.

  • American Civil Liberties Union. "Clearview AI." ACLU Legal Documents and Analysis, 2020-2024.

  • Information Commissioner's Office (UK). "ICO Fines Clearview AI Inc More Than £7.5 Million and Orders UK Data to Be Deleted." Press release, May 23, 2022.