Case Study 5.2: The Cost of Getting It Wrong — Clearview AI's Legal, Reputational, and Operational Consequences

"The facial recognition database that could end privacy as we know it." — The New York Times, January 18, 2020


Overview

Clearview AI is a facial recognition company that built what it described as the world's largest facial recognition database by scraping billions of photographs from public-facing websites — Facebook, Instagram, LinkedIn, Venmo, YouTube, and countless others — without the knowledge or consent of the people in those photographs. The company then sold access to that database, primarily to law enforcement agencies, enabling users to identify individuals by uploading a photograph and matching it against the scraped database.

Clearview AI's story is one of the most instructive AI ethics cases in the business literature, but not because the company was naive or negligent. Clearview AI appears to have been fully aware of the legal and ethical vulnerabilities of its business model. The company's co-founders, Hoan Ton-That and Richard Schwartz, reportedly circulated internal documents discussing legal strategy and the expectation of regulatory and legal challenges well before the company's public exposure.

What makes the case instructive is the comprehensiveness of the consequences: legal, regulatory, operational, reputational, and strategic. When a company builds a business model on a foundation that is ethically and legally unsound, the costs do not arrive in isolation — they compound.


1. What Clearview AI Is and How It Works

The Technology

Clearview AI's core product is a facial recognition search engine. A user — typically a law enforcement officer — uploads a photograph of a person they are trying to identify. Clearview's system compares the photograph against its database of billions of images scraped from public websites and returns a list of potential matches, including the web pages where those images appeared.

The technical components are not unique to Clearview. Facial recognition technology is commercially available, and many companies use it for legitimate applications (unlocking smartphones, verifying identity at airports, sorting personal photo libraries). What is distinctive about Clearview's system is the database it searches: a collection of billions of images of private individuals assembled without their knowledge or consent, linking those individuals to their social media profiles and other publicly accessible information.
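Clearview has not published its architecture, but commodity face search systems generally follow an embed-and-index pattern: each face image is reduced to a numeric embedding vector, and identification is a nearest-neighbor lookup over the indexed vectors. A minimal sketch of the lookup step, with all vectors and URLs hypothetical:

```python
from math import sqrt

def face_search(query_vec, index, top_k=3):
    """Rank indexed face embeddings by cosine similarity to a query embedding.

    query_vec: embedding of the probe photo (hypothetical values).
    index: list of (source_url, embedding) pairs for the scraped images.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

    # Score every indexed face against the query and return the best matches,
    # each paired with the page the image was scraped from.
    scored = [(url, cosine(query_vec, vec)) for url, vec in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

A real deployment differs mainly in scale: billions of vectors require an approximate nearest-neighbor index rather than a brute-force scan. But the legal analysis in this case turns on the database, not the search step.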

The scale of the database is significant. As of the company's public exposure in 2020, Clearview claimed to have scraped more than three billion photographs. By 2022, the company claimed the database had grown to more than 20 billion images. This scale means that a Clearview search covers a substantial fraction of the world's internet-active population.

The Law Enforcement Use Case

Clearview's primary market is law enforcement. Police departments, federal agencies, and other law enforcement entities used Clearview to identify subjects in criminal investigations — matching footage from surveillance cameras, witness photographs, or other images against Clearview's database to identify individuals.

The company has claimed that its technology has helped solve serious crimes, including cases involving child sexual abuse material, murder, and robbery. These claims are contested in some respects but not implausible: facial recognition technology, used carefully, can assist in identification tasks that are otherwise difficult.

The law enforcement use case, however, does not dissolve the legal and ethical issues raised by Clearview's data collection practices. The question of whether the technology's benefits justify its costs is separate from the question of whether the company collected its data lawfully.


2. The Legal Theory and Its Vulnerabilities

Clearview's legal theory for its data collection practices rested on a broad interpretation of public availability. Because the photographs it scraped were publicly posted on social media and other websites, Clearview argued, it was simply aggregating publicly available information: just as a search engine indexes web content, Clearview indexed images.

This theory has several significant vulnerabilities:

Website terms of service: Virtually every major platform whose content Clearview scraped has terms of service that prohibit scraping. By violating those terms of service, Clearview exposed itself to contract claims and, potentially, Computer Fraud and Abuse Act claims.

Biometric data protection: Many states — most importantly Illinois — have biometric privacy laws that impose consent requirements for the collection of biometric data, including facial geometry. "Publicly available" is not a defense under Illinois' Biometric Information Privacy Act (BIPA); the statute requires informed written consent regardless of whether the biometric identifier was derived from a publicly accessible image.

GDPR: The EU's General Data Protection Regulation requires a lawful basis for processing personal data including biometric data. "The data was publicly posted" is not a lawful basis under GDPR; the permitted legal bases are specific and do not include general public availability.

Reasonable expectations of privacy: Several legal and regulatory frameworks have grappled with whether individuals who post photographs on social media have consented to those photographs being used to build a face-identification database accessible to law enforcement. Courts and regulators in multiple jurisdictions have concluded that the answer is no.

The Scale of the Privacy Harm

The specific harm caused by Clearview's practices is worth examining carefully, because it is not the conventional "data breach" harm model. No one hacked Clearview's systems to steal personal data; the personal data was assembled from publicly available sources.

The harm is instead about what the aggregation enables. Individual photographs on social media do not create significant privacy harm in isolation. But an indexed, searchable database linking billions of photographs to social media profiles — accessible to law enforcement with a simple web interface — creates what privacy scholars call an "aggregation harm": the privacy impact of the combination is qualitatively different from the privacy impact of any individual piece.

The aggregation enables any law enforcement agency with a subscription to identify any person who has ever appeared in a photograph on the public internet, link them to their social media profiles, and therefore to a rich account of their associations, activities, and statements. This capability, deployed at scale, changes the surveillance landscape in fundamental ways that the individuals whose photographs were scraped never consented to.


3. US Litigation

Illinois BIPA — The Core Litigation

The Illinois Biometric Information Privacy Act (BIPA) proved to be Clearview's most significant legal exposure. BIPA requires informed written consent before any entity collects biometric data — including a "scan of face geometry" — from Illinois residents. The statute creates a private right of action, allowing individuals to sue without demonstrating actual harm, and provides statutory damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation.
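The scale of that exposure is easy to illustrate. Assuming one violation per class member (the class size here is hypothetical; the actual class size was contested), statutory damages compound quickly:

```python
def bipa_exposure(class_size):
    """Statutory damages range under BIPA, assuming one violation per
    class member: $1,000 per negligent violation at the low end,
    $5,000 per intentional or reckless violation at the high end."""
    return class_size * 1_000, class_size * 5_000

# A hypothetical class of 100,000 Illinois residents:
low, high = bipa_exposure(100_000)  # $100,000,000 to $500,000,000
```

Exposure on this scale, far beyond the cash position of a company like Clearview, is one reason the eventual settlement was structured in stock rather than cash.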

The class actions against Clearview under BIPA were consolidated in Illinois federal court, with a parallel ACLU action in Illinois state court. In May 2022, Clearview agreed to settle the ACLU action on injunctive terms that included:

  • A permanent ban on selling its database to most private companies and individuals nationwide (limiting sales largely to government agencies).
  • A five-year ban on sales to Illinois entities, including state and local law enforcement.

The consolidated federal class action later settled for a total value estimated at approximately $52 million, payable in stock given the company's cash position.

The BIPA settlement was significant not primarily for its dollar value but for its operational consequences: the private sector ban effectively foreclosed the commercial market for Clearview's technology outside of law enforcement.

Other State Litigation

Beyond Illinois, Clearview faced exposure under the biometric privacy statutes of Texas and Washington, neither of which provides Illinois' private right of action. The patchwork of state litigation created ongoing legal costs and management distraction even where outcomes were less severe than in Illinois.

Class Action Beyond BIPA

Class actions were also filed under broader privacy theories — including claims under state consumer protection statutes and common law privacy torts. The legal landscape for these cases is more uncertain than the BIPA cases, but each represents ongoing legal exposure and defense costs.


4. International Regulatory Actions

GDPR Investigations and Fines

Clearview's operations in Europe attracted regulatory attention almost immediately after the company's public exposure. Under GDPR, biometric data processed to identify a person is special-category data that may not be processed absent a specific exception, such as explicit consent, which Clearview could not claim. Several EU member state data protection authorities initiated investigations.

In 2022, the data protection authorities of France (CNIL), Italy (Garante), and Greece each fined Clearview roughly €20 million and ordered it to delete the data of their residents and cease further processing; Sweden's authority had earlier sanctioned the Swedish police for their use of the service. The fines approached the GDPR ceiling for a company of Clearview's size, though the authorities faced practical limitations in collecting them from a US-based company with no EU establishment. The cumulative regulatory signal was clear: Clearview's business model was incompatible with GDPR.

The enforcement actions also created a compliance problem for Clearview's law enforcement customers in EU member states. EU law enforcement agencies that had been using Clearview were put on notice that their use of the system was relying on unlawfully processed data.

UK, Australian, and Canadian Actions

The UK's Information Commissioner's Office (ICO) investigated Clearview's practices and in 2022 issued a notice ordering Clearview to stop obtaining and using the personal data of UK residents and to delete existing data, accompanied by a fine of approximately £7.5 million.

The Australian Information Commissioner and the UK ICO conducted a joint investigation and published a coordinated report finding that Clearview had violated both countries' privacy laws. Canada's federal and provincial privacy commissioners issued a report finding that Clearview had violated the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA) and issued remediation recommendations.

Together, these international regulatory actions effectively excluded Clearview from the civilian market in major English-speaking jurisdictions, severely limiting its addressable market.


5. Platform Bans

The Platform Response

Within days of the January 2020 New York Times story exposing Clearview's practices, major technology platforms sent Clearview cease and desist letters demanding that it remove their users' data from its database and stop scraping their platforms.

Platforms sending cease and desist demands included:

  • Google and YouTube
  • Facebook and Instagram
  • Twitter
  • LinkedIn
  • Venmo

The cease and desist letters relied primarily on the platforms' terms of service, which prohibit scraping; several platforms also invoked the Computer Fraud and Abuse Act and analogous state computer crime laws in demanding that Clearview delete their users' data.

The platform response is instructive because it illustrates the aggregation of legal exposure that a company can face when its practices violate the terms of multiple parties simultaneously. Each platform has independent legal standing to enforce its terms of service. The combined cost of defending against multiple concurrent platform enforcement actions — even without any single action being dispositive — is substantial.

The Data Supply Problem

The platform bans created an operational problem beyond the immediate legal exposure: they constrained Clearview's ability to continue growing its database. If the company's source data — the scraped images from social media platforms — was subject to ongoing removal demands and legal action, the long-term viability of its data collection model was in question.

Clearview has disputed the effectiveness of the cease and desist demands and has continued to grow its database, but the platform bans represent an ongoing operational constraint and legal exposure that requires continuous management.


6. Reputational Damage

The Investigative Reporting Effect

The public story of Clearview AI began with a single piece of investigative journalism: a New York Times article by Kashmir Hill, published January 18, 2020, titled "The Secretive Company That Might End Privacy as We Know It." The article was the product of months of investigation, including discovery of an internal Clearview investor presentation and interviews with people who had been identified using the technology without their knowledge.

The article had immediate and cascading effects. It drove regulatory inquiries in multiple jurisdictions. It prompted congressional letters and hearing invitations. It triggered the platform cease and desist letters described above. It generated hundreds of follow-on news stories in publications around the world.

What the story revealed was not just that Clearview had built a surveillance database — it was that the company had done so deliberately, in a way designed to avoid public attention, understanding that the public disclosure of its practices would generate exactly the reaction it did. Internal communications showed that Clearview founders had discussed the likely public reaction and had chosen to operate quietly rather than seek legal or regulatory clarity on their business model.

This revelation — that the privacy harm was not inadvertent but was chosen, with full knowledge of the likely reaction — compounded the reputational damage significantly. A company that accidentally violates privacy and then corrects course has a different reputational narrative than a company that knowingly circumvents consent while hoping not to be noticed.

Brand Associations

Clearview AI's public identity is now inseparable from its reputational narrative as the company that built a mass surveillance database by scraping billions of photographs without consent. This brand association — regardless of whatever technical innovation Clearview may represent — limits its ability to attract mainstream investors, enterprise partnerships, and non-law-enforcement customers.

In an industry where trust is a prerequisite for market access, Clearview's reputational position is a permanent operational constraint.


7. The Operational Constraint: Limited Civilian Market

The Settlement Prohibition

The BIPA settlement's prohibition on selling to private companies was not merely a one-state restriction. Because Illinois' statutory framework was the basis for the largest and most consequential litigation, the settlement's terms effectively defined the parameters of Clearview's national business model. The company's ability to operate in the civilian market — selling to employers, landlords, financial institutions, retailers, or anyone other than government agencies — was severely constrained.

This operational constraint is a permanent reduction in Clearview's total addressable market. A business that began with ambitions to sell facial recognition services broadly to commercial enterprises now operates primarily as a government contractor. The civilian market — which would have been the basis for the kind of large-scale commercial success that early investors anticipated — is largely foreclosed.

Geographic Limitations

The international regulatory actions described above have similarly constrained Clearview's geographic market. The company's ability to operate in the EU, UK, Canada, and Australia — markets that collectively represent a substantial share of global AI spending — is severely limited by regulatory prohibitions.

The practical result is a company whose addressable market is substantially US law enforcement, a narrower and more politically volatile market than the broadly commercial opportunity the company's founders originally envisioned.


8. Congressional Scrutiny

The Legislative Response

Clearview's exposure generated significant congressional attention. Members of Congress sent letters to the company demanding information about its practices, its law enforcement customers, and its data security. Senate Judiciary Committee hearings on facial recognition technology featured Clearview as a central case study for the risks of unregulated facial recognition.

The company was invited to testify before Congress. Its responses — initially consisting primarily of claims that it operated within the law and that its technology assisted with serious crimes — were received skeptically by members of Congress across party lines.

The congressional scrutiny has contributed to ongoing legislative proposals for federal regulation of facial recognition technology, including proposals that would impose consent requirements similar to, or more stringent than, Illinois' BIPA at the federal level. If such legislation passes, it would impose additional compliance requirements on Clearview and on the law enforcement agencies that are its customers.


9. What the Company Says Now vs. What It Actually Does

The Evolving Public Narrative

Clearview AI's public narrative has evolved substantially since its 2020 exposure. The company now emphasizes:

  • That its technology is used exclusively by law enforcement for serious criminal investigations.
  • That its technology has helped solve murders, child exploitation cases, and other serious crimes.
  • That it has implemented safeguards to prevent misuse, including auditing of law enforcement usage.
  • That it complies with applicable law in each jurisdiction where it operates.

These claims are not necessarily false, but they represent a selective framing of the company's practices. The legal record — including the BIPA settlement, the GDPR enforcement actions, and the platform cease and desist letters — is more complicated than the current public narrative suggests.

The Monitoring Dispute

One specific area of dispute is Clearview's claimed auditing of law enforcement usage. Critics have argued that Clearview's audit procedures are inadequate to detect or deter misuse of the technology by individual officers, and that the company lacks the ability to know whether its technology is being used in compliance with the legal frameworks governing law enforcement searches.

This dispute matters for the company's ongoing regulatory relationship. If Clearview's safeguards are inadequate, the company faces continued regulatory scrutiny and potential enforcement action beyond the settlements already reached.


10. Total Business Cost Estimation

Estimating the total business cost of Clearview AI's legal, regulatory, and reputational consequences requires considering multiple categories of loss:

Direct legal costs:

  • BIPA class action settlement: approximately $52 million (in stock)
  • State litigation defense costs: estimated in the tens of millions
  • International regulatory fines: UK fine of ~£7.5 million; French, Italian, and Greek fines of roughly €20 million each
  • Platform litigation defense costs: estimated in the millions

Operational costs:

  • Market access loss from the BIPA settlement's private-sector prohibition: the commercial market that Clearview expected to serve is foreclosed, a permanent reduction in revenue potential that dwarfs the settlement amount
  • International market exclusion: loss of a significant addressable market in the EU, UK, Canada, and Australia
  • Ongoing compliance costs: legal monitoring, regulatory engagement, data deletion processing

Reputational costs:

  • Investor confidence: Clearview has had difficulty attracting mainstream venture capital since 2020; its investor base consists primarily of early investors and sources not associated with major VC firms
  • Partnership limitations: major technology platform relationships are foreclosed; standard enterprise partnerships are difficult
  • Talent acquisition: recruiting AI talent to a company with Clearview's reputational profile is materially harder than recruiting to a company with a positive ethics reputation

Opportunity costs:

  • The civilian facial recognition market that Clearview could have pursued (applications such as access control, retail loss prevention, and event security) has been largely foreclosed by the BIPA settlement
  • The regulatory clarity that could have been obtained by seeking legal guidance before launch, at a cost of perhaps a few hundred thousand dollars in legal fees, could have produced a lawful business model; the cost of not seeking that clarity is the full extent of the subsequent legal exposure

The total cost of Clearview AI's approach — building first, asking legal questions later, and hoping not to be noticed — is measured not in millions of dollars of fines but in the permanent reduction of the company's addressable market, the ongoing burden of regulatory management, and the reputational constraints that limit its ability to grow.

The counterfactual — a company that built a facial recognition service on a consent-based model, working within privacy law rather than against it — would have faced a more limited initial market but would have had access to the global civilian market, positive relationships with technology platforms, and a reputational position that would support rather than constrain growth.


Discussion Questions

  1. Clearview AI's founders reportedly anticipated the legal and regulatory challenges their business model would generate before the company was publicly exposed. What does this suggest about the relationship between legal risk awareness and business model design? Does anticipating legal risk change the ethical analysis?

  2. The BIPA settlement's private-sector prohibition effectively removed Clearview from the civilian market. How should companies evaluate the systemic risk that a single state's regulatory action might impose constraints on their national business model?

  3. The case study notes a distinction between Clearview's public narrative ("we help solve serious crimes") and the legal record (multiple regulatory violations, consent failures). How should business leaders evaluate company narratives that emphasize benefits while minimizing documented harms? What due diligence questions would you ask?

  4. Design an alternative business model for Clearview AI — one that achieves the same facial recognition search capability for law enforcement while being built on a legally and ethically defensible foundation. What trade-offs would this model require, and what market constraints would it face?


Sources for this case study include Kashmir Hill's investigative reporting in The New York Times; regulatory enforcement notices from the ICO, CNIL, Garante, Australian Information Commissioner, and Canadian privacy commissioners; court documents from the Illinois BIPA litigation; Clearview AI's public statements and responses to regulatory inquiries; and academic analysis of the legal and ethical dimensions of Clearview's business model.