Case Study 26-2: Clearview AI — The Privatization of Surveillance
Overview
Clearview AI is a facial recognition company that built the world's largest private biometric database by scraping billions of images from public social media platforms without the consent of the people photographed, then sold search access to that database to law enforcement agencies across the United States and internationally. The company's exposure in a January 2020 New York Times investigation triggered regulatory action on multiple continents, multi-million-dollar fines, cease-and-desist demands from every major social media platform, and a cascade of public debate about the adequacy of existing privacy law.
Despite all of this, Clearview AI continues to operate. It has grown its database from three billion to over forty billion images. It retains law enforcement clients in the United States. And its operations reveal, with unusual clarity, the specific gaps in legal frameworks that allow a surveillance infrastructure of this scale to exist.
This case study traces Clearview's founding, its business model, its public exposure, the regulatory responses it has faced across jurisdictions, and what it reveals about the architecture of privacy protection in the digital age.
Founding and Early Development
Clearview AI was incorporated in Delaware around 2017. Its founders include Hoan Ton-That, an Australian-born software developer with a history in social media app development, and Richard Schwartz, a former aide to New York City Mayor Rudolph Giuliani with political connections in law enforcement circles. Early investors included Peter Thiel's Founders Fund, which provided seed funding of approximately $200,000, and investor David Scalzo.
The company's core innovation was not the facial recognition algorithm itself — similar algorithms were available from multiple vendors — but the database. Clearview built a proprietary database by writing automated software ("scrapers") that systematically downloaded images from publicly accessible pages on Facebook, Instagram, Twitter, YouTube, Venmo, LinkedIn, and thousands of other websites. Each image was processed through a facial recognition embedding, creating a searchable facial vector linked to the source URL and associated metadata (names, social media handles, captions, tags).
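The pipeline just described (scrape, embed, index, rank) can be sketched in miniature. The following is an illustrative toy under stated assumptions, not Clearview's actual code: `embed` is a stand-in for a real learned face-embedding model, the index is a flat in-memory list rather than a production nearest-neighbor structure, and all URLs and handles are invented.

```python
import math

# Toy sketch of a scrape-embed-index-search pipeline. NOT Clearview's code:
# embed() is a placeholder for a real face-embedding model (which would map
# a face image to a learned vector of several hundred dimensions).

def embed(face_pixels):
    """Map a face image (here, a list of grayscale pixel values) to a
    fixed-length unit vector. Toy version: bucket pixel intensities."""
    dim = 8
    vec = [0.0] * dim
    for i, p in enumerate(face_pixels):
        vec[i % dim] += p
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]  # unit-normalize for cosine similarity

class FaceIndex:
    """Each entry stores the face vector alongside the source URL and
    metadata, as described in the text; search returns ranked candidates."""
    def __init__(self):
        self.entries = []  # (vector, source_url, metadata)

    def add(self, face_pixels, source_url, metadata):
        self.entries.append((embed(face_pixels), source_url, metadata))

    def search(self, face_pixels, top_k=3):
        q = embed(face_pixels)
        scored = [
            (sum(a * b for a, b in zip(q, v)), url, meta)  # cosine similarity
            for v, url, meta in self.entries
        ]
        scored.sort(reverse=True, key=lambda t: t[0])
        return scored[:top_k]

# Hypothetical usage with invented URLs and handles:
index = FaceIndex()
index.add([10, 200, 30, 40] * 4, "https://example.com/profile/alice",
          {"handle": "@alice"})
index.add([90, 90, 90, 90] * 4, "https://example.com/photo/bob",
          {"handle": "@bob"})
matches = index.search([12, 198, 28, 41] * 4)
print(matches[0][1])  # source URL of the closest match
```

A real deployment would differ mainly in scale: a learned embedding model instead of the toy `embed`, and an approximate-nearest-neighbor index over billions of vectors instead of a linear scan.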
By the time the company became publicly known in early 2020, it had assembled approximately three billion images. The company's own pitch materials, obtained by journalists, described the database as the largest facial recognition database in the world at the time, significantly larger than FBI or Interpol databases.
The database's scale is what creates its investigative value. A detective submitting a photo of an unknown person to a law enforcement database containing hundreds of thousands of mugshots may find no match. The same photo submitted to a database of three billion internet images — images of ordinary people at parties, at work, at family gatherings, at protests — has a substantially higher probability of returning a match, because the majority of the population is represented in some form on the public internet.
This is both the product's appeal and its most troubling characteristic.
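The scale point in the passage above reduces to simple arithmetic. The coverage figures below are hypothetical assumptions chosen for illustration (neither number comes from Clearview or any law enforcement source); the point is only that the chance a search finds its subject at all scales with the fraction of the population the database covers.

```python
# Illustrative arithmetic only; both coverage fractions are assumptions.

def expected_identifications(num_searches, population_coverage):
    """Expected number of searches whose subject appears in the database,
    assuming subjects are drawn uniformly from the population."""
    return num_searches * population_coverage

MUGSHOT_COVERAGE = 0.05  # assumption: mugshot databases cover a few percent
SCRAPED_COVERAGE = 0.80  # assumption: most people appear somewhere online

print(expected_identifications(1_000, MUGSHOT_COVERAGE))  # 50.0
print(expected_identifications(1_000, SCRAPED_COVERAGE))  # 800.0
```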
The Business Model: Law Enforcement as Primary Customer
Clearview's core business proposition, from its inception, was selling access to its database to law enforcement agencies. The company provided a mobile and web application through which law enforcement users — investigators, detectives, analysts — could upload a photo and receive a ranked list of potential matches, with links to the source pages where each matching image appeared.
The company marketed aggressively to law enforcement. According to documents reviewed by BuzzFeed News and the New York Times, Clearview's representatives offered free trials to law enforcement agencies and individual officers, building a client base before formal contracting. By early 2020, the company had clients including:
- The Federal Bureau of Investigation
- The Department of Homeland Security (through Immigration and Customs Enforcement and other components)
- The United States Army
- The US Attorney's Office in Manhattan
- Hundreds of local police departments across the United States
A Clearview data breach in February 2020, disclosed by the company, revealed the extent of its law enforcement client list — the breach exposed the full list of customers who had used the app, their search histories, and the number of searches conducted. The disclosure was ironic: a company that had built its operations on scraping others' data without disclosure experienced its own data exposure shortly after becoming publicly known.
Beyond law enforcement, the company reportedly offered services to some private clients, including banks conducting fraud investigations and retailers. A later litigation settlement, discussed below, restricted these private-sector sales.
Public Exposure: The New York Times Investigation
On January 18, 2020, The New York Times published "The Secretive Company That Might End Privacy as We Know It," reported by Kashmir Hill. The article drew on months of reporting, including interviews with company representatives, law enforcement users, privacy researchers, and — most dramatically — the experience of having her own photo submitted to the Clearview system by law enforcement users willing to demonstrate the product.
The article revealed:
- The scale of the database (3+ billion images at the time)
- The identity of law enforcement clients, including agencies the public had no reason to know were using the technology
- The scraping methodology, which violated the terms of service of every major social media platform
- The company's earlier, lower-profile existence as "Smartcheckr"
- The connection to investor Peter Thiel
- Specific investigative uses, including cases in which law enforcement credited Clearview with providing identifications in criminal investigations
The article's impact was immediate and extensive. It prompted:
- Cease-and-desist letters from Facebook, Google (YouTube), Twitter, and LinkedIn — all of which have terms of service prohibiting scraping
- Congressional inquiries from multiple legislators
- State attorneys general investigations
- Data protection authority attention in multiple countries
- Public statements from the technology industry, civil liberties organizations, and academic researchers
Clearview's response to the cease-and-desist letters was to contest the platforms' legal authority to prevent scraping of publicly accessible content, citing the Ninth Circuit's ruling in hiQ Labs v. LinkedIn that the Computer Fraud and Abuse Act does not prohibit scraping publicly available content. The platforms' legal position — that Clearview violated their terms of service — is distinct from the CFAA question, and the letters have not resulted in the deletion of images already scraped.
Regulatory Responses by Jurisdiction
United States
Despite Clearview's primary customer base being US law enforcement, regulatory action in the United States has been the most limited of any major jurisdiction. There is no comprehensive federal privacy statute that reaches the company's conduct, and federal regulators have brought no enforcement action comparable to those in Europe.
The most consequential US action came under state law. The American Civil Liberties Union sued Clearview under the Illinois Biometric Information Privacy Act (BIPA), and the resulting settlement, approved in May 2022:
- Permanently prohibited Clearview from selling its faceprint database to most private businesses and individuals nationwide (though not to government and law enforcement entities)
- Barred Clearview from selling access to any Illinois entity, including state and local law enforcement, for five years
- Required Clearview to maintain an opt-out mechanism through which Illinois residents can request that their facial data be excluded from search results
- Did not require deletion of the existing database
- Did not impose any financial penalty
The settlement's limitations are significant. The prohibition on private sales addresses one market segment but leaves the core law enforcement business outside Illinois untouched. The opt-out mechanism helps only people who know the database exists, have internet access, and are willing to submit a photo to the very company they are trying to opt out of (which is what the removal process requires). And because the settlement did not require deletion, the fundamental underlying asset, billions of scraped images, remains intact.
A separate consolidated class action under BIPA settled on unusual terms: rather than a cash fund, class members received a stake in the company's value, reported at approximately $52 million.
United Kingdom
The Information Commissioner's Office (ICO) issued an enforcement notice and an intent to fine Clearview approximately £17 million in November 2021. The ICO found that Clearview:
- Failed to have a lawful basis for collecting and processing personal data under UK GDPR
- Failed to meet the higher standards required for processing biometric data under Article 9
- Failed to inform data subjects of how their data was being processed
- Did not have a legitimate interest that would override the fundamental rights and freedoms of UK data subjects
The fine was ultimately issued at approximately £7.5 million, reduced from the initial figure following representations by Clearview. Clearview appealed the enforcement in its entirety, and in 2023 the First-tier Tribunal (General Regulatory Chamber) ruled in Clearview's favor, holding that the ICO lacked jurisdiction because Clearview's processing was performed on behalf of foreign (non-UK) law enforcement agencies and so fell outside the scope of UK data protection law. The ICO has challenged that ruling.
The Tribunal's decision narrowed the ICO's enforcement power and is a reminder that law enforcement carve-outs in data protection law can create gaps when the processing is done by a private company in support of foreign law enforcement.
European Union
Multiple EU data protection authorities have taken enforcement action:
Italy (Garante): Fined Clearview approximately €20 million and issued an order requiring it to cease all processing of Italian residents' data and to delete existing data on Italian residents.
France (CNIL): Issued an injunction in December 2021 requiring Clearview to cease collecting data on French individuals and to delete existing data. When Clearview failed to comply, the CNIL issued a fine of €20 million in 2022, followed by an additional €5.2 million penalty in 2023 for continued non-compliance.
Greece (Hellenic DPA): Issued a fine of €20 million.
Sweden (IMY): Issued a fine of approximately SEK 2.5 million (approximately €220,000) to the Swedish Police Authority for using Clearview AI, finding that the police authority had breached GDPR by using a tool that was itself non-compliant with the regulation.
The EU actions collectively represent the most aggressive regulatory response to Clearview globally. However, enforcement of these orders depends on Clearview having assets or operations within EU jurisdiction — which it has largely avoided. Clearview has disputed the jurisdiction of EU authorities and has not demonstrated compliance with deletion orders.
Canada
The Privacy Commissioner of Canada, jointly with provincial privacy commissioners, investigated Clearview AI and issued a report in February 2021 finding that Clearview's activities violated the Personal Information Protection and Electronic Documents Act (PIPEDA). The commissioners found that:
- Clearview collected personal information from Canadians without their knowledge or consent
- The surveillance it enabled was "mass surveillance" not justified by consent or a legitimate interest exception
- The information was used for purposes that individuals would not have expected or consented to
The commissioners ordered Clearview to cease collecting images of people in Canada, cease offering its services in Canada, and delete images and data of people in Canada from its database. Clearview initially disputed the commissioners' findings but subsequently announced it was "voluntarily" withdrawing from the Canadian market and ceased offering services to Canadian law enforcement.
Australia
The Office of the Australian Information Commissioner (OAIC) found in November 2021 that Clearview had breached Australian Privacy Principles by collecting sensitive information (biometric data) without consent, using it for a purpose the individuals had not consented to, and using unfair means to collect it (scraping without the knowledge of the individuals or the platforms). The OAIC ordered Clearview to cease collecting facial images of Australians and to destroy existing data.
Clearview disputed OAIC's extraterritorial jurisdiction, and enforcement of the order has been limited.
What Clearview Reveals About Existing Law
The Clearview case is analytically important because it is not a story about a rogue actor finding a minor loophole. It is a story about a company that built the world's largest biometric surveillance database through activities that were, in substantial part, legal under applicable US law at the time.
Gap 1: No General Prohibition on Biometric Database Construction
US federal law contains no general prohibition on constructing a database of scraped internet images linked to facial recognition embeddings. The activity is not prohibited by the Computer Fraud and Abuse Act (which concerns unauthorized access to computer systems, not scraping publicly accessible content). It is not prohibited by any general federal privacy law (which does not exist in comprehensive form). It is regulated by BIPA only in Illinois. The absence of a general biometric protection framework is precisely what allowed Clearview's database to be assembled.
Gap 2: The Platform Terms of Service Gap
Social media platforms prohibit scraping in their terms of service. But terms of service are contracts between the platform and the scraper, not between the scraper and the individuals whose data is scraped. The individuals who posted the photos that Clearview scraped have no contractual claim against Clearview under platform terms. And the platforms' ability to enforce their terms through litigation — expensive, slow, and not certain to succeed — is inadequate to prevent the kind of scraping Clearview conducted.
Gap 3: The Law Enforcement Exception
Privacy laws in most jurisdictions include exceptions or reduced protections for law enforcement and national security activities. Clearview has leveraged this structure: by primarily serving law enforcement, it positions itself within the law enforcement exception ecosystem, making regulatory action more legally complex and politically difficult. Data protection authorities that have jurisdiction over commercial processing may have reduced jurisdiction over processing "on behalf of" law enforcement.
Gap 4: Extraterritorial Enforcement
The countries that have most aggressively regulated Clearview — EU member states, UK, Canada, Australia — are not where Clearview operates its primary business. Orders to delete data and cease processing are orders directed at a US company serving primarily US law enforcement. Enforcement of foreign regulatory orders against US companies requires either US cooperation, assets in the relevant jurisdiction, or the company's own compliance. Clearview has largely declined to comply with foreign deletion orders.
Gap 5: Opt-Out Architecture in a Consent World
The US settlement required Clearview to create an opt-out mechanism. But opt-out architecture for biometric surveillance reverses the consent model that privacy frameworks are built on: instead of requiring affirmative consent before data collection, it requires individual action after collection to limit use. Given the scale of the database (billions of images), the practical opt-out rate will be vanishingly small — most people do not know their image is in the database, and those who do may not know how to exercise the opt-out.
Ongoing Operations and Future Direction
Clearview AI has not been put out of business by regulatory action. As of 2024, the company continues to operate in the United States and to serve US law enforcement. Its database has grown from three billion images at the time of public disclosure to reportedly over forty billion. The company has sought to expand its business model to include defense and intelligence applications, border security, and international law enforcement partnerships.
CEO Hoan Ton-That has given multiple interviews framing Clearview as a law enforcement tool that solves crimes, citing specific cases in which the technology identified child predators, human traffickers, and terrorism suspects. These cases are real, and they represent a genuine argument about the technology's value. The question the Clearview case forces is not whether facial recognition can solve crimes — it clearly can — but who decides the terms under which it operates, what oversight exists, and what the aggregate costs of a world of pervasive biometric surveillance are compared to the benefits of any individual solved case.
Discussion Questions
- Clearview's database was assembled by scraping publicly accessible images. People who post photos on Instagram have made those photos publicly available. What is the argument that they have not consented to Clearview's use of those photos? What is the strongest counter-argument?
- The EU has fined Clearview, issued deletion orders, and effectively driven the company from European markets. US enforcement has produced a prohibition on private sales and an opt-out mechanism. What explains the difference in regulatory outcomes? What would be required for US regulation to approach the EU's approach?
- Clearview serves law enforcement. Given that law enforcement has public safety functions that a purely commercial company does not, should facial recognition companies that serve primarily law enforcement face different regulatory treatment? What are the risks of law enforcement carve-outs in privacy law?
- Consider the "solved crimes" framing. Clearview's CEO cites specific criminal investigations in which the technology produced identifications that led to arrests. How should this evidence be weighed against the consent and bias concerns? What process would allow society to make this trade-off legitimately?
- What would "adequate" privacy law for the Clearview problem look like? Draft three specific statutory provisions that would address the gaps identified in this case study.
This case study connects to Chapter 23 (Data Privacy Fundamentals), Chapter 24 (Surveillance Capitalism), and Chapter 33 (AI Regulation and Compliance).