Case Study 18-2: Non-Consensual Intimate Deepfakes — Scale, Harm, and Legal Responses
Overview
Non-consensual intimate imagery (NCII) — sexually explicit material distributed without the subject's consent — has been recognized as a significant harm for decades. The emergence of deepfake technology has transformed this harm in scale, accessibility, and nature. Where traditional NCII (sometimes called "revenge porn") required the perpetrator to possess actual intimate images of the target, deepfake NCII can be generated from any photograph of any person — a profile picture, a yearbook photo, an image from a social media account. Every person with any publicly accessible photographs is a potential victim.
This case study examines the scope of the NCII deepfake problem, the documented harms to victims, the legislative and legal responses that have emerged globally, and the platform policies that have been implemented. It also addresses the fundamental tension between the scale of the harm and the limitations of current remedial frameworks.
The Scale of the Problem
Measurement Challenges
Accurately measuring the scope of NCII deepfakes is methodologically difficult: the content is primarily distributed through specialized websites, dark web forums, and peer-to-peer sharing rather than through mainstream platforms that produce measurable analytics. Victims are often unaware that synthetic imagery of them exists until notified by a third party. Many victims, once informed, choose not to pursue legal remedies or to draw public attention to the harm.
Despite these challenges, several research organizations have attempted systematic measurement:
Sensity (formerly Deeptrace) Research: Sensity's research, first published in 2019 and updated in subsequent years, found:
- Approximately 96% of all deepfakes indexed on the public internet in 2019 were non-consensual pornography targeting women.
- A handful of dedicated NCII deepfake websites hosted hundreds of millions of video views, comparable in scale to mainstream pornography sites.
- The subjects were overwhelmingly women, including both celebrities and private individuals.
- As of 2023, the number of NCII deepfake videos online had grown by orders of magnitude from the 2019 baseline.
The Internet Watch Foundation (IWF): The IWF, which operates a hotline for reporting child sexual abuse material, began documenting AI-generated child sexual abuse material (CSAM) in 2023 and found it growing rapidly as a category — a particularly alarming application of the same underlying technology.
The Henry Jackson Society / Channel 4 Research (UK, 2023): A joint investigation found multiple UK-specific NCII deepfake sites, identified cases involving UK schoolgirls, and found that some sites had hundreds of thousands of monthly visitors.
Who Is Targeted
NCII deepfakes disproportionately target:
Women generally: The targeting is overwhelmingly gendered. Research consistently finds that the vast majority of victims are women and that the perpetrators are overwhelmingly men. This is not merely a reflection of differential representation in public-facing roles: private individuals are targeted at similar rates after controlling for the availability of social media photographs.
Women in public-facing roles: Journalists, politicians, activists, and entertainers face elevated targeting both because of their higher public profile (more photographs available) and because the harm may be intended to deter their public activity — "silencing" through reputational attack.
Celebrities: A small number of dedicated websites specifically produce and distribute celebrity NCII deepfakes. These sites operate openly in jurisdictions with limited applicable laws and generate revenue through advertising and subscription models.
Minors: Documented cases exist, including school-based cases, of AI-generated NCII targeting minors. This category of harm has prompted some of the most urgent legislative responses.
Harm Documentation
Psychological Harm
Research on traditional NCII has documented severe and well-recognized psychological harms that apply with equal force — and arguably greater force given the expanded victim population — to deepfake NCII:
- Post-traumatic stress disorder (PTSD): Studies have found PTSD diagnoses among significant proportions of NCII victims.
- Depression and anxiety: Both acute and chronic depression and anxiety are well-documented sequelae of NCII victimization.
- Suicidal ideation and self-harm: Multiple documented cases link NCII victimization to suicidal ideation; some linked deaths have been reported.
- Social withdrawal: Many victims withdraw from social media, public activities, and in some cases employment as a response to the threat or reality of NCII.
For deepfake NCII, the psychological dynamic differs from traditional NCII in one important respect: the victim knows that the intimate imagery was fabricated. Yet research and clinical reports suggest this does not significantly mitigate the harm — the psychological experience of having sexual imagery of oneself distributed without consent is severe regardless of whether the imagery captures genuine intimacy.
Professional and Social Harm
Career consequences: Women in professional contexts — journalism, law, politics, academia — have documented career impacts from NCII deepfakes, including:
- Withdrawal from public-facing roles following NCII attacks
- Difficulty being taken seriously following the circulation of NCII imagery in professional networks
- Documented cases of job loss following employer-directed harassment campaigns that included NCII deepfakes
Relationship harm: NCII deepfakes have caused documented harm to personal and professional relationships, including cases where the victims' colleagues, employers, or family members encountered the synthetic imagery and drew inaccurate conclusions.
Safety concerns: In some cases, NCII deepfake campaigns have been accompanied by or escalated to stalking, physical harassment, or direct physical threats, making the online harm an indicator of physical risk.
Documented Individual Cases
The chapter cannot name specific private individuals without their consent, but publicly reported cases include:
The Westfield High School case (New Jersey, 2023): Male students at Westfield High School used AI image generation tools to create non-consensual intimate deepfakes of their female classmates and circulated them among the student body. The case received significant national media attention and prompted legislative responses in New Jersey and nationally. The students were minors, complicating the legal response; the school district and law enforcement struggled to identify applicable law and practical remedies.
The Taylor Swift case (January 2024): Sexually explicit AI-generated images of Taylor Swift circulated widely on X/Twitter, reaching tens of millions of views before the platform took action. The images were reportedly generated with Microsoft's AI image tools using prompts crafted to evade the tools' safety filters. The scale and visibility of the case, affecting one of the world's most famous entertainers, prompted significant congressional attention and accelerated legislative proposals. Microsoft subsequently strengthened safety measures in its AI tools.
South Korea school-based NCII (2023-2024): South Korea documented a wave of AI-generated NCII targeting female students and teachers, distributed through Telegram channels. The cases prompted emergency legislative action and police investigations that revealed the scale of the problem — thousands of victims in school-based contexts alone.
Legislative Responses
United States — State Level
The United States lacks comprehensive federal NCII legislation as of early 2025, making state law the primary legal framework. The state landscape is fragmented:
Virginia (2019): Among the first states to explicitly address deepfake NCII, Virginia's law created criminal liability for creating and distributing non-consensual deepfake pornography, defined as synthetic intimate imagery created without consent and distributed with intent to coerce, harass, or intimidate. Maximum penalty: 12 months imprisonment and/or $2,500 fine.
California (2019, AB 602): California's AB 602 created a civil cause of action for non-consensual sexually explicit deepfakes. The civil route is significant because it allows victims to sue directly in civil court, without relying on prosecutorial discretion.
New York (2023): New York enacted legislation creating criminal penalties for deepfake NCII and establishing a civil right of action that allows victims to sue anonymously — addressing the concern that civil litigation itself requires publicly associating one's name with the harm.
Texas, Georgia, Illinois, and others: Multiple additional states have enacted NCII legislation with varying scope and effectiveness. A patchwork of 20+ state laws exists, differing on:
- Whether the law requires proof of intent to harm
- Whether "private" individuals only, or all individuals, are covered
- The penalty structure
- The statute of limitations
- Whether civil causes of action are included
Limitations of state criminal law:
- Prosecution requires identifying an often anonymous perpetrator
- Cross-border production and distribution complicate jurisdiction
- Prosecution resources are limited and prosecutors prioritize other crimes
- Criminal prosecution occurs after the harm — it cannot prevent distribution
- Sentences for first offenses are often modest
United States — Federal Proposals
Several federal legislative proposals have been introduced:
The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act, 2024): Would create a federal civil cause of action for victims of NCII deepfakes, allowing them to sue in federal court for remedies including actual and punitive damages. The bill addressed a significant gap in federal law by focusing on civil rather than criminal remedies, which are more accessible to victims.
The TAKE IT DOWN Act (2024): Would require platforms to remove NCII (including AI-generated NCII) within 48 hours of notification. This is modeled on DMCA notice-and-takedown processes. Unlike most NCII legislation, it imposes obligations on platforms rather than (only) on perpetrators.
The AI Labeling Act and related proposals: Several proposals address the labeling of AI-generated content broadly, which would include NCII deepfakes among many other categories.
International Responses
United Kingdom: The Online Safety Act (2023) included provisions criminalizing the non-consensual sharing of deepfake pornography. Subsequent legislative proposals (Criminal Justice Bill 2023-24) sought to also criminalize the creation (not only distribution) of deepfake NCII.
South Korea: Following the 2023-24 school-based NCII wave, South Korea enacted emergency legislation in September 2024 criminalizing the creation, distribution, and possession of AI-generated NCII, with penalties up to five years imprisonment.
Australia: Australia's Online Safety Act authorizes the eSafety Commissioner to issue removal orders for NCII, including deepfake NCII. Australia has also enacted broader non-consensual intimate imagery laws with criminal penalties.
European Union: The EU's AI Act (2024) requires that AI-generated or manipulated image, audio, and video content be labeled as such; this requirement, applied to generation systems, would require disclosure that intimate imagery is AI-generated. The EU also extended existing NCII protections to synthetic content under the Digital Services Act framework.
Platform Policies
Existing Platform Policies
Major platforms have NCII policies that were extended to cover deepfake NCII:
Pornhub and Major Pornography Platforms: Following significant advocacy and reporting (particularly journalist Laila Mickelwait's investigations), major pornography platforms including Pornhub substantially restricted user-uploaded content in 2020 and implemented verification requirements for content creators. However, specialist deepfake NCII websites, many of which operate exclusively in this space, have been far less responsive to both advocacy campaigns and regulatory pressure.
Google and Search Engines: Google expanded its policy for de-indexing non-consensual explicit imagery to include AI-generated NCII in 2024, allowing victims to request removal of search results linking to synthetic intimate imagery without proof of prior existence of authentic imagery.
Major Social Media (Facebook, Instagram, TikTok, YouTube, X): All major platforms have policies prohibiting NCII including deepfake NCII. Enforcement is inconsistent — automated detection is imperfect, human review is resource-intensive, and appeal processes are limited.
Specialized NCII Removal Organizations
Several nonprofit and civil society organizations have developed NCII removal support services:
StopNCII.org: Operated by the Revenge Porn Helpline (UK-based), StopNCII allows victims to create a "hash" (digital fingerprint) of their intimate imagery — including AI-generated imagery — without sharing the actual image with the organization. This hash is shared with platform partners who use it to automatically detect and remove the imagery. Major platforms including Meta, TikTok, and Snapchat participate.
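The hash-matching workflow can be sketched roughly as follows. This is illustrative only: a cryptographic SHA-256 digest stands in for the perceptual hashes (PDQ- or PhotoDNA-style fingerprints) that production systems use so that re-encoded copies still match, and all names here are hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Illustrative stand-in: a cryptographic hash in place of the perceptual
    # hashes real NCII-matching systems use. The image itself never leaves
    # the victim's device; only this digest is shared with platforms.
    return hashlib.sha256(image_bytes).hexdigest()

# Victim side: hash the image locally and submit only the digest.
victim_image = b"...raw image bytes..."   # placeholder payload
shared_hash_list = {fingerprint(victim_image)}

# Platform side: hash each upload and check it against the shared list.
def should_block(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) in shared_hash_list

print(should_block(victim_image))        # exact copy matches
print(should_block(b"different image"))  # non-matching content does not
```

A cryptographic hash matches only byte-identical copies, which is precisely why real deployments rely on perceptual hashing that tolerates re-encoding and resizing; the sketch nonetheless captures the privacy property that only the fingerprint, never the image, leaves the victim's device.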
NCMEC (National Center for Missing & Exploited Children): NCMEC's CyberTipline handles reports of CSAM including AI-generated CSAM. The organization works with platforms and law enforcement to address child-targeted synthetic media.
Take It Down (NCMEC): A dedicated service for removing youth-specific intimate imagery, which handles both authentic and synthetic content.
The Fundamental Tension: Scale vs. Remediation
The Scale Problem
NCII deepfake production and distribution operates at a scale that fundamentally challenges case-by-case remediation. Individual prosecution requires:
1. Victim awareness that the imagery exists
2. Identification of the (often anonymous) perpetrator
3. A prosecution decision by a resource-constrained legal system
4. Sufficient evidence for criminal conviction
At each step, significant attrition occurs. The realistic fraction of NCII deepfake perpetrators who face any legal consequence is very small.
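The compounding effect of that attrition can be made concrete with a back-of-the-envelope calculation. The per-step probabilities below are hypothetical placeholders chosen for illustration, not empirical estimates:

```python
# Hypothetical per-step probabilities for the enforcement funnel described
# above. Each value is an illustrative assumption, not a measured rate.
funnel = {
    "victim learns the imagery exists": 0.30,
    "perpetrator is identified":        0.20,
    "prosecutors take the case":        0.25,
    "evidence suffices to convict":     0.50,
}

p = 1.0
for step, prob in funnel.items():
    p *= prob
    print(f"{step:<35s} cumulative probability: {p:.4f}")

# Even with these fairly generous per-step odds, the compounded probability
# of any legal consequence in this illustration is 0.0075 -- under 1%.
```

The point of the sketch is structural: because the steps multiply rather than add, modest attrition at each stage yields near-total attrition overall.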
Individual platform takedown similarly operates at scale disadvantage: each removed copy is often re-uploaded to the same or different platforms; specialist sites outside major platform ecosystems are not addressed by platform policies; and the production of new synthetic content requires only a photograph and available software.
The Infrastructure Intervention Alternative
Some advocates have argued for an infrastructure approach: rather than targeting individual perpetrators or individual pieces of content, targeting the infrastructure that enables NCII deepfake distribution:
Payment processors: Visa, Mastercard, and PayPal have cut off services to some NCII distribution platforms following advocacy campaigns, removing the revenue model that sustains commercial operation.
Hosting providers: Some cloud hosting providers have terminated services to dedicated NCII sites following policy violations.
App stores: Apple and Google have removed deepfake apps from their stores when the apps primarily enable NCII production.
AI generation platform safety measures: Major AI image generation platforms (Stable Diffusion hosted services, Midjourney, DALL-E) have implemented content policies that prohibit generating intimate imagery of real people. Open-source models deployed locally cannot be similarly restricted.
The Open-Source Challenge
A fundamental limitation of platform-level and regulatory responses: the open-source AI ecosystem means that the models underlying NCII deepfake production are freely available and can be deployed locally, outside any platform's control. Regulatory requirements imposed on commercial AI platforms will not affect locally deployed open-source tools.
This creates a situation analogous to the challenge of regulating any digital information that is freely copyable: once a model is released publicly, the production capability it provides cannot be retracted. Any regulatory framework that depends on controlling access to the generation technology will have limited effectiveness against determined bad actors with technical knowledge.
Emerging Approaches and Unresolved Questions
Technical Countermeasures
Immunization / adversarial perturbation: Research has explored "immunizing" photographs against deepfake use through adversarial perturbations: subtle modifications, invisible to humans, that disrupt the face-swap or image-editing models applied to the photo. Tools such as Fawkes (developed at the University of Chicago, targeting facial-recognition systems) and PhotoGuard (MIT, targeting diffusion-based image editing) have demonstrated variants of this approach. Practical limitations include that only images uploaded after treatment are protected; that perturbations must be updated as generation technology evolves; and that images of subjects already circulating publicly cannot be retroactively protected.
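A minimal sketch of the adversarial-perturbation idea, using a toy linear scoring function as a stand-in for the deep networks that tools like Fawkes and PhotoGuard actually attack. All values and the bounded-step (FGSM-style) update are illustrative assumptions:

```python
# Toy sketch of "immunization": nudge each pixel by at most eps in the
# direction that most degrades a matching score, keeping the visual change
# negligible. A linear score stands in for a real face-embedding model.

def score(pixels, weights):
    # Stand-in for a model's face-matching / reconstruction score.
    return sum(p * w for p, w in zip(pixels, weights))

def immunize(pixels, weights, eps=0.05):
    # For a linear score the gradient w.r.t. the input is just `weights`,
    # so stepping each pixel by -eps * sign(weight) provably lowers the
    # score while bounding every per-pixel change by eps.
    sign = lambda w: 1.0 if w > 0 else (-1.0 if w < 0 else 0.0)
    return [p - eps * sign(w) for p, w in zip(pixels, weights)]

pixels  = [0.2, 0.8, 0.5, 0.9]        # hypothetical image values
weights = [1.0, -0.5, 2.0, 0.3]       # hypothetical model weights

before = score(pixels, weights)
after  = score(immunize(pixels, weights), weights)
print(before, after)  # the perturbed image scores strictly lower
```

Against a real deep model the gradient must be computed numerically and the defense re-tuned as models change, which is exactly the arms-race limitation noted above.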
Detection for automated removal: Platform-level automated detection of NCII deepfakes faces the same arms race challenges as general deepfake detection. However, for NCII specifically, the combination of explicit content detection (existing technology) and synthetic media signals may provide a more tractable target.
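The signal-combination idea above can be sketched as a simple decision rule over two classifier outputs. Both the scores and the thresholds here are illustrative assumptions, not values from any deployed system:

```python
# Hypothetical fusion of two moderation signals: an explicit-content score
# (mature technology) and a synthetic-media score (the harder problem).
def flag_for_review(p_explicit: float, p_synthetic: float,
                    t_explicit: float = 0.8, t_synthetic: float = 0.6) -> bool:
    # Require both signals: content is explicit AND likely synthetic.
    # Thresholds are illustrative placeholders, not tuned values.
    return p_explicit >= t_explicit and p_synthetic >= t_synthetic

print(flag_for_review(0.95, 0.70))  # explicit and likely synthetic: flagged
print(flag_for_review(0.95, 0.20))  # explicit but likely authentic: not flagged
```

Conjoining the two signals narrows the target: the explicit-content detector does most of the filtering, so the weaker synthetic-media detector only has to discriminate within that much smaller candidate set.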
Unresolved Legal Questions
The "creation vs. distribution" distinction: Many existing laws focus on distribution, leaving the creation of NCII deepfakes (that are not distributed) unaddressed. Whether creation alone should be criminal — particularly when the content remains private — raises difficult questions about criminalizing thought and private action.
Anonymous platform liability: The Communications Decency Act's Section 230 generally immunizes platforms from liability for third-party content. The FOSTA-SESTA exception for sex trafficking and the proposed STOP CSAM Act exception for child sexual abuse material suggest that content-specific exceptions can be created; whether NCII should receive similar treatment is an active legislative debate.
International coordination: The perpetrators of NCII deepfakes often operate across borders. Effective legal response requires international cooperation that is currently fragmented and insufficient.
Discussion Questions
- The scale of NCII deepfake harm — with millions of victims and billions of views — is dramatically larger than documented political deepfake harms, yet policy attention is arguably weighted toward the political harm category. What explains this allocation of attention, and is it appropriate?
- The "open-source challenge" means that platform-level content policies cannot prevent determined bad actors from producing NCII deepfakes using locally deployed tools. Given this limitation, what is the appropriate scope of platform responsibility? What should platforms be required to do, and what is beyond their reasonable capacity?
- The StopNCII.org "hash-based" approach — creating digital fingerprints of NCII without the organization seeing the actual imagery — represents a privacy-preserving innovation in content moderation. What are the limitations of this approach? Who is left unprotected by it?
- Several legislative proposals focus on platform obligations (notice-and-takedown within 48 hours) rather than on perpetrator liability. What are the advantages and disadvantages of a platform-obligation framework compared to a perpetrator-liability framework?
- The "immunization" approach — modifying photographs to disrupt deepfake generation — places the burden of protection on potential victims rather than on perpetrators or platforms. Is this an appropriate allocation of responsibility? Under what conditions might immunization tools be valuable, and what are their inherent limitations?
- NCII deepfakes disproportionately target women and have been linked to chilling effects on women's participation in journalism, politics, and public life. How should this gendered dimension of the harm shape legal and policy responses? Does gender-specific harm justify gender-specific remedies?