Case Study 35.2: Non-Consensual Intimate Imagery — The Generative AI Harm Frontier

Synthetic Abuse at Scale


Overview

In January 2024, graphic non-consensual sexual images depicting Taylor Swift were generated using AI tools and posted to X (formerly Twitter). Within hours, the images had been viewed hundreds of millions of times. X eventually suspended the account that had posted them and blocked searches for "Taylor Swift" — a blunt content moderation response that testified to the platform's inability to contain the spread in real time.

Swift had not consented to the creation of these images. She had not posed for them. The images depicted events that had never occurred. They were entirely fabricated by AI systems trained on vast datasets of human imagery. And they joined a catalog of similar fabricated images — of celebrities, of private individuals, of school students — that had been proliferating since AI image generation tools became accessible to ordinary users.

The Taylor Swift episode is important not because of who the victim was, but because of what it made visible: that the harms of non-consensual intimate imagery, long documented in the context of authentic photographs, had entered a new phase in which AI could generate realistic sexual imagery of anyone whose face could be photographed. The barrier to this harm had dropped from requiring access to private intimate photos to requiring a publicly available photograph and a consumer AI tool.

This case study examines non-consensual intimate imagery (NCII) as a generative AI harm: its scale, its documented impacts on victims, the legal frameworks that have struggled to address it, and the governance responsibilities of AI developers and platforms.


Background: NCII Before Generative AI

Non-consensual intimate imagery existed before AI. The practice sometimes called "revenge porn" — distributing authentic intimate images of a person without their consent, often in the context of relationship dissolution — became a documented social harm in the early 2010s. The harm was real and severe: victims experienced harassment campaigns, professional consequences, relationship damage, and documented psychological trauma comparable to that reported by survivors of sexual assault.

Legal responses developed gradually. By 2023, forty-eight U.S. states, along with the District of Columbia and several territories, had enacted some form of criminal NCII law, though definitions, penalties, and coverage varied substantially. At the federal level, the United States still lacked a comprehensive criminal NCII statute as of 2024.

The key characteristic of pre-AI NCII was that it required access to an authentic intimate image. This was a meaningful barrier. Intimate images typically existed only if the subject had created them (self-photography) or consented to their creation in a private context. The harm of distribution was severe, but creation required either the subject's participation or a violation of their physical privacy.

Generative AI fundamentally changed this. AI image generation tools, capable of producing photorealistic images of human beings in any situation, removed the requirement for any authentic intimate imagery. A bad actor needed only a photograph of a person's face — available from social media, professional websites, or any public context — to generate AI images depicting that person in sexual situations they had never participated in.


The Scale of AI-Generated NCII

Measuring the scale of AI-generated NCII is methodologically challenging because much of it occurs on anonymous platforms and because victims are often unaware that images of them exist. Available evidence, however, suggests the harm is substantial and growing.

A 2019 report by the cybersecurity company Deeptrace, published before AI image generation became widely accessible, found that 96% of deepfake videos online were non-consensual pornography. The vast majority — approximately 90% — targeted women. The most commonly targeted individuals were celebrities, whose publicly available photographs provided abundant training material.

As AI image generation tools became more accessible in 2022 and 2023, the scale expanded dramatically. Research published in 2023 documented dedicated websites hosting AI-generated NCII with millions of monthly visitors. These websites typically offered tools or services for users to generate fake intimate images of specific individuals from photographs, with the results aggregated in searchable catalogs.

By 2024, researchers estimated that AI-generated NCII had moved from primarily targeting celebrities to increasingly targeting private individuals. Documented cases included:

  • High school and college students whose classmates had generated and distributed AI-fabricated intimate images, in some cases leading to school disciplinary proceedings and law enforcement investigations.
  • Female athletes whose photographs from athletic competitions were used to generate AI intimate imagery.
  • Female professionals whose LinkedIn profile photographs were used as source images.
  • Teachers whose photographs from school websites or professional directories were used.

A 2024 survey of NCII victims in the United Kingdom found that a substantial percentage of cases reported to the Revenge Porn Helpline in 2023 and 2024 involved AI-generated rather than authentic imagery — a sharp increase from prior years.


Documented Cases

High School Victims

Among the most alarming documented episodes were cases involving AI-generated NCII of high school students. In Westfield, New Jersey, in 2023, approximately thirty female high school students discovered that AI-generated nude images of them had been created and shared by male classmates using AI tools. The images were circulated through group chats and social media. Local law enforcement investigated; the boys responsible faced school discipline. The students and their parents reported significant psychological harm and described feeling unsafe at school.

Similar episodes were reported in Beverly Hills, California; in Georgia; in Spain; and in South Korea, where the problem became sufficiently widespread to generate legislative attention. The Westfield case was notable partly because it brought the issue to national attention and partly because it demonstrated that the harm was not limited to celebrities or to adults: any person with a social media presence was a potential target.

Celebrity Cases

Before the Swift episode, AI-generated NCII had targeted numerous celebrities, predominantly female performers in the entertainment industry. South Korean K-pop artists were among the most frequently targeted, with dedicated websites hosting AI-generated intimate images of specific performers and enabling users to generate additional images on demand. These sites operated across jurisdictions with varying legal responses, often hosted in countries with limited enforcement of NCII laws.

American actresses, female athletes, and female public figures were similarly targeted. In most cases, the practical remedies available to victims were limited: platforms could remove specific images when reported, but images spread faster than they could be removed, and new images could be generated continuously.

The Taylor Swift Episode

The January 2024 episode involving Taylor Swift illustrated several dimensions of the problem in high relief. The images were created using AI tools and posted to X, where they went viral with extraordinary speed before the platform took action. The specific images were eventually removed, but copies spread to other platforms. X's response — blocking the search term "Taylor Swift" for multiple days — was a blunt instrument that illustrated the inadequacy of existing content moderation tools for this type of harm.

Swift did not publicly comment on the episode, but her representatives were reported to be exploring legal options. Members of the United States Congress cited the Swift episode in introducing or advancing federal legislation on AI-generated NCII. Representative Alexandria Ocasio-Cortez described the episode as "a bellwether moment," and bipartisan support for federal NCII legislation increased in the months that followed.


Psychological Harm

Research on the psychological impact of NCII — including AI-generated NCII — documents severe and lasting harm to victims. Studies and victim accounts consistently identify the following dimensions:

Loss of control and violation: Victims describe a profound sense of violation — the sense that their body and identity have been taken and used without their consent. This sense of violation is often described as comparable to physical sexual assault, and some researchers have characterized NCII as a form of sexual violence.

Trauma symptoms: Clinical documentation of NCII victims has found symptoms consistent with post-traumatic stress disorder, including intrusive thoughts, hypervigilance, avoidance behaviors, and emotional dysregulation. The chronic, ongoing nature of NCII harm — knowing that images exist and circulate even after specific instances are removed — can make resolution of trauma symptoms difficult.

Reputational and professional harm: Victims who are publicly identified with AI-generated NCII experience reputational damage, particularly if the images circulate in professional or community networks. The loss of control over one's public image has documented career and relationship consequences.

The impossibility of removal: Victims consistently report that the impossibility of complete removal — once an image is distributed, it cannot be fully recalled — makes the harm uniquely difficult to address. Even images successfully removed from one platform reappear on others. This perpetual exposure is a distinctive feature of digital NCII harm that distinguishes it from other forms of reputational damage.

Amplification by AI generation: The specific harm of AI-generated NCII may be experienced as more severe than authentic NCII in some respects, because the victim knows that infinite additional images can be generated from any publicly available photograph. There is no limited supply of images that can be removed; new images can always be created. This unlimited nature of the harm intensifies the sense of helplessness that victims report.


State Law

The United States' legal response to NCII has developed primarily at the state level, producing a patchwork of laws that varies in coverage, penalties, and applicability to AI-generated content. As of early 2024:

  • More than forty states had criminal laws addressing NCII, but most had been enacted before AI-generated NCII was widespread and used definitions that did not clearly cover fabricated content: statutory language typically referred to authentic images depicting actual events, leaving ambiguity about whether AI-generated imagery fell within scope.
  • Several states moved to explicitly amend their NCII laws to cover AI-generated content. California, Texas, and Georgia amended or enacted laws explicitly addressing AI-generated intimate imagery.
  • Civil remedies — the ability to sue for damages — were available under NCII laws in some states but not others, and the practical availability of civil remedies was limited when perpetrators were anonymous or judgment-proof.

Federal Law

Federal law in the United States left a significant gap on NCII through most of the period in which AI-generated NCII proliferated. Section 230 of the Communications Decency Act, which immunizes platforms from liability for third-party content, had historically been interpreted to limit platform liability for hosting NCII. The FOSTA-SESTA amendments (2018) created a limited exception for sex trafficking, but NCII did not fit this category.

The federal Take It Down Act, introduced in Congress in 2024 and advancing with bipartisan support, would criminalize the non-consensual publication of intimate images including AI-generated content, require platforms to remove reported content within 48 hours, and create civil remedies. The bipartisan nature of support for this legislation — unusual in the contemporary Congress — reflected both the severity of the documented harm and the galvanizing effect of high-profile cases including the Swift episode.

International Comparison

The United Kingdom enacted the Online Safety Act in 2023, which included provisions criminalizing non-consensual sharing of intimate images and imposing obligations on online platforms to address NCII. Subsequent amendment was planned to specifically address AI-generated NCII. Ofcom, the regulator charged with enforcing the Act, has also addressed NCII in its online safety guidance.

Australia enacted the Online Safety Act 2021 and the eSafety Commissioner has addressed NCII, including through removal notice schemes applicable to platforms. Australia was among the earlier jurisdictions to explicitly address AI-generated NCII in regulatory guidance.

The European Union's Digital Services Act (DSA) imposes obligations on large platforms to assess and mitigate systemic risks, including risks of NCII, and to remove illegal content upon notification. EU member states' criminal laws on NCII vary; some explicitly cover AI-generated content.

South Korea, where AI-generated NCII targeting K-pop artists was particularly documented, enacted legislation specifically addressing digital sexual crimes including AI-generated intimate imagery, and dedicated enforcement resources to the problem.


Platform Policies and Their Limits

Major social media and content platforms have established policies prohibiting NCII, including AI-generated NCII. Meta, Google, TikTok, and Microsoft have policies requiring removal of NCII upon identification. Some platforms, including Google, allow victims to submit removal requests that can result in removal of specific images across the platform's services.

The limitations of platform policies are significant:

Detection at scale: Automated detection of NCII is technically difficult. Even human reviewers have difficulty reliably identifying AI-generated NCII — the whole point of the technology is that it produces realistic-seeming imagery. Detection systems that rely on metadata or watermarks can be defeated by bad actors who strip metadata before posting.
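One detection approach that does not depend on metadata is perceptual hash matching, used in schemes such as StopNCII to recognize near-duplicate copies of a known reported image. As an illustration of why this survives small edits where an exact cryptographic hash would not, here is a minimal average-hash ("aHash") sketch on synthetic grayscale data; real platforms use far more robust hashes (e.g., PDQ or PhotoDNA), and this toy version is purely illustrative:

```python
# Illustrative average-hash ("aHash") sketch on a synthetic 8x8 grayscale
# grid. Real deployments use robust perceptual hashes (PDQ, PhotoDNA);
# this only shows why near-duplicate matching tolerates small alterations.

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grid of values in 0-255:
    each bit records whether that pixel is above the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# A synthetic "image", a slightly brightened copy, and an inverted one.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
altered  = [[min(255, p + 10) for p in row] for row in original]
inverted = [[255 - p for p in row] for row in original]

print(hamming_distance(average_hash(original), average_hash(altered)))   # 0
print(hamming_distance(average_hash(original), average_hash(inverted)))  # 64
```

The brightened copy hashes identically because every pixel shifts with the mean, while the inverted image flips every bit; a platform comparing hashes against a database of reported images would flag the former as a match. The limitation for AI-generated NCII is structural: hash matching only recognizes copies of *known* images, and a generator can always produce new ones.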

Speed of spread vs. speed of removal: Content that goes viral spreads faster than it can be removed. The Taylor Swift episode illustrated that by the time X blocked the search term, the images had already been viewed hundreds of millions of times. Effective harm prevention requires upstream intervention — preventing generation and initial distribution — not only downstream removal.

Cross-platform spread: Removing content from one platform does not remove it from others. Images removed from X may persist on Telegram, Reddit, or dedicated NCII sites. Comprehensive removal requires coordinated action across many platforms, which does not occur systematically.

Anonymity: Perpetrators often operate anonymously, making both platform enforcement and legal action difficult.


AI Developer Responsibilities

AI image generation tools are the proximate means by which AI-generated NCII is created. The responsibility of AI developers in preventing this harm is significant and contested.

Major AI image generation providers — including OpenAI (DALL-E), Stability AI, and Midjourney — have established policies prohibiting the generation of NCII, including explicit content of real identified individuals. These policies are implemented through content filters that aim to prevent clearly prohibited outputs.
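None of these providers publish their filter implementations. As a purely hypothetical sketch of the simplest layer such a system might include — a prompt-level check that flags sexualized terms paired with a named real person — consider the following (all names and terms here are illustrative assumptions, not any provider's actual logic):

```python
# Purely hypothetical prompt-level filter sketch. Production systems layer
# prompt classifiers, output-image classifiers, and human policy review;
# this single check is only a teaching example.

BLOCKED_TERMS = {"nude", "explicit", "undress"}  # illustrative, not exhaustive

def violates_policy(prompt: str, known_public_figures: set[str]) -> bool:
    """Flag prompts that pair a sexualized term with a named real person."""
    words = set(prompt.lower().split())
    sexualized = bool(words & BLOCKED_TERMS)
    names_person = any(name.lower() in prompt.lower()
                       for name in known_public_figures)
    return sexualized and names_person

print(violates_policy("nude photo of Jane Doe", {"Jane Doe"}))  # True
print(violates_policy("portrait of Jane Doe", {"Jane Doe"}))    # False
```

The weaknesses documented in the research discussed below are visible even in this toy version: a euphemism bypasses the blocklist, a misspelled name bypasses the identity check, and a locally run model has no filter at all — which is why defense in depth, including output-side image classification, matters.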

The effectiveness of these filters has been mixed. Research published in 2023 documented that popular AI image generation tools could be prompted to generate NCII through various workarounds: using euphemistic language, jailbreaking techniques, or fine-tuning models specifically to remove content restrictions. Dedicated NCII generation services have been built using open-source models without safety filters, specifically designed to circumvent the restrictions that responsible developers impose.

This creates a dilemma for AI developers: responsible developers who implement and maintain safety filters lose business to irresponsible competitors who do not. The market does not automatically reward safety. This is a classic collective action problem that argues for regulatory intervention establishing minimum safety requirements for all AI image generation services.

The governance responsibilities of AI developers include: implementing and maintaining robust content safety filters; not providing API access to services that use their tools for NCII generation; investing in detection tools that can identify AI-generated images; cooperating with law enforcement investigations; and supporting legislative frameworks that establish binding requirements for the industry.


The Governance Gap

The most important structural insight from AI-generated NCII is the gap between the pace of harm and the pace of governance. AI image generation technology capable of producing photorealistic NCII became widely accessible in 2022. Comprehensive federal law addressing it in the United States had not passed as of mid-2024. State laws were inconsistent and often technically inapplicable to AI-generated content. Platform policies existed but were inadequately enforced and outpaced by the speed of content spread.

In that governance gap, tens of thousands of individuals, and probably many more, experienced a form of sexual harm that was made possible by AI technology, preventable by AI governance, and largely without effective legal remedy. The gap is not merely a technical governance failure; it is a failure to take seriously the harms that AI technology enables at scale when those harms fall on people who are not the technology's primary intended beneficiaries.


Discussion Questions

  1. AI-generated NCII can be produced from publicly available photographs without the knowledge of the subject. Does this make the harm qualitatively different from NCII based on authentic images? How should the law account for this difference?

  2. Some AI image generation tools include safety filters that prohibit NCII, while others — particularly open-source tools — do not. Should the existence of an open-source model without safety filters that is used for NCII create liability for the organization that released it? What legal theory might support that claim?

  3. The Taylor Swift episode drew significant public and legislative attention because the victim was a highly visible public figure with resources to pursue legal remedies. How should this influence our thinking about the governance of NCII? What does it reveal about whose harms receive policy attention?

  4. Platform policies require removal of NCII but cannot prevent initial spread. Propose a governance approach that addresses the spread problem, not just the removal problem. What technical, legal, and platform governance mechanisms would it require?

  5. The international patchwork of NCII laws creates enforcement challenges when perpetrators, platforms, and victims are in different jurisdictions. What international governance framework would be necessary to address this problem, and how feasible is it to achieve?

  6. An AI image generation company argues that it should not be held responsible for the misuse of its technology by bad actors, just as knife manufacturers are not held responsible when knives are used in crimes. Evaluate this analogy. What are its strengths and weaknesses as applied to AI-generated NCII?