Chapter 33: Technology and Harm — Catfishing, Revenge Porn, and Algorithmic Discrimination
Learning Objectives
- Describe the major categories of technology-facilitated intimate harm
- Evaluate current legal frameworks for non-consensual image sharing and romance scams
- Analyze how dating app design creates or exacerbates discrimination
- Apply a design ethics lens to imagining safer digital courtship environments
In This Chapter
- 33.1 Non-Consensual Intimate Image Sharing: Definitions, Scope, and Harm
- 33.2 The Legal Landscape for NCII
- 33.3 Catfishing: From Romantic Deception to Criminal Fraud
- 33.4 Romance Scams: Financial and Emotional Devastation
- 33.5 Who Is Targeted: The Particular Vulnerability of Older Adults
- 33.6 Algorithmic Discrimination in Dating Apps
- 33.7 Surveillance in Relationships: Stalkerware and Controlling Apps
- 33.8 Deepfakes and Sexual Exploitation
- 33.8A Algorithmic Amplification of Harm
- 33.8B Regulating Digital Intimate Harm: Comparative Approaches
- 33.9 Children and Digital Romantic Predation
- 33.10 Doxxing and Coordinated Harassment
- 33.11 Platform Responsibility: Section 230 and Its Limits
- 33.12 Design Ethics: How Apps Could Be Built Differently
- 33.13 Legal and Policy Responses: The State of the Field
- 33.14 Support and Recovery for Digital Abuse Victims
- 33.15 The View from the Dataset
- Summary
The Swipe Right Dataset contains 50,000 synthetic profiles — built to model the patterns that real-world dating app data, in aggregate, reveals about who matches with whom, who gets messages, and who gets left-swiped into invisibility. The dataset is synthetic, the profiles fictional. But the patterns it models are drawn from real research, and one of those patterns, when you look at it, is quietly devastating.
Users who flagged on a satisfaction survey that they believed their profiles had been suppressed by the algorithm — shown to fewer potential matches than their profile quality alone would predict — were significantly more likely to be users of color. More specifically: Black women in the dataset reported the highest rates of algorithmic suppression concern, followed by Black men, followed by Asian men. White users of both genders reported the lowest rates.
The platform's design team had not intended to build a discrimination engine. They had intended to build a matching algorithm that optimized for engagement — that showed users profiles most likely to generate right-swipes, messages, and continued platform use. But engagement optimization, in a user base that shows documented racial preferences in swiping behavior, becomes — without explicit correction — a system for amplifying those preferences. The algorithm learns what users engage with. If users systematically engage less with profiles of certain racial groups, the algorithm shows those profiles less. If those profiles are shown less, they generate fewer matches. If they generate fewer matches, the platform scores them lower. The loop closes.
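💻 Code Sketch: The feedback loop described above can be made concrete with a toy simulation. This is an illustrative model only, not any platform's actual code; the group labels, baseline swipe rates, and re-weighting rule are all assumptions chosen to show the dynamic.

```python
# Toy model of an engagement-optimization feedback loop. All numbers are
# hypothetical: group "A" profiles get right-swiped 55% of the time when
# shown, group "B" profiles 45% of the time.
import random

random.seed(1)

base_swipe_rate = {"A": 0.55, "B": 0.45}
exposure = {"A": 1.0, "B": 1.0}  # relative exposure weights, start equal

for round_number in range(20):
    matches = {}
    for group, rate in base_swipe_rate.items():
        shown = int(1000 * exposure[group])  # impressions track exposure weight
        matches[group] = sum(random.random() < rate for _ in range(shown))
    total_matches = sum(matches.values())
    # The "algorithm": next round's exposure is proportional to this
    # round's engagement share. Nobody codes discrimination explicitly.
    for group in base_swipe_rate:
        exposure[group] = 2 * matches[group] / total_matches

print(exposure)  # a 10-point swipe-rate gap has become a large exposure gap
```

After twenty rounds, the initially modest behavioral gap has compounded into a large difference in who is shown at all. That is the loop closing.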
This is not a story about malevolent engineers. It is a story about what happens when design optimization is divorced from ethical analysis, and when "what users want" is treated as a neutral input rather than a social fact that carries history.
33.1 Non-Consensual Intimate Image Sharing: Definitions, Scope, and Harm
Non-consensual intimate image sharing (NCII) — colloquially called "revenge porn," though advocates prefer NCII because the material is not always shared as revenge and is not always pornographic — is the distribution of sexually explicit images or videos of a person without their consent. The perpetrator may be a former intimate partner, a hacker who has accessed private files, someone who surreptitiously recorded the material, or a stranger who obtained the images from a third party.
NCII is not a small or obscure problem. A 2016 Data & Society Research Institute study found that approximately 10.4 million Americans had been threatened with or experienced NCII. The Cyber Civil Rights Initiative's surveys consistently find that the majority of victims are women and that the vast majority of perpetrators are men known to the victim — typically a former partner, sometimes acting in response to a relationship ending.
The harms are severe and multi-dimensional. Victims report:
- Emotional distress: Shame, humiliation, anxiety, depression, and PTSD at rates substantially elevated above general population norms
- Reputational damage: Professional consequences including job loss, clients leaving, career opportunities foreclosed — particularly severe in conservative professional environments
- Relational damage: Impact on existing relationships and future relationship formation
- Safety risk: NCII is frequently accompanied by doxxing (publication of personal information) that enables physical harassment and stalking
- Ongoing loss of control: Unlike a slap or a theft, NCII creates harm that can persist indefinitely — images once distributed on the internet are essentially impossible to fully remove
📊 Research Spotlight: Henry, Flynn, and Powell (2020) conducted in-depth interviews with NCII victims in Australia and found that victims described the experience using language of violation that closely mapped onto research on sexual assault — the sense of bodily integrity being violated without consent, the loss of control over one's own intimate life, the feeling of ongoing exposure. Several victims used the phrase "it never ends" to describe the temporal quality of the harm — the awareness that images are potentially findable by anyone, forever.
The NCII Harm Cascade
Researchers have described the harms of NCII as operating in "cascades" — not a single event with a single consequence, but a set of interacting harms that compound each other over time.
The first cascade is informational: once images are distributed on one platform, they spread to others. Screen captures, re-uploads, and indexing by search engines mean that a single distribution event can result in images appearing on dozens of platforms within hours. The Cyber Civil Rights Initiative's crisis helpline regularly assists victims who, weeks or months after discovering NCII, find new sites where their images have appeared.
The second cascade is social: the discovery of NCII by specific people in the victim's life — employers, family members, colleagues, current or potential partners — produces distinct harms at each discovery. Each new person who sees the images is, from the victim's perspective, a new violation. Victims describe dreading the moment when a parent or child might search their name; some describe the loss of the ability to fully trust any new relationship because the images are always potentially discoverable.
The third cascade is psychological: the accumulating violations produce what researchers describe as "anticipatory shame" — a sustained state of dread about the next discovery — that can persist for years after the initial distribution. This is distinct from the acute distress of the initial discovery, which is itself traumatic, and represents a longer-term psychological burden that may require sustained therapeutic support.
Understanding the cascade structure of NCII harm has implications for how we think about remediation. Strategies that focus only on taking down images from a single platform address only the first cascade; they do not address the ongoing social and psychological harms. Comprehensive support for NCII victims needs to address all three cascades: technical (image removal), social (managing disclosure to important others), and psychological (treating the trauma of violation and anticipatory shame).
33.2 The Legal Landscape for NCII
The legal response to NCII has developed rapidly since the problem became publicly visible around 2012–2013. As of 2026, forty-eight U.S. states plus the District of Columbia have criminalized NCII in some form. The federal government passed the DEFIANCE Act in 2024, creating a federal civil cause of action for NCII victims. International legal development has been uneven: England and Wales criminalized NCII in 2015 under the Criminal Justice and Courts Act, and Scotland followed with separate legislation; Australia enacted NCII legislation at both federal and state levels beginning in 2017, with the 2021 Online Safety Act strengthening civil remedies significantly. The European Union's approach has been fragmented — GDPR provides some privacy-based remedies, and some member states (Germany, the Netherlands) have specific criminal provisions, but coverage across the EU is inconsistent. Legal coverage in the Global South is sparse: many jurisdictions have no specific NCII law and would need to address the harm through general harassment or defamation frameworks that are poorly adapted to the specific characteristics of NCII harm.
The psychological research summarized in section 33.1 provides an important empirical foundation for understanding why legal remedies are necessary but insufficient. Henry, Flynn, and Powell's (2020) survivors described sexual violation, not merely privacy invasion: bodily integrity violated without consent, permanent loss of control over intimate aspects of self, anxiety about future discoveries. These features of the harm require both legal recognition and therapeutic response.
Despite the legal development, victims and advocates describe the legal response as persistently inadequate:
Jurisdictional fragmentation: When images are uploaded from a different state or country than where the victim lives, jurisdiction becomes unclear and enforcement difficult. The Cyber Civil Rights Initiative's crisis helpline regularly works with victims whose images are hosted on servers in jurisdictions with no relevant law, making takedown requests legally unenforceable.
Platform liability shields: Section 230 of the Communications Decency Act (discussed in section 33.11) historically protected platforms from liability for user-uploaded NCII, reducing their incentive to remove it promptly. Recent legislative efforts have carved exceptions, but the shield remains largely intact and enforcement of the exceptions is inconsistent.
Burden of proof: Criminal prosecutions typically require proof of intent to harm, which perpetrators deny. Civil remedies are more accessible but require victim resources — legal fees, ability to identify and serve defendants, financial capacity to pursue litigation.
Speed: The legal system moves slowly; NCII spreads instantly. The time required for a legal remedy means that the harm — distribution, indexing by search engines, discovery by employers or family — typically occurs well before any legal intervention. The Cyber Civil Rights Initiative reports that the average time from first contact with their helpline to any legal outcome is measured in months; the images spread within hours.
The definitional gap for synthetic imagery: Laws written before deepfake technology became widely accessible often contain language requiring images to be "of" the victim — a definition that courts are divided on whether it covers AI-generated imagery where the victim's actual body doesn't appear. This gap is being addressed through legislative updates in multiple jurisdictions, but the patchwork creates significant unevenness in victim protection.
33.3 Catfishing: From Romantic Deception to Criminal Fraud
Catfishing — creating a fake online identity to deceive another person into a romantic or emotional relationship — exists on a spectrum from mildly deceptive to criminally fraudulent. The term entered popular use through the 2010 documentary film "Catfish" and the subsequent MTV series.
At the less severe end of the spectrum, catfishing involves relatively minor misrepresentation: using older photos, lying about age, or presenting oneself as having a different profession or relationship status. This kind of misrepresentation is common — research on online dating profile accuracy (Hall et al., 2010) finds that while most online daters engage in minor self-enhancement, significant deception (more than a few years off on age, major misrepresentation of appearance) is less common.
More serious catfishing involves sustained false identity — maintaining an elaborate fictional persona over weeks, months, or years, generating deep emotional attachment in the victim before the deception is revealed or the catfisher exploits it. Research on the psychological impact of catfishing (Drouin et al., 2018) finds that victims frequently experience symptoms similar to grief: they have lost not only a relationship but an entire person who, it turns out, never existed. This grief is compounded by shame and self-blame — "How could I not have known?" — that can delay victims from seeking support.
The motivations for catfishing are themselves diverse and worth understanding if we are to think clearly about how to address the phenomenon. Research by Drouin, Tobin, and Wygant (2018) identified several distinct motivational profiles. Identity exploration motivates some catfishers, particularly adolescents: they create alternative personas online to explore aspects of identity (sexuality, personality, social confidence) that feel unavailable or risky in their offline lives. The harm in these cases is often inadvertent — the catfisher becomes entangled in a relationship they can't easily exit without hurting someone. Loneliness and connection-seeking motivates others: people who feel socially inadequate in their actual identity create more attractive online personas to achieve connection they believe they couldn't get as themselves. This motivation is poignant, though it doesn't eliminate the real harm to the person being deceived. Malice motivates a smaller but more damaging group: catfishing specifically designed to humiliate, expose, blackmail, or emotionally harm the target — often carried out by people known to the target, including ex-partners, rivals, and former friends. Financial fraud (the romance scam) is the most costly variant, distinguished from other catfishing by its organized, instrumental character.
Understanding motivation matters because prevention and response look different depending on which type is operating. The adolescent identity explorer needs education and community; the scammer needs law enforcement; the malicious catfisher targeting a specific person needs immediate platform intervention and possibly legal action.
💡 Key Insight: Catfishing exploits the same psychological processes that make genuine online connection possible — the willingness to invest emotionally in a person you know primarily through text and image, the gradual building of trust over digital communication. The harm is not primarily the deception itself but what the deceiver does with the connection they build through deception.
Why Identity Deception Works Online
The psychology of why people believe false online identities draws on fundamental principles of human social cognition. We are not naturally equipped for the kind of skepticism that digital environments increasingly require. Human social cognition evolved in small groups where your interlocutors were people you had long-standing relationships with — where the signals you used to assess trustworthiness (physical presence, history, mutual acquaintances, behavioral consistency over time) were richly available.
Online environments strip away many of these signals. You cannot smell deception, assess microexpressions, or notice the hesitation before an answer. What remains are text and images — both of which can be fabricated without special skill. Research by Hancock and colleagues on deception in computer-mediated communication finds that online text-based communication is somewhat more susceptible to successful deception than face-to-face communication, particularly for skilled deceivers who can construct consistent, emotionally compelling narratives.
The gradual trust-building that makes catfishing work exploits what cognitive psychologists call "the consistency heuristic" — the assumption that a person who has been consistent across many interactions is more likely to be who they claim to be. This heuristic works well in environments where fabricated consistency is difficult; it fails in environments where a determined deceiver can carefully maintain a consistent persona across hundreds of text messages and email exchanges over months.
Awareness of these vulnerabilities is genuinely useful. Research on deception detection suggests that looking for internal inconsistencies in a person's story (details that don't quite add up when you review them), resistance to video calls or in-person meetings over extended periods, and the unusually rapid development of intense emotional intimacy are the most reliable indicators of potential catfishing — not any specific narrative detail, but the structure of the interaction.
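💻 Code Sketch: The structural indicators above can be expressed as a simple screening heuristic. Everything here is hypothetical, the thresholds especially; the point is that the signal lives in the shape of the interaction, not in any particular story detail.

```python
# A hypothetical screening heuristic, for illustration only: it scores the
# structure of an online interaction (per the research summarized above),
# not the content of anyone's story. Thresholds and weights are invented.
from dataclasses import dataclass

@dataclass
class Interaction:
    weeks_of_contact: int
    declined_video_calls: int       # requests for live video that were deflected
    inconsistencies_noted: int      # story details that conflicted on review
    weeks_until_love_language: int  # how quickly intense intimacy appeared

def structural_red_flags(x: Interaction) -> list[str]:
    flags = []
    if x.weeks_of_contact >= 8 and x.declined_video_calls >= 3:
        flags.append("sustained avoidance of live video over months")
    if x.inconsistencies_noted >= 2:
        flags.append("repeated internal inconsistencies in the narrative")
    if x.weeks_until_love_language <= 2:
        flags.append("unusually rapid escalation to intense intimacy")
    return flags

print(structural_red_flags(Interaction(12, 5, 2, 1)))
```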
33.4 Romance Scams: Financial and Emotional Devastation
Romance scams occupy the criminal extreme of the catfishing spectrum. They are not about loneliness or confused identity — they are organized financial fraud operations, frequently run by criminal networks operating internationally, that use fabricated romantic relationships to extract money from victims.
The FBI's Internet Crime Complaint Center (IC3) consistently ranks romance scams among the highest-loss internet crime categories. In 2023, the IC3 received approximately 17,800 romance scam complaints reporting total losses of $652 million — and these figures represent only what was reported. The Federal Trade Commission's (FTC) estimates, which include unreported cases, suggest total annual losses in the billions. The FTC's 2024 Consumer Protection Report found that romance scams produce higher median individual losses than virtually any other consumer fraud category — a reflection of the extended duration of the scam and the deep psychological investment victims develop before money is ever requested.
The mechanics of a romance scam follow a recognizable pattern that scam operations have refined over years:
- Initial contact: The scammer contacts the victim through a dating app, social media, or sometimes a misdialed text (the "wrong number" opening is a hallmark of "pig butchering" operations, where a stray text leads to a prolonged conversation that transitions to romance and, typically, fraudulent investment offers). The profile used is fake, typically using stolen photos of an attractive person — often a military officer deployed overseas, a successful physician working internationally, an engineer on an oil rig, or a widowed professional. These categories are chosen deliberately: they explain why in-person meeting is impossible and create a narrative of trustworthiness combined with romantic vulnerability.
- Love bombing: The scammer moves quickly to intense expressions of interest, affection, and eventually love. Communication is frequent, emotionally attentive, and often highly skilled — many scam operations employ professional writers who specialize in romantic messaging and maintain simultaneous conversations with dozens of victims. The emotional calibration can be sophisticated: scammers learn what each victim needs (validation, adventure, companionship) and provide exactly that.
- Identity building: Over weeks to months, the scammer builds a detailed, consistent false identity, shares "photos" (stolen from real people's public social media), and may engage in voice or video calls — increasingly using AI-generated audio and video deepfakes that can convincingly replicate a fictional person's appearance and voice in real time.
- The crisis: Eventually, an emergency arises that requires money: a medical crisis, a business deal requiring a temporary cash infusion, customs fees to release a package containing gifts, or a stranded family member. The first request is usually small — sometimes as little as a few hundred dollars. If successful, it is followed by others, with each crisis creating urgency and emotional pressure. Victims report feeling that refusing to help would be a betrayal of the relationship they have built.
- Exit: When the victim can no longer send money, or when they discover the fraud, the scammer disappears. Victims frequently find that the account they communicated with has been deleted, the phone number disconnected, and every trace of the identity they knew — emails, messages, profiles — suddenly gone.
The psychological manipulation tactics used in romance scams draw explicitly on social psychology research, even if the scammers themselves have no formal psychological training. The "love bombing" phase exploits reciprocity and commitment-and-consistency effects (Cialdini, 1984): once a victim has invested emotionally in a relationship, they are motivated to remain consistent with that investment. The small initial request exploits the foot-in-the-door effect: people who comply with a small request are more likely to comply with larger ones later. The artificial urgency of each crisis exploits scarcity and threat-reactance effects that bypass deliberative reasoning. The overall architecture of the scam is a systematic attack on the cognitive processes that would normally enable someone to recognize fraud.
⚠️ Critical Caveat: Romance scam victims are frequently mocked or dismissed on the assumption that "obviously fake" profiles are only believed by gullible people. The research is unambiguous that this assumption is wrong. Victims are not disproportionately low-intelligence or socially naive. They are disproportionately people experiencing loneliness, recent loss (divorce, bereavement), or life transitions — which is to say, people whose normal social connections have been disrupted and who are genuinely open to connection. The scam is effective precisely because it targets real and legitimate human needs. This reality has an important implication for how we communicate about romance scams: victim-blaming language ("how could they not have known?") discourages reporting, delays recovery, and misunderstands the psychology of the harm. The appropriate frame is that skilled, organized fraud operations successfully deceive intelligent, capable people — because that is what the evidence shows.
33.5 Who Is Targeted: The Particular Vulnerability of Older Adults
FBI IC3 data consistently find that romance scam losses are highest for adults over fifty. A 2021 FTC report found that adults over seventy reported the highest median individual loss amounts — significantly higher than younger age groups. In 2023, adults over sixty accounted for approximately 28% of romance scam reports but over 40% of total financial losses, reflecting the combined effect of higher per-victim loss amounts and greater accumulated savings that provide larger targets.
The pattern of vulnerability is not random. Research by Whitty and Buchanan (2012, 2016) on romance scam victim profiles identified several consistent risk factors: recent relationship dissolution (divorce, bereavement), social isolation, previous positive experience with online connection (making online relationships feel normal and trustworthy), and higher emotional expressiveness (which both motivates seeking connection and makes it easier for scammers to mirror and validate). Importantly, Whitty's research found no significant relationship between victim intelligence, income, or education and victimization — a finding directly contradicting the stereotyped "gullible victim" narrative.
The reasons why older adults suffer higher losses are multiple and intersecting. They are more likely to have accumulated savings that represent meaningful targets; they may be more recently widowed or divorced and experiencing acute loneliness; they may have less familiarity with the technical markers of online fraud (reverse image search, domain registration checking); and they often grew up in a social environment where expressed romantic interest was taken at face value, since the institutional infrastructure for mass-scale romantic fraud didn't yet exist. A seventy-year-old who came of age in the 1970s, before the internet existed, learned to relate to romantic pursuit through face-to-face or telephone communication where the verifiability of identity was not a question one asked. Applying that social framework to online relationships is not a cognitive failure — it is the application of previously adaptive social norms to an environment that those norms were not designed for.
The specific financial mechanisms of elder romance scam losses are worth examining. Scammers frequently direct victims to transfer money in ways that are difficult to reverse: wire transfers, cryptocurrency transfers (particularly through Bitcoin ATMs in convenience stores, a common vehicle for elder scam losses), gift cards, and money-order services. Research by the Consumer Financial Protection Bureau on elder financial exploitation found that victims of romance scams showed significantly elevated rates of these irreversible transfer methods compared to elder victims of other fraud types — suggesting that scammers specifically steer victims away from bank transfers that might trigger fraud alerts or allow reversals.
Psychological research on recovery from romance scam victimization (Whitty, 2018; Cross, 2018) has identified what Cross calls a "double hit" — the simultaneous loss of the financial resources and the loss of the relationship. Victims must process both a financial crisis and a grief experience: they have lost not only money but a person who, for months or years, was a meaningful part of their emotional life. That person never existed, which creates a grief without a standard social script — no funeral, no obituary, no community acknowledgment of the loss. The shame that many victims feel compounds the isolation, making it less likely that they reach out for support.
This does not reflect gullibility — it reflects a mismatch between social norms internalized in a previous era and the predatory infrastructure of the present one. Understanding this mismatch is essential for both preventing victimization and avoiding the shame-based framing that discourages victims from reporting and seeking help.
Gender and Romance Scam Victimization
The gender distribution of romance scam victims differs somewhat from the public stereotype. While media coverage often features older women as the paradigmatic victims, FTC and IC3 data show that men are also victimized at significant rates — and may be even more reluctant to report. Research by Cross (2018) in Australia found that male victims in her sample were substantially less likely than female victims to seek help or disclose their victimization, citing shame and concerns about social judgment of being deceived ("a man should have known better"). The stigma of victimization, in other words, is gendered: female victims face the stigma of loneliness and gullibility; male victims face an additional layer of stigma around competence and susceptibility.
LGBTQ+ individuals face a specific scam variant that targets gay and bisexual men: "sextortion" through fake gay hookup or dating profiles, where victims are encouraged to share intimate images that are then used as leverage for money. This variant exploits the real vulnerabilities of being closeted or being in a culture where outing carries social costs — the threat being not simply "pay us or we'll release your images" but "pay us or we'll tell people you're gay." Research by GLAAD and the Cyber Civil Rights Initiative has documented this specific predatory pattern, which combines elements of romance scam, NCII threat, and extortion.
33.6 Algorithmic Discrimination in Dating Apps
The Swipe Right Dataset's racial patterns are synthetic but modeled on real research findings. Multiple studies have documented racial hierarchies in swiping and messaging behavior on dating apps — findings that replicate patterns first documented in OkCupid's own analysis of its data, published in 2014 by co-founder Christian Rudder in the book Dataclysm.
Rudder's data found that users rated Black women and Asian men as significantly less attractive than other groups, and that messaging behavior followed these ratings. These findings have been replicated in experimental and observational research. Feliciano, Robnett, and Komaie (2009) found systematic racial exclusion in online dating partner preferences. Fiore and Donath (2005) found that messaging behavior reflected racial hierarchies that closely tracked societal racial hierarchy.
The question for platform design is: what does an app do with this information?
One approach is to optimize for expressed preference — show users what they seem to want, even if what they seem to want encodes racial hierarchy. This maximizes short-term engagement but amplifies discrimination.
A second approach is algorithmic correction — adjusting the recommendation algorithm to reduce the amplification of racial bias, essentially auditing the output of the matching algorithm against demographic data and correcting for systematic disparities. Critics argue this involves the platform "overriding user preferences," which feels paternalistic. Advocates argue that platforms already manipulate what users see through many algorithmic choices; the question is not whether to intervene but what values to build into that intervention.
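💻 Code Sketch: What the second approach could look like in practice, sketched minimally. The column names (group, impressions, quality_score, rank_score) refer to a hypothetical schema for something like the Swipe Right Dataset, not a real platform API, and the parity rule used here is one of many possible corrections.

```python
# A minimal sketch of an exposure audit plus correction, under assumed
# column names. Not a production fairness system.
import pandas as pd

def exposure_audit(df: pd.DataFrame) -> pd.DataFrame:
    # Compare observed exposure per group against what profile quality
    # alone would predict.
    g = df.groupby("group").agg(
        observed=("impressions", "mean"),
        expected=("quality_score", "mean"),
    )
    g["disparity"] = (g["observed"] / g["observed"].mean()) / (
        g["expected"] / g["expected"].mean()
    )
    return g

def corrected_scores(df: pd.DataFrame, audit: pd.DataFrame) -> pd.Series:
    # Divide each profile's ranking score by its group's disparity ratio,
    # boosting systematically under-shown groups toward parity.
    return df["rank_score"] / df["group"].map(audit["disparity"])
```

A disparity ratio below 1 means a group is shown less than its profile quality alone would predict, which is exactly the suppression pattern the chapter opened with.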
A third approach is radical transparency — showing users their own patterns, as OkCupid's experiments have done, and allowing users to reflect on whether their automated preferences match their stated values. Research suggests that some users, when shown data about their own racial biases in swiping, do shift behavior.
📊 Research Spotlight: The synthetic Swipe Right Dataset models a key finding from the literature: users who report algorithmic profile suppression (the experience of believing their profiles are being shown to fewer potential matches than expected) show significantly lower satisfaction scores and are more likely to report intention to leave the platform. If this pattern is accurate — and real-platform data from OkCupid and Hinge suggest it is — then algorithmic discrimination is not only a justice issue; it is a business problem, since it drives away users who feel excluded.
Disability and Algorithmic Exclusion
Racial discrimination is the most extensively documented form of algorithmic exclusion in dating apps, but it is not the only one. Research on disabled users' experiences with dating apps — an understudied area, in part because apps rarely collect disability status data — consistently finds that disabled people face both interpersonal discrimination (explicit rejection, unsolicited medical inquiries, fetishization) and structural barriers that apps do little to address.
Algorithmic systems that optimize for match rates and engagement may inadvertently disadvantage disabled users whose profile interaction patterns differ from non-disabled users — not because they are less desirable but because the forms of interest they generate are different in timing, frequency, or pattern. A user with a chronic illness that produces variable energy levels may have inconsistent login patterns; an algorithm that weights recency of activity in its desirability scoring may score them lower without any human judgment about their attractiveness.
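💻 Code Sketch: A minimal illustration of the recency-weighting problem just described, with invented numbers. Two users have identical base scores; one logs in daily, the other's logins track a variable-energy pattern.

```python
# Sketch of recency-weighted desirability scoring. The half-life and
# base scores are arbitrary; the decay shape is the point.
import math

def desirability(base_score: float, days_since_active: float,
                 half_life_days: float = 3.0) -> float:
    # Exponential recency decay, a common engagement-weighting pattern.
    return base_score * math.exp(-math.log(2) * days_since_active / half_life_days)

steady = [desirability(0.8, d) for d in (0, 1, 0, 1)]      # logs in daily
variable = [desirability(0.8, d) for d in (0, 6, 12, 2)]   # variable energy
print(sum(steady) / 4, sum(variable) / 4)  # same base score, lower average rank
```

Any decay of this shape produces the same effect: identical attractiveness, systematically lower ranking, with no human judgment involved at any step.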
Research by Alper (2017) on disability and digital media, and by Slater et al. (2018) on autistic people's use of dating apps, documents the ways in which app design embeds neurotypical social norms and creates friction for users whose social cognition and communication styles differ from those assumptions. The swipe interface itself — a rapid aesthetic judgment with no space for context — may be particularly ill-suited to users for whom appearance is less primary in attraction, or for whom rapid social judgment is cognitively challenging.
Body size discrimination in dating apps — though less studied than racial discrimination — follows similar patterns. Research documents that users with higher body weight receive lower match rates even controlling for other profile characteristics. Whether algorithmic systems amplify this discrimination through engagement-optimization loops similar to those documented for race has not been definitively established, but the hypothesis is theoretically coherent and consistent with preliminary data.
33.7 Surveillance in Relationships: Stalkerware and Controlling Apps
While the previous chapter discussed stalking as a pattern of unwanted contact, technology has created new forms of surveillance specifically designed for intimate relationships — applications that allow one person to secretly monitor another's location, messages, calls, and online activity.
"Stalkerware" — applications designed to hide themselves on a device while transmitting the device owner's communications and location to a third party — has been documented by cybersecurity researchers including those at Kaspersky Lab and the Coalition Against Stalkerware. These applications are frequently marketed as "parental monitoring tools" or "employee monitoring tools" but are commonly used by intimate partners, usually abusive ones. Specific applications documented in research and advocacy reports include FlexiSPY, mSpy, Hoverwatch, Spyic, and numerous others. These tools can intercept text messages and emails, record phone calls, access camera and microphone, track GPS location in real time, monitor social media activity, and log keystrokes — essentially providing comprehensive surveillance of every digital activity on the target device.
The Coalition Against Stalkerware, formed in 2019 as a collaboration between cybersecurity firms and domestic violence organizations, has tracked a significant increase in stalkerware installation rates correlating with periods of increased intimate partner contact — a pattern consistent with abusive partner surveillance. Kaspersky's annual Global Research and Analysis Team reports consistently find stalkerware on thousands of user devices in dozens of countries, with elevated rates in contexts of intimate partner conflict.
The National Network to End Domestic Violence's Safety Net project has documented stalkerware as an increasingly common feature of abusive relationship dynamics. Victims report that partners' use of monitoring apps produces profound loss of privacy and autonomy — the sense of being watched constantly, never able to have a private thought or conversation. This surveillance extends and intensifies the coercive control dynamic discussed in Chapter 32: the abusive partner knows when the victim is seeking help, who they are talking to, and what they are planning. Safety planning — the process of preparing to leave an abusive relationship safely — becomes significantly more dangerous when the abusive partner has surveillance access to the victim's device. Domestic violence organizations increasingly counsel victims to perform safety planning on devices the abusive partner doesn't have access to, using apps that don't appear in the device's standard app list.
Platform responses to stalkerware have been inconsistent and often inadequate. App stores have policies against applications designed for covert surveillance, but enforcement is imperfect — stalkerware apps are frequently repackaged and resubmitted after removal, and the marketing language ("monitor your child's device") makes enforcement genuinely difficult. Apple and Google have taken more active steps in recent years to detect and remove stalkerware from their stores, but the pattern of removal and resubmission continues.
🔵 Ethical Lens: Some location-sharing apps (Find My Friends, Life360) are used consensually between partners who both choose to share location with each other. The ethically relevant distinction is consent and mutuality — both partners can see both locations, and either can turn it off. Stalkerware is specifically designed to be invisible and one-directional. The technology is similar; the ethics are entirely different. A useful test: if the person being monitored would object if they knew about the monitoring, and the monitoring application is designed to prevent them from knowing, the ethical threshold has been crossed.
33.8 Deepfakes and Sexual Exploitation
Deepfake technology — AI-generated synthetic media that can convincingly superimpose one person's face onto another person's body — has created a new category of intimate harm: non-consensual synthetic intimate imagery.
Unlike traditional NCII (which requires the existence of actual intimate images), deepfake NCII can be created from nothing more than publicly available photos. Any person who appears in photographs — which, in 2026, includes virtually everyone — can theoretically be depicted in synthetic sexual imagery they never consented to participate in.
The scale of the problem is difficult to measure, but a 2019 report by Deeptrace Labs, an Amsterdam-based research firm, estimated that at that time the vast majority of deepfake videos online were non-consensual intimate imagery of women. The technology has become substantially more accessible since then.
The legal landscape for deepfake NCII is newer and less developed than for traditional NCII. Many existing NCII laws were written before deepfakes were technically feasible and contain language requiring the image to be "of" the victim — language that prosecutors argue does not cover synthetic imagery in which the victim's actual body does not appear. Legislative updates are ongoing.
The Weaponization of Deepfakes Beyond Celebrities
Early media coverage of deepfake harm focused disproportionately on celebrity victims — high-profile cases where deepfake technology was used to create synthetic pornography featuring famous actresses and public figures. This focus, while generating attention, obscured the more widespread harm: non-celebrity women whose deepfake imagery is created and distributed by people who know them personally.
Research by Chesney and Citron (2019) on deepfakes as a weapon of harassment documents cases in which deepfake technology was used specifically against ex-partners, colleagues, and women who had rejected romantic advances — precisely the entitlement and rejection-aggression dynamics discussed in Chapter 32, now equipped with a new and particularly disturbing tool. The deepfake case in this context combines the harm of NCII (violation of intimate imagery norms) with the falseness of defamation (the creation of a record that doesn't reflect reality) in a combination that existing law is poorly equipped to address.
The psychological impact of deepfake NCII differs in some ways from traditional NCII. Victims of traditional NCII are dealing with the exposure of something that really happened — an image they genuinely created. Victims of deepfake NCII face the additional horror of a record suggesting something happened that never did — a potential for public belief in fabricated events, and a potential inability to definitively disprove the image's authenticity to everyone who sees it. Research on this specific harm is nascent, but early qualitative work documents the distinctive nature of the psychological experience.
The technical means of detecting deepfakes — AI-based detection tools that identify statistical artifacts of synthetic generation — are in an ongoing adversarial arms race with the tools that produce deepfakes. Current detection tools can identify many deepfakes produced by today's models; tomorrow's models will produce more convincing outputs. This arms race has no obvious resolution short of fundamental changes in how digital provenance is established — for instance, through cryptographic authentication of images at the point of capture.
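💻 Code Sketch: The point-of-capture authentication idea mentioned above, reduced to its cryptographic core. This is a sketch of the concept only; real provenance standards such as C2PA involve certificate chains, metadata manifests, and hardware key storage. It assumes the Python cryptography package.

```python
# Minimal point-of-capture provenance: the capture device signs the image
# bytes, and anyone holding the device's public key can later verify that
# the file is unmodified since capture.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At capture time (inside the camera or capture app):
device_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw image data..."
signature = device_key.sign(image_bytes)

# Later, verification by anyone with the device's public key:
public_key = device_key.public_key()
try:
    public_key.verify(signature, image_bytes)
    print("image verified: unmodified since capture")
except InvalidSignature:
    print("image altered or not from this device")
```

The design shifts the question from "can we detect synthesis after the fact?" (an arms race) to "can this file prove its own origin?" (a verifiable property).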
33.8A Algorithmic Amplification of Harm
Individual perpetrators of digital intimate harm — the ex-partner who posts NCII, the stalker who monitors location, the scammer who constructs a false romantic identity — are the most visible actors in technology-facilitated harm. But a less visible actor shapes the scale and reach of that harm: the platform algorithm that determines what content is shown, to whom, and how broadly it is distributed.
Algorithmic amplification refers to the process by which platform recommendation and distribution algorithms increase the reach of content beyond what organic sharing alone would produce. In the context of intimate harm, this amplification can operate in at least three distinct ways.
Harassment campaign amplification occurs when coordinated harassment targeting a specific person is amplified by recommendation algorithms that surface high-engagement content regardless of whether that engagement is positive or negative. Hate raids — coordinated attacks on streamers where large numbers of abusive accounts flood a live stream's chat simultaneously — have been documented on Twitch and YouTube, where the sudden spike in activity can briefly push the targeted stream's content into recommendation feeds, dramatically increasing the number of people exposed to the harassment and providing a larger pool of potential secondary harassers. Research by Jhaver and colleagues (2019, 2021) on coordinated harassment on social platforms documents that platform architecture — specifically, the lack of coordinated-behavior detection in recommendation systems — enables harassment campaigns that individual human-to-human harassment could not achieve.
Content spread in NCII cases is significantly amplified by recommendation algorithms on platforms that host sexual content. Research by the Cyber Civil Rights Initiative and Platform Harm Studies (a research consortium) finds that NCII posted to platforms with weak moderation is frequently recommended to users by content-similarity algorithms before moderation systems catch it — creating a window of rapid spread that can distribute intimate imagery to thousands of users within the first hours of upload. The speed asymmetry (algorithmic amplification operates in seconds; moderation operates in hours or days) is one of the central technical challenges in NCII response.
Scam ecosystem amplification occurs when romance scam operations use platform advertising tools and recommendation algorithms to identify and reach vulnerable populations. Research by the Stanford Internet Observatory and the Global Anti-Scam Organization has documented sophisticated scam operations that use Facebook and Instagram advertising to target recently divorced, recently widowed, or recently relocated individuals — demographics identified through behavioral data as being in elevated vulnerability states. The platform's own targeting tools — designed for commercial advertising — are repurposed for predatory contact at scale.
Platform responses to algorithmic amplification of harm have been slow and partial. Twitch implemented anti-hate-raid measures including follower verification and account age requirements after significant pressure from targeted streamers, particularly Black and LGBTQ+ creators who were disproportionately targeted. Meta has implemented some restrictions on NCII-related targeting. But the fundamental tension — that engagement-optimization algorithms are indifferent to whether the engagement they generate is positive or harmful — has not been resolved, because resolving it would require accepting lower overall engagement metrics, which conflicts with advertising-revenue business models.
💡 Key Insight: The distinction between algorithmic amplification and individual perpetration matters for accountability and prevention. Individual perpetrators are subject to criminal law; algorithms are not. Holding platforms accountable for their algorithms' role in amplifying harm requires a different legal framework than holding individuals accountable for their actions — one focused on platform design choices and their foreseeable consequences rather than on intent.
33.8B Regulating Digital Intimate Harm: Comparative Approaches
The regulatory landscape for technology-facilitated intimate harm is one of the most active areas of law and technology policy in the current moment, and it varies significantly across jurisdictions in ways that create important natural experiments for understanding what actually works.
The EU Digital Services Act (DSA), which came into full force in 2024, represents the most comprehensive regulatory approach to platform harm currently in effect in any major jurisdiction. The DSA imposes obligations on "very large online platforms" (defined by user count, currently set at 45 million EU users) to assess and mitigate "systemic risks" — including risks arising from gender-based violence, child safety, and coordinated harmful behavior — to conduct annual risk assessments, to submit those assessments to EU regulators, and to implement mitigation measures that regulators can evaluate. Platforms that fail to comply face fines of up to 6% of global annual revenue — a figure large enough to be meaningful even for trillion-dollar companies. The DSA also imposes transparency obligations: platforms must provide researchers with access to data sufficient to assess systemic risks, including researchers studying intimate harm patterns. Early implementation assessments have been cautiously positive — regulators have opened investigations into several major platforms, and at least one major platform has changed its content recommendation architecture in the EU market in response.
The U.S. approach remains primarily governed by Section 230's liability shield, with patchwork legislation addressing specific harms (NCII, FOSTA-SESTA for sex trafficking-adjacent content, COPPA for children's privacy). The fundamental difference from the DSA approach: U.S. law focuses on removing liability for specific categories of content rather than imposing affirmative obligations to assess and mitigate harm. Critics argue this approach is structurally inadequate — it creates legal safe harbors for harms the law didn't anticipate, requires constantly updating specific carve-outs as new harms emerge, and places the burden of proof on victims who must establish platform liability rather than on platforms to demonstrate they have taken reasonable preventive measures.
The UK Online Safety Act (2023) takes an approach closer to the EU's — imposing duty-of-care obligations on platforms rather than simply expanding liability categories. Under the UK legislation, platforms are required to assess the risks of illegal content on their services, to implement systems to address those risks, and to produce transparency reports. The Act creates specific categories of "priority illegal content" that platforms must take additional steps to prevent, including NCII, content promoting self-harm, and content that facilitates child sexual exploitation. The UK approach explicitly extends to NCII and deepfake NCII through subsequent legislation, and creates a regulatory body (Ofcom) with enforcement authority.
Australia's Online Safety Act (2021) is notable for creating an "Online Safety Codes" system — industry-developed codes of conduct that, if registered by the eSafety Commissioner, are treated as compliance with the Act's requirements. This approach attempts to make industry self-regulation legally binding while retaining regulatory oversight. The Act also creates a "removal notice" system through which the eSafety Commissioner can directly order platforms to remove NCII and other harmful content, with legally binding effect. Early research on the removal notice system suggests it is faster than victim-initiated takedown requests in getting harmful content removed — an important finding given the speed asymmetry documented above.
What advocates want across jurisdictions tends to converge on several themes: mandatory rapid-response NCII removal systems with legal backing; transparency requirements about algorithm design and content moderation decisions; funding for victim support services and access to legal remedies; and research access provisions that allow independent study of platform harm patterns. The EU DSA's researcher access provisions are considered by many advocates the most significant innovation in this space — because independent research is the foundation for evidence-based regulation, and platforms have historically controlled the data that would allow researchers to assess the harms their systems produce.
⚖️ Debate Point: Some technology policy researchers argue that heavy-handed platform regulation risks unintended consequences — including suppression of legitimate speech, displacement of harm to less-regulated platforms, and disproportionate burden on smaller platforms that cannot afford compliance infrastructure. Others argue that these concerns, while real, are outweighed by the documented scale of harm that current under-regulation enables. The empirical evidence on what regulatory approaches actually work — as opposed to what theoretically should work — is still accumulating as the DSA, OSA, and Australian OSA gather implementation data. The field is watching closely.
33.9 Children and Digital Romantic Predation
The chapter's content on adults navigating digital courtship exists alongside a more serious and less ambiguous harm: adults using digital platforms to groom and abuse children.
Online predation typically works through a grooming process: the predator establishes connection with a child or adolescent through a gaming platform, social media, or dating app (where age verification is often inadequate), builds trust over time through gradually escalating emotional intimacy, and eventually requests sexual images or seeks in-person contact.
Research by the Crimes Against Children Research Center (Finkelhor et al.) has documented that contrary to popular panic, most online predators do not deceive victims about their nature or intent. Particularly in adolescent cases, victims often know they are in contact with an older person. This does not make the contact less exploitative — it makes it more important to understand the psychological dynamics of grooming, which progressively normalize boundary violations.
Platform responsibility in this domain is particularly fraught because the same features that enable predatory contact (private messaging, age ambiguity, lack of identity verification) are also features that LGBTQ+ youth in hostile home environments depend on for lifesaving peer connection. Simplistic solutions (eliminate private messaging for minors) create their own harms.
33.10 Doxxing and Coordinated Harassment
Doxxing — the publication of private personal information (home address, workplace, phone number, family members' information) — has become a tool of coordinated harassment against people for their sexual and romantic lives. Women who write publicly about gender and sexuality, people who reject or expose abusive partners, and people whose sexual or romantic lives become objects of community disapproval have all been targeted by coordinated doxxing campaigns.
The harm of doxxing extends beyond the exposure of information itself. The purpose is typically to facilitate harassment — to enable strangers to show up at someone's home, call their employer, or flood their communications with threatening messages. The asymmetry is dramatic: a single perpetrator can organize a large number of strangers against a single target.
Research on gendered online harassment (Mantilla, 2015) documents what Mantilla calls "gendertrolling" — coordinated, gender-based harassment campaigns targeting women who speak publicly about gender, sexuality, and related issues. The campaigns deploy doxxing, NCII threats, and mass-message abuse to create an environment of fear and to drive targets out of public discourse.
Doxxing, NCII, and the Weaponization of Intimate Information
The connection between doxxing and NCII is worth examining carefully, because they are frequently used together as a coordinated harassment strategy. NCII without doxxing limits the harm somewhat: the images may be seen by strangers on pornographic platforms, but the victim's real-world identity, workplace, and address are not connected to them. Doxxing without NCII is harmful (the publication of address and phone number enables physical harassment) but lacks the specifically intimate violation dimension. Together, they create a particularly severe harm: intimate images connected to identifying information, enabling targeted exploitation.
The combination is documented in the research on intimate partner violence in digital contexts. Research by Lenhart et al. (2016) found that threats to share intimate images (NCII threats) and the actual sharing of location and personal information were components of a broader pattern of digitally-facilitated intimate partner abuse. The threat alone — "I'll post your photos unless you..." — functions as coercion even when no images are ever actually posted, producing compliance through fear.
This coercive use of intimate information — its holding-in-reserve as leverage — represents a form of digital intimate terrorism that extends and intensifies the coercive control patterns discussed in Chapter 32. The digital dimension creates new vulnerabilities: images taken during a relationship that was consensual can be weaponized after consent is withdrawn. The person who consented to being photographed did not consent to that image becoming a coercive instrument; the retrospective weaponization of intimate information is a specific harm that existing consent frameworks (focused on the moment of image creation) inadequately capture.
33.11 Platform Responsibility: Section 230 and Its Limits
Section 230 of the Communications Decency Act (1996) provides that online platforms shall not be treated as the publishers of content their users post, shielding them from most liability for user-generated content. This legal protection was essential for the development of the internet as a space for user-generated content — without it, no platform could afford the legal risk of hosting billions of posts.
It has also, critics argue, allowed platforms to externalize the harms of user-posted content: NCII, doxxing, harassment, and fraud from which platforms profit (through engagement, subscription fees, advertising) while avoiding responsibility.
Legislative pressure has produced some changes. FOSTA-SESTA (2018) created liability exceptions for platforms hosting sex trafficking content. Bipartisan legislation has been introduced to create exceptions for NCII. The Child Online Safety Act has addressed some predation-facilitation liability. But comprehensive platform accountability legislation has not yet passed in the U.S.
Platform self-governance has been uneven. Meta, Google, and Snap have all implemented NCII removal tools (hash-matching technology that can identify known NCII images and block re-upload). The StopNCII initiative, operated by the UK charity SWGfL, allows victims to create a digital fingerprint of their images that is then used to prevent upload across participating platforms. These tools are meaningful — and insufficient.
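💻 Code Sketch: The hash-matching idea behind tools like StopNCII, in toy form. Real systems use perceptual hash algorithms such as PDQ; this "average hash" is a simplified stand-in that shows why a fingerprint can match a re-encoded or resized copy without the platform ever storing the image. It assumes the Pillow library.

```python
# Toy perceptual hash: survives small edits (resizing, recompression),
# so uploads can be checked against a stored fingerprint alone.
from PIL import Image

def average_hash(path: str) -> int:
    # Shrink to 8x8 grayscale; each bit records whether a pixel is above
    # the mean brightness. Small edits rarely flip many bits.
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / 64
    return sum((1 << i) for i, p in enumerate(pixels) if p > mean)

def is_blocked(candidate: int, fingerprint: int, max_distance: int = 5) -> bool:
    # Hamming distance between hashes; near-duplicates land within a few bits.
    return bin(candidate ^ fingerprint).count("1") <= max_distance

# The victim submits only a fingerprint, never the image; participating
# platforms compare at upload time, e.g.:
# is_blocked(average_hash("upload.jpg"), stored_fingerprint)
```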
The Business Model Problem
A more fundamental critique of Section 230 and platform self-governance goes beyond the adequacy of specific tools to the underlying business model. Social media platforms and dating apps generate revenue through engagement — time on platform, ads viewed, subscriptions purchased. Content that generates intense emotional responses, including anger, fear, and sexual arousal, tends to produce high engagement. Harassment, NCII, and doxxing produce intense emotional responses in both perpetrators (who are engaged) and targets (who often remain on platform to monitor what is being said about them, even under duress).
This is not a conspiracy theory — it is basic incentive analysis. Platforms that are rewarded financially for engagement have structural reasons to be slow in removing content that generates engagement, even when that content is harmful. The research on content moderation response times consistently finds that high-engagement harmful content is removed more slowly than low-engagement harmful content, which is consistent with the engagement-incentive hypothesis.
The remedy that critics propose is not the elimination of Section 230 (which would be counterproductive, collapsing the liability protections that make user-generated content platforms viable) but the restructuring of platform incentives — through duty-of-care legislation (platforms have affirmative obligations to reduce foreseeable harms), transparency requirements (platforms must publish data on harm removal rates and response times), and potentially algorithmic accountability mechanisms that allow independent audit of the choices platforms make about what content is amplified.
33.12 Design Ethics: How Apps Could Be Built Differently
The Swipe Right Dataset analysis ends with a question that is, arguably, the most important one in the chapter: given everything we know about how platform design creates and amplifies harm, how could apps be built differently?
Design ethics frameworks (Friedman & Hendry, 2019) provide a useful vocabulary. Value-sensitive design asks developers to identify, from the design phase, whose values are being served and whose are being harmed by design choices. In dating app contexts, this means asking:
- For whom does this algorithm optimize? If it optimizes for engagement, it optimizes for the majority's preferences, which encode existing biases.
- Who bears the costs of design choices? Algorithmic suppression is experienced by users of color; the cost is distributed unequally.
- What would it mean to design for dignity as well as efficiency? Not every optimization question has a single right answer, but "what do we lose by optimizing only for swipes?" is a design question, not only a social justice question.
Specific design interventions that researchers and advocates have proposed include: requiring users to confront data on whom they actually matched with (not only whom they swiped right on), to create accountability between stated preferences and actual behavior; implementing algorithmic corrections that detect and reduce racially asymmetric suppression (a minimal audit of this kind is sketched below); making it friction-free to report profiles for harassment; and creating opt-in "verification" pathways that give verified users enhanced visibility.
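The second intervention, detecting asymmetric suppression, reduces to a straightforward audit: compare each group's realized exposure against the exposure its profile quality alone would predict, and flag groups that fall below parity. The field names and threshold below are illustrative assumptions, not any platform's actual metric.

```python
from collections import defaultdict

def exposure_audit(profiles, disparity_threshold=0.8):
    """Flag groups whose realized exposure falls below a fraction of
    their quality-predicted exposure. Each profile is a dict with keys
    'group', 'impressions', and 'predicted_impressions' (the output of
    a model of profile quality alone). Illustrative sketch only."""
    actual = defaultdict(float)
    predicted = defaultdict(float)
    for p in profiles:
        actual[p["group"]] += p["impressions"]
        predicted[p["group"]] += p["predicted_impressions"]

    flagged = {}
    for group in actual:
        ratio = actual[group] / predicted[group]  # 1.0 means parity
        if ratio < disparity_threshold:
            flagged[group] = round(ratio, 2)
    return flagged

profiles = [
    {"group": "A", "impressions": 950, "predicted_impressions": 1000},
    {"group": "B", "impressions": 610, "predicted_impressions": 1000},
]
print(exposure_audit(profiles))  # {'B': 0.61}: group B is under-exposed
```

What a platform does with a flagged gap, reweighting its ranking stage, capping the feedback loop, or simply publishing the disparity, is where the technical question becomes the design-ethics question.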
✅ Evidence Summary: Research on whether algorithmic corrections to racial bias in dating apps work is nascent. A 2023 study by Chopra, Yildirim, and colleagues found that users presented with information about their own racial swiping patterns showed some behavioral change in subsequent sessions — though the effect was modest and decayed over time. The design space here is new, and the ethical imperative is clear even where the technical solutions are still being worked out.
33.13 Legal and Policy Responses: The State of the Field
The legal and policy landscape for technology-facilitated intimate harm is one of the fastest-moving areas of law, which means that any specific description of current law is partially obsolete by the time it is published. As of 2026, the key developments are:
- NCII criminalization in forty-eight U.S. states; the federal DEFIANCE Act (2024) creating a federal civil cause of action
- Deepfake NCII legislation in development at federal and state levels, with specific challenges around the "synthetic imagery" definitional gap
- Romance scam enforcement primarily through FTC consumer protection actions and FBI IC3 investigations; prosecutions of individual scammers remain rare because the criminal networks operate across international jurisdictions
- Stalkerware regulation developing through FTC enforcement actions against app developers marketing covert monitoring tools for intimate partner surveillance
- Platform liability remaining primarily governed by Section 230, with ongoing legislative debate
🧪 Methodology Note: Students researching this area should use primary sources (FTC, FBI IC3, Cyber Civil Rights Initiative reports) rather than secondary news coverage, which frequently overstates the reach of specific laws or enforcement actions. The gap between what law exists and what law is enforced is particularly large in this domain.
33.14 Support and Recovery for Digital Abuse Victims
Survivors of NCII, romance scams, catfishing, and digital stalking face specific challenges in seeking support:
Shame and self-blame are pervasive and delay help-seeking. Survivors of NCII frequently report that they felt responsible for the existence of the images; survivors of romance scams report shame at having been deceived. These responses are understandable and wrong: the perpetrator bears responsibility for the harm.
Practical steps for NCII victims include: documenting the evidence before requesting takedown (screenshots, URL records); using platform-specific reporting processes, which have improved significantly; contacting the Cyber Civil Rights Initiative's crisis helpline (844-878-2274); consulting with an attorney about criminal and civil options; and using the Stop NCII hash-matching tool to prevent further spread.
For romance scam victims: The FTC has clear guidance on reporting. The FBI's IC3 accepts online complaints. Funds are rarely recovered but documentation supports aggregate enforcement. Community support (the subreddit r/Scams, for instance) can provide peer normalization without judgment.
Mental health support: Research by Bates (2017) on NCII survivors found that professional psychological support significantly improves outcomes. Many survivors do not seek professional help, again due to shame. Counselors and therapists who work with digital abuse should be familiar with the specific dynamics — the ongoing nature of the harm, the loss-of-control experience, the intersection with stalking and coercive control.
The Community Support Dimension
One dimension of support that research has increasingly recognized is the role of peer community. Both NCII victims and romance scam survivors report that one of the most significant secondary harms is the sense of isolation — the feeling that no one in their existing social world understands what they are going through or can provide support without judgment.
Online peer communities — while not a substitute for professional support — appear to provide meaningful benefit as a complement to it. The specific value: normalization without judgment. Peers who have experienced similar harms can provide the recognition that "this is not your fault" in a context that doesn't feel performative or clinical. They can share practical information about platform reporting, legal resources, and safety planning from direct experience. And they can provide evidence that survival and recovery are possible — something that abstract professional reassurance cannot always convey.
The caution with online peer support communities is that they can also become spaces where pain is amplified rather than processed — where shared grievance becomes an identity, or where misinformation about legal options and technical solutions circulates. Communities designed with mental health professional consultation and clear moderation principles are more consistently beneficial than organic communities without that support.
For therapists and counselors working with digital abuse victims, familiarity with the specific dynamics is essential. Standard trauma treatment frameworks apply but require adaptation: the ongoing nature of the harm (active stalkerware, still-circulating images) means that standard trauma-processing approaches designed for past events are insufficient; the shame dimension may be more intense than in some other traumas; and the technological dimension requires some fluency with the practical steps that can reduce ongoing exposure. Professional organizations including the American Psychological Association have developed continuing education on digital intimate abuse, and training is increasingly available.
33.15 The View from the Dataset
The Swipe Right Dataset, viewed from the right angle, is a document of human longing — 50,000 synthetic profiles, each representing someone who wanted something. Connection, romance, sex, companionship, the particular feeling of being known. The platform they used was not designed to harm them. It was designed to make money, which required keeping them engaged, which required — this is the quiet logic at the center of all of this — showing them what they wanted to see.
What they wanted to see, in aggregate, encoded the preferences that history made. The algorithm did not invent racial hierarchy in mate selection; it inherited it, optimized for it, and amplified it. The engineers did not intend to build a tool for coordinated harassment; they built messaging features and DM capabilities, and those features were used for things they were not designed for.
"Not designed for" is not the same as "unforeseeable." The researchers who study these platforms, the advocates who work with victims, and — increasingly — the designers themselves knew that these outcomes were foreseeable. The question of why foresight did not produce different design choices is, ultimately, a political and economic question as much as a technical one.
What design ethics asks is that the question be asked before the harm occurs, not after. That the fifty thousand hypothetical people represented by the dataset be imagined as ends, not means. That the optimization question ("what do users want?") be followed by the question that the dataset finally makes unavoidable: "at whose expense?"
Summary
This chapter examined the major categories of technology-facilitated harm in romantic and sexual contexts. We defined non-consensual intimate image sharing and traced its severe, multi-domain harms and the still-inadequate legal response. We distinguished catfishing from romance scams, examining the emotional and financial devastation of the latter and the particular vulnerability of older adults. We analyzed algorithmic discrimination in dating apps through the lens of the Swipe Right Dataset, tracing how optimization for engagement amplifies existing racial bias. We examined stalkerware as a tool of coercive control, deepfake technology as a new vector for intimate exploitation, platform responsibility under Section 230, and the design ethics framework as a tool for imagining digital courtship environments built around dignity rather than only efficiency. Throughout, we have insisted on the distinction between harm that is intended and harm that is foreseeable — and on the ethical weight of that distinction.
This concludes Part VI: The Dark Side. Part VII turns to applied questions: workplace attraction, romantic media and its influence, and the future of courtship.