In This Chapter
- Learning Objectives
- 14.1 Harry Brignull and the Birth of a Concept
- 14.2 The Classic Taxonomy: Brignull's Original Dark Patterns
- 14.3 Social Media-Specific Dark Patterns
- 14.4 The Intent-Effect Gap
- 14.5 The Asymmetry of Expertise
- 14.6 The Regulatory Landscape
- 14.7 Velocity Media: When Good Intentions Meet Optimization Pressure
- 14.8 Maya and the Invisible Architecture
- 14.9 The Space Between Bad Design and Predatory Design
- 14.10 Looking Forward: Design Ethics as a Discipline
- Summary
- Discussion Questions
Chapter 14: What Are Dark Patterns? A Taxonomy of Digital Manipulation
The term "dark pattern" entered the design lexicon in 2010, coined by British UX researcher Harry Brignull to describe a specific category of user interface design choices: those that benefit the company deploying them at the direct expense of the users navigating them. In the decade and a half since Brignull registered the domain darkpatterns.org and began cataloging examples, the concept has traveled from niche UX forums to congressional testimony, European regulatory frameworks, and the front pages of major newspapers. That trajectory tells us something important. Dark patterns are not an anomaly of digital design — they are a systematic feature of an attention economy that rewards engagement over wellbeing, conversion over consent, and retention over satisfaction.
This chapter builds a comprehensive taxonomy of dark patterns, beginning with Brignull's original classification and extending it to capture the social media-specific variants that have emerged as platforms matured. We examine the spectrum from genuinely bad design (which frustrates users accidentally) through deliberately unethical design (which frustrates users on purpose) to predatory design (which targets vulnerable populations with precision-engineered manipulation). We explore the intent-effect gap — the uncomfortable space between what designers consciously intend and what their systems produce — and the profound asymmetry of expertise between platform architects and ordinary users. We close by surveying the emerging regulatory landscape and grounding the abstract in a concrete scenario: a product meeting at Velocity Media where the gap between engagement optimization and user manipulation becomes uncomfortably visible.
Learning Objectives
- Define "dark patterns" and explain how Harry Brignull's original taxonomy was constructed
- Distinguish between bad design, unethical design, and predatory design on a spectrum of intent and harm
- Apply the classic dark pattern taxonomy (roach motel, hidden costs, trick questions, etc.) to real platform examples
- Identify social media-specific dark patterns that extend beyond the original UX framework
- Analyze the intent-effect gap and explain why it does not absolve designers of ethical responsibility
- Describe the asymmetry of expertise between platform designers and users and its ethical implications
- Summarize current regulatory approaches to dark patterns in the EU and United States
14.1 Harry Brignull and the Birth of a Concept
In the summer of 2010, Harry Brignull was working as a user experience consultant when he noticed something that had been nagging the design community without anyone quite naming it. Bad design — confusing interfaces, cluttered layouts, broken flows — was well understood. But there was another category of interface problem that felt qualitatively different. These were interfaces that worked exactly as their designers intended, and that intention was to get users to do things they did not actually want to do.
Brignull's insight was taxonomic: the confusion these interfaces created was not accidental. It was engineered. And because it was engineered to benefit the company at the user's expense, it deserved a name that captured its moral valence. He chose "dark patterns," registered darkpatterns.org, and began building a crowdsourced library of examples. The initial taxonomy identified around a dozen distinct types, each with a vivid name meant to stick in a designer's memory — to make the pattern recognizable, and therefore something to avoid or, for ethically motivated designers, to actively refuse.
14.1.1 What Makes a Pattern "Dark"?
Brignull's original definition was precise: a dark pattern is a user interface design choice that has been carefully crafted to trick users into doing things they did not mean to do, such as buying insurance they did not want or signing up for a newsletter without realizing it. Three elements are essential to this definition.
First, intent on the designer's part. A genuine mistake — a confusing button label that results from insufficient testing — is bad design, not a dark pattern. Dark patterns require at least some awareness, on someone's part in the design chain, that the interface is working against the user. This does not mean every designer who implements a dark pattern is consciously malicious. The intent may exist at the product management level (maximize subscription conversions) without any individual designer fully registering what they are building. But somewhere in the system, the decision was made to optimize against the user's interest.
Second, benefit to the deployer. The classic economic relationship is inverted. Normal product design serves users because serving users generates revenue. Dark pattern design extracts value from users — their money, their data, their attention — in ways that users have not meaningfully consented to and would not endorse if they understood what was happening.
Third, user disadvantage. The user ends up worse off than they would have been had the interface been transparent and honest. This might mean spending money they did not intend to spend, sharing data they meant to keep private, or investing time in a platform in ways that undermine their own stated goals.
14.1.2 The Spectrum of Harmful Design
Brignull's original framework implicitly placed all dark patterns in the same category. But as the concept has matured, researchers have identified important distinctions along a spectrum of severity and intentionality.
Bad design occupies one end: confusing, frustrating, poorly thought-through, but not deliberately manipulative. A modal dialog with two buttons labeled "OK" and "Cancel" when neither label clearly explains what each button does is bad design. It makes users guess. But it does not systematically benefit the company at users' expense.
Unethical design sits in the middle of the spectrum: deliberately engineered to produce outcomes users would not endorse, but doing so through friction and confusion rather than psychological precision. Hiding an unsubscribe button behind five nested menus is unethical design. It works by making the desired action hard rather than by targeting cognitive vulnerabilities.
Predatory design occupies the far end: precision-engineered to exploit specific psychological vulnerabilities, often targeting users who are least able to resist — children, people in emotional distress, users with limited technological literacy. Social media platforms operating at scale, armed with A/B testing infrastructure and behavioral data from hundreds of millions of users, have the capacity to produce predatory design at industrial scale without any individual designer intending to "prey" on anyone.
This spectrum matters for both ethical analysis and regulatory response. It determines culpability, proportionality of harm, and the appropriate remedies.
14.2 The Classic Taxonomy: Brignull's Original Dark Patterns
Brignull's taxonomy has been refined and extended by subsequent researchers, but the original categories remain foundational. We examine each with attention to its mechanics, its prevalence in contemporary digital products, and its specific manifestations in social media contexts.
14.2.1 Roach Motel
Named after the pest control device — you check in but you cannot check out — the roach motel pattern describes any situation where it is easy to get into a relationship or commitment and deliberately difficult to exit it. The asymmetry is the tell: if the process of subscribing takes fifteen seconds but the process of unsubscribing requires a phone call during business hours, a waiting period, and navigation through a retention script designed to make you feel guilty about leaving, that asymmetry is almost certainly engineered.
Amazon Prime's cancellation flow became the canonical example for years. To cancel, users navigated through multiple screens with names like "Keep My Benefits" (the option not to cancel) and "Cancel My Benefits" (the option to proceed), passed through a page reminding them of everything they would lose, encountered a "pause membership" option designed to intercept cancelers who just needed a break, and finally arrived at a confirmation that required one more click after they had already clicked "cancel." The FTC sued Amazon over this flow in 2023, and the company agreed to simplify it.
In social media specifically, the roach motel pattern appears in account deletion flows. Facebook's account deletion process historically required navigating to a non-obvious settings page, distinguishing between "deactivation" (which preserved the account in a dormant state, keeping all data and maintaining the platform's ability to retarget the user) and "deletion" (which actually removed the account after a 30-day grace period during which logging in would reactivate the account). Each step of this flow was designed, consciously or not, to capture users who had decided to leave before they actually left.
14.2.2 Hidden Costs
The hidden costs pattern involves revealing the true price of a transaction at the final step — after the user has already invested time, energy, and psychological commitment in the purchase process. The pattern exploits the sunk cost fallacy: having already spent twenty minutes building a cart, users are less likely to abandon it when a $15 "convenience fee" appears at checkout than they would have been had that fee been disclosed upfront.
Ticketmaster's fee structure is the most-cited example in popular discourse, but the pattern appears throughout digital commerce. In social media, the "hidden cost" is often not financial but attentional or behavioral. Instagram's early design presented itself as a simple photo-sharing app. The costs — chronic comparison behavior, body image effects, data collection at industrial scale, attention extraction that reduced capacity for offline relationships — were not disclosed in onboarding and were not visible until users had already built years of their social lives on the platform.
14.2.3 Trick Questions
Trick questions are interfaces that use confusing wording to get users to answer in ways that benefit the company rather than the user. The classic form involves double negatives in opt-in/opt-out language: "Uncheck this box if you do not want to receive marketing emails from our partners." Leaving the box checked — the default — keeps you subscribed; to opt out, you must first parse the double negative and then act. The grammar is technically correct but experientially treacherous.
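The inverted logic is easier to see in code. A minimal sketch (all names hypothetical) comparing the trick-question checkbox with an honestly worded one:

```python
# Hypothetical checkbox: "Uncheck this box if you do NOT want to receive
# marketing emails from our partners." It arrives pre-checked, so the
# default outcome -- doing nothing -- is staying subscribed.

def receives_marketing(box_checked: bool) -> bool:
    # Checked: the user has not opted out of receiving email.
    # Unchecked: the user opted out.
    return box_checked

# Contrast with an honestly worded opt-in ("Check this box to receive
# marketing emails"), where doing nothing means no email:
def receives_marketing_honest(box_checked: bool) -> bool:
    return box_checked

# The two functions are identical. The manipulation lives entirely in the
# wording and the default state, never in the logic itself.
DEFAULT_TRICK = True    # pre-checked
DEFAULT_HONEST = False  # unchecked

assert receives_marketing(DEFAULT_TRICK) is True         # do nothing -> subscribed
assert receives_marketing_honest(DEFAULT_HONEST) is False # do nothing -> not subscribed
```

The point the sketch makes is that no auditor reading the backend code would find anything wrong; the dark pattern exists only in the interface copy and the pre-checked default.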
Cookie consent banners have become a primary venue for trick questions post-GDPR. A banner that presents "Accept All" in a large, brightly colored button and buries the option to reject non-essential cookies behind "Manage Preferences" — itself requiring navigation through several subcategories — is using the trick question pattern at the level of visual design rather than linguistic confusion. The "question" (do you consent to tracking?) is being answered for you by the interface before you process it.
14.2.4 Misdirection
Misdirection uses visual emphasis, narrative framing, or attention-directing elements to draw users' eyes and cognitive focus to one option while a different option would better serve their interests. A modal dialog with a large, colorful "UPGRADE NOW" button and a tiny, gray "No thanks" link is directing attention to the conversion option through contrast, size, and color — all of the visual hierarchy cues that human perceptual systems are wired to follow.
In social media, misdirection operates at the level of the feed itself. Platforms surface emotionally activating content — conflict, outrage, novelty — because this content captures attention more effectively than mundane but relationship-relevant posts from close contacts. The "misdirection" is away from what users say they want (connection with friends) and toward what the algorithm has discovered keeps users scrolling (emotional provocation). This form of misdirection is algorithmic rather than UI-level, but the underlying logic — draw attention away from users' interests and toward the platform's interests — is identical.
14.2.5 Confirmshaming
Confirmshaming is the practice of labeling the opt-out option in a way that makes users feel ashamed or foolish for choosing it. The name gained currency in the mid-2010s as practitioners began cataloging examples. Classic instances include pop-ups with "Yes, I want to save money!" (opt-in) and "No thanks, I prefer paying full price" (opt-out), or "Sign me up for productivity tips!" versus "No thanks, I don't want to be more productive."
The psychological mechanism is simple: no one wants to identify themselves, even in a purely private interface interaction, as someone who hates saving money or refuses to be productive. The opt-out label exploits the self-concept and the desire to maintain a positive self-image.
In social media, confirmshaming appears in push notification prompts: "Stay in the loop!" (enable notifications) versus "Miss out on updates" (deny notifications). The framing recasts the user's choice from "manage my attention" to "choose between being informed and being ignorant." Snapchat's notification settings have at various points used this pattern explicitly.
14.2.6 Disguised Ads
Disguised ads are advertisements designed to resemble organic content — editorial articles, user-generated posts, search results — so that users do not register them as advertising. The pattern is as old as advertorial journalism but has been refined to a science in digital contexts.
Social media feeds have made disguised ads structurally endemic. Facebook's "Sponsored" label sits in small gray text above a post that is otherwise visually identical to an organic post from a friend. Instagram's sponsored posts, Pinterest's promoted pins, TikTok's paid advertisements — all share the visual language of organic content, distinguished only by regulatory-required disclosures that are deliberately rendered in the least-attention-capturing visual treatment possible.
Research by the Reuters Institute (2016) found that a majority of users could not reliably distinguish paid content from editorial content on news sites, and subsequent research in social media contexts has replicated this finding. The boundary between content and advertisement has been deliberately blurred because that blurring is commercially valuable.
14.2.7 Bait and Switch
The bait and switch pattern involves advertising one outcome to attract users and then delivering a different, inferior outcome once the user has committed. In UX, this typically involves a product promoted with prominent features that are then restricted behind paywalls, or a free service that degrades in quality after accumulating a user base.
Social media platforms have executed bait and switch at civilizational scale. Facebook's original value proposition — a private network for keeping up with friends and family — was the bait. The switch occurred gradually: algorithmic feed curation that deprioritized friends' posts in favor of advertisers' content, systematic data collection that transformed users from customers into products, and privacy defaults that shifted repeatedly in the direction of maximum data exposure. Each individual change was incremental enough to avoid triggering user revolt; the cumulative effect was a product categorically different from what users had signed up for.
14.2.8 Privacy Zuckering
Privacy zuckering — named with pointed irony after Facebook's founder — describes patterns that lead users to share more personal information than they intended. The name entered the dark pattern lexicon following Facebook's repeated controversies around default privacy settings, but the pattern is endemic across the digital ecosystem.
The mechanics include privacy settings defaults that maximize sharing rather than minimize it, settings menus that are complex enough that users give up navigating them, platform features that encourage sharing in the moment without surfacing long-term implications, and onboarding flows that collect extensive personal data under the guise of "improving your experience." Instagram's location tagging, Facebook's "Feeling" status indicators, LinkedIn's automatic disclosure of profile view notifications — each of these features was designed to elicit personal data that users share without fully registering what they are disclosing or how it will be used.
14.3 Social Media-Specific Dark Patterns
The classic Brignull taxonomy was developed with e-commerce and web interfaces in mind. Social media platforms have developed a second generation of dark patterns that are less about transaction manipulation and more about attention extraction, behavioral conditioning, and social obligation. These patterns are subtler, harder to regulate, and often more harmful precisely because they operate on the social fabric of users' lives rather than on their wallets.
14.3.1 Algorithmic Amplification of Outrage
Perhaps the most consequential social media dark pattern is not visible as an interface element at all. It operates in the recommendation and feed-curation algorithms that determine what content users see. Research by Facebook's own data science team (Kramer et al., 2014; Bakshy et al., 2013) established what platform engineers had known from internal experiments for years: emotional content, and particularly outrage-inducing content, generates more engagement than neutral content. Users comment on outrage-inducing posts. They share them. They react to them with the full suite of emoji reactions. From the perspective of an engagement-optimization algorithm, outrage is gold.
The dark pattern here is the gap between what the platform presents itself as doing (showing you content that is relevant and interesting to you) and what it is actually doing (showing you content that keeps you engaged, with "engagement" defined in ways that systematically favor emotional provocation). This is misdirection at the algorithmic level. Users believe they are seeing a curated selection of what their network finds worth sharing; they are actually seeing a curated selection of what makes them angry, anxious, or envious, because those emotions translate to time-on-platform.
The consequences extend beyond individual wellbeing. Algorithmic amplification of outrage has been implicated in political polarization (Bail et al., 2018), the spread of health misinformation (Cinelli et al., 2020), and violence incitement in contexts from Myanmar to Ethiopia (UN Human Rights Council, 2018; Amnesty International, 2022). These harms are not incidental to the platform's design; they are the natural output of an optimization function that was never constrained to avoid them.
14.3.2 Notification Spam with No Real Opt-Out
Modern social media platforms deploy notification systems that have been engineered through A/B testing to maximize the number of notifications users receive — and to make opting out of notifications sufficiently difficult that most users do not succeed in doing so. This is the roach motel pattern applied to attention management.
The notification dark pattern has several layers. First, default settings are maximally permissive: new users receive notifications about everything — likes, comments, new followers, trending content, people they may know, events, advertising messages disguised as platform notifications. Second, notification settings are buried in menus that require multiple navigation steps, and they are organized in ways that make systematic opt-out cognitively taxing — each category requires separate action, and categories are proliferated beyond what any user will patiently navigate. Third, platforms regularly reset notification preferences during app updates or "restore" them to defaults after users have spent time customizing them.
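The layered structure just described can be sketched as a configuration. A minimal illustration (the category names are hypothetical, not any real platform's settings):

```python
# Illustrative notification settings for a hypothetical platform.
# Every category defaults to enabled, there is no master "off" switch,
# and a full opt-out therefore requires one separate action per category.
DEFAULT_NOTIFICATIONS = {
    "likes": True,
    "comments": True,
    "new_followers": True,
    "trending_content": True,
    "people_you_may_know": True,
    "events": True,
    "platform_announcements": True,  # often advertising in disguise
}

def toggles_needed_for_full_opt_out(settings: dict) -> int:
    # Each enabled category must be disabled individually, so the cost
    # of opting out grows linearly with the number of categories.
    return sum(1 for enabled in settings.values() if enabled)

print(toggles_needed_for_full_opt_out(DEFAULT_NOTIFICATIONS))  # 7
```

Proliferating categories is itself the dark pattern: each new category silently raises the interaction cost of a complete opt-out, and a settings reset during an app update returns every value to `True` at zero cost to the platform.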
TikTok's notification architecture, examined by researchers at the Oxford Internet Institute in 2022, sent users an average of 17 notifications per day when notification settings were left at defaults, and the process of disabling them required 14 separate toggle interactions across 6 different settings screens.
14.3.3 Frictionless Sharing and Oversharing Defaults
Social media platforms have systematically designed sharing defaults to maximize the audience for user-generated content. The frictionless sharing pattern makes the broadest possible sharing the path of least resistance, while narrowing the audience (sharing with friends only, or making a post private) requires deliberate additional steps.
Facebook's early "Open Graph" system, announced at the 2011 f8 Developer Conference, took this to an extreme: apps connected to Facebook could share users' activities — reading an article, listening to a song, taking a quiz — automatically, without requiring users to make an explicit decision to share for each activity. The argument was that friction reduces sharing; the reality was that removing friction also removed consent. The cascade of "Mike just read an article about [embarrassing topic]" notifications to Mike's entire network was not a bug but a feature from the platform's perspective, because each such notification was an advertisement for the connected app.
14.3.4 Ephemeral Content Urgency
Instagram Stories, Snapchat Snaps, and similar ephemeral content formats create artificial time pressure that drives users to check platforms at higher frequency than they would otherwise. The "disappears in 24 hours" mechanic is engineered scarcity applied to social content.
The urgency this creates is real in terms of behavioral effect but manufactured in terms of necessity. There is no technical reason that content must disappear after 24 hours; the design choice is a psychological intervention designed to increase check-in frequency. Research published in Computers in Human Behavior (2020) found that ephemeral content formats were the strongest predictor of compulsive social media checking behavior among the design features studied.
14.3.5 Social Pressure Mechanics
Some of the most powerful dark patterns in social media are not strictly interface-level phenomena but are engineered social situations that create behavioral pressure through the mechanism of social obligation. Follow-back norms, read receipts, activity indicators, "typing..." indicators in messaging apps — all of these are technically optional features that platforms deploy because they create social obligation and therefore increase platform activity.
The "read receipt" is a useful case study. When a messaging platform shows senders that their message has been read, it creates social pressure on the recipient to respond. The response rate (and thus platform activity) increases. Whether users want to be visible in this way — whether they want the social obligation that comes with read receipts — was not their choice to make. The platform chose it for them, deploying their social relationships as levers for engagement optimization.
14.4 The Intent-Effect Gap
One of the most important and most contested concepts in the dark patterns debate is the intent-effect gap: the space between what designers consciously intend and what their systems produce. This gap is ethically significant because it determines whether platform designers bear moral responsibility for harms that may not have been their conscious purpose.
14.4.1 Optimization Without Understanding
Modern platform design does not primarily work through conscious design decisions. It works through A/B testing: deploying multiple versions of an interface to different user cohorts and measuring which version produces the desired metric outcome — engagement, conversion, retention, time-on-platform. The "desired" metric is determined by product management and engineering culture; the specific mechanisms through which a winning variation achieves its metric advantage may be opaque even to the designers who built it.
This creates a system in which manipulation can be engineered without any individual human in the design chain consciously intending to manipulate. A designer might build a notification system that tests well for re-engagement without knowing that the reason it tests well is that it exploits anxiety about social exclusion. The intent, at the individual level, was to increase re-engagement. The mechanism, at the psychological level, was exploitation of a social anxiety. The individual designer's intent does not fully characterize the system's effect.
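The dynamic can be made concrete with a schematic of the selection step (all names and numbers are invented for illustration): the pipeline ships whichever variant maximizes the metric, and nothing in it represents why the winner wins.

```python
# Hypothetical A/B test: two notification copy variants, each measured
# on a user cohort. The numbers are invented for illustration.
OBSERVED_ENGAGEMENT = {
    "neutral_copy": 1.8,          # "You have new activity"
    "social_anxiety_copy": 2.6,   # "Your friends are talking without you"
}

def measure_engagement(variant: str) -> float:
    # Stand-in for real instrumentation: average sessions per user per day.
    return OBSERVED_ENGAGEMENT[variant]

def pick_winner(variants: list[str]) -> str:
    # Ship whichever variant maximizes the metric. The mechanism by which
    # it wins (here, exploiting fear of social exclusion) is invisible
    # to this code and to the people running it.
    return max(variants, key=measure_engagement)

print(pick_winner(["neutral_copy", "social_anxiety_copy"]))
# -> social_anxiety_copy
```

The sketch is the intent-effect gap in miniature: every line is innocuous, the stated goal ("increase re-engagement") is legitimate, and yet the system reliably selects for whatever psychological lever happens to move the metric.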
14.4.2 Does Intent Matter Ethically?
Philosophers of technology have debated the extent to which intent matters in evaluating the ethics of designed systems. The traditional view in legal and moral reasoning is that intent is highly relevant: we judge negligence differently from recklessness and recklessness differently from willful harm. A designer who accidentally creates an addictive product is not in the same moral category as one who deliberately engineers addiction.
But this framework, developed in contexts of individual agency, may not adequately capture the ethics of large-scale technological systems. When a platform serves two billion users and has A/B tested its interfaces through hundreds of thousands of experiments, the aggregate effect of those design choices is not accidental even if no individual choice was consciously malicious. The system has been systematically optimized to produce certain outcomes in users' behavior. That optimization constitutes a form of intent that operates at the systemic level even when it is absent at the individual level.
Philosopher Evan Selinger (2016) has argued that large-scale technological systems demand a new framework of "systemic responsibility" that evaluates the ethics of design at the level of outcomes and power differentials rather than individual intent. Under this framework, the question is not "did the designer mean to harm users?" but "did the system, as designed, predictably produce harm, and did the organization have the capacity to know this and choose not to investigate?"
14.4.3 When Optimization Becomes Manipulation
There is a point on the intent-effect spectrum where the gap closes. When a platform's internal research team has documented that its design choices produce anxiety, depression, or compulsive use in users — and the platform continues to deploy those choices in service of engagement metrics — the intent-effect gap is no longer available as a moral defense. The internal documents disclosed in the Facebook Papers (2021) revealed that researchers within Meta had documented significant harms from platform design and that these findings did not systematically alter design decisions. Once an organization has documented a harm and continues to ship the design that produces it, "we did not intend harm" ceases to be a credible defense.
14.5 The Asymmetry of Expertise
The ethics of dark patterns cannot be fully understood without confronting the profound power asymmetry between platform designers and platform users. This asymmetry operates at multiple levels.
14.5.1 Professional Expertise vs. Everyday Navigation
A user opening Instagram to see what friends are up to brings whatever everyday cognitive resources they have available — which vary with their mood, their fatigue level, the competing demands on their attention, and their prior experience with digital interfaces. They are not thinking about cognitive biases, behavioral psychology, or engagement optimization. They are trying to see if their college roommate had her baby.
The platform they are navigating was designed by teams including UX researchers with PhDs in cognitive psychology, behavioral economists who specialize in decision-making under uncertainty, machine learning engineers who have modeled their behavior in granular detail across thousands of previous sessions, and product managers who have access to data showing exactly which interface choices produce which behavioral outcomes. The expertise differential is not incidental; it is the source of the platform's competitive advantage.
14.5.2 Scale and Iteration
Individual humans cannot learn from the experience of millions of other humans in real time. Platforms can. A social media company that A/B tests 100 interface variations simultaneously, each on a cohort of millions of users, accumulates knowledge about human behavior at a scale that no individual human can match. This creates what researcher Shoshana Zuboff (2019) calls "behavioral surplus" — a vast excess of knowledge about human behavioral patterns that users themselves do not possess about themselves.
When this behavioral knowledge is deployed in service of engagement optimization rather than user wellbeing, the asymmetry of expertise becomes an asymmetry of power. The platform knows, with statistical precision, which visual treatments elicit more scrolling, which notification timing creates more anxiety-driven re-engagement, which content types activate emotions that extend session length. The user knows only that they meant to check Instagram for five minutes and emerged ninety minutes later feeling vaguely worse.
14.5.3 Cognitive Bandwidth and Vulnerable Populations
The expertise asymmetry is most severe when the users on one side of it have reduced cognitive bandwidth due to age, stress, mental health challenges, or cognitive vulnerability. Children and adolescents, whose prefrontal cortex development is not complete until their mid-twenties, are particularly ill-equipped to recognize and resist design choices that target the reward circuitry of the adolescent brain. Individuals experiencing depression, anxiety, or loneliness are more susceptible to social comparison mechanics and more likely to remain on a platform that provides intermittent social reward even when that platform is making them feel worse.
The deployment of precision psychological design against populations with reduced capacity to resist it is what distinguishes predatory design from merely unethical design. It is the digital equivalent of targeting financial products toward elderly individuals with cognitive decline — a comparison that should give platform apologists pause.
14.6 The Regulatory Landscape
The concept of dark patterns has moved from academic and activist discourse into formal regulatory frameworks, particularly in the European Union. This regulatory attention reflects a broader recognition that market mechanisms have failed to discipline dark pattern deployment: competitive pressure often rewards, rather than punishes, manipulation.
14.6.1 The EU Dark Patterns Framework
The European Union has addressed dark patterns through multiple overlapping regulatory instruments. The General Data Protection Regulation (GDPR), effective 2018, requires that consent to personal data processing be "freely given, specific, informed and unambiguous." Regulatory guidance from the European Data Protection Board (EDPB) has interpreted this to prohibit cookie consent banners that use dark patterns — making refusal harder than acceptance, using deceptive visual hierarchies, or bundling consent for multiple purposes.
The Digital Services Act (DSA), effective 2024, includes explicit prohibitions on dark patterns in online platforms, particularly those with very large user bases (over 45 million EU users). The DSA prohibits "online interfaces designed, organised or operated in a way that deceives or manipulates recipients of the service or in a way that otherwise impairs or limits the ability of recipients of the service to make free and informed decisions."
The EU's Dark Patterns Taskforce, established in 2022, conducted a systematic sweep of major consumer platforms and found that 97% of the most popular websites deployed at least one dark pattern, and 37% deployed patterns explicitly prohibited under existing law. Enforcement actions have followed against Google, Meta, and Apple.
14.6.2 FTC Guidance and U.S. Approaches
The United States regulatory response has been more fragmented but is accelerating. The Federal Trade Commission issued a staff report on dark patterns in 2022 under the title "Bringing Dark Patterns to Light," identifying categories of prohibited practices under its existing authority over unfair or deceptive acts and practices (UDAP). The report grouped common dark patterns into four categories: design elements that induce false beliefs, design elements that hide or delay disclosure of material information, design elements that lead to unauthorized charges, and design elements that obscure or subvert privacy choices.
The FTC's legal authority over dark patterns derives from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. The agency has used this authority to bring actions against Amazon (the Prime cancellation case), Fortnite maker Epic Games (for using dark patterns to induce players, including children, to make unintended purchases), and several other companies. Several U.S. states have moved faster than the federal government. The California Consumer Privacy Act (CCPA) includes opt-out requirements, and its implementing regulations prohibit dark patterns that undermine the exercise of those opt-out rights. The California Age-Appropriate Design Code Act (AADC), passed in 2022, explicitly requires that products likely to be accessed by children be designed with the best interests of child users as a primary consideration — a provision that directly targets predatory design.
14.7 Velocity Media: When Good Intentions Meet Optimization Pressure
The following is a reconstructed account of a product development meeting at Velocity Media, a mid-sized social content platform. The specifics are fictionalized, but the dynamics reflect patterns documented in industry accounts, whistleblower testimony, and academic research on platform design culture.
The Tuesday product meeting had been running for forty minutes when Marcus Webb clicked to his seventh slide. Head of Product for Velocity Media's consumer apps, Marcus had spent the previous month working with his design team on a new user onboarding flow. The problem was simple: Velocity's 30-day retention — the percentage of users who were still active thirty days after downloading the app — was 23 percent. Industry benchmarks for social apps ran around 35 percent. Closing that gap was Marcus's primary KPI.
"The new flow does three things," Marcus said. "It gets contact list access on day one, so we can immediately show the user people they know. It front-loads the most engaging content types — high-completion short videos — to establish the value proposition before the user hits any friction. And it delays the notification permission ask until the user has had a positive experience, which we expect will improve the opt-in rate for notifications."
The slides were clean and the data compelling. Early testing showed a 12-percentage-point improvement in 30-day retention.
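The metric driving Marcus's slides is simple arithmetic. As a minimal sketch (the function name, data shape, and cohort figures below are illustrative assumptions, not Velocity's actual analytics), 30-day retention is just the share of an install cohort still active thirty or more days after install, and an A/B lift is the difference between two cohorts:

```python
from datetime import date, timedelta

def thirty_day_retention(cohort):
    """Share of users still active 30+ days after install.

    `cohort` is a list of (install_date, last_active_date) pairs,
    a simplified stand-in for real event logs.
    """
    retained = sum(
        1 for install, last_active in cohort
        if last_active - install >= timedelta(days=30)
    )
    return retained / len(cohort)

# Hypothetical A/B comparison: control flow vs. the new onboarding flow.
control = [(date(2024, 1, 1), date(2024, 1, 10)),
           (date(2024, 1, 1), date(2024, 2, 15)),
           (date(2024, 1, 2), date(2024, 1, 5)),
           (date(2024, 1, 2), date(2024, 3, 1))]
variant = [(date(2024, 1, 1), date(2024, 2, 20)),
           (date(2024, 1, 1), date(2024, 1, 8)),
           (date(2024, 1, 2), date(2024, 2, 14)),
           (date(2024, 1, 2), date(2024, 2, 28))]

lift = thirty_day_retention(variant) - thirty_day_retention(control)
print(f"lift: {lift:+.0%}")  # prints "lift: +25%" for these toy cohorts
```

Note what the metric cannot see: it counts users who stayed, not why they stayed or whether staying served them. That blindness is exactly where the ethics discussion that follows begins.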
Dr. Aisha Johnson, Velocity's Head of Ethics and Trust, had been quiet through the presentation. Now she spoke.
"I want to flag something about the contact list access piece," she said. "When we ask for contact list access on day one, before the user has any context for why we need it or what we do with it, we're getting consent that I'd argue isn't really informed consent. Users click 'OK' because they want to use the app, not because they've understood what contact list access means for their data and for the people in their contacts."
Marcus nodded. "The prompt is required by iOS. We're not inventing the mechanism."
"But we're choosing the moment," Dr. Johnson said. "We're choosing day one, first session, before the user knows anything about us. We could ask for contact access after they've understood what Velocity is and what the contacts feature does. We'd probably get lower opt-in rates."
"Almost certainly," Marcus acknowledged. "Our testing suggests we'd lose about 15 points on contact opt-in if we delay."
The room was quiet for a moment. Sarah Chen, Velocity's CEO, looked between them.
"What's the alternative onboarding path for users who decline contacts?" she asked.
"They see a generic feed," Marcus said. "Similar to what a logged-out user sees. It's much lower quality. Most of them churn."
"So the practical alternative to the day-one contacts ask," Dr. Johnson said slowly, "is an app that doesn't work well for users who don't give us contact access. Which means the 'choice' isn't really a choice — it's consent-or-churn."
This was the precise moment at which the meeting became something different. What had been a product review became an ethics negotiation, and around the table, people's faces shifted as they registered what was at stake.
What Marcus had built was not obviously malicious. He was trying to improve the product experience for users — genuinely trying to show them people they knew quickly, because research showed this improved retention and presumably satisfaction. But the mechanism for doing so required information extracted from users in conditions designed to minimize their resistance. That is the definition of a dark pattern. Not evil. Not conspiracy. Just optimization pressure, running up against user autonomy, with optimization winning.
Voices from the Field
"The most insidious dark patterns are not the ones that trick you. They're the ones that make you feel like you made a real choice, when in fact the entire decision environment was engineered to produce a single outcome. The question I ask is always: would the user endorse this design if they could see the whole system? If the answer is no, you have a dark pattern, regardless of anyone's intentions."
— Dr. Colin Gray, Purdue University, researcher in dark patterns and design ethics, 2023
14.8 Maya and the Invisible Architecture
SIDEBAR: What Maya Didn't Know She Agreed To
When Maya first downloaded TikTok at age 14, the onboarding flow took approximately 90 seconds. She entered her birthday — a field that confirmed she was over 13, but that the app did not verify — chose a username, and was immediately shown a feed of short videos curated by an algorithm she had no knowledge of. She did not read the terms of service (studies suggest fewer than 10% of users read terms of service for any digital product). She did not meaningfully engage with the privacy policy. She clicked through the notification permission prompt because she wanted to see the videos.
In those 90 seconds, Maya had:
- Agreed to terms of service that granted TikTok a license to her content
- Consented (via the notification prompt) to receive notifications at any hour
- Agreed to data collection including device information, location data, browsing behavior within the app, and behavioral data sufficient to construct a detailed psychological profile
- Become subject to algorithmic curation she could not see, understand, or meaningfully control
None of this was illegal. All of it was standard practice. The onboarding flow was designed by teams of expert UX designers, tested with hundreds of thousands of users, and refined to maximize the speed at which new users became engaged users. Maya, at 14, with no particular knowledge of UX design, behavioral psychology, or data economics, was the less-informed party in an asymmetric information relationship. The power differential was not incidental. It was the product.
14.9 The Space Between Bad Design and Predatory Design
The concept of dark patterns invites a question that is both practical and ethical: where exactly is the line between optimizing a product and manipulating its users? The answer is not a bright line but a set of diagnostic questions that can be applied to any design choice.
Would users endorse this design choice if they understood how it works? A notification system that creates anxiety-driven re-engagement is not likely to be endorsed by users who understand its mechanism, even though it might be tolerated by users who are unaware of it. Consent that depends on ignorance is not meaningful consent.
Does this design serve users' stated goals or the platform's engagement goals when these diverge? The divergence between stated user goals and platform engagement optimization is not always present — sometimes keeping users engaged also serves them well. But when the divergence exists, design choices that consistently favor platform goals over user goals accumulate into a systematic pattern of exploitation.
Is the impact of this design choice borne equally across the user population, or does it fall more heavily on vulnerable users? Design that is merely annoying for a user with strong self-regulation skills may be functionally addictive for a user with less. A platform that optimizes for aggregate engagement without attending to differential impact across user populations is treating vulnerable users as acceptable casualties of growth optimization.
Has the platform conducted research on the impact of this design choice, and if so, has that research influenced decisions? The existence of internal research documenting harm — the "Facebook knows" moment documented in the Facebook Papers — transforms the ethical evaluation. Organizations that know their design choices cause harm and continue deploying them have crossed from negligence into something closer to deliberate harm.
14.10 Looking Forward: Design Ethics as a Discipline
The dark patterns concept has seeded a broader conversation about design ethics that is now institutionalized in some corners of the industry and entirely absent in others. Organizations like the Center for Humane Technology (founded by former Google design ethicist Tristan Harris) and the Electronic Frontier Foundation have pushed for design standards that prioritize user wellbeing over engagement metrics. Academic programs in "ethical design" have emerged at several universities, training designers with the vocabulary and conceptual tools to identify and reject manipulative patterns.
Whether these developments will be sufficient to shift industry practice is an open question. The structural incentives of the attention economy have not changed. Advertising-funded platforms still benefit from maximizing engagement, and engagement maximization still systematically favors the emotional and addictive over the satisfying and beneficial. Regulatory intervention may be a necessary complement to design ethics, creating external constraints where internal ones have proven insufficient.
What is not in doubt is that the dark patterns concept has made something visible that was previously difficult to name. When users can articulate that an interface is designed against their interests, they begin to develop the critical literacy needed to navigate that interface more deliberately. When designers can identify the specific patterns that constitute manipulation, they can make principled choices to reject them. And when regulators can point to documented taxonomies of harm, they have the conceptual tools to craft proportionate interventions.
The taxonomy in this chapter is a beginning, not an end. The dark patterns of 2010, largely confined to e-commerce transaction manipulation, look quaint against the sophistication of algorithmic attention engineering in 2024. As platforms evolve and new technologies — augmented reality, AI-generated content, brain-computer interfaces — create new vectors for attention extraction, the taxonomy will need to evolve with them. What must remain constant is the ethical orientation: user autonomy and wellbeing as the standard against which design choices are judged.
Summary
Dark patterns are interface design choices deliberately engineered to benefit companies at users' expense. Harry Brignull's original 2010 taxonomy identified patterns including roach motels, hidden costs, trick questions, misdirection, confirmshaming, disguised ads, bait and switch, and privacy zuckering. Social media platforms have extended this taxonomy with algorithmic dark patterns — amplification of outrage, notification spam, frictionless oversharing, ephemeral content urgency, and social pressure mechanics — that operate at scale and target the social fabric of users' lives. The intent-effect gap — the distance between what individual designers intend and what optimized systems produce — is real but does not fully absolve platforms of ethical responsibility, particularly when internal research documents harm. The asymmetry of expertise between platform architects and everyday users is a fundamental feature of the current information environment, not an accident. Regulatory responses in the EU and United States are accelerating, though enforcement remains uneven. Design ethics as a discipline offers tools for identifying and rejecting manipulative patterns, but structural incentives of the attention economy mean that internal ethics alone has proven insufficient.
Discussion Questions
1. Harry Brignull's original definition of dark patterns requires intent on the designer's part. Apply this requirement to the case of A/B tested designs: if a design choice was selected by an algorithm rather than a human, can it still be a "dark pattern" in Brignull's sense? What modifications to the definition might be needed?
2. The roach motel pattern describes asymmetry between entry and exit. Identify three examples of roach motel design in social media platforms you use regularly. For each, analyze: who benefits from the asymmetry, who bears the cost, and whether you would characterize the design as bad design, unethical design, or predatory design.
3. Dr. Johnson's concern in the Velocity Media meeting centered on the timing of the contacts permission request. Evaluate her argument: is the timing of a consent request ethically relevant even when the consent prompt itself is technically compliant with platform requirements? What does "meaningful consent" require in the context of app onboarding?
4. The chapter argues that there is an "asymmetry of expertise" between platform designers and users. Some critics argue that this framing is paternalistic — that adult users can assess risks and make decisions for themselves without needing protection. Evaluate this argument. Under what conditions is the asymmetry of expertise a legitimate basis for regulatory intervention?
5. The Velocity Media scenario ends without resolution: we do not know whether Marcus Webb changed his onboarding design or Sarah Chen intervened. Write the next 200 words of that meeting, capturing the decision that was ultimately made and the reasoning behind it. Then analyze your own ending: what assumptions about corporate culture, regulatory pressure, and individual ethics informed the outcome you wrote?
6. Privacy zuckering has been defined as leading users to share more than they intended. Does the same pattern apply to emotional disclosure — getting users to express emotions publicly that they meant to keep private? Identify a platform feature that might constitute "emotional zuckering" and analyze it through the framework of this chapter.
7. The chapter describes the regulatory landscape as evolving. Research the current status of one regulatory action mentioned (EU DSA enforcement, FTC actions, California AADC) and assess: has the regulatory intervention changed the platform's behavior, and how would you measure that change?