Part III: Dark Patterns Taxonomy

The Catalog of Manipulation


The Power of Naming

Harry Brignull coined the term "dark patterns" in 2010, initially to describe deceptive user interface design in e-commerce — the pre-checked subscription boxes, the cancellation flows deliberately buried in menus, the hidden fees revealed only at checkout. The term traveled quickly because naming a thing makes it visible. Before "dark pattern" existed as a shared vocabulary, these design choices could be described only awkwardly, case by case, without a common frame. Once named, they could be cataloged, compared, analyzed, regulated.

This book extends and adapts the taxonomy to social media specifically — a domain where dark patterns operate not just at the transaction level (as in e-commerce) but at the level of continuous engagement, shaping behavior across thousands of micro-interactions per day. The patterns in this part are darker in a particular sense: not because they are more dramatic, but because they operate invisibly, through mechanisms the user is generally not aware of, producing effects the user generally did not consent to.

Part III names them. All eight.


The Shift from Part II

Part II explained how the brain works. It described the dopamine system, reward prediction error, social reward circuits, cognitive load, and stopping cues — the biological and psychological mechanisms that make humans susceptible to certain kinds of designed environments.

Part III does a different kind of work. It is not describing susceptibility; it is describing exploitation. The distinction matters. Part II was, in a sense, morally neutral — the brain works the way it works regardless of what we think about it. Part III is not morally neutral. Dark patterns are design choices. Someone decided to build the variable notification delay. Someone decided to remove stopping cues from the feed. Someone decided to display follower counts as social proof signals. These are choices, made by people, in pursuit of engagement metrics, with knowledge — often documented, often internal — of their psychological effects.

The taxonomy approach is appropriate here because it forces specificity. "Manipulation" as a general category is not analytically useful; there is too much that can fit under it, and the vagueness obscures the mechanisms. "Variable ratio reinforcement delivered through notification batching" is analytically useful. It names the mechanism, connects it to a specific design feature, and makes both the harm and potential remedies visible.


The Eight Categories

Chapter 14 opens the taxonomy by defining dark patterns in the social media context — what qualifies, what does not, and why the definition needs to be precise rather than expansive. The chapter establishes the analytical criteria used throughout Part III: intentionality, concealment of mechanism, exploitation of cognitive bias, and asymmetry between platform and user interests.

Chapter 15 maps the cognitive biases that dark patterns target — not as a complete catalog of cognitive psychology but as a field guide specific to the platform design context. Default exploitation, anchoring, framing, and availability bias each have specific design implementations that Chapter 15 documents.

Chapter 16 focuses on loss aversion and streak mechanics — the design of systems that frame disengagement as loss, that attach psychological value to maintenance behavior (streaks, progress bars, status levels), and that penalize absence in ways the user experiences as threat rather than mere inconvenience.

Chapter 17 examines social proof — the use of social signals (follower counts, like displays, trending labels, friend activity) to manufacture consensus and generate conformity pressure. The chapter analyzes both the explicit social proof signals and the more subtle curation choices that shape the impression of what "everyone" is doing.

Chapter 18 turns to reciprocity and commitment — the design features that generate social obligations (read receipts, activity indicators, public commitment mechanisms) that trap users in patterns of behavior they did not consciously choose and find difficult to exit.

Chapter 19 covers parasocial relationships and the influencer economy — the way platforms algorithmically cultivate one-sided emotional attachments between users and creators, and the business model that makes those attachments valuable to both the platform and a class of professional content producers.

Chapter 20 addresses outrage as engagement — perhaps the most societally significant dark pattern in this taxonomy. The chapter covers the algorithmic amplification of high-arousal negative content, the comment-section designs that reward inflammatory responses, and the share mechanics that strip context to maximize outraged reactions.

Chapter 21 concludes the taxonomy with personalization and filter bubbles — the opacity of recommendation logic, the incremental narrowing of content exposure, and the absence of pathways out of algorithmically determined interest clusters.


The Deliberate-vs.-Emergent Question

Threading through the entire taxonomy is a question that the book takes seriously rather than dismissing in either direction: to what extent are dark patterns the product of deliberate design, and to what extent are they emergent properties of optimization systems?

The honest answer is: both, in varying proportions for different patterns.

Some dark patterns have clear evidence of intentionality. The specific timing parameters of notification batching were A/B tested and optimized; the results are documented in internal communications that have become public. The streak mechanic in certain platforms was explicitly conceived as a retention tool. These are choices.

Other effects emerge from optimization systems that were told to maximize a metric without being given explicit instructions about how. An algorithm told to maximize watch time will learn that outrage keeps users watching; it does not need to be explicitly instructed to amplify outrage. The outcome is the same as if it had been designed that way, but the causal chain is different.

This distinction matters for both analysis and accountability. It does not change the harm. But it affects how we think about responsibility, remedy, and regulation — questions that Part VI takes up directly.

The taxonomy in Part III is organized by outcome and mechanism rather than by intentionality, because the cognitive effects on users do not vary based on whether a human designer made a deliberate choice or a gradient-descent optimizer found the same outcome through training. But the intentionality question is noted throughout, because it matters for the arguments that follow.


From Catalog to Analysis

Part III is the book's most directly actionable section for a specific audience: designers, product managers, and people working inside platforms who want to understand what they are building. The taxonomy provides a vocabulary for naming what is happening — which is the prerequisite for internal advocacy, ethical review, and design alternatives.

For students and general readers, Part III provides the analytical tools for Capstone Project 2 (the platform audit) and for the platform case studies in Part IV. When Chapter 23 describes TikTok's For You page, it is drawing on the vocabulary established here. When Chapter 29 examines A/B testing practices, it is connecting experimental methodology to the specific patterns this part has named.

Name the thing. Then you can see it everywhere.

Chapters in This Part