Capstone Project 2: Dark Pattern Analysis in the Wild

A Platform Audit


Project Overview

In the 1960s, Ralph Nader did not simply argue that American cars were dangerous. He documented it — specific models, specific defects, specific crash outcomes. The power of Unsafe at Any Speed was not the moral case against General Motors; it was the accumulated specificity of the evidence. A claim becomes an argument when it becomes documentable.

This project asks you to work in that tradition. Dark patterns are not an abstraction — they are specific design decisions, embedded in specific interfaces, producing specific effects on specific users. Your task is to document them: to name them precisely, trace their cognitive mechanisms, contrast them against the platform's own rhetoric, and propose what better alternatives could look like.

The result should be an annotated platform audit report — rigorous enough to be taken seriously, specific enough to be actionable, and grounded enough in the book's analytical frameworks to demonstrate synthesis across the full arc of your learning.

What a Platform Audit Is Not

A platform audit is not a rant. It is not a list of grievances. It is not a summary of the book's arguments applied generically to your platform of choice. The audits that matter — the ones that have informed regulators, designers, and researchers — are grounded in documented evidence, calibrated in their claims, and honest about what they cannot prove.

This means acknowledging when a design feature has multiple plausible explanations. It means distinguishing patterns that are clearly intentional from those that may be emergent. It means proposing alternatives that are technically and financially viable rather than simply condemning current practice.


Learning Goals

By the end of this project, you will be able to:

  1. Conduct a systematic inventory of dark patterns on a real platform using the taxonomy from Part III.
  2. Map each identified dark pattern to the specific cognitive mechanism it exploits, drawing on the neuroscience from Part II.
  3. Analyze the gap between a platform's stated values and its actual design choices with documented evidence.
  4. Evaluate the business model logic that makes dark pattern deployment rational for platforms, connecting to Part I.
  5. Propose at least three concrete alternative designs that serve legitimate user and business needs without the identified manipulative mechanisms.

Phase 1: Platform Selection and Background Research (Days 1–3)

Choosing Your Platform

The platform you choose will shape everything that follows. A few principles:

Choose a platform you use actively. You need to observe the platform in its natural operating state, which means you need to be a genuine user whose data and behavior history have shaped your feed and interface. A platform you have never used will show you a generic onboarding experience, not the personalized manipulation engine it becomes over time.

Choose a platform with enough surface area. A simple utility app (a calculator, a weather app) has limited dark patterns to catalog. The most instructive platforms for this project are those with social feeds, notification systems, and engagement optimization — which describes most major social platforms and many games.

Choose a platform you can analyze ethically. Do not create fake accounts to observe manipulation of vulnerable users. Do not access private backend data. Base your analysis only on what is visible to a regular user.

Recommended options include: TikTok, Instagram, YouTube, Facebook, Snapchat, Twitter/X, Reddit, LinkedIn, BeReal, Discord, or any major mobile game with social features. You may also audit a platform not on this list with instructor approval.

Disclose your relationship to the platform. In your report, briefly note how long you have been a user, approximately how often you use it, and any ways your relationship to it might bias your observations. This is good research practice.

Background Research Tasks

Before you begin observational analysis, complete the following:

Company and Business Model Documentation
  - Platform's revenue model (advertising, subscription, freemium, data licensing, or combination)
  - Known details about the platform's recommendation algorithm (what is publicly disclosed)
  - Platform's user base size and key demographics
  - Parent company, ownership structure, relevant corporate history

Stated Values and Policies
  - Locate and read the platform's Community Guidelines or Terms of Service
  - Locate any public statements about user wellbeing, mental health, or responsible design (press releases, blog posts, annual reports, testimony to legislatures)
  - Note any design changes the platform has voluntarily made in response to criticism — what changed, when, and what was the stated rationale?

Prior Research and Reporting
  - Identify at least two credible published analyses of this platform (academic papers, investigative journalism, regulatory filings, or internal documents made public through whistleblowers or litigation)
  - Note the key claims in each and their evidentiary basis

This background provides the context that gives your direct observations their meaning.


Phase 2: Dark Pattern Inventory (Days 4–8)

The Observation Protocol

You will conduct structured observation sessions — dedicated time spent on the platform with the specific purpose of identifying and documenting design patterns. Recommendation: five sessions across five different days, at different times of day, for thirty to sixty minutes each. Use the Screenshot Analysis Sheet (see templates) to document each finding.

Structured observation is different from regular use. When you use a platform normally, you are immersed in the content; when you audit it, you are watching the interface. You may find it helpful to record your screen during observation sessions (most devices support this natively) so you can review moments you may have scrolled past.

Applying the Part III Taxonomy

The dark pattern taxonomy from Part III (Chapters 14–21) gives you eight categories to work with:

From Chapter 14 (What Are Dark Patterns): Roach-motel designs, confirmshaming, bait-and-switch, hidden costs, misdirection.

From Chapter 15 (Cognitive Biases): Default exploitation (pre-checked options favoring the platform), anchoring, framing effects.

From Chapter 16 (Loss Aversion and Streak Mechanics): Streak designs, penalty framing for disengagement, progress that cannot be paused.

From Chapter 17 (Social Proof): Manufactured consensus signals, follower count visibility, trending indicators, like counts displayed (or selectively hidden).

From Chapter 18 (Reciprocity and Commitment): Design features that generate social obligation (e.g., read receipts, "seen" indicators, follow-back pressure), public commitment mechanisms.

From Chapter 19 (Parasocial Relationships): Features that encourage one-sided attachment to creators/influencers, algorithmic amplification of emotionally resonant figures.

From Chapter 20 (Outrage as Engagement): Algorithmic amplification of high-arousal negative content, comment section design that rewards inflammatory responses, share mechanics that strip context.

From Chapter 21 (Personalization and Filter Bubbles): Opacity of recommendation logic, absence of alternative pathways, incremental interest narrowing.

For each dark pattern you identify, complete one Pattern Catalog Form (see templates). A strong audit documents a minimum of eight patterns across at least four categories. An exceptional audit documents twelve or more patterns, spanning five or more categories, with particularly fine-grained evidence.

The Deliberate vs. Emergent Question

Chapter 14 raises the question of whether dark patterns are intentional design choices or emergent properties of optimization systems. Your audit should engage with this question honestly. Some patterns will have clear evidence of intentionality — design elements that are too precisely calibrated to have emerged accidentally (variable notification timing, for instance, is not accidental). Others may be more ambiguous. Note your assessment of intentionality for each pattern and the evidence that informs it.


Phase 3: Cognitive Mechanism Mapping (Days 7–10)

From Pattern to Mechanism

Documenting a dark pattern is necessary but not sufficient. The analytical contribution of this project requires you to explain why each pattern works — the specific cognitive or neurological mechanism it exploits. This is the bridge between Part III (what designers do) and Part II (how the brain responds).

For each pattern in your inventory, identify:

The primary cognitive mechanism. Be specific. "Exploits reward-seeking behavior" is too broad. "Exploits the dopamine anticipation response described in Chapter 7 by making reward timing unpredictable" is specific.

The relevant brain system or psychological principle. Is this primarily a dopaminergic reward mechanism? A social reward circuit (Chapter 10)? Loss aversion (Chapter 16)? A stopping-cue removal mechanism (Chapter 12)?

The population it targets most effectively. Chapter 31 establishes that adolescent brains are differentially susceptible to social reward signals. Chapter 30 discusses individual differences in vulnerability to depression and anxiety. Some patterns are broad-spectrum; others are particularly potent for specific populations.

Evidence that the mechanism operates as described. Cite the relevant chapter from Part II, and if you have identified external research in Phase 1, note whether it supports the claimed mechanism.

Building the Mechanism Map

Once you have analyzed each pattern individually, construct a Cognitive Mechanism Map — a visualization (a table, a diagram, or a structured list) that shows the full pattern-to-mechanism matrix for your platform. This map should make visible:
  - Which mechanisms are exploited most frequently on this platform
  - Which patterns co-occur (multiple patterns targeting the same mechanism simultaneously)
  - Any gaps — mechanisms the platform does not appear to exploit, or patterns you found that do not fit cleanly in the taxonomy

The map will become a key exhibit in your final report.
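If you keep your catalog entries in a simple list, the two core views of the map (mechanism frequency and pattern co-occurrence) fall out of a few lines of tabulation. A minimal sketch in Python; the pattern and mechanism names below are hypothetical placeholders, not findings from any real audit:

```python
from collections import Counter, defaultdict

# Hypothetical catalog entries: (pattern name, primary mechanism exploited).
# Replace these with the pairs from your own Pattern Catalog Forms.
catalog = [
    ("Streak counter", "loss aversion"),
    ("Variable notification timing", "dopamine anticipation"),
    ("Infinite scroll", "stopping-cue removal"),
    ("Pull-to-refresh", "dopamine anticipation"),
    ("Like counts", "social reward"),
    ("Read receipts", "reciprocity pressure"),
]

# Frequency: which mechanisms the platform leans on most heavily.
mechanism_freq = Counter(mech for _, mech in catalog)

# Co-occurrence: patterns grouped by the mechanism they share.
by_mechanism = defaultdict(list)
for pattern, mech in catalog:
    by_mechanism[mech].append(pattern)

# One line per mechanism, most-exploited first.
for mech, count in mechanism_freq.most_common():
    print(f"{mech} ({count}): {', '.join(by_mechanism[mech])}")
```

Mechanisms that appear with a count above one are your co-occurrence findings; mechanisms from Part II that never appear are your gaps.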


Phase 4: Comparative Analysis — Design vs. Stated Values (Days 9–12)

The Gap Analysis

In Phase 1, you documented the platform's publicly stated values and policies. In Phases 2 and 3, you documented what the platform actually does. Phase 4 is the comparison.

For each of the platform's stated commitments (user wellbeing, transparency, age-appropriate design, mental health safety, or whatever commitments you found), identify whether the documented dark patterns contradict, confirm, or complicate that commitment.

Structure this as a gap analysis:

Where the design aligns with stated values. Intellectual honesty requires you to acknowledge where the platform does what it says. Most major platforms have made some genuine design changes in response to criticism — even if those changes are partial, or serve as cover for more problematic practices. Note these accurately.

Where the design contradicts stated values. These are your most important findings. For each contradiction, you need: the stated commitment, the design pattern that contradicts it, the evidence for the contradiction, and the magnitude of the gap (a design choice that affects every user on every visit is a more significant contradiction than one affecting a subset of interactions).

Where stated values are vague enough to be unfalsifiable. Many platform commitments are written in language deliberately resistant to audit — "we care about user wellbeing" without specifying what that means. Note these as a separate category of finding. The vagueness itself is a design choice.

The Regulatory and Transparency Context

Chapter 38 discusses regulatory approaches to platform accountability. Using what you know from that chapter:
  - Does the platform face any existing regulatory requirements related to the patterns you documented? (Age verification laws, algorithmic transparency requirements, data protection rules)
  - If so, does its design comply?
  - If not, what regulatory framework would be most relevant to the patterns you found?


Phase 5: Reform Proposals (Days 12–16)

What Good Alternatives Look Like

Chapter 39 (Design Ethics and Humane Technology) establishes the principles that should govern alternative designs. Before proposing alternatives, review that chapter with this question in mind: what would it actually mean to apply these principles to the specific patterns you documented?

Good reform proposals share several characteristics:

They are technically feasible. "Delete the algorithm" is not a reform proposal. Neither is "eliminate all advertising." Proposals that would require the platform to cease operating are not proposals — they are demands for dissolution. Genuine reform works within the constraints of a platform that needs to sustain itself.

They address the mechanism, not just the symptom. Removing a specific dark pattern without addressing the incentive that produced it tends to result in the pattern being replaced by a functionally equivalent one. Strong proposals either change the incentive structure or create design constraints that make the manipulation harder even if the incentive remains.

They serve both user interests and some legitimate business interest. The goal is not a platform that users hate but that does not manipulate them. The goal is a platform that users can trust, which is itself a sustainable business model for some platforms.

They are specific enough to be evaluated. "Make the platform more transparent" is not a proposal. "Replace the current opaque recommendation interface with a toggleable preference panel showing the five most heavily weighted factors in my current feed, with direct controls for each" is a proposal.

Your Reform Proposals

Develop at least three reform proposals, each addressing a different dark pattern from your inventory. For each, document:

  1. The dark pattern being addressed and the mechanism it exploits
  2. The specific alternative design, described in enough detail that a designer could begin implementing it
  3. The user benefit: what experience would users have that they do not currently have?
  4. The business case: why might the platform's owners consider this change viable? Are there precedents (other platforms that have made similar changes)?
  5. The likely resistance: what business incentive works against this change? How significant is it?
  6. The tradeoff: what, if anything, does the platform legitimately lose by making this change?

Documentation Templates

Template A: Screenshot Analysis Sheet

SCREENSHOT ANALYSIS

Platform: _________________________
Date/Time of Observation: _________________________
Session #: ____

Screenshot/Recording Reference: _________________________
(File name, timestamp, or description)

WHAT I OBSERVED:
(Describe the specific interface element, interaction, or flow being analyzed)
___________________________________________________________________________
___________________________________________________________________________

DARK PATTERN CLASSIFICATION:
Pattern Name: _________________________
Source Chapter: ____
Definition: _________________________

WHY THIS QUALIFIES:
(Specific reasoning — how does this match the pattern definition?)
___________________________________________________________________________
___________________________________________________________________________

EVIDENCE OF INTENTIONALITY (High / Medium / Low / Unclear):
Reasoning: _________________________

ALTERNATIVE EXPLANATION (if any):
___________________________________________________________________________

USER IMPACT ASSESSMENT:
Who is most affected and how?
___________________________________________________________________________

COGNITIVE MECHANISM (from Part II):
Primary mechanism: _________________________
Supporting chapter: ____
___________________________________________________________________________

Template B: Pattern Catalog Form

PATTERN CATALOG

Pattern #: ____
Pattern Name: _________________________
Dark Pattern Category (Chapter 14–21): _________________________

DEFINITION (from relevant chapter):
___________________________________________________________________________

PLATFORM INSTANTIATION:
(How does this specific platform implement this pattern?)
___________________________________________________________________________
___________________________________________________________________________

EVIDENCE (list screenshot/recording references):
1.
2.
3.

COGNITIVE MECHANISM MAP:
Primary brain system/psychological principle: _________________________
Chapter citation: ____
Mechanism description: _________________________
___________________________________________________________________________

POPULATION MOST AFFECTED:
___________________________________________________________________________

RELATIONSHIP TO STATED PLATFORM VALUES:
Aligns with / Contradicts / Not addressed in stated values
Relevant stated value/policy: _________________________
Explanation: _________________________

REFORM PROPOSAL (brief):
___________________________________________________________________________
___________________________________________________________________________

INTENTIONALITY ASSESSMENT: Clearly intentional / Likely intentional / Possibly emergent / Unclear
Reasoning: _________________________
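If you prefer to keep your catalog machine-readable alongside the paper form (which makes the Phase 3 mechanism map easier to generate), the form above maps naturally onto a small record type. A minimal sketch; the field names mirror Template B, and the example values are hypothetical:

```python
from dataclasses import dataclass, field

# A machine-readable mirror of Template B (Pattern Catalog Form).
@dataclass
class PatternCatalogEntry:
    number: int
    name: str
    category: str                    # taxonomy category, Chapters 14-21
    definition: str                  # from the relevant chapter
    platform_instantiation: str      # how this platform implements it
    evidence: list = field(default_factory=list)  # screenshot/recording refs
    mechanism: str = ""              # primary brain system / principle
    mechanism_chapter: int = 0       # Part II chapter citation
    population_affected: str = ""
    values_relationship: str = ""    # aligns / contradicts / not addressed
    intentionality: str = "Unclear"  # clearly intentional ... unclear

# Hypothetical example entry, not a finding from any real audit.
entry = PatternCatalogEntry(
    number=1,
    name="Streak mechanic",
    category="Loss Aversion and Streak Mechanics (Ch. 16)",
    definition="Progress framing that penalizes disengagement",
    platform_instantiation="Daily streak counter that resets after one missed day",
    evidence=["session3_screen.png"],
    mechanism="Loss aversion",
    mechanism_chapter=16,
    intentionality="Clearly intentional",
)
```

A list of such entries can then feed directly into the frequency and co-occurrence tabulations for the Cognitive Mechanism Map.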

Template C: Audit Report Structure

The final annotated platform audit report should follow this structure:

Section 1: Platform Profile (300–400 words) Platform overview, business model, user base, relevant corporate history, and your methodology disclosure.

Section 2: Research Background (400–500 words) Summary of prior research and reporting, key claims, evidentiary basis. Establishes the existing analytical context your work enters.

Section 3: Dark Pattern Inventory (Main body — 1,500–2,500 words) Organized by pattern category. Each pattern documented with: definition, specific evidence, cognitive mechanism, population impact, intentionality assessment. The Cognitive Mechanism Map should appear here as a table or figure.

Section 4: Stated Values vs. Design Reality (500–700 words) Gap analysis. Organize by stated commitment. Be specific about contradictions; acknowledge genuine alignments.

Section 5: Reform Proposals (600–900 words) Three or more proposals, each with the full documentation described in Phase 5.

Section 6: Limitations and Epistemic Caveats (200–300 words) What your methodology cannot establish. What you cannot rule out. What further research would be needed to strengthen the analysis. This section demonstrates analytical maturity — the audits that damage their own credibility are the ones that overclaim.

Appendix: Full Documentation All completed Screenshot Analysis Sheets and Pattern Catalog Forms.


Evaluation Rubric

Criterion 1: Evidence Quality and Rigor (25 points)

Score Description
22–25 All dark pattern claims are supported by specific documented evidence (screenshots, recordings, or detailed field notes); evidence is specific to the platform being audited rather than generic; documentation is systematic and complete; a reader could independently verify each claim.
17–21 Most claims supported with evidence; documentation is mostly systematic; a few claims are asserted rather than demonstrated.
12–16 Evidence present but uneven; some claims lack supporting documentation; not fully systematic.
6–11 Significant claims without evidence; documentation is unsystematic or incomplete.
0–5 Evidence insufficient to support the analysis.

Criterion 2: Taxonomy Application (20 points)

Score Description
18–20 Eight or more patterns identified; correctly classified using Part III taxonomy; classification reasoning is explicit; deliberate-vs-emergent question addressed for each; patterns span at least five categories.
14–17 Six or more patterns identified; mostly correctly classified; some classification reasoning; categories adequately represented.
10–13 Four or more patterns identified; classification present but reasoning thin; limited category coverage.
5–9 Fewer than four patterns or significant misclassification.
0–4 Taxonomy not applied or applied incorrectly.

Criterion 3: Cognitive Mechanism Analysis (20 points)

Score Description
18–20 Each pattern mapped to a specific cognitive mechanism from Part II with explicit citation; mechanism descriptions are precise rather than generic; population differentiation addressed; mechanism map is coherent and shows cross-pattern relationships.
14–17 Most patterns connected to Part II mechanisms; descriptions mostly specific; mechanism map present.
10–13 Connections to Part II present but generic; mechanism map incomplete or absent.
5–9 Part II referenced but not applied; mechanisms described vaguely.
0–4 Cognitive mechanism analysis absent.

Criterion 4: Comparative Analysis (15 points)

Score Description
13–15 Gap analysis is specific and documented; stated values accurately represented; genuine alignments acknowledged alongside contradictions; regulatory context addressed; vague commitments identified as a separate category.
10–12 Gap analysis present with reasonable specificity; mostly accurate representation of stated values; limited regulatory context.
7–9 Gap analysis present but surface-level; either overly critical (no genuine alignments acknowledged) or insufficiently critical.
3–6 Comparative analysis thin or missing key components.
0–2 Comparative analysis absent.

Criterion 5: Reform Proposals (20 points)

Score Description
18–20 Three or more proposals; each technically feasible; addresses mechanism not just symptom; serves both user interests and some business interest; specific enough to evaluate; tradeoffs honestly acknowledged; precedents cited where they exist.
14–17 Three proposals meeting most criteria; mostly feasible and specific; limited engagement with tradeoffs.
10–13 Two or more proposals; feasibility questionable or specificity limited; tradeoffs not addressed.
5–9 Proposals present but vague, infeasible, or single-dimensional.
0–4 Reform proposals absent or not meaningfully developed.

Total: 100 points


A final note on what this project is for. Platform audits of this kind — systematic, documented, analytically grounded — are increasingly consequential. They inform regulatory proceedings, they seed journalism, they give designers evidence to push back against engagement-maximizing pressures from within their own companies. The form matters because the stakes are real. Do this work as if someone outside your classroom might read it — because the intellectual habits it builds are exactly those the field needs.