Capstone Project 1: Auditing Your Own Digital Behavior
A 30-Day Personal Investigation
Project Overview
There is a particular kind of knowledge that comes only from turning the analytical lens on yourself. You can understand the dopamine reward loop as an abstraction; you can recognize variable ratio reinforcement as a design principle; you can catalog dark patterns in the abstract. But until you watch yourself pick up your phone seventeen times during a two-hour study session, check a notification before the sound has finished, or scroll for forty-five minutes when you intended to check one thing — the book's arguments remain at arm's length.
This project closes that distance.
Over thirty days, you will collect systematic data on your own digital behavior: not just screen time totals, but the context of each session, the emotional state you bring to it and leave with, the triggers that initiate use, and the consequences that follow. In the second week, you will apply the book's frameworks to the patterns you have documented. In the third week, you will run a controlled experiment — modifying one variable at a time and observing what changes. In the fourth week, you will analyze what the data actually shows, compare it to what you assumed before you started, and draw conclusions you can defend.
The project is demanding because data collection requires daily consistency. Sporadic record-keeping produces data that cannot support rigorous analysis. If a full thirty days is not feasible in your course context, a compressed version using a fourteen-day protocol is noted at each stage — but understand that the shorter timeline reduces the reliability of pattern identification in Week 2.
A Note on Honesty
This project will produce uncomfortable findings. Most people who complete it discover that their actual use patterns differ substantially from their self-reported assumptions. They find they check platforms at times they did not think they were checking. They find mood effects they had attributed to other causes. They find that their stated reasons for using platforms — staying connected, staying informed — account for a smaller fraction of their time than they expected.
This discomfort is analytically valuable, not a sign that you are doing something wrong. Record what you actually observe, not what you wish you had observed.
Learning Goals
By the end of this project, you will be able to:
- Apply the attention economy frameworks from Part I to analyze your own platform use with systematic rigor.
- Identify specific dark patterns from the Part III taxonomy that are operating in your daily digital behavior.
- Demonstrate, with documented evidence, the relationship between specific platform design features and measurable changes in your mood, attention, or behavior.
- Design and execute a simple controlled behavioral experiment and evaluate its results honestly.
- Synthesize personal data into a coherent analytical argument about the relationship between platform design and individual agency.
Week 1: Baseline Data Collection (Days 1–7)
Purpose
You cannot understand a pattern without a baseline. Week 1 establishes yours. Resist the urge to change anything about your behavior during this week. The goal is documentation, not improvement. If you change your behavior because you are observing it — reactivity, a well-known methodological challenge in behavioral research — note it, but do not let awareness of the problem cause you to overcorrect by pretending nothing is different.
Daily Data Collection Protocol
Each day, complete the Daily Log Template (see templates section) within thirty minutes of completing your last platform session for the day. Do not reconstruct the log from memory the following morning; by then, the affective details will have faded.
Platform-by-Platform Tracking
For each platform you use on a given day, record:
- Platform name
- Total time spent (in minutes, using your device's built-in screen time tracker if available)
- Number of discrete sessions (each time you opened the app counts as one session)
- Time of day for each session (morning, midday, afternoon, evening, late night)
- Device used (phone, tablet, laptop, desktop)
- Primary activity (scrolling feed, direct messaging, content creation, watching video, reading comments, posting, other)
Context Tracking
For each session, also note:
- What you were doing immediately before you opened the platform
- Whether you initiated the session yourself or responded to a notification, message, or other external prompt
- Your intended duration ("I'll just check for a minute") versus actual duration
- What interrupted or ended the session
Emotional State Tracking
Rate your emotional state on a simple 1–5 scale at three points:
- When you wake up (pre-platform baseline)
- Immediately after your longest platform session of the day
- At the end of the day

Use this scale:
- 1 = Notably low/negative (anxious, sad, irritable, depleted)
- 2 = Below neutral
- 3 = Neutral/flat
- 4 = Above neutral
- 5 = Notably high/positive (energized, calm, connected, satisfied)
You may add brief qualitative notes to any rating. Do.
Sleep Adjacency Tracking
Note whether you used any platform in the sixty minutes before sleep, and whether you checked any platform within fifteen minutes of waking. These patterns are discussed in Chapter 9 (notifications as triggers) and carry particular significance in the data analysis phase.
Week 1 Reflection (Complete on Day 7)
Before you have done any pattern analysis, write a 200-word informal response to these questions:
- What do you expect to find when you analyze this data?
- Which platform do you expect will account for most of your time?
- Do you expect a relationship between platform use and mood? If so, in which direction?
Seal this response metaphorically — do not re-read it until the Week 4 analysis. These are your priors. They matter.
Week 2: Pattern Identification (Days 8–14)
Purpose
With seven days of data in hand, Week 2 shifts from collection to analysis. You are looking for patterns: regularities, correlations, surprises. You are also beginning to connect what you observe to the frameworks from the book.
Analytical Tasks
Task 2.1: Quantitative Summary
Compile your Week 1 data into aggregate form using the Week 2 Summary Template. Calculate:
- Total time per platform across the week
- Average daily time per platform
- Percentage of sessions that were notification-triggered versus self-initiated
- Your average pre-session versus post-session mood ratings, by platform
- Number of sessions where actual duration exceeded intended duration by more than 5 minutes
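If you keep your logs in a spreadsheet or as structured records, these aggregates are straightforward to script. The sketch below computes the Task 2.1 metrics from a list of session records; the field names (`platform`, `minutes`, `trigger`, `intended`, `pre_mood`, `post_mood`) and the sample data are illustrative assumptions, not part of the official template.

```python
from collections import defaultdict

# Hypothetical session records transcribed from the Daily Log.
sessions = [
    {"platform": "Instagram", "minutes": 22, "trigger": "notification",
     "intended": 5, "pre_mood": 3, "post_mood": 2},
    {"platform": "Instagram", "minutes": 4, "trigger": "self",
     "intended": 5, "pre_mood": 3, "post_mood": 3},
    {"platform": "YouTube", "minutes": 35, "trigger": "self",
     "intended": 10, "pre_mood": 4, "post_mood": 3},
]

def summarize(sessions, days=7):
    """Aggregate Week 1 session records into the Task 2.1 metrics."""
    by_platform = defaultdict(list)
    for s in sessions:
        by_platform[s["platform"]].append(s)

    summary = {}
    for platform, group in by_platform.items():
        total = sum(s["minutes"] for s in group)
        notif = sum(1 for s in group if s["trigger"] == "notification")
        pre = sum(s["pre_mood"] for s in group) / len(group)
        post = sum(s["post_mood"] for s in group) / len(group)
        # Sessions that overran the intended duration by more than 5 min.
        overruns = sum(1 for s in group if s["minutes"] - s["intended"] > 5)
        summary[platform] = {
            "total_min": total,
            "daily_avg": round(total / days, 1),
            "sessions": len(group),
            "notif_pct": round(100 * notif / len(group)),
            "mood_delta": round(post - pre, 2),
            "overruns": overruns,
        }
    return summary
```

The output maps directly onto the rows of the Week 2 Aggregate Summary table, one dictionary entry per platform.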
Task 2.2: Trigger Analysis
Review every context notation from Week 1. Identify the three most common triggers for platform use. Consider:
- Boredom or transition (between tasks, waiting, commuting)
- Social anxiety or FOMO (someone might have responded)
- Reward-seeking (checking for likes, comments, new content)
- Habit (automatic, no clear motivational state)
- Genuine functional need (communicating with someone specific, finding information)
For each trigger category you identify, note which platform it most commonly leads to. Connect this analysis to Chapter 11 (FOMO) and Chapter 9 (notifications as triggers).
Task 2.3: Dark Pattern Identification
Return to Part III (Chapters 14–21) and identify at least three dark patterns you can document in your own use data. For each pattern, you need:
- The pattern name and its definition from the relevant chapter
- Specific evidence from your Week 1 data that this pattern is operating on you
- The platform(s) where you observed it most clearly
- The cognitive mechanism it exploits (connect to Part II)
Example: You notice that you open Instagram most often after a period of not posting, with anxiety about how previous posts performed. This is consistent with the variable ratio reinforcement structure described in Chapter 7 and with the social approval mechanics described in Chapter 10. The design feature enabling it is the notification delay — likes are not delivered in real time but in batches — which amplifies the unpredictability of the reward.
Task 2.4: Mood Correlation Analysis
Using your daily mood ratings, examine whether there is a consistent relationship between platform use and mood. Consider:
- Does mood before use predict whether you use more or less?
- Does mood after use consistently differ from mood before use? In which direction?
- Are there differences between platforms in their mood effects?
- Are there differences based on time of day, duration, or activity type?
Be careful with causal language here. Your data can identify correlations; it cannot, by itself, establish causation. This methodological point echoes the discussion of the mental health research debates in Chapter 30.
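If you want a single number summarizing the strength of an association, a Pearson correlation coefficient over your daily totals and mood ratings is sufficient for this project. The sketch below is a minimal, dependency-free implementation; the sample data is hypothetical. Note that r measures association only; it says nothing about which way causation runs, or whether a third factor drives both.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical Week 1 values: total platform minutes per day,
# paired with the end-of-day mood rating (1-5 scale).
daily_minutes = [95, 120, 80, 140, 110, 60, 130]
end_of_day_mood = [3, 2, 4, 2, 3, 4, 2]
r = pearson_r(daily_minutes, end_of_day_mood)
```

With only seven data points, treat any r value as suggestive at best; the Week 2 log extends the series.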
Continuing Daily Collection
Continue the Daily Log throughout Week 2. You are now collecting data you will use for the Week 3 experiment, so the baseline should remain unmodified.
Week 3: Experimental Modification (Days 15–21)
Purpose
Week 3 is where the project becomes a genuine experiment. Based on your Week 2 pattern analysis, you will identify one specific modification to test — one variable changed systematically — and collect data on the results.
Experimental Design Principles
A controlled experiment changes one variable at a time. This is harder than it sounds when the variable is your own behavior, but the principle is essential. If you simultaneously delete three apps, turn off all notifications, and implement a phone curfew, you will not be able to attribute any changes you observe to any specific cause.
Choose One Modification
Based on your pattern analysis, select one of the following experimental conditions (or propose your own, with instructor approval):
Option A — Notification Reduction: Turn off all non-essential notifications from your highest-use platform. Keep everything else constant. Track whether this changes session frequency, duration, trigger type, or mood effects.
Option B — Time-of-Day Restriction: Choose the time-of-day slot where your data shows the highest negative mood correlation with platform use. Block access during that period for the full week. Track whether use redistributes to other times and whether mood effects change.
Option C — Active vs. Passive Use Shift: For your highest-use platform, commit to only active engagement (commenting, messaging, creating) and no passive scrolling. Use a tally counter or phone timer to enforce this. Track duration changes and mood effects.
Option D — Platform Substitution: Replace your highest-use platform, for one week, with direct alternatives for each functional need it serves. (If you use Instagram to stay connected with friends, use direct messages instead. If you use it for inspiration, use a curated bookmarking tool.) Track how many uses you can substitute, how many you cannot, and what the unsatisfied residual looks like.
Data Collection During Week 3
Continue the Daily Log with one addition: each day, note your adherence to the experimental condition (full adherence, partial, or broke the condition) and if you broke it, what triggered the lapse. Non-adherence is analytically useful data, not a failure. A lapse triggered by a specific notification or social pressure tells you something important about the design features you are working against.
Midpoint Check (Day 18)
Write a brief (100 words) midpoint note: is anything unexpected happening? Are you finding the modification easier or harder than anticipated? Are there mood effects you did not predict?
Week 4: Analysis and Reflection (Days 22–30)
Purpose
Week 4 is synthesis. You are assembling everything — baseline data, pattern analysis, experimental results — into a coherent analytical account. This is where the capstone project becomes an argument, not just a data log.
Analytical Tasks
Task 4.1: Return to Your Priors
Retrieve your Week 1 Reflection (the 200-word note you wrote on Day 7). Read it now. Write a 300-word response:
- What did you expect? What did you actually find?
- Where were your assumptions most wrong?
- What does the gap between assumption and finding tell you about how well you understood your own behavior before starting?
This comparison is significant. Research in behavioral science consistently finds that self-reports of media use substantially underestimate actual use. If you find the same, you are in good company — and in a position to understand why the gap exists.
Task 4.2: Experimental Results Analysis
Analyze your Week 3 data against your Week 1–2 baseline. For each of your key metrics (total time, session frequency, mood ratings, trigger types), describe:
- What changed?
- What did not change?
- Were the changes in the direction you expected?
- What alternative explanations exist for what you observed?
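If your key metrics are in tabular form, the baseline-versus-experiment comparison can be scripted. The sketch below (metric names are hypothetical) reports the absolute and percent change for each metric shared between the two weeks; judging direction, significance, and alternative explanations remains your job.

```python
def compare(baseline, experiment):
    """Report absolute and percent change for each baseline metric."""
    report = {}
    for metric, b in baseline.items():
        e = experiment[metric]
        change = e - b
        # Percent change is undefined for a zero baseline.
        pct = round(100 * change / b, 1) if b else None
        report[metric] = {"week1": b, "week3": e,
                          "change": change, "pct": pct}
    return report

# Hypothetical weekly averages for two of the template's metrics.
week1 = {"total_daily_min": 120, "sessions_per_day": 10}
week3 = {"total_daily_min": 90, "sessions_per_day": 8}
report = compare(week1, week3)
```

The resulting dictionary fills the "Key Metric Changes" table in Template D row by row.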
Task 4.3: Framework Integration
Write a 600-word analytical essay applying at least three of the book's frameworks to your personal data. This is not a summary of the frameworks — it is an application. The essay should make a specific argument about what your data shows about the relationship between platform design and your own behavior, supported by evidence from your logs and connected to the book's concepts.
Task 4.4: Ongoing Strategy
Based on what you have learned, describe the two or three specific, concrete changes you intend to make to your platform use going forward. Be specific about the mechanism (why do you expect this change to work, given what your data showed?). Be realistic about what you will not change and why.
Data Collection Templates
Template A: Daily Log
DATE: ____________ DAY #: ____
MOOD ON WAKING (1-5): ____
PLATFORM SESSIONS:
| Platform | Start Time | Duration | Triggered by | Intended Duration | Activity Type |
|----------|------------|----------|--------------|-------------------|---------------|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
MOOD AFTER LONGEST SESSION (1-5): ____ Platform: ____________
MOOD END OF DAY (1-5): ____
SLEEP ADJACENCY:
Used platform in 60 min before sleep? Y / N Which: ____________
Checked platform within 15 min of waking? Y / N Which: ____________
NOTES (anything notable about today's use — an urge you noticed, something that surprised you, a pattern you're starting to see):
___________________________________________________________________________
___________________________________________________________________________
Template B: Week 2 Aggregate Summary
WEEK 1 AGGREGATE DATA
Platform Summary:
| Platform | Total Min | Daily Avg | Sessions | Notif-Triggered % | Avg Pre-Mood | Avg Post-Mood | Mood Delta |
|----------|-----------|-----------|----------|-------------------|--------------|---------------|------------|
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
Trigger Breakdown:
| Trigger Type | # Occurrences | Primary Platform |
|-----------------------|---------------|------------------|
| Boredom/Transition | | |
| Social Anxiety/FOMO | | |
| Reward-Seeking | | |
| Habit/Automatic | | |
| Functional Need | | |
Sessions Where Actual Duration > Intended by 5+ min: ____ (____% of all sessions)
Dark Patterns Identified (list):
1.
2.
3.
Template C: Dark Pattern Analysis Sheet
DARK PATTERN ANALYSIS
Pattern Name: _________________________
Source Chapter: ____ Definition: _________________________
Evidence from my data:
(Specific instances — dates, platforms, behaviors observed)
___________________________________________________________________________
___________________________________________________________________________
Cognitive mechanism exploited (cite Part II chapter):
___________________________________________________________________________
Platform(s) where observed: _________________________
Design feature enabling this pattern:
___________________________________________________________________________
My observed response to this pattern:
___________________________________________________________________________
Countermeasure I tested or could test:
___________________________________________________________________________
Template D: Week 3 Experimental Log
EXPERIMENT: _________________________ (modification type)
Start Date: ____________ End Date: ____________
Daily Adherence Record:
| Day | Adherence (Full/Partial/Broke) | What triggered any lapse | Notes |
|-----|-------------------------------|--------------------------|-------|
| 15 | | | |
| 16 | | | |
| 17 | | | |
| 18 | | | |
| 19 | | | |
| 20 | | | |
| 21 | | | |
Key Metric Changes (vs. Week 1 baseline):
| Metric | Week 1 Avg | Week 3 Avg | Change | Expected? |
|---------------------|------------|------------|--------|-----------|
| Total daily min | | | | |
| Sessions per day | | | | |
| Mood delta (post-pre) | | | | |
| Notif-triggered % | | | | |
Final Reflection Questions
These fifteen questions form the core of your Week 4 written reflection. You do not need to answer each one in equal depth; some will be more relevant to your data than others. Together, they should produce 1,500–2,000 words of substantive reflection.
- Which of the book's claims about platform design did your data most strongly support? Which did it fail to support, or actively contradict?
- Before this project, how accurate was your sense of how much time you spent on platforms? What does the gap — if there was one — suggest about self-knowledge and platform design?
- Which dark pattern had the most visible effect on your behavior? Why do you think that particular mechanism was effective on you?
- What was the relationship between mood and platform use in your data? Was it as simple as platforms making you feel worse? More complicated?
- How much of your platform use served genuine functional needs? How much, in retrospect, do you think served those needs effectively?
- Describe the moment in the data that most surprised you. What made it surprising?
- How did your Week 3 experiment go? What worked, what did not, and what does that tell you about the difficulty of behavioral modification against platform design?
- Chapter 36 discusses digital minimalism as one individual response strategy. Does your data support the premises that strategy rests on? Why or why not?
- Chapter 37 discusses cognitive defense and inoculation. Did awareness of dark patterns — gained through Part III — change your behavior in ways you could observe in the data?
- The book argues that individual behavior is heavily shaped by platform design choices, but that individual agency is not zero. Where in your data do you see design influence most clearly? Where do you see evidence of genuine agency?
- If you were presenting your data to a platform designer at the company whose app you used most, what would you want them to know? What design change would you advocate for, based on your personal evidence?
- Did the project change how you think about other people's platform use — particularly heavy users you know? In what direction?
- What are the limitations of this data set? What could not be concluded from it, no matter how carefully collected?
- If you repeated this project one year from now, what do you expect would be different? What do you expect would be the same?
- The book ends with the concept of a personal manifesto for digital agency (Chapter 40). Based on your thirty days of data, draft three sentences of your own manifesto — commitments grounded in what you actually observed rather than in idealized aspirations.
Evaluation Rubric
The following rubric is intended for instructor use. Students should review it before beginning the project to understand what a strong submission looks like.
Criterion 1: Data Quality and Consistency (20 points)
| Score | Description |
|---|---|
| 18–20 | All 30 daily logs completed; data is internally consistent; emotional ratings include qualitative notes; sleep adjacency tracked throughout; missing data (if any) is acknowledged and its effect on analysis addressed. |
| 14–17 | 25–29 daily logs completed; minor gaps do not materially affect pattern analysis; most tracking fields completed. |
| 10–13 | 20–24 logs completed; some fields inconsistently tracked; gaps are acknowledged but partially affect analysis reliability. |
| 5–9 | Fewer than 20 logs; significant fields missing; data quality limits analytical conclusions. |
| 0–4 | Insufficient data for meaningful analysis. |
Criterion 2: Pattern Analysis Depth (20 points)
| Score | Description |
|---|---|
| 18–20 | Three or more dark patterns identified with specific evidence from data; each connected to a named cognitive mechanism from Part II; analysis distinguishes correlation from causation appropriately; trigger analysis is specific and grounded. |
| 14–17 | Two or more patterns identified with reasonable evidence; connections to Part II present but could be deeper; trigger analysis identifies categories but may lack specificity. |
| 10–13 | Patterns named but evidence thin; connections to book frameworks present but generic; trigger analysis lists categories without analysis. |
| 5–9 | Pattern identification without evidence; frameworks referenced but not applied. |
| 0–4 | Pattern analysis absent or purely descriptive. |
Criterion 3: Experimental Design and Execution (20 points)
| Score | Description |
|---|---|
| 18–20 | One variable clearly identified and modified; adherence log complete including lapse documentation; results analyzed against baseline with specific metrics; alternative explanations considered; non-adherence treated as analytically useful rather than hidden. |
| 14–17 | Experiment conducted with clear design; some lapse documentation; results compared to baseline; limited consideration of alternative explanations. |
| 10–13 | Experiment conducted but design is unclear or multiple variables changed simultaneously; adherence not fully documented; results described but not analyzed. |
| 5–9 | Experiment attempted but not completed or not documented systematically. |
| 0–4 | Experimental component absent. |
Criterion 4: Analytical Synthesis (25 points)
| Score | Description |
|---|---|
| 22–25 | Framework integration essay makes a specific, defensible argument grounded in personal data; draws on frameworks from at least three book sections; acknowledges limitations and alternative interpretations; connects personal findings to broader arguments in the book without overgeneralizing. |
| 17–21 | Essay applies multiple frameworks with reasonable specificity; argument is present but could be sharper; some acknowledgment of limitations. |
| 12–16 | Essay summarizes frameworks alongside data but the two are not well integrated; argument is implicit rather than explicit. |
| 6–11 | Essay is primarily descriptive; frameworks referenced but not applied to data. |
| 0–5 | Analytical essay absent or does not engage with book frameworks. |
Criterion 5: Reflection Quality (15 points)
| Score | Description |
|---|---|
| 13–15 | Reflection questions answered with genuine intellectual engagement; prior assumptions compared honestly to findings; ongoing strategy is specific and mechanistically grounded; demonstrates growth in self-knowledge and analytical capacity. |
| 10–12 | Most reflection questions addressed with reasonable depth; some comparison between assumption and finding; strategy is concrete. |
| 7–9 | Reflection present but surface-level; strategy is vague or aspirational without mechanistic grounding. |
| 3–6 | Reflection minimal; prior assumptions not compared to findings. |
| 0–2 | Reflection absent or purely pro forma. |
Total: 100 points
This project is not an exercise in self-criticism. It is an exercise in applied analysis. The goal is not to conclude that you are addicted, that platforms are evil, or that you need to delete everything. The goal is to know, with evidence, what is actually happening — and to be in a position to make informed choices. That is what the whole book has been preparing you for.