Appendix D: Templates and Worksheets — Practical Tools for Digital Agency
A Note on Using These Tools
The worksheets in this appendix are designed to be used, not read. The distinction matters. Reading about dark patterns and attention engineering is valuable; observing these mechanisms operating in your own specific life is transformative.
You do not need to complete all six tools at once. Some — the Personal Technology Audit and Algorithmic Audit Log — require sustained engagement over days or weeks. Others — the Dark Pattern Checklist and Ethical Design Rubric — are designed for single-sitting use. The FOMO Scale and Personal Digital Values Statement sit in between: each takes under thirty minutes but rewards revisiting over time.
A practical suggestion: start with the FOMO Scale to establish a baseline self-understanding, then run the 7-day Technology Audit to observe your actual behavior, then return to the Values Statement to articulate what you want to do differently. The other tools can be used as needed from there.
All worksheets in this appendix are in the public domain. You may reproduce, modify, and distribute them freely.
Worksheet 1: Personal Technology Audit — 7-Day Self-Assessment
Purpose
Most people significantly underestimate how much time they spend on social media and significantly overestimate the quality of that time. This audit creates an objective record over one week — enough time to establish patterns without requiring an unsustainable tracking commitment.
The audit has three components: time tracking, emotional state logging, and trigger identification. Completing all three gives you data you can actually use.
Instructions
- Complete each day's entry within two hours of the end of your day (or in real time if you prefer).
- Use your device's built-in Screen Time (iOS) or Digital Wellbeing (Android) app for the usage numbers — self-estimation is unreliable. Record the numbers, not your impression of them.
- Be honest in the emotional state entries. No one sees this but you.
Part A: Daily Screen Time Log
Copy this table for each of the 7 days. Fill in actual time (from device settings) for each platform.
Day __ Date: ____
| Platform / App | Morning (6am–12pm) | Afternoon (12pm–6pm) | Evening (6pm–12am) | Total for Day |
|---|---|---|---|---|
| TikTok | ||||
| Twitter / X | ||||
| YouTube | ||||
| Messaging apps (total) | ||||
| News apps (total) | ||||
| Other: __ | ||||
| DAILY TOTAL | ||||
Total screen time for all apps (from device settings): _____
Social media as % of total screen time: _____
Part B: Emotional State Log
Rate your emotional state on a 1–10 scale (1 = very low / distressed, 10 = very high / content) and add a brief note.
| Session | Platform | Duration | Mood BEFORE (1–10) | Mood AFTER (1–10) | Change (+/-) | Notes (what were you feeling?) |
|---|---|---|---|---|---|---|
| 1 | ||||||
| 2 | ||||||
| 3 | ||||||
| 4 | ||||||
| 5 | ||||||
| 6 ||||||
Daily emotional summary:
- Did social media use improve, worsen, or not change your mood today overall? _____
- Which session left you feeling worst? What were you doing? _____
- Which session left you feeling best? What were you doing? _____
Part C: Trigger Identification Log
For each significant use session (more than 10 minutes), record what triggered it.
| Session | Trigger (what prompted you to open the app?) | Intended duration | Actual duration | Match? |
|---|---|---|---|---|
| 1 | ||||
| 2 | ||||
| 3 | ||||
| 4 | ||||
| 5 ||||
Trigger categories (check all that appear in your log):
- [ ] Boredom / idle moment (waiting, transitioning)
- [ ] Anxiety / emotional distress (seeking distraction)
- [ ] Social obligation (responding to messages, notifications)
- [ ] Habit (automatic opening with no conscious trigger)
- [ ] Genuine need (looking up specific information)
- [ ] Seeking entertainment (intentional relaxation)
- [ ] Fear of missing out (checked because something might be happening)
- [ ] Other: _____
Part D: Weekly Summary Reflection
Complete these questions at the end of Day 7. Take at least 20 minutes and write substantive answers rather than single words.
1. Total weekly social media time: _____ Compare this to your estimate before you started tracking. Were you surprised? More or less than you expected?
2. Pattern analysis:
   - What platform consumed the most time?
   - What time of day was your heaviest use period?
   - Were your heaviest use days predictably different from lighter days (e.g., weekdays vs. weekends, stressed days vs. calm days)?
3. Emotional patterns:
   - Looking across all your mood-before vs. mood-after entries: did social media use, on average, improve or worsen your emotional state?
   - Which platform was most consistently positive? Most consistently negative?
   - Are there any conditions under which social media use seemed to reliably help you? (Be specific — "connecting with close friends after a hard day" is more useful than "sometimes.")
4. Trigger patterns:
   - What percentage of your sessions began with a genuine, intentional purpose?
   - What percentage began with habit or anxiety?
   - What would you have done with that time if the app hadn't been there?
5. The gap between intention and behavior: Look at your intended vs. actual duration column. How often did you stay longer than you meant to? Which platforms showed the largest gap between intention and actual duration? What does this suggest about how those platforms are designed?
6. If you could change one thing about your social media behavior based on this week's data, what would it be? Why that specific thing?
Worksheet 2: Dark Pattern Identification Checklist
Purpose
This checklist is designed to help you audit any digital interface — a social media app, a website, a game, an e-commerce platform — for the presence of dark patterns. It can be used for systematic platform analysis, for classroom or workshop use, or for personal awareness when you feel a platform is manipulating your behavior but cannot name how.
Instructions
Walk through each item in the context of a specific platform or interface you want to evaluate. Mark each item as: Present / Absent / Uncertain. After completing the checklist, count the number of items marked Present. A brief interpretation guide follows.
Category 1: Notification Manipulation
- [ ] Default-on notifications: All notification types are enabled by default; turning them off requires multiple steps.
- [ ] Urgency language in notifications: Notifications use time-pressured language ("Don't miss this," "Happening now," "Before it's gone").
- [ ] Badge counts: Unread counts displayed on app icons before the app is opened, creating persistent low-level anxiety.
- [ ] Notification bundling delay: Notifications are delayed and then delivered in batches to create a false sense of accumulated social activity.
- [ ] Re-engagement notifications: Notifications are sent specifically to users who have been inactive, designed to pull them back rather than to communicate genuine social information.
- [ ] Misleading notification content: Notification preview text is vague or misleading to maximize opens ("Someone commented on your post" rather than telling you what they said).
- [ ] Email notification overlap: You receive email notifications duplicating in-app notifications, creating multiple simultaneous interruption channels.
Category 2: Social Pressure Mechanisms
- [ ] Public like/reaction counts: Approval metrics are visible to everyone, creating social pressure to perform well and anxiety when performance is poor.
- [ ] Public follower/friend counts: Social status metrics are prominently displayed and unavoidable.
- [ ] Streak mechanics: Consecutive-day use is tracked and displayed, creating loss aversion (fear of breaking the streak) that compels daily use regardless of value.
- [ ] Read receipts (non-optional): You cannot disable read receipt indicators, creating social obligation to respond immediately.
- [ ] Online status indicators: Your live activity status is visible to others by default, creating social pressure around response times.
- [ ] Comparative display of metrics: Your metrics are displayed alongside others' (how your engagement compares to similar accounts, who is "trending").
- [ ] Social validation through visible engagement: Comments and reactions from others are prominently displayed as social proof to encourage conformity.
Category 3: Friction Asymmetry (Easy In, Hard Out)
- [ ] One-click engagement, multi-step disengagement: Subscribing, following, or creating an account takes seconds; unsubscribing, unfollowing, or deleting takes multiple steps across multiple pages.
- [ ] Account deletion buried in settings: The option to delete your account is not in an obvious location; finding it requires multiple navigation steps.
- [ ] Data portability obstacles: Downloading your own data is complicated, unintuitive, or produces formats that are difficult to use.
- [ ] Exit confirmation dark patterns: When you try to leave or delete, you are shown emotionally manipulative messages ("Are you sure? You'll lose your memories") rather than neutral confirmations.
- [ ] Subscription cancellation friction: Cancelling a paid subscription requires contacting support or navigating multiple confirmation screens, while signing up was a single click.
- [ ] Social cost of exit: The platform emphasizes what you will lose socially by leaving (your friends won't see you, your followers will lose updates) rather than what you gain.
Category 4: Attention Trap Features
- [ ] Infinite scroll: Content feeds have no natural stopping point; the user must actively choose to stop rather than reaching an end.
- [ ] Autoplay: Video or audio content begins playing automatically after the previous item ends, without active choice.
- [ ] Variable reward feeds: The feed mixes highly engaging content with ordinary content unpredictably, creating the same reinforcement pattern as a slot machine.
- [ ] Pull-to-refresh: A gesture interaction that triggers the same anticipatory reward cycle as pulling a slot machine lever.
- [ ] "You're all caught up" / "See older posts": After genuine completion, the platform offers to extend engagement with lower-quality content rather than allowing a natural exit.
- [ ] Cross-content recommendations: After completing one piece of content, immediate recommendations begin loading the next without pause.
- [ ] Content that rewards longer viewing: The algorithm explicitly promotes content that keeps you watching longer, regardless of whether longer content is better.
Category 5: Data Collection Deception
- [ ] Privacy theater consent interfaces: Cookie consent and privacy dialogs are designed to make the "Accept All" option visually prominent and the privacy-protective option hidden or small.
- [ ] Pre-ticked data sharing boxes: Data sharing options are pre-selected by default; opting out requires active unchecking.
- [ ] Impossible-to-understand privacy policies: Privacy policies are written to technically satisfy legal disclosure requirements while being practically incomprehensible.
- [ ] Off-platform tracking: The platform collects data about your behavior on other websites and apps without making this prominent in consent flows.
- [ ] Inferred sensitive data: Data about religion, politics, health, sexuality, or ethnicity is inferred from behavior and used for targeting without explicit disclosure.
- [ ] "Improving your experience" framing: Data collection that primarily serves advertising targeting is described as improving your user experience.
Scoring Interpretation
0–5 items present: Low dark pattern density. The platform has some concerning features but is relatively respectful of user autonomy. Worth monitoring for pattern increases.
6–12 items present: Moderate dark pattern density. The platform uses multiple psychological manipulation techniques, though not all. Be aware of specific patterns present and consider whether they affect your behavior.
13–20 items present: High dark pattern density. The platform is systematically designed to override user autonomy and maximize engagement at the expense of user well-being. Intentional use protocols are strongly advisable.
21+ items present: Very high dark pattern density. Nearly every feature of this platform is oriented toward capturing and holding attention through psychological manipulation. Significant caution warranted.
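If you audit several platforms and want to compare them, the tallying and banding above can be done in a spreadsheet or a short script. A minimal Python sketch follows; the `marks` counts are invented example values, and the function name is ours, not part of the checklist:

```python
def dark_pattern_band(present_count: int) -> str:
    """Map a count of checklist items marked Present to the density band above."""
    if present_count <= 5:
        return "Low dark pattern density"
    if present_count <= 12:
        return "Moderate dark pattern density"
    if present_count <= 20:
        return "High dark pattern density"
    return "Very high dark pattern density"

# Hypothetical per-category Present counts from one platform audit
marks = {
    "notification_manipulation": 5,
    "social_pressure": 4,
    "friction_asymmetry": 3,
    "attention_traps": 6,
    "data_collection": 2,
}
total_present = sum(marks.values())  # 20
print(total_present, "->", dark_pattern_band(total_present))
```

Keeping the per-category counts separate, as here, also shows *where* a platform concentrates its manipulation, which the single total hides.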
Worksheet 3: FOMO Assessment Scale
Purpose
This scale is adapted from the Fear of Missing Out (FoMO) scale developed by Przybylski, Murayama, DeHaan, and Gladwell (2013), published in Computers in Human Behavior. The original scale is validated and widely used in social media research; this version adapts the wording for self-assessment purposes. It is intended for self-reflection, not clinical diagnosis.
Instructions
Rate each item from 1 to 5 using the scale below. Answer based on how you generally feel, not how you feel on particularly good or bad days.
1 = Not at all true of me
2 = Slightly true of me
3 = Moderately true of me
4 = Quite true of me
5 = Extremely true of me
| # | Statement | Rating (1–5) |
|---|---|---|
| 1 | I fear others have more rewarding experiences than me. | |
| 2 | I fear my friends have more rewarding experiences than me. | |
| 3 | I get worried when I find out my friends are having fun without me. | |
| 4 | I get anxious when I don't know what my friends are up to. | |
| 5 | It is important to me to understand my friends' "in jokes." | |
| 6 | Sometimes I wonder if I spend too much time keeping up with what is going on. | |
| 7 | It bothers me when I miss an opportunity to meet up with friends. | |
| 8 | When I miss a planned get-together, it bothers me. | |
| 9 | When I go on vacation, I continue to keep tabs on what my friends are doing. | |
| 10 | When I have a good time, it is important for me to share the details online. | |
Total Score: _____ (Add all 10 ratings)
Scoring and Interpretation
10–20: Low FoMO. You generally feel secure in your social connections and do not feel strong anxiety about missing out on others' experiences. Social media is likely not a significant source of anxiety for you, though specific platforms or life circumstances might still trigger FoMO responses.
21–30: Moderate FoMO. You experience some anxiety about missing out on social experiences and likely check social media somewhat compulsively at times. You may find that your social media use is partly driven by anxiety rather than genuine desire to connect or be entertained.
31–40: High FoMO. Social anxiety about missing out is a significant feature of your relationship with social media. Your use is likely substantially driven by the need to feel included and up-to-date rather than by the intrinsic value of the content. This level of FoMO is associated with lower overall life satisfaction and greater susceptibility to engagement manipulation.
41–50: Very high FoMO. FoMO is a dominant force in your social media use. The anxiety about missing out may be causing you to use platforms in ways that are not aligned with your own values or goals. This score range is associated with significant well-being costs and may warrant deliberate intervention.
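If you prefer to score electronically, the sum-and-band logic is easy to sketch in Python. The ratings list below is a hypothetical example of one person's ten item ratings, not normative data:

```python
def fomo_band(total: int) -> str:
    """Map a total FoMO score (10 items, each rated 1-5) to the band above."""
    if not 10 <= total <= 50:
        raise ValueError("total must be between 10 and 50")
    if total <= 20:
        return "Low FoMO"
    if total <= 30:
        return "Moderate FoMO"
    if total <= 40:
        return "High FoMO"
    return "Very high FoMO"

ratings = [2, 3, 2, 4, 1, 3, 3, 2, 2, 3]  # example self-ratings for items 1-10
total = sum(ratings)  # 25
print(total, "->", fomo_band(total))
```

Re-running the scale every few months and keeping the totals lets you see whether your FoMO is trending with changes in your platform habits.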
Reflection Questions
After scoring, take a few minutes to respond to these questions in writing:
- Which specific items had your highest ratings? What do they suggest about the specific fears driving your social media use?
- Think of a specific recent situation where FoMO affected your behavior (checking your phone during dinner, staying up late scrolling, posting something primarily to make others aware of your experience). What were you actually afraid of?
- Consider the relationship between your FoMO score and your technology audit data (if you completed Worksheet 1). Does the score feel consistent with what you observed about your usage triggers?
- What would it actually cost you — concretely — if you missed an update, a trend, or a social event in the next 24 hours? Is that cost real, or imagined?
- FoMO is partly a social media design feature, not just a personality trait — platforms are specifically designed to trigger it. Knowing this, does your score feel like information about you, about the platform, or both?
Worksheet 4: Ethical Design Evaluation Rubric
Purpose
This rubric provides a structured framework for evaluating whether a specific digital platform feature serves its users or exploits them. It can be used to evaluate an existing feature, to compare two competing platforms, or as a design tool for practitioners creating digital products.
Instructions
Select a specific platform feature to evaluate (e.g., Instagram's suggested posts feed, TikTok's For You Page, Twitter's trending topics, Snapchat's Streaks, LinkedIn's "People Also Viewed" sidebar). Rate the feature on each of the five dimensions below using the anchor descriptions. Then total your score and consult the interpretation guide.
Feature being evaluated: ____ Platform: ____ Date of evaluation: _______
Dimension 1: Consent and Transparency
Does the user understand what this feature does and how it works? Was consent to the feature's operation genuinely informed and freely given?
| Score | Anchor Description |
|---|---|
| 1 | The feature operates invisibly; users have no meaningful understanding of how it works or that it is operating. No consent mechanism exists or it is purely nominal. |
| 3 | Some disclosure of the feature exists in documentation (privacy policy, help center) but it is not presented at the point of use. Most users would be unaware of the mechanism even if they had technically consented to terms of service. |
| 5 | The feature's operation is clearly explained at the point of introduction and at intervals during use. Users can see exactly why they are being shown this content or experiencing this feature. Consent is actively given rather than assumed from a general terms-of-service click. |
Rating for this dimension: _____
Dimension 2: Autonomy Preservation
Does the feature support or undermine the user's ability to make free choices about how they use their time and attention?
| Score | Anchor Description |
|---|---|
| 1 | The feature is specifically designed to override users' stated intentions (e.g., continuing to show content after a stated stopping point, making it more difficult to leave than to continue). It exploits known psychological vulnerabilities to produce behavior the user would not endorse on reflection. |
| 3 | The feature does not actively circumvent user intentions but also provides no support for the user's own stated goals. It is neutral regarding autonomy — neither helping nor harming the user's ability to make deliberate choices. |
| 5 | The feature actively supports user autonomy: it respects stated preferences about time limits, gives users meaningful control over its operation, provides easily accessible off-switches, and defaults to settings that align with users' stated goals rather than the platform's engagement metrics. |
Rating for this dimension: _____
Dimension 3: Attention Respect
Does the feature treat the user's attention as a resource to be protected or as a commodity to be extracted?
| Score | Anchor Description |
|---|---|
| 1 | The feature is engineered to maximize time-on-platform regardless of whether that time is valuable to the user. It uses variable reward schedules, autoplay, or infinite feeds to prevent natural stopping points. Its success metric is session length or return frequency, not user satisfaction or goal completion. |
| 3 | The feature consumes user attention but does not employ active psychological manipulation to extend engagement beyond the user's own inclinations. It is indifferent to whether the user's time is well-spent. |
| 5 | The feature is explicitly designed to respect the user's time. It has natural stopping points, provides clear signals of content completion, supports users in achieving their actual goals rather than maximizing browsing time, and does not sacrifice user time to platform engagement metrics. |
Rating for this dimension: _____
Dimension 4: Data Minimization
Does the feature collect and use only the data necessary for the function it explicitly provides, or does it exploit its position to gather additional behavioral data for advertising or other purposes?
| Score | Anchor Description |
|---|---|
| 1 | The feature collects far more data than its function requires. Behavioral data (viewing duration, emotional reaction, interaction patterns) is harvested as a secondary purpose without disclosure. This data is used for advertising targeting, behavioral profiling, or sold to third parties. |
| 3 | The feature collects data beyond strict functional necessity but provides some disclosure and limited opt-out mechanisms. Secondary data uses are real but not the dominant purpose of the feature. |
| 5 | The feature collects only data strictly necessary for the service it provides. Secondary uses of behavioral data are either absent or clearly disclosed, with genuine opt-out mechanisms. Users have visibility into what data the feature generates and meaningful control over it. |
Rating for this dimension: _____
Dimension 5: User Control
Does the user have genuine, accessible control over how this feature operates? Can they modify, disable, or exit it easily?
| Score | Anchor Description |
|---|---|
| 1 | The feature cannot be meaningfully disabled or modified. The only real option is to leave the platform entirely. Changing the feature's operation requires technical knowledge or a customer-support interaction beyond what ordinary users can manage. |
| 3 | Some control settings exist but they are buried in settings menus, require multiple steps to access, and may not work fully or reliably. A motivated user could modify the feature but an average user would not. |
| 5 | Controls are prominently accessible directly within the feature itself. The user can easily pause, modify, or disable the feature at the point of use. Default settings reflect user well-being rather than platform engagement preferences, and changes to settings are respected immediately and reliably. |
Rating for this dimension: _____
Total Score and Interpretation
| Dimension | Score |
|---|---|
| Consent and Transparency | |
| Autonomy Preservation | |
| Attention Respect | |
| Data Minimization | |
| User Control | |
| TOTAL | |
5–10: Exploitative. This feature prioritizes platform interests (engagement, data collection, revenue) over user interests across most dimensions. Users interacting with it are operating in an environment designed to override their own judgment and extract value from them.
11–15: Problematic. The feature has significant ethical design problems, though not uniformly. It may respect users in some dimensions while exploiting them in others. Users should be aware of specific weaknesses.
16–20: Mixed. The feature shows genuine attention to some ethical design principles alongside meaningful shortcomings. This score range suggests a feature that was designed with at least some user well-being considerations but that has not fully committed to them.
21–25: Ethical. This feature is designed with genuine respect for user autonomy, attention, and privacy. It represents good-faith ethical design. Features in this range are rare among major social media platform components.
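For practitioners scoring many features (for instance, comparing the same feature across two competing platforms), the rubric total and band can be computed mechanically. A minimal sketch, with hypothetical dimension scores for one imagined feature:

```python
def rubric_band(total: int) -> str:
    """Map a five-dimension rubric total (5-25) to the interpretation band above."""
    if not 5 <= total <= 25:
        raise ValueError("total must be between 5 and 25")
    if total <= 10:
        return "Exploitative"
    if total <= 15:
        return "Problematic"
    if total <= 20:
        return "Mixed"
    return "Ethical"

# Hypothetical ratings (1-5 per dimension) for a feature under evaluation
scores = {
    "consent_and_transparency": 2,
    "autonomy_preservation": 1,
    "attention_respect": 1,
    "data_minimization": 2,
    "user_control": 3,
}
total = sum(scores.values())  # 9
print(total, "->", rubric_band(total))
```

As with the dark-pattern checklist, the per-dimension scores are often more informative than the total: a feature can be transparent yet still attention-extractive.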
Worksheet 5: Personal Digital Values Statement
Purpose
A Personal Digital Values Statement articulates, in advance, what you want from technology and what you will not accept from it. It shifts you from reactive (responding to platform design with habitual behavior) to intentional (using platforms on your own terms, within limits you set yourself). It functions as both a decision-making tool in the moment and a periodic reflection document.
This is a living document. It should be revised every few months as your relationship with technology evolves.
Part 1: What I Want from Social Media
List 3–5 specific, concrete goals that justify your use of social media. "Staying connected" is too vague to be useful. "Maintaining regular contact with the three friends I no longer live near" is specific enough to evaluate against.
| # | Specific Goal | Which Platform(s) Serve This Goal | Is This Goal Currently Being Met? |
|---|---|---|---|
| 1 | |||
| 2 | |||
| 3 | |||
| 4 | |||
| 5 |||
Part 2: What I Will Not Tolerate
List 3–5 specific behaviors, experiences, or outcomes that you are no longer willing to accept as a consequence of your social media use. These are your non-negotiables.
Examples: checking my phone during meals with family; using social media after 10pm; feeling worse about myself after using Instagram; spending more than 30 minutes per day on TikTok; reading political content that leaves me anxious and powerless.
| # | Specific Limit |
|---|---|
| 1 | |
| 2 | |
| 3 | |
| 4 | |
| 5 |
Part 3: My Platform-Specific Commitments
For each platform you use at least weekly, complete this brief commitment statement.
Platform: _________
I use this platform for: _________
My maximum weekly time on this platform: _____ hours
Times / places I will not use this platform: _________
Accounts or content I will unfollow or mute: _________
If this platform consistently fails to serve my goals in Part 1 or violates my limits in Part 2, my response will be: _________
(Repeat for each platform)
Part 4: My Review Schedule
Technology use patterns change. Commit to a specific review schedule.
I will review and revise this document:
- [ ] Monthly
- [ ] Every three months
- [ ] Twice a year
- [ ] Once a year
Scheduled review dates: _______
Person (if any) I have shared this with and who will hold me accountable: _______
Part 5: My Personal Technology Manifesto
Write one to three sentences that capture your overall philosophy about social media and technology use. This should be something you could say out loud to someone you trust and feel is genuinely true of you.
You might complete sentence starters like:
- "I use social media to... not to..."
- "My attention is... and I choose to direct it toward..."
- "Technology serves me when... and stops serving me when..."
My manifesto:
Worksheet 6: The Algorithmic Audit Log
Purpose
This two-week log tracks what content an algorithm serves you, allowing you to detect filter bubble effects, emotional manipulation patterns, and the difference between what you choose and what is chosen for you. It is designed to make the algorithm's behavior visible.
Instructions
Complete one log entry per day for 14 consecutive days on a single platform of your choice. Choose the platform where you spend the most time or feel most uncertain about algorithmic influence. For each entry, record 5–10 pieces of content you encountered — enough to be representative, not so many it becomes burdensome.
Daily Log Entry Template
Day: _ Date: _ Platform: ____ Total time on platform today: _____
| # | Content Type | Emotional Valence | Confirms or Challenges Views? | Sought or Served? | Notes |
|---|---|---|---|---|---|
| 1 | |||||
| 2 | |||||
| 3 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 9 | |||||
| 10 |||||
Column Guide:
- Content Type: News, entertainment, personal update, advertisement, influencer content, political, health/wellness, sports, humor, other.
- Emotional Valence: Positive (+), Negative (-), Neutral (0), or Outrage/Anger (A).
- Confirms or Challenges Views? Confirms (C), Challenges (Ch), or Neutral/Not Applicable (N).
- Sought or Served? Did you actively search for or navigate to this content (S), or was it placed in your feed/recommended to you (Se)?
End-of-day questions:
1. What proportion of today's content was negative or outrage-inducing? _____ / total entries
2. What proportion confirmed rather than challenged your existing views? _____ / total entries
3. What proportion was served to you rather than actively sought? _____ / total entries
4. Did you feel, at the end of today's session, better or worse than when you started? (1–10 scale) _____
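If you keep the daily log in a spreadsheet or plain text, the end-of-day proportions can be computed mechanically using the Column Guide codes. A minimal Python sketch; the sample entries are invented for illustration:

```python
# One day's log entries, coded per the Column Guide:
# valence: +, -, 0, or A (outrage); views: C, Ch, or N; origin: S (sought) or Se (served)
entries = [
    {"valence": "A", "views": "C",  "origin": "Se"},
    {"valence": "-", "views": "C",  "origin": "Se"},
    {"valence": "+", "views": "N",  "origin": "S"},
    {"valence": "0", "views": "Ch", "origin": "Se"},
    {"valence": "-", "views": "C",  "origin": "Se"},
]

n = len(entries)
negative = sum(e["valence"] in ("-", "A") for e in entries)   # negative or outrage
confirming = sum(e["views"] == "C" for e in entries)          # confirms existing views
served = sum(e["origin"] == "Se" for e in entries)            # algorithmically served

print(f"Negative/outrage: {negative}/{n} ({100 * negative / n:.0f}%)")
print(f"Confirming:       {confirming}/{n} ({100 * confirming / n:.0f}%)")
print(f"Served:           {served}/{n} ({100 * served / n:.0f}%)")
```

Computing the same three proportions daily makes the two-week trend analysis below a matter of averaging rather than rereading fourteen pages of notes.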
Two-Week Reflection Analysis
After completing 14 days of entries, answer the following:
1. Content composition analysis:
   - Total entries: _____
   - Negative/outrage content: _____ (_____%)
   - Confirming vs. challenging: _____ confirming (_____%), _____ challenging (_____%)
   - Sought vs. served: _____ sought (_____%), _____ served (_____%)
2. Emotional pattern:
   - Average mood before sessions (if tracked): _____
   - Average mood after sessions (if tracked): _____
   - On days with more negative content, was your post-session mood consistently worse? Yes / No / Mixed
3. Filter bubble assessment: Did the algorithm consistently show you content that confirms your existing views? What topics were most over-represented? What topics or perspectives were almost entirely absent?
4. The served-vs-sought gap: What percentage of your total content consumption was algorithmically served rather than actively chosen? If you could only see content you actively searched for, what would you have missed? What would you not have missed?
5. Most revealing finding: After two weeks of data, what single thing did you notice about the algorithm's behavior that you had not consciously recognized before?
These worksheets may be reproduced for educational, classroom, workshop, or personal use. For organizations seeking to adapt these tools for clinical, research, or commercial applications, please consult the publisher.