Chapter 8 Exercises: Algorithm Literacy


Exercise 8.1 — The Signal Audit (Individual or Group)

Objective: Develop fluency in reading platform analytics through the lens of signal quality.

Time: 60–90 minutes

What you need: Access to analytics for at least one platform where you have posted content. If you have no content yet, you will use a provided dataset (see Dataset 8-A in Appendix F).

Instructions:

Pull up the last 20 pieces of content you have posted on your primary platform. For each piece, record the following in a spreadsheet:

Column What to Record
Content ID Short title or date
Impressions How many people saw it
Views/Clicks How many actually watched/read
CTR Click-through rate (if available)
Completion % What percentage watched to the end
Likes Raw count
Comments Raw count
Shares/Sends Raw count (if available)
Saves Raw count (if available)
Follows Generated If visible in analytics

Once you have your data:

  1. Calculate an engagement rate for each piece: (likes + comments + shares + saves) / impressions × 100.
  2. Sort by completion rate. What do the top 5 pieces have in common?
  3. Sort by shares/saves combined. What themes emerge?
  4. Identify your single best-performing piece by each metric separately. Is it the same video/post across all metrics, or different ones?
  5. Write a 300-word analysis: What signals is your content generating well? What signals are weak? What does this suggest about what to produce next?
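The spreadsheet math in steps 1–3 can be sketched in a few lines of Python. The field names and sample values below are illustrative only, mirroring the column table above:

```python
# Engagement-rate calculation and sorting for Exercise 8.1.
# Field names and numbers are made-up examples, not real analytics data.

def engagement_rate(piece):
    """(likes + comments + shares + saves) / impressions x 100"""
    interactions = (piece["likes"] + piece["comments"]
                    + piece["shares"] + piece["saves"])
    return interactions / piece["impressions"] * 100 if piece["impressions"] else 0.0

pieces = [
    {"id": "2024-05-01 recipe", "impressions": 12000, "likes": 420,
     "comments": 35, "shares": 60, "saves": 110, "completion": 0.41},
    {"id": "2024-05-08 fail", "impressions": 48000, "likes": 3100,
     "comments": 290, "shares": 800, "saves": 520, "completion": 0.63},
]

# Step 1: engagement rate per piece
for p in pieces:
    p["engagement_rate"] = engagement_rate(p)

# Step 2: sort by completion rate, highest first
by_completion = sorted(pieces, key=lambda p: p["completion"], reverse=True)

# Step 3: sort by shares + saves combined
by_shares_saves = sorted(pieces, key=lambda p: p["shares"] + p["saves"], reverse=True)
```

In practice you would load all 20 rows from your exported spreadsheet rather than typing them in, but the arithmetic is the same.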

Reflection prompt: Was the piece you were proudest of creatively also your best-performing piece algorithmically? What does any gap between the two tell you?


Exercise 8.2 — Platform Algorithm Comparison Chart (Research)

Objective: Build a working reference document comparing the primary algorithmic signals of four major platforms.

Time: 45–60 minutes

What you need: Internet access, note-taking software

Instructions:

Using platform-published resources (help centers, creator academy pages, official blog posts) and high-quality secondary sources (academic papers, reputable journalism), research and build a comparison chart with the following structure for TikTok, YouTube, Instagram Reels, and one platform of your choice:

Factor TikTok YouTube Instagram Reels [Your choice]
Primary ranking signal
Discovery mechanism
Initial test audience size/approach
Highest-weight engagement type
How posting frequency affects reach
What hurts distribution
Monetization requirements
Key official resource

For each cell, note whether the information comes from official platform documentation (mark as "O"), academic research ("A"), creator community inference ("I"), or journalistic investigation ("J"). The source type matters for how confidently you should hold the information.

Deliverable: Your completed chart plus a 200-word reflection on which platform's algorithm is most transparent and why that might be.


Exercise 8.3 — The A/B Content Experiment Design (Individual)

Objective: Design a rigorous (given real-world constraints) A/B test for one element of your content strategy.

Time: 30–45 minutes to design; several weeks to run

Instructions:

Choose one variable to test. Good candidates:

  • Hook type: Question hook ("Have you ever wondered…?") vs. bold statement hook ("Most people get this wrong…") vs. visual hook (start with the most interesting moment)
  • Video length: Under 60 seconds vs. 60–180 seconds (for the same content topic)
  • Posting time: Early morning (6–8am) vs. prime time (7–9pm) for your time zone and audience
  • Thumbnail style (YouTube): Person-in-frame with text vs. graphic-only vs. before/after
  • Caption approach: Long descriptive caption vs. one-line caption + hashtags vs. no caption

Design document (write out each of these):

  1. Hypothesis: I believe [Variable A] will outperform [Variable B] because [reason based on what you know about your audience and the algorithm].

  2. Control variables: What will you keep constant? (Topic area, production quality, posting frequency, promotion effort)

  3. Sample size: Minimum 5 pieces in each condition. Why does this matter?

  4. Duration: At least 14 days after the last post in each condition before measuring.

  5. Success metrics (rank these in order of importance for your goals):
     - Completion rate
     - Shares/sends
     - Saves
     - Follows generated
     - CTR

  6. Decision rule: What result will cause you to adopt the winning approach? (e.g., "If Variable A outperforms Variable B on my top 3 metrics by more than 15%, I'll shift to Variable A as my default.")

  7. Null result plan: If there's no meaningful difference, what's your next experiment?
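A decision rule like the example in step 6 can be made mechanical, which keeps you honest when the results come in. Here is a sketch assuming the example's "top 3 metrics by more than 15%" rule; the metric names and all numbers are fabricated for illustration:

```python
# Decision-rule check for Exercise 8.3, using the example rule from step 6:
# adopt Variable A if it beats Variable B on the top 3 metrics by >15%.
# All data below is fabricated for illustration.
from statistics import mean

def wins(metric_a, metric_b, threshold=0.15):
    """True if condition A beats condition B on this metric by more than threshold."""
    avg_a, avg_b = mean(metric_a), mean(metric_b)
    return avg_b > 0 and (avg_a - avg_b) / avg_b > threshold

# Five posts per condition (the minimum sample size from step 3)
condition_a = {"completion": [0.52, 0.61, 0.48, 0.57, 0.55],
               "shares": [80, 120, 95, 140, 110],
               "saves": [40, 55, 38, 60, 52]}
condition_b = {"completion": [0.41, 0.39, 0.45, 0.43, 0.40],
               "shares": [70, 90, 85, 75, 80],
               "saves": [35, 30, 42, 38, 33]}

top_metrics = ["completion", "shares", "saves"]
wins_count = sum(wins(condition_a[m], condition_b[m]) for m in top_metrics)

# Adopt A only if it wins on all three top metrics
adopt_a = wins_count == len(top_metrics)
```

Writing the rule as code before you run the test is the point: you commit to the threshold up front instead of rationalizing it afterward.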

Note: Do not run this test yet — just design it. You'll revisit the results in Chapter 26 (A/B Testing) with a more sophisticated analytical framework.


Exercise 8.4 — The Ethics of Algorithm Optimization (Discussion or Written)

Objective: Examine the real tension between algorithmic optimization and content ethics with intellectual honesty.

Time: 45 minutes (individual written) or 60 minutes (group discussion)

Instructions:

Read the following three scenarios and respond to each with a 150–200-word analysis.

Scenario A: A cooking creator discovers that their "recipe fail" videos (where they mess something up and joke about it) get 3× the completion rate and 5× the shares of their actual recipe tutorials. The failure videos are honest and genuine — they really do mess up sometimes — but they're not really teaching cooking. Should they shift more of their content toward failures to chase the algorithmic signal? What does "authentic" mean here?

Scenario B: A political commentary creator discovers that videos expressing outrage about a specific opposing politician consistently get 10× the algorithmic reach of their more nuanced policy analysis videos. They believe the outrage content is factually accurate — the politician really has done the things they're outraged about. But they also know that outrage framing makes people less likely to engage productively with the underlying policy questions. How should they weigh algorithmic reach against the quality of civic discourse they're contributing to?

Scenario C: A creator in the mental health space discovers that their most algorithmically successful content involves vivid descriptions of mental health crises — content that is not intended to harm but that research suggests can have "contagion" effects on vulnerable viewers. The content helps a lot of people feel less alone. It also might trigger some people. The algorithm has no way to make this distinction. What responsibility does the creator bear? What responsibility does the platform bear?


Exercise 8.5 — Algorithm Archaeology: Tracking a Major Update (Research)

Objective: Develop pattern-recognition skills by tracing how one historical algorithm update affected creators.

Time: 60–90 minutes

Instructions:

Choose one of the following documented algorithm events to research:

  • YouTube's shift from view-count to watch-time optimization (2012)
  • Facebook's "meaningful social interactions" update (January 2018)
  • Instagram's chronological-to-algorithmic feed shift (2016)
  • TikTok's reported suppression of content by disabled, fat, or LGBTQ+ creators (reported 2020)
  • YouTube's "Adpocalypse" and brand-safety advertiser boycott (2017)
  • Twitter/X's algorithm open-source release (2023)

Research questions (answer each in at least 150 words):

  1. What changed, specifically? What signals or mechanics were altered?
  2. Which creators were hurt most? Which benefited?
  3. What was the stated reason for the change? What were likely unstated reasons?
  4. How did creators respond? What adaptations worked? What didn't?
  5. What does this event suggest about the relationship between creators and platforms?

Sources to consult: Look for journalism from The Verge, Wired, or BuzzFeed News at the time of the event; academic papers citing the event; YouTube or platform blog posts from the period; creator community discussions (forums, YouTube videos from affected creators).

Deliverable: A 750–1,000-word research report structured around the five questions above. Include a brief bibliography of at least four sources.


Exercise 8.6 — Platform Dependency Risk Assessment (Individual)

Objective: Quantify and strategize around your current algorithmic dependency.

Time: 30–45 minutes

Instructions:

Complete the following assessment honestly.

Part 1: Current State

List every platform/channel where you currently have an audience or presence. For each:
  - Platform name
  - Approximate audience size
  - What percentage of that audience would find you if the platform disappeared?
  - Does this platform have a discovery algorithm? (Yes/No)
  - How much of your audience came from algorithmic recommendation vs. direct search/referral?

Part 2: Revenue Attribution

If you currently earn any revenue from your content, estimate what percentage comes from each source:
  - Platform ad programs (YouTube AdSense, TikTok Creator Fund, etc.)
  - Algorithm-driven brand deals (brands found you because of algorithmic reach)
  - Direct audience relationships (email list subscribers, Patreon, paid community)
  - Other (consulting, speaking, products sold to a warm audience)

Part 3: Risk Score

Using the following rubric, calculate your Platform Dependency Risk Score:

Indicator Score
>80% of audience on one algorithm-dependent platform +3
No email list or direct contact with audience +2
Revenue primarily from platform ad programs +2
No content on non-algorithm platforms (podcast, blog) +1
Account on at least one platform with no history of strikes or violations -1
Email list with >500 subscribers -2
Revenue from direct products/services -2

Total score: ____

  4–8: High dependency. Significant vulnerability to algorithm changes or account issues.
  1–3: Moderate dependency. Some diversification, but room to grow.
  Below 1: Well-diversified. Your audience would survive most platform-level events.
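The rubric is simple enough to express as a short Python sketch. The indicator names below are illustrative labels for the rows of the table above:

```python
# Platform Dependency Risk Score from the Exercise 8.6 rubric.
# Indicator keys are illustrative names for the rubric's table rows.

RUBRIC = {
    "over_80pct_on_one_platform": 3,   # >80% of audience on one algorithm-dependent platform
    "no_email_list": 2,                # no email list or direct contact with audience
    "revenue_mostly_platform_ads": 2,  # revenue primarily from platform ad programs
    "no_non_algorithm_content": 1,     # no podcast, blog, or other non-algorithm presence
    "clean_account_on_some_platform": -1,  # account with no strikes/violations
    "email_list_over_500": -2,         # email list with >500 subscribers
    "revenue_from_direct_products": -2,  # revenue from direct products/services
}

def risk_score(indicators):
    """Sum the rubric points for every indicator that applies."""
    return sum(RUBRIC[name] for name, applies in indicators.items() if applies)

def risk_band(score):
    if score >= 4:
        return "High dependency"
    if score >= 1:
        return "Moderate dependency"
    return "Well-diversified"

# Example: one dominant platform, no email list, ad-heavy revenue,
# but a clean account record
example = {
    "over_80pct_on_one_platform": True,
    "no_email_list": True,
    "revenue_mostly_platform_ads": True,
    "no_non_algorithm_content": False,
    "clean_account_on_some_platform": True,
    "email_list_over_500": False,
    "revenue_from_direct_products": False,
}
score = risk_score(example)  # 3 + 2 + 2 - 1 = 6
```

The example creator scores 6 ("High dependency"), which is exactly the profile Part 4 asks you to make a plan against.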

Part 4: The Plan

Based on your score, identify one concrete action to reduce your dependency by 1 point. Write out specifically what you will do, by when, and how you'll measure whether it worked.


Exercise 8.7 — The Creator Platform Negotiation (Thought Experiment)

Objective: Develop a structural perspective on the platform-creator relationship by inhabiting the platform's decision-making framework.

Time: 45–60 minutes

Instructions:

You are the head of creator relations at a hypothetical video platform launching in 2026. You need to design an algorithm that balances:

  • Maximizing time users spend on your platform (your revenue driver)
  • Rewarding creators who produce high-quality content (your content supply)
  • Preventing the spread of misinformation, harassment, and harmful content
  • Ensuring creators from all demographic backgrounds have equitable access to algorithmic distribution
  • Not creating incentives for creators to produce rage-bait or engagement-bait

These goals are genuinely in tension with each other. Design your algorithm's top 5 ranking signals, explain what each signal is measuring, and explain the trade-off you made by choosing it.

Then write a one-page "creator transparency report" — the document you would publish to creators explaining how your algorithm works. How much do you reveal? What do you withhold, and why?

Share with the class: What were the hardest trade-offs? What did you end up prioritizing, and what does that reveal about the values embedded in any algorithm's design?