Chapter 17 Exercises: Algorithms, the Attention Economy, and Filter Bubbles
Exercise 17.1 — Ingrid's Experiment: Tracking Algorithmic Personalization
Type: Individual observation project (ongoing, two weeks)
Difficulty: Intermediate
Time Required: Setup: 30 minutes. Ongoing observation: 15–20 minutes per day for 14 days. Final write-up: 1–2 hours.
Background
In the chapter's opening narrative, Ingrid Larsen describes an experiment she ran to observe algorithmic personalization in action: two Google accounts, identical search behavior, divergent recommendation outcomes. This exercise asks you to conduct a version of that experiment yourself — with a controlled design that will allow you to observe the feedback loop mechanism in real time.
Setup
- Create a completely new account on either YouTube or Twitter/X (not your existing account — its behavioral history will confound results). Use a throwaway email address. Do not connect it to your existing social media presence in any way.
- Before beginning any searches or engagement, take a screenshot of your starting state — the default "recommended" or "trending" content with no behavioral history. This is your baseline.
- Choose a topic area for your controlled engagement. The topic should be one you can engage with consistently for two weeks and that has genuine political or social content. Options: immigration policy, climate change, election integrity, public health policy, or economic inequality. Choose a topic where you can predict that strong partisan content exists.
Procedure
Days 1–3: Engage exclusively with mainstream, centrist, or widely respected sources on your topic. On YouTube, this means mainstream news networks; on Twitter, it means established newspaper accounts and nonpartisan policy organizations. Do not click on explicitly partisan content, even if it appears in your recommendations. Record what the algorithm is recommending at the end of each day.
Days 4–7: Shift your engagement slightly toward content that has a clear but moderate political tilt — one direction only (choose left-tilting or right-tilting, and be consistent). Again, avoid explicitly partisan or extreme content. Record recommendations daily.
Days 8–14: Maintain your Day 4–7 pattern without change. Do not escalate or de-escalate engagement. Record recommendations daily, noting any drift toward more extreme or emotionally intense content.
Documentation
Maintain a daily log with the following fields (an optional logging sketch follows the list):
- Date
- What you engaged with (describe briefly; no need to link)
- Top 5–10 items recommended at the end of the session
- Estimated political tilt of recommendations (neutral / center-left / left / far-left / center-right / right / far-right)
- Any notable escalation in emotional intensity or extremity of content
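If you prefer to keep your log in machine-readable form, the sketch below shows one way to do it in Python: each day's entry is appended to a local CSV file. This is optional; the file name `personalization_log.csv`, the field names, and the `log_day` helper are illustrative choices rather than part of the exercise requirements, and a spreadsheet works just as well.

```python
# Minimal daily-log helper for Exercise 17.1 (optional; a spreadsheet also works).
# File name and field names are illustrative, not prescribed by the exercise.
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("personalization_log.csv")
FIELDS = [
    "date",
    "engaged_with",         # brief description of what you engaged with
    "top_recommendations",  # top 5-10 recommended items, separated by ";"
    "estimated_tilt",       # neutral / center-left / left / far-left / center-right / right / far-right
    "escalation_notes",     # any notable jump in emotional intensity or extremity
]

def log_day(engaged_with, top_recommendations, estimated_tilt, escalation_notes=""):
    """Append one day's observations to the CSV log, writing a header on first use."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "engaged_with": engaged_with,
            "top_recommendations": "; ".join(top_recommendations),
            "estimated_tilt": estimated_tilt,
            "escalation_notes": escalation_notes,
        })

# Example entry for Day 1 (hypothetical):
# log_day("BBC News and AP coverage of the immigration bill",
#         ["AP explainer on border policy", "PBS NewsHour panel discussion"],
#         "neutral")
```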
Write-Up (500–750 words)
Answer the following questions (an optional drift-summary sketch follows the list):
1. Did your recommendations drift from your starting baseline? In what direction?
2. Did the drift match your engagement pattern (amplifying in the direction you were engaging) or diverge from it (pulling toward a different direction)?
3. At any point did you observe the escalation phenomenon — recommendations for content more extreme or emotionally intense than what you engaged with? Describe specifically.
4. Compare your observations to the Ribeiro et al. (2019) findings discussed in the chapter. What similarities and differences did you observe?
5. What are the limitations of your experiment? What would you need to change to make it more rigorous?
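Before answering the drift questions, a quick numeric summary of your log can help. The optional sketch below is a companion to the logging sketch above: it maps the tilt labels onto a crude -3 to +3 scale and compares the first and last logged days. The scale values and the `tilt_by_day` helper are assumptions for illustration, not a validated measure of political tilt.

```python
# Optional companion to the logging sketch above: summarize tilt drift over the 14 days.
# Assumes the same personalization_log.csv file and tilt labels used there.
import csv

TILT_SCALE = {
    "far-left": -3, "left": -2, "center-left": -1, "neutral": 0,
    "center-right": 1, "right": 2, "far-right": 3,
}

def tilt_by_day(path="personalization_log.csv"):
    """Return (date, numeric tilt) pairs in the order they were logged."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row["date"], TILT_SCALE.get(row["estimated_tilt"].strip().lower(), 0))
                for row in csv.DictReader(f)]

if __name__ == "__main__":
    days = tilt_by_day()
    if days:
        first, last = days[0][1], days[-1][1]
        print(f"Baseline tilt: {first:+d}, final tilt: {last:+d}, drift: {last - first:+d}")
```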
Exercise 17.2 — YouTube Pathway Audit
Type: Individual observation exercise
Difficulty: Intermediate
Time Required: 60–90 minutes
Background
This exercise asks you to conduct a structured observation of YouTube's recommendation pathway — specifically, to document the trajectory from a mainstream political starting point through a series of algorithm-driven recommendations. The goal is to observe, in real time, the pathway dynamics documented by Ribeiro et al.
Procedure
Step 1: Open YouTube in an incognito/private browser window (no behavioral history). Navigate to a mainstream news organization's YouTube channel (examples: BBC News, PBS NewsHour, NPR, Associated Press). Find a recent video on a political topic — a current event, a policy debate, an interview with an elected official. Watch at least 5 minutes of the video.
Step 2: Without searching for anything, click on the first "Up Next" or recommended video that appears in the sidebar or below the video. Watch at least 3 minutes of this video. Document: what it is, which channel it's from, how you would characterize its political content and emotional intensity.
Step 3: Repeat Step 2 for a total of 10 recommendation clicks from the starting point. At each step, document the video, channel, and your assessment of political content and emotional intensity on a 1–5 scale (1 = very moderate/measured; 5 = very extreme/emotionally intense).
Step 4: If at any point you reach a video that you would characterize as explicitly extremist — content promoting white nationalism, conspiracy theories about election fraud, advocacy for political violence, etc. — stop the chain and note at which step you arrived there. Do not continue clicking.
Documentation
Create a table with the following columns (an optional escalation check follows the table):

| Step | Video Title (abbreviated) | Channel | Platform Category (Mainstream/Alt/Extreme) | Political Lean | Emotional Intensity (1–5) |
|------|---------------------------|---------|--------------------------------------------|----------------|---------------------------|
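Before writing up, you may want a rough check on whether your documented chain escalates. The optional Python sketch below fits a least-squares slope to the emotional-intensity column of your table; the ratings shown are hypothetical placeholders, and a single slope is only a crude summary, not a substitute for describing the trajectory.

```python
# Optional helper for Exercise 17.2: a rough check on whether your documented chain escalates.
# The ratings below are hypothetical placeholders; substitute your own 1-5 ratings in click order.
intensity = [1, 2, 2, 3, 3, 3, 4, 3, 4, 4]  # example ratings for steps 1-10

def simple_slope(ys):
    """Least-squares slope of ratings against step number (positive = escalating)."""
    n = len(ys)
    xs = range(1, n + 1)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

print(f"Intensity change per click: {simple_slope(intensity):+.2f}")
# A clearly positive slope is consistent with escalation; a slope near zero suggests a stable pathway.
```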
Write-Up (400–600 words)
- Describe the trajectory you observed. Did the pathway escalate toward more extreme content? Remain roughly stable? Vary unpredictably?
- At which step (if any) did you cross what you would consider a significant threshold in terms of content extremity or emotional intensity?
- How does your observed pathway compare to the Ribeiro et al. finding of systematic pathways from mainstream to extreme content? Note: it is possible your pathway did not escalate significantly — that is a valid observation.
- What do you think accounts for any differences between your observation and Ribeiro et al.'s findings? (Consider: YouTube has made algorithmic changes since the study; your incognito window may behave differently than a logged-in account; your specific starting video matters.)
Exercise 17.3 — Filter Bubble Self-Assessment
Type: Individual reflection exercise with calibration component
Difficulty: Introductory
Time Required: 30–45 minutes
Background
This exercise asks you to assess your own information environment before engaging with the empirical literature on filter bubbles — and then to revisit your assessment after reading Bail et al. (2018) and the chapter's discussion of the echo chamber distinction. The goal is to calibrate your intuitions against the empirical record.
Part A: Pre-Reading Self-Assessment (complete before reading Bail et al.)
Answer the following questions as honestly as you can, based on your actual social media and news consumption habits:
1. Across all the social media platforms you use, what percentage of the political content you see would you estimate comes from perspectives you generally agree with? (0%–100%)
2. In the past week, how many times did you actively seek out a political article or post that you expected to disagree with? (Number)
3. When you see a political claim that contradicts your existing beliefs, what is your typical first response? (a) investigate it; (b) dismiss it; (c) feel uncomfortable but consider it; (d) share it as an example of misinformation; (e) other.
4. Do you believe your social media feeds show you a representative sample of political opinion in your country, an unrepresentative but ideologically diverse sample, or a sample skewed toward your own political perspective?
5. Rate your agreement with this statement: "My social media feeds have made me more politically extreme than I would otherwise be." (1 = strongly disagree; 5 = strongly agree)
Part B: Post-Reading Calibration (complete after reading chapter and Bail et al.)
After reading the chapter's discussion of filter bubbles, echo chambers, and the Bail et al. finding, answer:
1. Were your Part A estimates consistent with the empirical findings about filter bubbles discussed in the chapter? In what ways were you over- or underestimating your own filter bubble?
2. The Bail et al. finding suggests that exposure to opposing viewpoints can increase, not decrease, polarization. Does this match your own experience? Describe a specific instance if you can.
3. Revise your Part A Question 5 rating in light of the chapter's discussion. What is your revised rating, and what accounts for any change?
Part C: Reflection (250–400 words)
The Bakshy et al. (2015) study found that human choice drives more political information cocooning than algorithmic curation. Does this finding make you more or less concerned about filter bubbles? Does it change how you think about your own responsibility for the composition of your information environment? Be specific about what, if anything, you would change about your information consumption habits based on the chapter's findings.
Exercise 17.4 — Group Exercise: Designing an Information-Quality Algorithm
Type: Group exercise (3–4 students)
Difficulty: Advanced
Time Required: 60–90 minutes in class, plus 30-minute write-up
Background
Social media platforms optimize their recommendation algorithms primarily for engagement metrics. This exercise asks your group to design an alternative: an algorithm optimized for "information quality" rather than engagement, and to work through the tradeoffs that design involves.
Part A: Define Your Metrics (20 minutes)
As a group, define what you would want an information-quality algorithm to optimize for. Consider:
- Accuracy: Content that has been fact-checked and found to be accurate. How would you measure this at scale? Who would do the checking?
- Source credibility: Content from sources with established track records of accuracy. How would you define and measure source credibility? Who decides which sources are credible?
- Viewpoint diversity: Exposure to a range of political and cultural perspectives. How much diversity? Diversity of what — partisan framing, methodological approach, geographic perspective?
- Relevance: Content that is relevant to the user's actual interests and needs. How do you measure relevance without reverting to engagement metrics?
- Deliberative quality: Content that presents evidence and argument, not just assertion. How would you operationalize "deliberative quality"?
For each metric you choose to include, specify: (1) how you would measure it, and (2) who or what system would do the measuring.
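To make Part A concrete, the sketch below shows one possible shape for a composite information-quality score: a weighted sum over the kinds of metrics listed above. The metric names, the 0–1 scales, and the weights are assumptions for discussion, not a real platform's API; deciding how each input would actually be measured, and by whom, is exactly what Parts A and B ask you to work out.

```python
# A minimal sketch of what a composite "information-quality" ranking score could look like.
# Metric names, weights, and 0-1 scales are assumptions for discussion, not a real platform API.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    accuracy: float              # 0-1, e.g. fact-check outcome (who checks, and at what scale?)
    source_credibility: float    # 0-1, e.g. source track record (who decides credibility?)
    viewpoint_diversity: float   # 0-1, how much the item broadens the user's recent exposure
    relevance: float             # 0-1, fit to stated (not behavioral) interests
    deliberative_quality: float  # 0-1, evidence and argument vs. bare assertion

WEIGHTS = {
    "accuracy": 0.30,
    "source_credibility": 0.20,
    "viewpoint_diversity": 0.15,
    "relevance": 0.20,
    "deliberative_quality": 0.15,
}

def quality_score(signals: ContentSignals) -> float:
    """Weighted sum of the group's chosen metrics; every weight is itself a policy choice."""
    return sum(WEIGHTS[name] * getattr(signals, name) for name in WEIGHTS)

# Example: a well-sourced but low-engagement policy explainer.
print(quality_score(ContentSignals(0.9, 0.8, 0.4, 0.6, 0.7)))
```

Note that even in this toy form, every design question from Part B reappears as a concrete choice: who supplies the accuracy and credibility inputs, and why these weights rather than others.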
Part B: Identify the Tradeoffs (20 minutes)
For each metric you've defined, identify the primary tradeoff or problem it introduces:
- An accuracy metric requires fact-checkers. Who controls the fact-checkers? What happens when accurate information contradicts a politically powerful party's preferred narrative?
- A source-credibility metric advantages established outlets. What happens to independent journalism, whistleblower reporting, or citizen documentation of events that established outlets have not covered?
- A viewpoint-diversity metric exposes users to opposing views. What does the Bail et al. finding suggest about the effect of this?
- A relevance metric that is not based on behavioral data requires users to explicitly articulate their information needs. How many users would do this accurately?
Part C: Design Proposal (20 minutes)
Based on Parts A and B, draft a one-paragraph description of your information-quality algorithm. It should specify: what it optimizes for, how it measures those things, what safeguards address the tradeoffs you identified, and what you accept your algorithm will do worse at than engagement-optimized alternatives.
Write-Up (Individual, 300–500 words)
After the group exercise, each student should individually answer: What did this exercise reveal about the difficulty of replacing engagement optimization? Is there a technically feasible alternative to engagement-optimized recommendation, or is engagement optimization an inherent feature of any scalable recommendation system? Explain your reasoning.
Exercise 17.5 — Writing Prompt: Facebook, the Haugen Disclosures, and Moral Responsibility
Type: Analytical essay
Difficulty: Advanced
Length: 700–1,000 words
Background
The Frances Haugen disclosures showed that Facebook's own internal researchers had documented significant harms from the platform's design choices — harms to civic information quality, to electoral integrity, and to the mental health of teenage users — and that executives repeatedly chose not to make changes that would address those harms because doing so would reduce engagement.
The question this essay asks you to analyze is: what is the appropriate moral and legal characterization of this conduct?
Three Positions to Evaluate
Position 1: Negligence. Facebook failed to exercise reasonable care to prevent foreseeable harm. The standard for negligence does not require intent to harm; it requires failure to take reasonable precautions against harm that a reasonable actor should have foreseen. Facebook's own research documented the harm; a reasonable actor with that research would have taken mitigation steps; Facebook did not. This meets the standard.
Position 2: Fraud. Facebook knowingly made false representations — claiming its platform was safe and beneficial to users, and that it took safety seriously — while possessing internal research that contradicted those claims. This is the tobacco company pattern: internal knowledge of harm combined with external denial. Fraud requires knowing misrepresentation that induces reliance; if Facebook's public statements about platform safety induced users to continue using the platform, and if those statements contradicted what Facebook knew from internal research, the elements are arguably present.
Position 3: Acceptable business decision-making. Facebook made a business decision to prioritize growth and revenue. All businesses make such decisions, and they are not ordinarily subject to legal sanction. The research showed harms but also showed that users continued to use the platform voluntarily. Adults are entitled to use platforms that may not be optimal for their mental health or their information environment. The harm was real but was not of the type that society typically imposes legal liability for.
Your Task
Evaluate all three positions using the analytical frameworks developed in this chapter and earlier in the course. You should:
- Specify what factual questions matter most for choosing between the three positions (what additional evidence, if it existed, would resolve the question in favor of each position?).
- Apply the Big Tobacco analogy directly: in what ways does the Facebook/Haugen situation parallel the tobacco internal documents case, and in what ways does it differ?
- Reach a considered conclusion about which characterization is most accurate — while acknowledging the strongest objections to your conclusion.
- Address the policy implication: for whichever characterization you find most accurate (negligence, fraud, or acceptable business decision-making), what regulatory response, if any, does it imply?
You are not required to reach a definitive legal conclusion — that would require legal training this course does not assume. You are required to reason carefully through the analytical question and defend a position.