Chapter 8 Exercises: Platform Algorithms and the Attention Economy
Instructions
These exercises develop analytical, research, and computational skills related to platform algorithms and the attention economy. Written responses should be supported with specific evidence from the chapter readings and independent research where indicated.
Section A: Conceptual Understanding
Exercise 8.1 — Simon's Scarcity
Herbert Simon wrote in 1971 that "a wealth of information creates a poverty of attention." In your own words:
a) What does Simon mean by this statement? What is the economic logic?
b) How has this insight been operationalized by digital platforms specifically?
c) How does the "attention as commodity" framing change the analysis of platform algorithm design? What would be different if platforms sold information (not attention) to users?
Write 300-400 words.
Exercise 8.2 — Recommendation System Comparison
For each of the following recommendation approaches, explain: (a) the technical mechanism, (b) the type of data required, (c) how it might create a feedback loop or filter bubble, and (d) one specific misinformation implication:
- Collaborative filtering
- Content-based filtering
- Engagement-based recommendation (watch time optimization)
- Social graph-based recommendation (posts from people you follow)
Exercise 8.3 — EdgeRank and Engagement Signals
Facebook's News Feed algorithm uses multiple engagement signals to rank content. Explain why each of the following signals, used in isolation as a ranking factor, might create problematic incentives for content quality:
- "Angry" emoji reaction count
- Share count
- Comment count
- Watch time for video content
- Click-through rate for link posts
For each, suggest an alternative signal that might be a better proxy for content quality.
Exercise 8.4 — The Vosoughi Study
Vosoughi, Roy, and Aral (2018) found that false news spreads faster, farther, and more broadly than true news on Twitter, and that bots are not primarily responsible — human users drive the difference.
a) What specific mechanisms do the authors propose to explain why false news spreads faster?
b) Why is the finding about bots significant for intervention strategies?
c) What are the methodological limitations of the study? (Consider: how "true" and "false" are classified, the platform-specific nature of the findings, the time period studied)
d) The study was conducted before major platform algorithm changes. How might its findings apply differently to the current algorithmic social media environment?
Write 400-500 words.
Exercise 8.5 — Filter Bubble Evidence
Compare Pariser's theoretical filter bubble argument with the empirical findings from Guess, Nyhan, and Reifler (2018) and Bakshy, Messing, and Adamic (2015).
a) What does Pariser's argument predict about news consumption patterns in a personalized social media environment?
b) What do the empirical studies actually find?
c) How do you reconcile the empirical finding that filter bubbles are smaller than expected with the observation that political polarization has increased during the social media era?
d) Is there a version of Pariser's normative concern (about democratic citizenship and a shared information environment) that survives the empirical challenges to his descriptive predictions?
Write 400-600 words.
Section B: Research and Analysis
Exercise 8.6 — Personal Algorithm Audit
Conduct a personal algorithm audit over one week:
Part 1 — Baseline documentation: For three days, document every piece of content recommended to you by an algorithm (YouTube recommendations, Facebook/Instagram/TikTok feed, Spotify/Netflix suggestions, Google search results). For each, note:
- Platform and content type
- Your estimated confidence that the algorithm recommended it vs. you sought it
- Whether you found it informative or misleading
- The emotional response it generated
Part 2 — Manipulation experiment: For the next four days, deliberately engage with content outside your normal behavior pattern. If you typically watch news, watch cooking videos. If you typically engage with one political perspective, engage with the opposing perspective. Document what recommendations appear in response.
Part 3 — Analysis: Write a 600-800 word reflection: How rapidly did the algorithm respond to your behavioral change? What does this reveal about how the algorithm constructs your user profile? Did the experiment produce any unexpected results?
Exercise 8.7 — Platform Transparency Audit
Research and compare the algorithmic transparency of three major platforms. For each platform, find and analyze:
- The platform's public documentation of how its recommendation algorithm works
- Academic or journalistic research that has audited the algorithm's actual behavior
- Any discrepancies between the documented algorithm and audited behavior
Write a 500-700 word analysis of what these discrepancies (or lack thereof) reveal about platform transparency practices.
Exercise 8.8 — Frances Haugen Documentation Analysis
The Frances Haugen disclosures (2021) included internal Facebook research documents. Research the specific content of these documents using public reporting:
- What did Facebook's internal research find about the relationship between algorithmic amplification and content that generated fear, anger, or outrage?
- What specific product changes did Facebook's integrity team recommend, and which were implemented versus rejected?
- What did the research find about Instagram's effects on teenage mental health?
- What does the pattern of accepted vs. rejected recommendations reveal about how Facebook's decision-making process weighed commercial vs. social interests?
Write a 700-900 word analytical report with citations.
Exercise 8.9 — Search Algorithm Manipulation Investigation
Research a specific, documented case of search engine optimization being used to promote misinformation. Cases to research include: the "Google bombing" campaigns of the mid-2000s, health misinformation that has appeared in Google's "featured snippets," or the manipulation of image search results.
For your chosen case, document:
1. The specific mechanism of manipulation (what ranking signal was exploited)
2. The scale of the misinformation's reach via search
3. How Google or other search engines responded
4. Whether the response was effective
Write a 500-700 word case analysis.
Exercise 8.10 — Friction Intervention Design
Research the academic literature on "friction interventions" in digital information sharing (see Pennycook and Rand's work).
a) What is the theoretical basis for why friction interventions improve sharing accuracy?
b) What specific friction interventions have been tested empirically, and what were the results?
c) Design an original friction intervention for a specific platform and context. Specify:
- The platform and the specific sharing behavior you are targeting
- The friction mechanism you would introduce
- How you would measure the intervention's effectiveness
- What unintended consequences you would monitor for
Write 500-700 words.
Section C: Computational Exercises
Exercise 8.11 — Recommendation System Implementation
Using the code in code/example-01-recommendation-system.py as a starting point:
a) Run the collaborative filtering simulation with the provided synthetic user-item matrix.
b) Modify the simulation to introduce a "misinformation content" item that:
- Has very high initial engagement rates (high arousal)
- Is rated positively by users who have engaged with other high-arousal content
Measure how the recommendation system distributes this content compared to a "high-quality content" item with moderate engagement.
c) Write a 200-300 word explanation of what your results demonstrate about how engagement-based collaborative filtering treats high-engagement false content differently from moderate-engagement accurate content.
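The referenced example file is not reproduced here, but the core mechanism in part (a) can be sketched independently. The following is a minimal, illustrative user-based collaborative filter; the `recommend` helper, the matrix `R`, and all ratings are assumptions for demonstration, not the chapter's code. Item 4 plays the role of the high-engagement "viral" item: a user who has only touched one high-arousal item is matched to the high-arousal cluster and immediately receives it.

```python
import numpy as np

def recommend(ratings, user, k=2, n_items=2):
    """User-based collaborative filtering on a small user-item matrix.

    ratings: 2D array, rows = users, cols = items, 0 = unrated.
    Returns the top n_items unrated items for `user`, scored by the
    mean ratings of the k most similar users (cosine similarity).
    """
    target = ratings[user]
    # Cosine similarity between the target user and every row.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    sims = ratings @ target / np.where(norms == 0, 1, norms)
    sims[user] = -1                        # exclude the user themselves
    neighbors = np.argsort(sims)[-k:]      # k most similar users
    scores = ratings[neighbors].mean(axis=0)
    scores[target > 0] = -1                # only recommend unseen items
    return [int(i) for i in np.argsort(scores)[::-1][:n_items]]

# Toy matrix: users 0-1 favor high-arousal items (cols 0-1), users 2-3
# favor calmer items (cols 2-3). Item 4 is new "viral" content rated
# highly by the high-arousal cluster only.
R = np.array([
    [5, 4, 0, 1, 5],
    [4, 5, 1, 0, 5],
    [1, 0, 5, 4, 0],
    [0, 1, 4, 5, 0],
    [5, 0, 0, 0, 0],   # user 4 has engaged with a single high-arousal item
])
print(recommend(R, user=4))  # → [4, 1]: the viral item tops the list
```

One light engagement signal is enough to route the new user into the high-arousal cluster, which is the feedback loop part (b) asks you to measure at scale.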
Exercise 8.12 — Engagement-Emotion Analysis Extension
Using the code in code/example-02-engagement-emotion-correlation.py:
a) Extend the headline dataset to include 30 additional headlines across five categories. Ensure you include examples of:
- Accurate headlines with high emotional arousal (genuine disasters or crises)
- False headlines with low emotional arousal (dry false statistics)
- Accurate headlines with moderate arousal
b) Run the correlation analysis on your extended dataset. Does the arousal-engagement correlation persist?
c) Compute the correlation separately for accurate content (accuracy ≥ 0.8) and false content (accuracy ≤ 0.2). Is the arousal-engagement correlation stronger within one category than the other?
d) Write a 200-300 word interpretation: What do these sub-group correlations reveal about the mechanism driving the overall arousal-engagement relationship?
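The correlation analysis in parts (b) and (c) can be sketched as follows. All numbers below are made-up illustrative values, not the chapter's dataset, and the `pearson` helper is an assumption; the point is the shape of the analysis, not the results.

```python
import numpy as np

# Synthetic headlines: each row is (arousal 0-1, accuracy 0-1, shares).
data = np.array([
    (0.9, 0.1, 950), (0.8, 0.2, 700), (0.7, 0.1, 640),   # false, high arousal
    (0.2, 0.1, 120), (0.3, 0.2, 180),                    # false, low arousal
    (0.8, 0.9, 620), (0.9, 1.0, 710),                    # accurate, high arousal
    (0.4, 0.9, 210), (0.3, 1.0, 150), (0.5, 0.8, 300),   # accurate, moderate
])
arousal, accuracy, engagement = data.T

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length arrays."""
    return float(np.corrcoef(x, y)[0, 1])

print("overall r:", pearson(arousal, engagement))

# Part (c): compute the correlation within each accuracy sub-group.
for label, mask in [("accurate", accuracy >= 0.8), ("false", accuracy <= 0.2)]:
    print(label, "r:", pearson(arousal[mask], engagement[mask]))
```

If the arousal-engagement correlation holds within both sub-groups, arousal, not veracity, is doing the work, which is the interpretive question part (d) asks you to address.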
Exercise 8.13 — Algorithm Audit Simulation
Using the code in code/example-03-algorithm-audit.py:
a) Run the simulation with three distinct user profile types:
- A "news seeker" who primarily engages with political and current-events content
- An "entertainment user" who primarily engages with entertainment and lifestyle content
- A "mixed user" who engages with diverse content
b) For each profile, measure:
- The diversity of content consumed (use entropy as a measure of diversity)
- The proportion of high-misinformation content in the feed
- How quickly the algorithm "locks in" to a narrow content range
c) Write a 250-350 word analysis: Do different user behavioral profiles lead to systematically different misinformation exposure levels? What does this suggest about the "user choice vs. algorithm" question in the filter bubble debate?
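The entropy measure in part (b) can be computed directly from a feed's category labels. A minimal sketch (the feed contents shown are hypothetical):

```python
import math
from collections import Counter

def content_entropy(categories):
    """Shannon entropy (bits) of a feed's category mix.

    0 for a feed locked into one category; log2(k) for an even
    spread over k categories. Higher = more diverse consumption.
    """
    counts = Counter(categories)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

mixed_feed = ["news", "sports", "health", "news", "music", "sports"]
locked_feed = ["news"] * 6

print(content_entropy(mixed_feed))   # ≈ 1.92 bits: four categories, uneven mix
print(content_entropy(locked_feed))  # 0.0: fully locked in
```

Tracking this value over successive simulated sessions gives the "lock-in" curve the exercise asks for: a steadily falling entropy means the algorithm is narrowing the feed.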
Exercise 8.14 — PageRank Implementation
Research the basic PageRank algorithm and implement a simplified version using NetworkX:
a) Create a small web graph with approximately 20 nodes (web pages) and directed edges (hyperlinks between pages). Include:
- A "hub" page that links to many others (like a major news aggregator)
- An "authority" page that receives many links (like a major news source)
- Several "misinformation" pages with few organic inbound links
b) Compute PageRank scores for all nodes.
c) Now add 10 fake "link farm" nodes that all link to the misinformation pages. Recompute PageRank.
d) Compare the rankings before and after link farming. Write a 200-300 word explanation of what this demonstrates about the vulnerability of link-based authority metrics to manipulation.
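Parts (a)-(d) can be sketched with NetworkX's built-in `pagerank`. The graph below is a hypothetical toy example (fewer organic nodes than the exercise asks for, and all node names are illustrative), but it exhibits the manipulation the exercise targets.

```python
import networkx as nx

# Organic graph: a hub that links out, an authority that links in,
# and a "misinfo" page with a single organic inbound link.
G = nx.DiGraph()
G.add_edges_from(("hub", f"page{i}") for i in range(8))        # hub links out
G.add_edges_from((f"page{i}", "authority") for i in range(8))  # authority links in
G.add_edge("page0", "misinfo")                                 # one organic link

before = nx.pagerank(G)  # damping factor alpha=0.85 by default

# Part (c): a ten-node link farm whose only purpose is boosting "misinfo".
G.add_edges_from((f"farm{i}", "misinfo") for i in range(10))
after = nx.pagerank(G)

print(f"misinfo:   {before['misinfo']:.3f} -> {after['misinfo']:.3f}")
print(f"authority: {before['authority']:.3f} -> {after['authority']:.3f}")
# The farm sharply inflates misinfo's score: it now outranks the genuine
# authority despite receiving no real editorial endorsements.
```

Sorting the two score dictionaries gives the before/after rankings part (d) asks you to compare; the key observation is that every teleport-probability node the farm adds contributes rank to its target for free.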
Section D: Critical Thinking
Exercise 8.15 — The Algorithm as Editor
The chapter argues that algorithms make billions of "editorial" decisions that are analogous to the decisions made by human editors about what to feature on a front page.
a) What are the key similarities between human editorial decisions and algorithmic content ranking decisions?
b) What are the key differences — what can a human editor do that an algorithm cannot, and vice versa?
c) Human editors at major news organizations are subject to professional norms, editorial standards, and (in some cases) legal liability. Should platform algorithms be subject to analogous accountability mechanisms? What would this look like in practice?
Write 500-650 words.
Exercise 8.16 — Engagement vs. Information Quality
Frances Haugen testified that Facebook "realized that if they change the algorithm to be safer, people will spend less time on the site, they'll click on less ads, they'll make less money."
a) Evaluate this claim. Is there evidence that engagement and information quality are inherently opposed, or is this a contingent feature of Facebook's specific implementation?
b) Are there examples of platforms, products, or media organizations that have found ways to generate substantial engagement while maintaining information quality standards? What design features do they share?
c) Design a hypothetical platform business model that aligns financial incentives with information quality. What tradeoffs would it involve?
Write 500-700 words.
Exercise 8.17 — The Implied Truth Problem
Pennycook and Rand found an "implied truth" effect: when some false content is labeled, unlabeled false content benefits from an implicit credibility signal (users assume unlabeled content has passed a fact-check).
a) Explain the mechanism of the implied truth effect in detail. Under what conditions would it be most severe? Under what conditions might it be negligible?
b) What are the implications of the implied truth effect for the design of platform labeling systems?
c) Propose a labeling system design that addresses the implied truth problem while remaining practically feasible at the scale of major platforms.
Write 400-550 words.
Exercise 8.18 — TikTok vs. Facebook Architecture
Compare Facebook's social graph architecture with TikTok's interest graph architecture from a misinformation perspective.
a) For each architecture, identify: how misinformation enters the system, how it spreads, and who is most likely to be exposed to it.
b) Which architecture do you believe creates greater misinformation risks, and why?
c) Are there types of misinformation that are uniquely dangerous on one platform but not the other?
d) What interventions would be most effective on each platform, and why might interventions that work on Facebook be ineffective on TikTok (or vice versa)?
Write 500-700 words.
Exercise 8.19 — Regulation Design
The European Union's Digital Services Act (DSA, 2022) requires large platforms to assess and mitigate "systemic risks" including the dissemination of illegal content, negative effects on fundamental rights, and negative effects on "civic discourse or electoral processes."
a) Research the key requirements of the DSA's systemic risk provisions. What specifically must platforms do?
b) Evaluate the DSA's approach: Does it address the structural attention economy problems identified in this chapter? What does it miss?
c) Design an alternative or complementary regulatory intervention that addresses the engagement-optimization root cause more directly. Address: what behavior it would regulate, how compliance would be measured, what enforcement mechanism would apply, and what free speech implications it might have.
Write 700-900 words with research citations.
Exercise 8.20 — Historical Analogies
The attention economy and its misinformation consequences have historical analogies in other media technologies. Research one of the following:
- Yellow journalism and the Spanish-American War (advertising-driven sensationalism)
- Radio propaganda in Nazi Germany (state-captured broadcast attention)
- Television and the tobacco industry (advertising that obscured health evidence)
For your chosen case:
1. Identify the structural incentive that produced the misinformation problem
2. Describe the intervention (regulatory, market, social) that addressed it
3. Evaluate the analogy: what does it illuminate about the current platform algorithm problem, and where does the analogy break down?
Write 600-800 words.
Section E: Extended Research Projects
Exercise 8.21 — Original Algorithm Audit (3-week project)
Conduct an original algorithmic audit of a recommendation system of your choice. Your audit should:
- Define a specific research question (e.g., "Does YouTube recommend more extreme content after exposure to moderate political commentary?" or "Does TikTok's FYP surface health misinformation to users who engage with wellness content?")
- Design a systematic data collection protocol that controls for confounds (use multiple accounts or devices, document starting conditions, use consistent behavioral inputs)
- Collect data over at least two weeks
- Analyze results: does the algorithm's behavior match your hypothesis?
- Write a 1,500-2,000 word audit report including: research question, methodology, results, limitations, and policy implications
Note: Ensure your research complies with platform terms of service. Do not use automated scraping without explicit permission.
Exercise 8.22 — Comparative Platform Policy Analysis
Research and compare how three major platforms (choose from: YouTube, TikTok, Meta/Facebook, Twitter/X, Snapchat, Pinterest) have responded to concerns about engagement-optimization and misinformation.
For each platform, document:
1. What changes (if any) have been made to recommendation algorithms to reduce misinformation
2. What evidence exists about the effectiveness of these changes
3. What transparency the platform has provided about its algorithms
Write a 1,200-1,500 word comparative analysis, evaluating whether current platform responses are adequate to address the structural problems identified in this chapter.
Exercise 8.23 — Academic Literature Review
Conduct a literature review on the filter bubble hypothesis. Find and analyze at least six peer-reviewed studies that have examined whether algorithmic personalization reduces ideological diversity of news consumption. Your literature review should:
- Summarize the methodology and key findings of each study
- Identify patterns and inconsistencies across studies
- Assess what the overall weight of evidence suggests about the filter bubble hypothesis
- Identify gaps in the literature that future research should address
Write a 1,500-2,000 word literature review in standard academic format.
Exercise 8.24 — Original Simulation Extension
Extend the algorithm audit simulation in code/example-03-algorithm-audit.py to model one of the following:
a) The effect of "deplatforming" (removing the highest-engagement misinformation accounts from the system). How does this affect the total misinformation exposure of the user population?
b) The effect of cross-recommendation between ideologically diverse content clusters. If the algorithm occasionally recommends content from outside a user's typical cluster, how does this affect overall content diversity?
c) The dynamics of a "cold start" problem: what happens to a new user who has no behavioral history? How does the algorithm's initial default behavior shape their early exposure, and how does this shape their subsequent engagement pattern?
Your extension should include: hypothesis, implementation, results with visualization, and a 500-700 word interpretation.
Exercise 8.25 — Policy Brief: Algorithmic Accountability
Write a 2,000-2,500 word policy brief addressed to a legislative committee examining platform algorithm accountability. The brief should:
- Explain clearly (for a non-technical audience) how engagement optimization creates incentives toward misinformation
- Review the evidence from academic research and platform disclosures about the scale of algorithmic misinformation amplification
- Evaluate three specific regulatory approaches:
  - Algorithmic transparency and audit requirements
  - Engagement metric restrictions (prohibiting certain engagement signals, like "angry" reactions, from algorithmic ranking)
  - Fiduciary duty of care requirements (requiring platforms to prioritize user welfare over engagement)
- Make a specific, prioritized recommendation with an implementation timeline
- Address likely industry objections and potential free expression concerns
Your brief should be evidence-based, politically realistic, and clearly written for a policy audience.
End of Chapter 8 Exercises