Chapter 26 Exercises: YouTube's Recommendation Engine and the Radicalization Pipeline


Comprehension and Recall

Exercise 1 [Short Answer] Describe YouTube's scale in concrete terms: how many logged-in monthly active users does the platform serve, and how much video is uploaded per minute? Why do these numbers matter for understanding the recommendation engine's social consequences?

Exercise 2 [Short Answer] Explain the difference between click-based and watch-time-based recommendation optimization. What problem with click-optimization led YouTube to change its primary metric in 2012? What new problem did watch-time optimization introduce?

Exercise 3 [Short Answer] Who is Guillaume Chaslot, what was his role at YouTube, and what did he do after leaving the company? Why is his testimony considered significant evidence about YouTube's recommendation system?

Exercise 4 [Short Answer] Describe the methodology of the Ribeiro et al. (2019) study. What did the researchers find about the relationship between mainstream political content and more extreme political content in YouTube's recommendation network?

Exercise 5 [Short Answer] What was "Elsagate"? Describe the phenomenon, explain how it was possible given YouTube's content moderation systems, and summarize the regulatory response.


Application Exercises

Exercise 6 [Self-Observation] With a fresh YouTube account (or in incognito mode to avoid personalization), start with a mainstream news video on a political topic of your choice. Follow the "Up Next" recommendations for ten steps without typing any new search queries. Document each video's title, channel, and your subjective assessment of its political intensity (on a scale from 1=neutral to 5=extreme). Chart the trajectory. What pattern do you observe?
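One way to chart the trajectory is with a short Python log. Everything below — the titles, channels, and intensity scores — is a hypothetical placeholder to be replaced with your own ten observed steps:

```python
# Hypothetical log of the "Up Next" walk; replace each entry with
# the (title, channel, intensity 1-5) tuples you actually observe.
steps = [
    ("Evening News: Budget Vote", "Network News", 1),
    ("Panel Debate Highlights", "Network News", 2),
    ("Why the Mainstream Gets It Wrong", "Commentary Channel", 4),
]

def summarize(steps):
    """Report start and end intensity and the total drift across the chain."""
    scores = [score for _, _, score in steps]
    return {"start": scores[0], "end": scores[-1], "drift": scores[-1] - scores[0]}

print(summarize(steps))
```

A positive "drift" value over ten steps would be consistent with the chapter's escalation pattern; a flat or negative value would complicate it.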

Exercise 7 [Creator Analysis] Choose a YouTube creator who produces political or social commentary content with more than one million subscribers. Analyze their content strategy using the chapter's framework: How would you characterize their content on the aspiration scale? How does the watch-time optimization dynamic shape their content approach? Does their content fit the "extremism premium" pattern described in the chapter?

Exercise 8 [Platform Comparison] Compare YouTube's recommendation design with Netflix's recommendation system. Both platforms use collaborative filtering and behavioral data to generate recommendations. What are the key differences in their business models, optimization objectives, and recommendation consequences? Does Netflix's recommendation system produce analogous "rabbit hole" effects? Why or why not?

Exercise 9 [Case Application] Maya watched a music tutorial and ended up watching politically extreme commentary through a sequence of recommendations. Trace the specific mechanism using the chapter's framework: what does each step in the recommendation chain optimize for, and how does each recommendation relate to the previous video?

Exercise 10 [Children's Content Analysis] Examine YouTube Kids' current content policies (available on YouTube's help center). Based on the chapter's discussion of Elsagate and the FTC settlement, assess whether the current policies adequately address the content safety failures documented in the case study. What vulnerabilities remain?


Critical Thinking

Exercise 11 [Argument Evaluation] YouTube's argument for its recommendation system is that it reflects user preferences: it recommends what users have demonstrated they want to watch. Evaluate this argument. What conception of user preferences does it rely on? What does it leave out? Is "expressed preference" an adequate proxy for "what users want"?

Exercise 12 [Mechanism Analysis] Explain the "extremism premium" dynamic in YouTube's creator economy. Why would a creator moving toward more extreme content encounter less competition and higher algorithmic rewards than one staying in the mainstream? Use supply and demand concepts to structure your analysis.

Exercise 13 [Causal Attribution] The self-selection versus algorithmic guidance debate asks whether users seek out extreme content or are led to it. Construct the strongest possible argument for each position, then evaluate the evidence. What would a research design look like that could definitively distinguish between the two explanations?

Exercise 14 [Policy Analysis] YouTube's 2019 "borderline content" policy distinguished between content that would be removed from the platform and content that would be "reduced in recommendations." Evaluate this distinction as a policy approach. What are its advantages compared to content removal? What are its limitations?

Exercise 15 [Ethical Analysis] The chapter describes how YouTube's creator incentive structure (watch time = revenue) financially rewards creators who produce emotionally intense or extreme content. Who bears moral responsibility for the content that results — individual creators who optimize for the algorithm, YouTube for creating these incentives, advertisers who fund the system, or users who consume the content?


Research and Investigation

Exercise 16 [Literature Review] The chapter discusses the Ribeiro et al. (2019) study. Find two subsequent peer-reviewed studies that either extend or challenge Ribeiro et al.'s findings about YouTube's recommendation radicalization pathways. For each study, describe the design, findings, and what they add to or change about the radicalization hypothesis.

Exercise 17 [Regulatory Research] Research the Digital Services Act (DSA), which the European Union enacted in 2022 and which applies to large online platforms including YouTube. Identify at least three specific provisions of the DSA that address the recommendation system transparency or safety concerns described in this chapter. Assess whether these provisions, if fully implemented, would address the radicalization pipeline problem.

Exercise 18 [Algorithmic Audit] Using AlgoTransparency (algotransparency.org), the tool developed by Guillaume Chaslot, examine the current top recommendations for a mainstream political query of your choice. Describe what you find. How do the recommendations compare to what you would expect based on the chapter's framework?

Exercise 19 [Comparative Platform Study] Research how YouTube's recommendation system compares to those of competing video platforms, specifically regarding radicalization pathway research: Has similar research been conducted on TikTok, Rumble, or other video platforms? What does the comparative evidence suggest about whether the radicalization dynamic is specific to YouTube's design or a more general feature of watch-time optimization?

Exercise 20 [Historical Analysis] The chapter notes that YouTube is the "second-largest search engine." Research the history of YouTube as an information resource: How did it transition from primarily entertainment to a major source of news and political information for younger demographics? What does this transition reveal about the stakes of its recommendation design?


Synthesis and Writing

Exercise 21 [Essay: Short] In 500 words, explain the "rabbit hole" effect using the specific mechanism described in the chapter. Why does watch-time optimization create incremental extremization? Use the concept of emotional intensity as a mediating variable.

Exercise 22 [Essay: Extended] In 1,000 words, analyze YouTube's response to evidence of radicalization pathways — including the 2019 borderline content policy, changes to its recommendation system, and its public communications. Evaluate whether these responses are adequate to address the problem. What would a more fundamental response look like?

Exercise 23 [Memo] Write a 600-word memo from Guillaume Chaslot's perspective, addressed to YouTube's leadership team, written at the point when he first identified the tendency of the recommendation algorithm toward extreme content. What do you observe? What do you recommend? What concerns do you anticipate will be raised in response?

Exercise 24 [Counter-Argument] The chapter presents strong evidence for the radicalization pipeline hypothesis. Write a 600-word counter-argument challenging this evidence. Address the methodological limitations of the key studies, the self-selection alternative explanation, and YouTube's own evidence that the problem has been substantially addressed.

Exercise 25 [Op-Ed] Write a 700-word op-ed arguing for specific regulatory requirements for recommendation system transparency. Your argument should specify exactly what information platforms should be required to disclose, to whom, and through what mechanism. Address likely objections from platform companies and free speech advocates.


Data and Quantitative Exercises

Exercise 26 [Scale Calculation] The chapter states that five hundred hours of video are uploaded to YouTube per minute. Calculate: (a) how many hours are uploaded per day; (b) how many years of video are uploaded per year; (c) if a moderator could review content in real time (one minute of video per minute of work), how many moderators would be needed to review all content uploaded in one day? What does this calculation reveal about the limits of content moderation?
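The arithmetic can be checked with a short script; the eight-hour moderator shift in part (c) is an assumption you should state in your answer:

```python
# Back-of-the-envelope calculation for Exercise 26.
UPLOAD_HOURS_PER_MINUTE = 500

# (a) hours uploaded per day
hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24

# (b) years of video uploaded per year
hours_per_year = hours_per_day * 365
years_of_video_per_year = hours_per_year / (24 * 365)

# (c) real-time review: an assumed 8-hour shift covers 8 hours of video
SHIFT_HOURS = 8
moderators_needed = hours_per_day / SHIFT_HOURS

print(f"{hours_per_day:,} hours/day")
print(f"{years_of_video_per_year:,.0f} years of video/year")
print(f"{moderators_needed:,.0f} moderators for one day's uploads")
```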

Exercise 27 [Simulation Analysis] Run the recommendation_chains.py code file associated with this chapter. Observe the output showing how content recommendations drift over a sequence of steps. Modify the parameter controlling the "extremism gradient" and observe how this affects the drift trajectory. Write a 300-word interpretation connecting your simulation results to the research discussed in the chapter.
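If you cannot access the chapter's recommendation_chains.py file, the core dynamic can be reproduced with a toy model. This is a minimal sketch, not the chapter's actual code: intensity is a value in [0, 1], and each recommendation nudges it upward by an assumed extremism_gradient plus noise.

```python
import random

def simulate_chain(steps=10, extremism_gradient=0.05, seed=0):
    """Toy drift model of a recommendation chain.

    Each step nudges content intensity upward by `extremism_gradient`
    on average, plus Gaussian noise, clipped to the [0, 1] scale.
    """
    rng = random.Random(seed)
    intensity = 0.2  # assumed starting point: a mainstream video
    trajectory = [intensity]
    for _ in range(steps):
        intensity += extremism_gradient + rng.gauss(0, 0.03)
        intensity = max(0.0, min(1.0, intensity))
        trajectory.append(intensity)
    return trajectory

print(simulate_chain())
```

Try extremism_gradient values of 0.0, 0.05, and 0.2 with a fixed seed and compare the final intensities; the comparison is the starting point for your 300-word interpretation.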

Exercise 28 [Economic Analysis] Research YouTube's advertising revenue (publicly reported as part of Google/Alphabet quarterly earnings). Using available figures, estimate what proportion of YouTube's revenue comes from political content creators who occupy the more extreme positions in the creator ecosystem. What does this estimate suggest about the economic stake YouTube has in the current recommendation system?
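A Fermi-estimate template can structure this exercise. Every input below is a placeholder, not a real figure; replace each with the numbers your research produces and document your sources:

```python
# Fermi-estimate template for Exercise 28. ALL inputs are placeholders
# to be replaced with researched figures; none is a real reported value.
youtube_annual_ad_revenue = 30e9   # placeholder: annual ad revenue, USD
political_content_share = 0.05     # placeholder: fraction of watch time
extreme_share_of_political = 0.3   # placeholder: extreme creators' share

estimate = (youtube_annual_ad_revenue
            * political_content_share
            * extreme_share_of_political)
print(f"Estimated revenue from extreme political content: ${estimate/1e9:.2f}B")
```

The point of the template is sensitivity analysis: vary each placeholder across a plausible range and see how the conclusion about YouTube's economic stake changes.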


Debate and Discussion

Exercise 29 [Structured Debate] Debate the following proposition: "YouTube has a responsibility to prevent its recommendation algorithm from leading users to extremist content, even if doing so reduces watch time and revenue." One team argues for the proposition; one argues against. Prepare evidence-based arguments and anticipate counterarguments.

Exercise 30 [Socratic Seminar] Discuss as a class: What would a "responsible" YouTube recommendation system look like? Should it optimize for watch time, user satisfaction, informational diversity, or some combination? How would you define and measure each of these objectives? Who should decide?

Exercise 31 [Role-Play] In small groups, role-play the Velocity Media internal debate about diversity constraints in recommendations. Each character (Johnson, Webb, Chen) should argue their position. After the role-play, debrief: what were the most compelling arguments? What compromises emerged? What remained irreconcilable?


Reflection and Personal Application

Exercise 32 [YouTube Audit] Review your YouTube watch history for the past month. Identify any instances where you watched a video that surprised you by being more extreme, partisan, or conspiratorial than your initial search query would have predicted. Describe how you got there. Does your experience confirm, challenge, or complicate the chapter's account?

Exercise 33 [Informed Consumption] For one week, practice "deliberate" YouTube use: before watching any recommended video, read the title, check the channel's About page, and make a conscious decision whether to watch rather than automatically playing the next video. At the end of the week, write a 400-word reflection on what you noticed. Did your video consumption change? How did you experience the recommendation interface differently?

Exercise 34 [Empathy Exercise] Describe, from the perspective of a YouTube engineer who worked on the recommendation algorithm in 2012-2019, what it might have felt like to learn that research suggested your work had contributed to radicalization pathways. How might cognitive dissonance, professional identity, and organizational incentives shape the response? What would you hope you would have done?

Exercise 35 [Design Alternative] Imagine you are designing a video platform that explicitly commits to recommendation transparency and user agency. Describe your recommendation system design: What would users know about why videos are being recommended? What controls would users have? How would you balance algorithmic efficiency with user understanding? What would you give up relative to YouTube's current design?