Chapter 20 Quiz: The Outrage Machine: Anger as Engagement

Instructions

Select the best answer for each question. After completing all 22 questions, check your answers against the answer key at the end.


1. According to the chapter, what makes anger uniquely suited as an "engagement emotion" compared to joy or sadness?

A) Anger is easier for algorithms to detect in text than other emotions
B) Anger is high-arousal and approach-motivating, priming users for exactly the active engagement platforms measure
C) Anger is more socially acceptable on social media platforms than other negative emotions
D) Anger directly generates advertising revenue at higher rates than positive emotions


2. In the two-dimensional model of emotion, which combination of dimensions predicts the highest sharing behavior?

A) Low arousal, positive valence (contentment)
B) Low arousal, negative valence (sadness)
C) High arousal, positive or negative valence (excitement, anger, awe)
D) High arousal, positive valence only (excitement)


3. What is "moral outrage" and why is it described as more virally potent than simple anger?

A) Moral outrage is anger directed at political figures, which makes it more newsworthy
B) Moral outrage is self-regarding anger that prompts immediate individual action rather than social engagement
C) Moral outrage is other-regarding anger at perceived norm violations that is socially connective and demands group response
D) Moral outrage is a clinical term for anger that meets diagnostic thresholds for pathological emotional processing


4. Brady et al. (2017) found that for each additional moral-emotional word in a tweet, the retweet rate increased by approximately:

A) 2 percent
B) 10 percent
C) 20 percent
D) 50 percent


5. The study analyzed by Brady et al. focused on tweets about which three contested political topics?

A) Gun control, same-sex marriage, and climate change
B) Immigration, abortion, and healthcare
C) Foreign policy, taxation, and education
D) Racial justice, police reform, and housing


6. According to Berger and Milkman (2012), which emotional state was notably NOT associated with increased viral sharing, despite being a strong emotion?

A) Anger
B) Awe
C) Anxiety
D) Sadness


7. The engagement optimization feedback loop works as described in the chapter because:

A) Algorithms are explicitly programmed to prioritize angry content for commercial reasons
B) Platform engineers intentionally selected outrage content as a design goal
C) Algorithms identify content characteristics correlated with high engagement and amplify them, without awareness of emotional character
D) Users report that they prefer outrage content when surveyed about their preferences


8. Facebook's 2018 "meaningful social interactions" algorithm change is described as having backfired because:

A) Users complained loudly about seeing more posts from friends and less content from brands
B) The behavioral metrics used to identify "meaningful" content were the same ones that outrage content reliably generates
C) The change reduced overall engagement so severely that Facebook's stock price declined significantly
D) Third-party app developers were excluded from the new algorithm and lobbied against the change


9. The Kramer et al. (2014) emotional contagion study demonstrated:

A) That social media use is causally linked to clinical depression in adolescents
B) That Facebook users are significantly less happy than non-users on average
C) That manipulating the emotional valence of News Feed content changed users' subsequent emotional expression
D) That anger is the most contagious emotion in social networks


10. Why did the Kramer et al. (2014) study generate significant ethical controversy?

A) The researchers fabricated data about emotional contagion effects
B) The study was conducted without explicit informed consent from the approximately 700,000 affected users
C) Facebook paid researchers to find results favorable to the platform
D) The study used deceptive recruitment practices to attract vulnerable research participants


11. In the five-stage outrage cycle described in the chapter, what happens during Stage 3 — Counter-Reaction?

A) The algorithm identifies the content as high-engagement and begins amplifying it
B) Early viewers share the provoking content with their networks
C) Users with different moral frameworks encounter the outrage reactions and experience counter-outrage at the reaction itself
D) The outrage cycle begins to decay as emotional energy dissipates


12. The "ratio" on Twitter/X refers to:

A) The ratio of organic reach to paid promotion for political content
B) The proportion of a user's followers who engage with their content
C) A post where replies substantially exceed likes, indicating widespread disagreement or outrage
D) The percentage of morally-emotional language in a viral tweet


13. How does Jonathan Haidt's moral foundations theory help explain the tribal patterns of outrage amplification?

A) Haidt's theory explains that all humans share identical moral intuitions and that outrage is therefore universal
B) Haidt's theory shows that different political groups weight moral foundations differently, so algorithms learn to deliver customized outrage calibrated to each user's moral community
C) Haidt's theory demonstrates that political polarization predates social media and cannot be attributed to algorithmic amplification
D) Haidt's theory establishes that moral outrage is a pathological emotional state that algorithms should be designed to suppress


14. What were the six moral foundations identified by Haidt and colleagues?

A) Anger, Fear, Sadness, Joy, Disgust, and Surprise
B) Rights, Responsibilities, Justice, Care, Loyalty, and Liberty
C) Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression
D) Individual, Community, Nation, Religion, Nature, and Future Generations


15. The Facebook Files reporting, drawing on documents provided by Frances Haugen, revealed primarily that:

A) Facebook had deliberately programmed its algorithm to amplify outrage for commercial reasons
B) Facebook had identified outrage amplification harms through internal research but had not consistently acted on this knowledge
C) Facebook's algorithm had no measurable effect on political polarization or user emotional states
D) Facebook's CEO had personal knowledge of illegal activity and attempted to suppress research about it


16. Guillaume Chaslot's AlgoTransparency project documented what specific finding about YouTube's recommendation algorithm?

A) YouTube's algorithm preferentially recommended content from paid promoters over organic creators
B) YouTube's algorithm consistently recommended more extreme versions of political content as users engaged with political material
C) YouTube's algorithm showed no systematic bias in political content recommendations
D) YouTube's algorithm recommended illegal content at significantly higher rates than other platforms


17. What does the "outrage ratchet" term (attributed to Chaslot) describe?

A) A regulatory mechanism that progressively tightens restrictions on platforms that repeatedly amplify harmful content
B) A user behavior pattern of escalating outrage expression over time on social media
C) The algorithm's tendency to progressively recommend more extreme, emotionally activating content because extreme content generates more watch time
D) A journalistic technique for escalating coverage of platform harms through successive revelations


18. The chapter describes the content creator revenue alignment problem as creating which specific dynamic?

A) Creators who use outrage rhetoric are banned from platforms for community standards violations
B) Financial incentives push creators toward outrage content because it generates higher engagement and thus more revenue
C) Advertisers avoid outrage content, so creators face a tension between engagement and monetization
D) Creators from political minorities are systematically demonetized for outrage content while mainstream creators are not


19. In the Velocity Media sidebar, Dr. Aisha Johnson's argument against adding a disgust emoji was that:

A) Disgust is culturally specific and would create international moderation problems
B) Users already had sufficient ways to express negative reactions without a dedicated disgust option
C) Adding a disgust signal would train the algorithm to surface more disgust-inducing content, amplifying outrage
D) The legal liability associated with disgust-related content moderation was unacceptable


20. Ribeiro et al. (2019) described the network of political YouTube channels along which the recommendation algorithm created pathways as the:

A) Outrage Pipeline
B) Alternative Influence Network
C) Recommendation Radicalization Web
D) Extremism Amplification Cluster


21. The "intent/effect problem" described in the chapter's ethical analysis refers to:

A) The difficulty of proving that social media platforms intended to create specific psychological harms in users
B) The gap between platforms' original intentions (maximize engagement) and the outrage amplification effect, which becomes a choice once documented
C) The challenge of measuring the effect of outrage content on users' emotional states
D) The legal question of whether intent or effect determines liability in platform design cases


22. The chapter suggests that structural alternatives to the outrage machine include all of the following EXCEPT:

A) Optimizing for satisfaction ratings or self-reported well-being instead of raw engagement
B) Reducing virality mechanics by adding friction to reposting controversial content
C) Publishing algorithmic optimization targets for independent audit
D) Eliminating political content from social media platforms entirely


Answer Key

  1. B
  2. C
  3. C
  4. C
  5. A
  6. D
  7. C
  8. B
  9. C
  10. B
  11. C
  12. C
  13. B
  14. C
  15. B
  16. B
  17. C
  18. B
  19. C
  20. B
  21. B
  22. D