Chapter 33: Quiz
Misinformation and Engagement Optimization: The Epistemic Crisis
22 multiple-choice questions. Select the best answer for each.
Question 1. Which of the following best describes the category of "malinformation" in the Wardle and Derakhshan framework?
A) False information shared without intent to harm
B) False information shared with intent to deceive
C) True information used with intent to cause harm
D) Fabricated information created by automated systems
Question 2. According to the Vosoughi, Roy, and Aral (2018) study in Science, which of the following was the PRIMARY driver of false news spreading faster than true news?
A) Automated bot networks amplifying false content
B) Human sharing behavior, driven by false news being more novel
C) Platform algorithms explicitly designed to prioritize false content
D) Political operatives coordinating to spread false narratives
Question 3. The Vosoughi et al. study found that false political news spread particularly rapidly. Compared to other false news categories, how much faster did political falsehoods spread?
A) Approximately the same rate as other false content
B) About two times faster than other false content
C) About three times faster than other false content
D) About ten times faster than other false content
Question 4. What does the "novelty hypothesis" in misinformation research propose?
A) Misinformation spreads because people enjoy sharing shocking content
B) False news spreads faster because it tends to be more novel and surprising than true news
C) Novel (new) misinformation is more dangerous than recycled misinformation
D) Social media platforms create novelty by randomizing content feeds
Question 5. How does an engagement-optimization algorithm become a structural amplifier of misinformation, according to this chapter's analysis?
A) Platform engineers deliberately program algorithms to amplify emotionally charged content
B) Because false content tends to be more emotionally arousing and novel, algorithms optimizing for engagement learn to surface it more often
C) Advertisers specifically pay to have emotionally arousing content prioritized
D) Misinformation creators use technical tricks to fool algorithms into thinking their content is popular
Question 6. Which of the following best describes the "rabbit hole" phenomenon documented in social media recommendation research?
A) Users who join social media become progressively more isolated from offline social networks
B) Users who begin with mainstream content are progressively recommended more extreme or conspiratorial content
C) Platforms deliberately hide accurate information from users who engage with misinformation
D) Users spend increasing amounts of time on social media each day as they become more addicted
Question 7. Chesney and Citron's concept of the "liar's dividend" refers to:
A) The financial profit that creators of misinformation receive from high engagement
B) The reputational benefit that platforms receive when they claim to be fighting misinformation
C) The ability to dismiss genuine evidence as fabricated because deepfake technology makes all video content questionable
D) The advantage that early spreaders of false information have over fact-checkers who arrive later
Question 8. The World Health Organization popularized the term "infodemic" to describe:
A) The deliberate spread of false information by foreign governments during health crises
B) An overabundance of information, accurate and inaccurate alike, that makes it hard to find trustworthy guidance during a crisis
C) The specific misinformation narratives that were most damaging during the COVID-19 pandemic
D) The scientific study of how false information spreads through populations, analogous to how diseases spread
Question 9. A 2020 study by Broniatowski and colleagues found which of the following about vaccine-hesitant users on Twitter?
A) They primarily received their misinformation from foreign government actors
B) They were likely to abandon vaccine hesitancy if provided with accurate scientific information
C) They existed in recommender-reinforced clusters where they were disproportionately exposed to vaccine-skeptical content
D) They were more likely than average users to identify misinformation as false before sharing it
Question 10. The Center for Countering Digital Hate's "Disinformation Dozen" research found that approximately what percentage of COVID-19 vaccine misinformation online originated from just 12 accounts?
A) 25 percent
B) 45 percent
C) 65 percent
D) 85 percent
Question 11. What is the "implied truth effect" documented in research on misinformation warning labels?
A) Users trust content more when it appears alongside an official warning label, because they assume the label provider reviewed everything
B) Content that is not itself labeled, appearing in a feed where some misinformation carries warnings, receives a boost in perceived credibility regardless of its accuracy
C) Warning labels cause users to believe the labeled content is true because only important content receives official attention
D) Platforms can legally claim they have fulfilled their misinformation obligations by applying labels, even if users ignore them
Question 12. Inoculation theory, applied to misinformation prebunking, is analogous to which medical concept?
A) Treatment: addressing the problem after it has already occurred
B) Quarantine: preventing exposure to false information entirely
C) Vaccination: building resistance by exposing people to weakened forms of the threat
D) Triage: prioritizing which misinformation deserves counter-intervention resources
Question 13. Research by Sander van der Linden and colleagues, in partnership with Google, found that prebunking videos reduced susceptibility to misinformation by approximately:
A) Less than 1 percentage point
B) 5 to 10 percentage points
C) 20 to 30 percentage points
D) More than 50 percentage points
Question 14. According to Penny Abernathy's research at Northwestern University, approximately how many local newspapers in the United States have closed between 2004 and 2022?
A) 500
B) 1,000
C) 2,500
D) 5,000
Question 15. The concept of "news deserts" refers to:
A) Regions where internet connectivity is insufficient for social media use
B) Communities without any local news coverage, typically due to newspaper closures
C) Countries whose media is entirely controlled by the government
D) Social media feeds entirely devoid of news content due to platform algorithm changes
Question 16. Research by Joshua Darr and colleagues found which of the following about communities in news deserts?
A) They were more likely to rely on foreign news sources for information
B) They showed higher rates of social media addiction than communities with robust local journalism
C) They were more likely to rely on national political media and showed reduced engagement with local issues
D) They were more likely to create their own local journalism through citizen journalism platforms
Question 17. Which of the following best describes "coordinated inauthentic behavior" (CIB) as defined by Facebook?
A) Networks of fully automated accounts (bots) that share identical content repeatedly
B) Networks of real people operating accounts in coordinated ways to artificially amplify specific narratives while deceiving others about the coordination
C) Platform employees who coordinate internally to remove content that violates community standards
D) Advertising campaigns that use inauthentic testimonials or reviews to promote products
Question 18. The Internet Research Agency (IRA), which conducted documented social media influence operations during the 2016 U.S. presidential election, was linked to which country?
A) China
B) Iran
C) North Korea
D) Russia
Question 19. In 2020, Twitter ran a pilot program that prompted users to read articles before retweeting them. What did the pilot find?
A) The prompt had no effect because most users dismissed it immediately
B) The prompt actually increased sharing of misinformation because it created artificial scarcity
C) Article read rates increased by 40 percent when the prompt was shown
D) Users who saw the prompt were more likely to share misinformation than those who did not
Question 20. The European Union's Digital Services Act, which entered into force in late 2022 with key obligations applying from 2023, includes which type of misinformation-relevant provision?
A) Criminal penalties for individual users who share misinformation
B) A requirement that platforms algorithmically suppress all unverified health claims
C) Provisions for algorithmic auditing of very large online platforms
D) A mandate that platforms adopt the EU's official fact-checking organization as the sole arbiter of misinformation
Question 21. In the context of the GameStop meme stock episode of January 2021, which statement most accurately describes the role of social media misinformation?
A) The entire episode was orchestrated by a foreign government using bot networks
B) Posts on WallStreetBets mixed legitimate short-squeeze analysis with misinformation about the company's fundamentals, and recommendation dynamics spread those claims to retail investors who lacked the expertise to evaluate them
C) Platforms' recommendation algorithms were manipulated by hedge funds to promote short-selling content
D) GameStop's own social media team spread misinformation about the company's business prospects to inflate its stock price
Question 22. Which of the following represents the most structurally adequate response to the misinformation crisis, based on the chapter's analysis?
A) Training individual users to identify and report misinformation through media literacy education
B) Having platforms hire more content moderators to manually review flagged content
C) Changing the optimization targets of recommendation systems and mandating algorithmic audits, alongside friction interventions and support for journalism
D) Holding individual creators of misinformation criminally liable for harmful false claims
Answer Key
1. C
2. B
3. C
4. B
5. B
6. B
7. C
8. B
9. C
10. C
11. B
12. C
13. B
14. C
15. B
16. C
17. B
18. D
19. C
20. C
21. B
22. C
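
Worked Example: Question 5
The mechanism behind answer B can be made concrete with a toy simulation. This is a minimal sketch, not taken from the chapter: the item model, the arousal distributions, and the engagement proxy are all illustrative assumptions. The point is that the ranker never sees truth values; it over-represents false content purely because falsehood correlates with emotional arousal, and arousal drives engagement.

```python
import random

random.seed(0)

# Toy item model (illustrative assumption): false items skew more
# emotionally arousing on average, per the novelty/arousal findings
# the chapter discusses.
def make_item(is_false):
    arousal = random.gauss(0.7 if is_false else 0.4, 0.15)
    return {"false": is_false, "arousal": max(0.0, min(1.0, arousal))}

# A 50/50 mix of false and true items.
items = [make_item(is_false=(i % 2 == 0)) for i in range(10_000)]

# The engagement-optimizing ranker: it knows nothing about truth.
# It simply sorts by predicted engagement (here, arousal alone).
def predicted_engagement(item):
    return item["arousal"]

feed = sorted(items, key=predicted_engagement, reverse=True)[:100]
false_share = sum(item["false"] for item in feed) / len(feed)
print(f"False items in top 100: {false_share:.0%}")  # far above the 50% base rate
```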
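
Worked Example: Question 22
The structural remedy named in answer C, changing the ranker's optimization target, can be illustrated by extending the same sketch. Again everything here is an illustrative assumption: the quality signal (think source credibility) and the blending weight are invented for the example, not taken from the chapter.

```python
import random

random.seed(0)

# Same toy setup, now with a hypothetical per-item quality signal
# that is lower on average for false items.
def make_item(is_false):
    return {
        "false": is_false,
        "arousal": random.gauss(0.7 if is_false else 0.4, 0.15),
        "quality": random.gauss(0.3 if is_false else 0.7, 0.15),
    }

items = [make_item(is_false=(i % 2 == 0)) for i in range(10_000)]

QUALITY_WEIGHT = 0.6  # illustrative value, not from the chapter

def blended_score(item):
    # Optimization target changed from engagement alone to a blend
    # of the engagement proxy and the quality signal.
    return (1 - QUALITY_WEIGHT) * item["arousal"] + QUALITY_WEIGHT * item["quality"]

feed = sorted(items, key=blended_score, reverse=True)[:100]
false_share = sum(item["false"] for item in feed) / len(feed)
print(f"False items in top 100 under the blended target: {false_share:.0%}")
```

Under these assumptions the false share at the top of the feed drops from near 100 percent to a few percent. The exact numbers do not matter; what matters is that the amplification in Question 5 is a property of the objective being optimized, not of any deliberate intent.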