Chapter 16: Quiz

Digital Media, Social Networks, and Viral Spread
Propaganda, Power, and Persuasion: A Critical Study of Influence, Disinformation, and Resistance


Instructions: Select the best answer for each question. Questions are based on the reading, case studies, and research breakdowns in Chapter 16. Answers and explanatory notes are in Appendix B.


Question 1

Which of the following most accurately describes what makes social media structurally different from previous propaganda channels such as radio and television?

A) Social media is faster than radio and television, which allows propaganda to reach audiences before corrections can be issued.

B) Social media eliminated the resource constraints that had previously concentrated broadcasting power, while simultaneously creating algorithmic amplification, network effects, and dark social channels that are structurally hospitable to propaganda.

C) Social media is regulated differently from radio and television because it operates exclusively through private companies rather than public licenses.

D) Social media allows only one-way communication, which means audiences have no mechanism for checking claims they receive.


Question 2

Vosoughi, Roy, and Aral's 2018 study of true and false news on Twitter found that false news spread faster and farther than true news. When the researchers looked for the explanation, what did they find?

A) Automated bot accounts were primarily responsible for the greater spread of false news, because bots systematically amplified false content.

B) False news was produced by wealthier organizations with more sophisticated distribution strategies, giving it a structural reach advantage.

C) Humans — not bots — were primarily responsible for the greater spread of false news, driven by false news's greater novelty and higher levels of emotional arousal (surprise, fear, disgust).

D) False news spread faster because it was shorter than true news, making it easier to read and share quickly.


Question 3

The Internet Research Agency (IRA) operated a large-scale social media influence campaign targeting American users. According to the Senate Intelligence Committee's assessment, what was the IRA's primary strategic goal?

A) To elect Donald Trump president of the United States by generating favorable content and suppressing coverage of his opponents.

B) To deepen social division and undermine trust in American democratic institutions, targeting multiple communities including both supporters and opponents of various candidates.

C) To gather intelligence on American political activists by identifying which users engaged with divisive content.

D) To test Russian psychological operations techniques in a real-world environment for use in future military operations.


Question 4

Facebook's internal research, revealed by Frances Haugen in 2021, found that "angry" reactions drove how many times more distribution in the News Feed than "like" reactions?

A) Two times more

B) Three times more

C) Five times more

D) Ten times more


Question 5

Which of the following is NOT a characteristic of dark social that makes it a particularly challenging propaganda channel to counter?

A) End-to-end encryption means platforms cannot monitor or moderate content in private messages.

B) Forwarded messages lose origin attribution, so recipients cannot identify the original producer of the content.

C) Dark social platforms make all shared content visible to public researchers so that the scale of disinformation can be measured.

D) Content in dark social channels carries the social proof of trusted personal relationships, making it more persuasive than content from strangers on public platforms.


Question 6

The chapter discusses WhatsApp disinformation in India between 2017 and 2019. Which of the following most accurately describes what researchers and journalists documented?

A) WhatsApp disinformation in India was primarily focused on electoral campaigns, and its effects were mostly confined to changing voting behavior.

B) False content spread through WhatsApp groups — including fabricated accusations of child kidnapping, with photographic "evidence" — contributed directly to mob violence and the deaths of individuals who were innocent of the accusations against them.

C) Indian government authorities successfully intercepted and corrected WhatsApp disinformation before it caused physical harm, demonstrating that government monitoring can address dark social propaganda effectively.

D) WhatsApp's end-to-end encryption protected Indian users from disinformation by preventing large-scale coordinated campaigns from reaching private groups.


Question 7

Pennycook, McPhetres, Zhang, Lu, and Rand (2020) tested an "accuracy-nudge" intervention. What did the intervention consist of, and what did it find?

A) Participants were shown detailed fact-checks of specific false headlines before making sharing decisions; this reduced sharing of false content by approximately 50 percent.

B) Participants were asked to rate the accuracy of one unrelated headline before a sharing decision task; this minimal accuracy priming significantly improved participants' ability to distinguish accurate from inaccurate content when deciding whether to share.

C) Participants were shown warning labels on known false content; those who saw labels shared false content at half the rate of those who did not.

D) Participants were given a media literacy training session before the study; trained participants showed dramatically better accuracy in sharing decisions, but only for content in their area of prior expertise.


Question 8

The IRA account "Blacktivist" achieved its influence partly by mixing legitimate civil rights content with electoral demobilization messaging. What does this mixing strategy tell us about how sophisticated propaganda operations build effectiveness?

A) It tells us that propaganda is most effective when it is entirely false, because mixed content confuses the audience and reduces persuasive impact.

B) It tells us that credibility must be established through content the audience finds authentic and valuable before the strategic messages embedded within it can achieve persuasive effect — the legitimate content earns the trust that the strategic content then exploits.

C) It tells us that platforms are unable to distinguish between legitimate activism and propaganda because they look identical, and that this is the primary reason propaganda evades moderation.

D) It tells us that the IRA was sympathetic to Black American civil rights and included authentic content because its operators had personal commitment to those causes.


Question 9

Section 230 of the U.S. Communications Decency Act (1996) is central to the "publisher vs. platform" debate. What does Section 230 provide?

A) A requirement that internet platforms review and approve all user-generated content before it is published, to ensure it meets minimum accuracy standards.

B) Legal immunity for internet platforms from liability for user-generated content, on the grounds that platforms are not "publishers or speakers" of that content.

C) A regulatory framework requiring social media companies to register with the Federal Communications Commission and submit to broadcast content standards.

D) Criminal penalties for users who knowingly post false information on social media platforms, with platforms required to report violations to law enforcement.


Question 10

The chapter presents three positions on social media regulation. "Position C: Structural Regulation" argues that:

A) Platforms should be required to register as publishers and bear full legal liability for all user-generated content, with no exceptions for good-faith moderation decisions.

B) Section 230 must be completely preserved because any publisher liability regime would destroy the user-generated internet and concentrate broadcasting power in a few large institutions.

C) The regulatory focus should be on platform design requirements — such as algorithmic transparency, external auditing, and accuracy-optimization mandates — rather than on content liability questions, because the design is the mechanism that produces systematic amplification of harmful content.

D) Social media platforms should be broken up under antitrust law because their network effects monopolies are the root cause of their propaganda-enabling power.

