Chapter 26 Quiz

Multiple Choice (1 point each)

1. According to Wardle and Derakhshan's typology, a fabricated screenshot designed to deceive viewers falls into which category?

a) Malinformation b) Misinformation c) Disinformation d) Misleading content

Answer: c — The screenshot is deliberately fabricated and disseminated to deceive, making it disinformation (false content + intent to harm).


2. A real photograph from a 2012 protest is shared with a caption saying it depicts a recent riot. This is best classified as:

a) Fabricated content b) False context c) Manipulated content d) Imposter content

Answer: b — The content itself (the photograph) is genuine; it is the contextual information that is false.


3. The Vosoughi, Roy, and Aral (2018) Science paper found that compared to true news on Twitter, false news:

a) Spread more slowly but reached more total users over time b) Was primarily spread by bots rather than humans c) Was faster, deeper, and broader in its cascade, with novelty as a key driver d) Was concentrated in politically extreme accounts rather than mainstream users

Answer: c — The study found false news spread faster, deeper, and more broadly, and that humans (not bots) were primarily responsible, with novelty driving much of the effect.


4. The "backfire effect" — the idea that corrections sometimes strengthen false beliefs — is now understood to be:

a) A robust finding that has been replicated across many studies b) Rare and difficult to reproduce, with the current consensus finding corrections produce partial, temporary belief updating c) Caused primarily by partisan media rather than corrections from neutral sources d) Specific to misinformation about scientific topics, not political claims

Answer: b — Multiple replication attempts, including by the original researchers, failed to find the backfire effect. Corrections generally produce modest, partial belief updating.


5. Inoculation theory as applied to misinformation proposes that:

a) People are best protected by avoiding all exposure to false claims b) Exposing people to a weakened form of a misleading argument, with a refutation, builds resistance to the full-strength version c) The best counter-misinformation approach is to correct false claims immediately after people encounter them d) Social media literacy training makes people immune to motivated reasoning

Answer: b — Inoculation theory (van der Linden and colleagues) draws on the vaccine analogy: weakened exposure + refutation builds resistance.


6. Which of the following best describes the "reach disparity" problem in fact-checking?

a) Fact-checkers have more resources than campaigns and can reach more voters b) Corrections reach a relatively small, already-engaged audience and often fail to reach those who saw the original false claim c) Fact-checkers in swing states reach larger audiences than those in safe states d) Digital fact-checks reach a larger audience than print corrections

Answer: b — Research consistently finds that fact-checks reach a small fraction of the audience that encountered the original false claim.


7. The "implied truth effect" refers to:

a) The phenomenon where political claims acquire truth status when repeated by political elites b) The increase in perceived truthfulness of content that has NOT been labeled, even when other similar content has been labeled c) The tendency for voters to assume true claims are labeled as such by platforms d) The process by which satire is interpreted as factual reporting

Answer: b — When platforms label some content as disputed, unlabeled content acquires implied credibility — a challenge for selective labeling policies.


8. According to the research, which of the following most strongly predicts whether a political false claim will be accepted as true?

a) The quality of the evidence provided in support of the claim b) The national origin of the disinformation operation c) Whether the claim aligns with the recipient's prior political beliefs and identity d) Whether the claim is accompanied by a photograph

Answer: c — Motivated reasoning and identity-protective cognition are the most consistent predictors; people evaluate evidence through the lens of prior beliefs.


9. Carlos Mendez's tracking dashboard found that 62% of Meridian's poll mentions characterized the results as showing Whitfield leading. This characterization is:

a) Technically accurate because Whitfield's number was higher b) A type of malinformation — technically true but deployed to harm c) Misleading content — accurate numbers presented in a context that distorts their meaning d) Fabricated content because Meridian never published those numbers

Answer: c — The raw numbers are accurate, but characterizing a within-margin-of-error result as a "lead" misleadingly implies a meaningful advantage.
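The "within the margin of error" judgment in this answer can be made concrete with a quick sketch. The poll numbers and sample size below are hypothetical illustrations, not figures from the chapter:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: Whitfield at 48%, opponent at 46%, sample of 800.
n = 800
moe = margin_of_error(0.48, n)   # about ±3.5 percentage points
gap = 0.48 - 0.46                # a 2-point gap

print(f"margin of error: ±{moe * 100:.1f} pts, gap: {gap * 100:.0f} pts")
# The gap is smaller than the margin of error, so the poll cannot
# distinguish a real lead from sampling noise.
print("statistical tie" if gap < moe else "meaningful lead")
```

Because the 2-point gap falls well inside the ±3.5-point margin, reporting the result as a "lead" converts sampling noise into an implied advantage, which is the distortion the answer describes.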


10. The evidence on "deplatforming" — removing high-profile accounts that spread misinformation — suggests:

a) It has no measurable effect because followers migrate immediately to new accounts b) It increases misinformation spread by creating martyr narratives c) It can produce significant reductions in misinformation reach, though some traffic migrates to alternative platforms d) It only works when applied to foreign state-sponsored accounts, not domestic actors

Answer: c — Research on the Trump deplatforming found a 73% reduction in online misinformation in the following week, though migration to other platforms partially offset this.


Short Answer (5 points each)

11. Explain the difference between "prebunking" and "debunking" in the context of misinformation response. What does the research evidence suggest about the relative effectiveness of each approach, and what mechanisms explain the difference?

Model Answer: Debunking involves correcting false information after exposure. Its effectiveness is limited by reach disparities, memory updating failures, and identity entrenchment. Prebunking (inoculation) exposes people to a weakened version of a false argument before they encounter it, along with a refutation. Research by van der Linden and colleagues finds prebunking more consistently effective because it activates critical processing before motivated reasoning is engaged. The mechanism is analogous to a vaccine: weakened exposure produces resistance to the full-strength version.


12. What is "epistemic inequality" in the misinformation context, and what are its political implications?

Model Answer: Epistemic inequality refers to the differential vulnerability to misinformation across population segments. Research by Thorson and others finds false political information has larger and more persistent effects on lower-knowledge, lower-media-literacy voters. The political implication is that campaigns targeting misinformation at less-informed voter segments may achieve disproportionate effects, since these voters are less likely to encounter corrections and less equipped to evaluate source credibility. This creates a potential exploitation dynamic.


True/False with Justification (3 points each)

13. True or False: High-engagement, highly informed voters are more resistant to political misinformation than low-engagement voters because their greater knowledge allows them to identify false claims.

Answer: FALSE. Research on motivated reasoning and identity-protective cognition shows that high-engagement partisans often show stronger (not weaker) acceptance of politically aligned misinformation, because their political identity is more central to their self-concept and because they are better equipped to find flaws in unwelcome evidence.


14. True or False: The business model of advertising-funded social media platforms creates incentives that systematically favor the spread of emotionally arousing, engaging content — which includes misinformation — over accurate but less engaging content.

Answer: TRUE. Platforms optimize for engagement because engagement drives advertising revenue. False and emotionally arousing content generates higher engagement than accurate, measured content. This structural incentive exists regardless of platform intent.


15. True or False: According to the research literature, once a voter has been exposed to a fact-check rating a claim as false, they can be expected to fully update their belief and discard the false claim.

Answer: FALSE. Research consistently finds that corrections produce partial, temporary belief updating, not complete belief replacement. Memory for the original false claim persists, and the updated belief may decay over time without repeated correction. The correction does reduce (but does not eliminate) the false claim's influence.