Chapter 15 Quiz: Political Misinformation and Election Integrity

Answer each question, then reveal the explanation using the dropdown. Each question includes the relevant section reference.


Question 1

Which of the following best describes "disinformation" as distinct from "misinformation"?

A) Disinformation is more widely spread than misinformation
B) Disinformation involves false content shared with deliberate intent to harm
C) Disinformation originates from foreign state actors while misinformation is domestic
D) Disinformation targets elections while misinformation targets other political topics

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** Disinformation is defined by the intent to deceive or harm, not by its scale, origin, or topic. Wardle and Derakhshan's framework distinguishes disinformation (false content deliberately created to harm) from misinformation (false content shared without harmful intent) and malinformation (true content shared with intent to harm). Origin (domestic vs. foreign) is not part of the definition. *[Section 15.1]*

</details>

Question 2

The "partisan asymmetry" debate in misinformation research concerns:

A) Whether foreign disinformation operations target both parties equally
B) Whether political misinformation is more concentrated among right-wing versus left-wing actors
C) Whether partisan media is more or less reliable than nonpartisan media
D) Whether Republican or Democratic voters are more susceptible to believing misinformation

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** The partisan asymmetry debate concerns whether political misinformation — including sharing of false news, computational propaganda, and fabricated content — is symmetrically distributed across the political spectrum or concentrated more on one side. Research by Guess, Nagler, Tucker, and Benkler et al. has found asymmetries in the US context, though this research has methodological critics. *[Section 15.1]*

</details>

Question 3

Voter suppression disinformation is most effective when it:

A) Targets high-information voters who monitor many news sources
B) Makes claims so outlandish that the target community immediately recognizes them as false
C) Exploits real barriers and legitimate uncertainties about voting that already exist in target communities
D) Reaches a large general audience rather than specific demographic communities

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** The chapter emphasizes that effective voter suppression disinformation exploits plausibility: it lands most effectively in communities that have real reasons to be uncertain about eligibility, that face actual barriers to voting, or that have experienced genuine historical disenfranchisement. This is why targeted false claims about immigration status and voting eligibility are particularly effective in Latino communities, and why false claims about felony disenfranchisement are effective in Black communities where overincarceration rates make this a live concern. *[Section 15.2]*

</details>

Question 4

The Internet Research Agency (IRA) was:

A) A US government agency created to investigate Russian interference
B) A Russian state-connected organization that conducted social media influence operations
C) A Facebook internal team that detected foreign interference
D) A consortium of academic researchers studying Russian propaganda

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** The Internet Research Agency was a St. Petersburg-based company funded by Yevgeny Prigozhin, a Russian oligarch connected to the Kremlin. It conducted systematic influence operations targeting US political discourse through fake social media personas, real-world event organization, and paid advertising. Its operations were documented in the Mueller Report, Senate Intelligence Committee Report Volume 2, and independent research commissioned by the Senate. *[Section 15.3]*

</details>

Question 5

According to research cited in the chapter, the IRA's most extensive content targeting in 2016 was directed at:

A) White rural voters in the Rust Belt
B) Conservative evangelical Christians
C) Black American communities
D) College-educated suburban voters

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** The Senate Intelligence Committee's independent research (New Knowledge / DiResta et al.) found that the IRA's most extensive targeting was of Black American communities. The IRA created large Facebook pages and Instagram accounts specifically targeting Black Americans, amplifying legitimate grievances about police violence and racial injustice. The goal was primarily to suppress Black voter turnout for Democratic candidates rather than to directly promote Republican candidates. *[Section 15.3]*

</details>

Question 6

Cambridge Analytica's psychographic targeting methods were later found to be:

A) More effective than standard political micro-targeting due to unique psychological insights
B) Effective for voter mobilization but not voter persuasion
C) Substantially overstated in their effectiveness, amounting to fraud on clients
D) Illegal under FEC regulations but effective in practice

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Multiple data scientists who examined Cambridge Analytica's methods concluded that the company's claimed psychographic revolution was largely marketing mythology that significantly overstated how much its methods differed from standard political targeting. The company essentially used voter file data in ways campaigns already did, while claiming unique psychographic insights it couldn't actually deliver. The data breach from Facebook was real; the claimed targeting revolution was not. *[Section 15.3]*

</details>

Question 7

How many courts dismissed or rejected lawsuits challenging the 2020 US presidential election results?

A) A few, with several courts declining jurisdiction without ruling on merits
B) About half — courts were divided on the evidence
C) All but one, which was decided on a narrow procedural ground
D) All of them accepted the fraud claims on their merits

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Over 60 lawsuits were filed challenging the 2020 election. Courts dismissed or rejected all but one on the merits. These decisions were made by judges appointed by presidents of both parties, including judges appointed by Trump himself. Specific decisions cited lack of evidence; Judge Stephanos Bibas (Trump appointee) wrote: "Charges require specific allegations and then proof. We have neither here." *[Section 15.4]*

</details>

Question 8

The "liar's dividend" refers to:

A) The financial profit made by disinformation operators through advertising revenue
B) The ability of public figures to deny authentic recordings or evidence by claiming it is a deepfake
C) The political advantage gained by candidates who successfully spread misinformation about opponents
D) The legal protection afforded to satirists who make false claims

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** The "liar's dividend" is the phenomenon whereby the mere existence and public awareness of deepfake technology allows public figures to claim that authentic, embarrassing footage or audio is a deepfake — even without any evidence of manipulation. This may be the most significant current impact of deepfake technology: not the actual deployment of fake videos, but the epistemic damage done by creating widespread doubt about the authenticity of all video evidence. *[Section 15.5]*

</details>

Question 9

The Content Authenticity Initiative (CAI) and C2PA are working on what approach to the deepfake problem?

A) Training AI systems to detect deepfakes after they are created
B) Criminalizing deepfake creation
C) Technical provenance standards that authenticate content at the point of creation
D) Watermarking deepfakes with visible markers

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** The CAI and C2PA are developing technical standards that would embed provenance metadata in content at the time of creation — essentially providing a chain of custody for authentic content. Rather than trying to detect manipulation after the fact (which is technically challenging and falls behind in an arms race with generation technology), this approach makes authentic content verifiable by providing a record of where and when it was created and what modifications have been made. *[Section 15.5]*

</details>

Question 10

The 2010 Maryland gubernatorial election robocall operation resulted in:

A) Dismissal of charges because the calls were protected political speech
B) Civil penalties but no criminal conviction
C) A criminal conviction of the Republican campaign consultant who authorized the calls
D) Platform policy changes at robocall companies

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Paul Schurick, a Republican campaign consultant, was criminally convicted of election fraud for authorizing robocalls sent to African American Democratic voters on Election Day, falsely telling them that candidate Martin O'Malley had won and they could "relax" and not vote. This represents the most legally documented case of deliberate voter suppression disinformation in recent US electoral history. *[Section 15.6]*

</details>

Question 11

WhatsApp-based disinformation is particularly challenging to address because:

A) WhatsApp has no policies against election misinformation
B) The platform operates in countries where election law is weak
C) Messages in private and group chats are encrypted, making monitoring by researchers and platforms very difficult
D) WhatsApp users are older and therefore less responsive to fact-checking

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** WhatsApp's end-to-end encryption means that neither the platform nor external researchers can monitor message content at scale. This makes it effectively impossible to apply the same kinds of label-and-fact-check interventions that work on public social media posts. The 2018 Brazilian election demonstrated this vulnerability: campaign disinformation spread through WhatsApp business accounts at scale with minimal ability for external monitoring or intervention. *[Sections 15.7, 15.1]*

</details>

Question 12

In the 2017 French election, the "MacronLeaks" operation was less damaging than similar operations in other countries primarily because:

A) French voters were already aware of Russian interference tactics
B) France's legal electoral blackout period restricted mainstream media coverage before the vote
C) Emmanuel Macron's campaign had superior cybersecurity
D) The leaked documents were immediately verified as authentic, limiting their sensational appeal

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** France's electoral law prohibits media from publishing new political news or polling in the 36 hours before a vote. When "MacronLeaks" — a dump of campaign documents mixed with fabricated materials — was released on 4chan in that window, French mainstream media largely could not cover it during the critical decision period. Macron won decisively. The case demonstrates how legal and institutional frameworks can limit the impact of information operations. *[Section 15.7]*

</details>

Question 13

CISA's statement that the 2020 election was "the most secure in American history" referred specifically to:

A) The absence of foreign interference of any kind
B) The technical security of election infrastructure
C) The accuracy of vote counts across all states
D) The integrity of candidate advertising and campaign communications

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** CISA's statement was specifically about the security of election infrastructure — voting machines, voter registration databases, vote tabulation systems, and related technology. It addressed technical cybersecurity, not the broader claims about ballot fraud that constituted the "stolen election" narrative. CISA Director Chris Krebs was subsequently fired by Trump for signing this statement. The distinction between technical security (what CISA addresses) and the political integrity narrative (what the "Big Lie" asserted) is a key analytical point in the chapter. *[Section 15.8]*

</details>

Question 14

Which of the following best characterizes the outcome of the Fox News / Dominion Voting Systems defamation case?

A) The case was dismissed because news coverage of political claims is constitutionally protected
B) A jury found Fox News liable for defamation and awarded $1.6 billion in damages
C) Fox News settled for $787.5 million without admitting liability, after discovery revealed hosts privately doubted the claims they broadcast
D) The case is still pending in federal court

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Fox News settled with Dominion Voting Systems for $787.5 million without admitting liability. During discovery, internal Fox News communications (texts and emails between hosts, executives, and Rupert Murdoch) were revealed showing that multiple Fox figures privately doubted or rejected the election fraud claims they were amplifying on air. Dominion's theory was that Fox knew the claims were false and broadcast them to retain audience share. *[Section 15.9, Key Terms]*

</details>

Question 15

Psychological inoculation theory (prebunking) works by:

A) Exposing people to accurate information before they encounter misinformation
B) Teaching people manipulation techniques used in misinformation before they encounter specific false claims
C) Reducing emotional engagement with political content to make people less susceptible to persuasion
D) Training algorithms to identify misinformation before it spreads

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** Prebunking based on inoculation theory teaches the rhetorical and manipulative techniques used in misinformation — such as false dichotomies, emotional appeals, cherry-picking evidence — without exposing people to the specific false claims. This provides a form of "cognitive immunity" that transfers across different pieces of misinformation using the same techniques. Studies by the Cambridge Social Decision-Making Lab and partners, deployed through YouTube, have shown promise for this approach. *[Section 15.9]*

</details>

Question 16

The "implied truth effect" in platform labeling refers to:

A) The tendency for labeled content to be perceived as satirical and therefore harmless
B) The perception that unlabeled false content is more credible because it lacks a false label
C) The effect of labels in making users trust the platform's judgment on all content
D) The increased sharing that occurs when content is labeled as disputed

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** When platforms apply "false" or fact-check labels to some content but not all, users may infer that unlabeled content has been reviewed and found to be accurate — even when it hasn't. This "implied truth effect" can inadvertently boost the credibility of false content that platforms haven't gotten around to labeling. It represents a significant challenge for selective labeling approaches and argues for either comprehensive labeling or explicit acknowledgment of the limits of labeling coverage. *[Section 15.9]*

</details>

Question 17

Research by Benkler, Faris, and Roberts in "Network Propaganda" found that the most significant driver of political misinformation in 2016 was:

A) The Internet Research Agency's social media operations
B) Cambridge Analytica's psychographic targeting
C) A distinct, insular right-wing media ecosystem with Fox News at its center
D) Social media platform algorithms amplifying outrage content

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Benkler, Faris, and Roberts analyzed 1.25 million stories and found that domestic right-wing media, with Fox News at its center and connected to hyperpartisan sites and conservative media figures, formed a distinct, insular ecosystem that mutually reinforced misinformation and had no left-wing equivalent. They argued this domestic ecosystem was more consequential for political misinformation than Russian interference. Their findings do not mean foreign interference was irrelevant but contextualize it relative to domestic factors. *[Section 15.3]*

</details>

Question 18

In Brazil's 2022 election, Bolsonaro's response to losing was most similar to:

A) Al Gore's 2000 response to the Florida recount — contesting through legal channels then conceding
B) Trump's 2020 "stolen election" narrative, including supporter attacks on government buildings
C) The French National Front's 2017 response — filing legal challenges but accepting the final result
D) A standard democratic transition with no public contesting of results

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** Bolsonaro deployed a stolen election narrative closely paralleling the US 2020 experience, including claims about electronic voting machine fraud. His supporters attacked government buildings in Brasília on January 8, 2023 — two years after the January 6th, 2021 attack in the US — with deliberate echoes of that event. The chapter uses this case to illustrate how misinformation tactics, including stolen election narratives, can be exported and replicated across democracies. *[Section 15.7]*

</details>

Question 19

Which of the following is NOT a documented form of election misinformation in the voter suppression category?

A) False information about polling location hours
B) False claims that immigration status will be checked at the polls
C) False claims that mail-in ballot signatures are not verified
D) Accurate information about a candidate's criminal conviction

<details>
<summary>Reveal Answer</summary>

**Correct Answer: D** Accurate information about a candidate's criminal conviction, even if shared to harm the candidate, falls into the "malinformation" category (true content shared with harmful intent), not misinformation or disinformation; because it is true, it is not election misinformation. Options A, B, and C are all documented forms of voter suppression disinformation involving false information about voting logistics, eligibility, or processes. *[Section 15.2]*

</details>

Question 20

The "electoral blackout" mechanism France used to limit the impact of MacronLeaks works by:

A) Blocking foreign social media platforms during the campaign period
B) Prohibiting media from publishing new political news in the 36 hours before a vote
C) Requiring all political advertising to be pre-approved by the election authority
D) Making campaign data publicly available to enable fact-checking

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** France's electoral law imposes a media blackout on new political reporting and polling in the 36 hours before an election. This is not a social media ban — French people can still use social media — but it means mainstream broadcast and print media largely did not amplify the MacronLeaks dump before the vote, limiting its reach to the audiences who found it directly on far-right platforms. *[Section 15.7]*

</details>

Question 21

The term "coordinated inauthentic behavior" (CIB) refers to:

A) Any false information spread through social media
B) Networks of fake or real accounts that work together deceptively to manipulate information ecosystems
C) Government-sponsored censorship of political speech on social media
D) Automated bots that post content without human control

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** Coordinated inauthentic behavior is a platform policy term (developed primarily by Facebook) for networks of accounts — which may be fake personas, real but compromised accounts, or real accounts operating under direction — that coordinate to deceive about their nature or origin while manipulating public discourse. The "inauthentic" component refers to deception about identity or coordination, not necessarily to the content itself being false. *[Key Terms]*

</details>

Question 22

The Senate Intelligence Committee found that Russia's interference in 2016 US election infrastructure:

A) Successfully altered vote tallies in several states
B) Had no successful access to any US election systems
C) Sought access to election infrastructure in all 50 states and succeeded in accessing some voter registration systems, though there is no evidence of data alteration
D) Was entirely limited to social media operations

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** The Senate Intelligence Committee found that Russian actors sought to access election infrastructure in all 50 states, and successfully accessed some voter registration systems. However, there is no evidence that any vote tallies were altered or that access to infrastructure affected actual vote counts. This is distinct from the IRA's social media operations. The distinction between infrastructure hacking (real but without demonstrated effect on outcomes) and social media operations (real and large-scale) is important for accurate assessment of Russian interference. *[Section 15.3, Section 15.8]*

</details>

Question 23

Which statement about the Foreign Agents Registration Act (FARA) is most accurate in the context of election interference?

A) FARA prohibits any foreign involvement in US election campaigns
B) FARA requires registration and disclosure by those acting as agents of foreign principals, and undisclosed foreign political advertising may violate it
C) FARA applies only to government officials who work with foreign governments
D) FARA was created specifically in response to the 2016 election interference

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** FARA requires that those who act as agents of foreign governments or political parties register with the Justice Department and disclose their activities. Some IRA-linked operations involved undisclosed foreign nationals purchasing political advertising, which may constitute FARA violations. FARA was enacted in 1938 in response to Nazi propaganda operations in the US and predates the 2016 election by decades. *[Section 15.9]*

</details>

Question 24

"Process misinformation" in the election context refers to:

A) False claims about how elections are funded
B) False claims about how voting and vote-counting work, including how machines function and how ballots are handled
C) False claims about the legal process for challenging election results
D) False claims about candidates' positions on electoral reform

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** Process misinformation targets how elections mechanically function — how voting machines work, how ballots are counted and audited, what normal vote-counting procedures involve, and how absentee and mail ballots are handled. Claims that voting machines are connected to the internet, that "ballot dumps" represent fraud, or that signatures are not verified on mail ballots are process misinformation. This category is distinct from voter suppression disinformation (which targets voter behavior), candidate misrepresentation, and results misinformation. *[Section 15.2]*

</details>

Question 25

What does research by Guess, Nagler, and Tucker about 2016 fake news sharing suggest?

A) Fake news sharing was evenly distributed across age groups and political identities
B) Young liberal users shared the most fake news
C) Sharing of misinformation was concentrated among older, conservative-leaning users
D) Fake news was primarily shared via email rather than social media

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Research by Guess, Nagler, and Tucker found that in the 2016 election cycle, sharing of misinformation on social media was concentrated among older, conservative-leaning users. This finding is part of the partisan asymmetry literature and has been replicated in subsequent work. It is important to note that this finding refers to sharing patterns, not to susceptibility to believing misinformation — a distinct question with more mixed findings in the literature. *[Section 15.1]*

</details>

Question 26

The Dominion Voting Systems defamation case differed from typical political speech cases because:

A) Dominion was a foreign company and therefore subject to different legal standards
B) The case involved specific, verifiable false claims about an identifiable company (not general political opinion)
C) The false claims were made in campaign advertising, which is regulated by the FEC
D) The Supreme Court had issued a new standard for defamation of businesses

<details>
<summary>Reveal Answer</summary>

**Correct Answer: B** Defamation law protects identifiable persons and entities from specific false statements of fact. General political opinion and rhetoric are protected speech, but specific false factual claims about a named company (such as that Dominion's machines were designed to flip votes or were connected to Venezuela) can be defamatory if made with "actual malice" (knowledge of falsity or reckless disregard for truth). The discovery process in the case revealed internal communications suggesting awareness that the claims were false, supporting the actual malice standard. *[Section 15.9]*

</details>

Question 27

Which of the following interventions has shown the most empirical promise in protecting voters from election misinformation?

A) Removing all false content from social media platforms
B) Requiring platforms to display fact-check labels on all political advertising
C) Prebunking campaigns that teach manipulation techniques before exposure to specific false claims
D) Banning political advertising on social media during election periods

<details>
<summary>Reveal Answer</summary>

**Correct Answer: C** Prebunking campaigns based on inoculation theory — which teach manipulation techniques rather than refuting specific false claims — have shown consistent promise in large-scale experimental studies. Content removal faces significant definitional challenges and free speech concerns. Labels show mixed evidence, with implied truth effects as a drawback. Advertising bans address only a subset of election misinformation. The Cambridge Social Decision-Making Lab's prebunking videos, deployed at scale through YouTube, represent the most promising recent evidence for a scalable intervention. *[Section 15.9]*

</details>