Case Study 3.2: The Illusory Truth Effect in Political Fact-Checking

Overview

The illusory truth effect—the finding that repeated exposure to a statement increases its perceived truth—poses a particularly acute challenge for political fact-checking. Fact-checkers must contend with a fundamental paradox: the very act of repeating a false claim in order to correct it may inadvertently increase familiarity with that claim, strengthening the cognitive foundations of belief in it. This case study examines the illusory truth effect in the context of political misinformation, reviews the experimental evidence for how repeated false claims affect belief even after correction, and analyzes the implications for fact-checking practice, political communication, and media literacy education.


Background: Political Fact-Checking and Its Challenges

Political fact-checking emerged as a professional practice in the early 2000s, with the founding of FactCheck.org (2003), PolitiFact (2007), and the Washington Post's Fact Checker (2007) in the United States, followed by fact-checking organizations in dozens of other countries. The underlying premises of this enterprise are that providing accurate information to voters will help them make better-informed decisions and that public figures will be incentivized to be more accurate when their claims are publicly evaluated.

Both premises have been challenged by empirical research. On the demand side, studies of motivated reasoning (reviewed in Chapter 3) have shown that people process corrections differently depending on whether the corrected claim came from a liked or disliked source and whether the correction threatens important aspects of their identity. On the supply side, there is evidence that prominent public figures face limited electoral consequences for making false claims that activate strongly held beliefs.

But there is a third challenge that has received comparatively less attention in popular discourse: even when voters are exposed to accurate corrections, the illusory truth effect may work against the correction by increasing the salience and familiarity of the false claim itself.


The Core Mechanism: How Repetition Shapes Political Belief

Experimental Demonstrations

Fazio et al. (2015) provided one of the most direct demonstrations of the illusory truth effect for political content. Participants rated the accuracy of 272 statements across two sessions. Some statements were presented in both sessions (repeated); others only in the second session (new). Critically, some of the statements were factually false—and participants possessed knowledge that directly contradicted them. The key finding: even for statements that participants could, in principle, identify as false based on their prior knowledge, repetition increased truth ratings. The fluency generated by a single prior exposure was sufficient to increase perceived truth even against contradicting knowledge.

For political content, this finding is particularly concerning because political false claims often circulate many dozens or hundreds of times before any fact-check reaches a given individual. By the time a correction appears, the false claim has already been repeated often enough to generate substantial fluency and perceived truth.

Pennycook et al. (2018) examined the illusory truth effect specifically for fake news headlines—the type of political misinformation most widely circulated via social media. In their study, participants who had previously seen a fake news headline rated it as more accurate than novel fake news headlines, even when the headlines were inconsistent with participants' political ideology and even when the headlines carried an explicit "disputed by fact-checkers" warning at first exposure. A single prior exposure to a fabricated news headline was sufficient to increase its perceived truth.

The Continued Influence Effect

Related to the illusory truth effect but conceptually distinct is the continued influence effect (Johnson & Seifert, 1994): once a piece of misinformation is integrated into understanding of an event, it continues to influence reasoning even after it has been explicitly corrected and the correction has been acknowledged.

In the classic demonstration, participants read a news report about a warehouse fire. Some versions included misinformation about the probable cause (e.g., suggesting flammable materials were stored in a specific location). Later, this misinformation was explicitly corrected ("We now know no flammable materials were stored there"). Despite acknowledging the correction, participants continued to cite the location of flammable materials in their reasoning about the cause of the fire.

The continued influence effect appears to arise partly from the fact that the correction creates a gap in the mental model: once you remove the false explanation, what is left? If no alternative explanation is provided, the mind may default to the original false explanation, which at least provides a coherent story. This has important implications for how corrections should be designed (discussed below).


Case Analysis: The "Death Panel" Claim (2009-Present)

Background

The claim that the Affordable Care Act (ACA) included "death panels"—government bodies that would decide which elderly and disabled individuals were worthy of medical care—was introduced into political discourse by Sarah Palin in an August 2009 Facebook post. The claim was immediately evaluated by multiple fact-checkers, including PolitiFact, which rated it "Pants on Fire" (their most extreme falsehood rating), and it was described as the "lie of the year" for 2009.

The underlying provision actually concerned voluntary Medicare reimbursement for end-of-life counseling sessions with physicians—a provision that was ultimately removed from the bill due in part to the controversy generated by the death panel claim. Numerous independent analyses confirmed that no provision in the ACA established any body with authority to deny care based on cost-effectiveness.

The Repetition Pattern

Despite these corrections, the death panel claim did not disappear. It was repeated thousands of times in political discourse over the following decade. Cable news coverage—which frequently repeated the claim while also correcting it—produced exactly the pattern most likely to strengthen illusory truth: many repetitions of the false claim in a correction context.

Polling data tracked by organizations including Gallup and the Kaiser Family Foundation consistently found that significant minorities of Americans continued to believe that the ACA contained death panels long after the corrections had been widely disseminated. A 2012 survey found that 34% of Americans believed "the new health care law has a provision establishing death panels." A 2013 survey found that 41% of Americans did not know or were not sure that the law did not include death panels.

These persistently high rates of false belief cannot be fully explained by the failure of fact-checks to reach people—the death panel claim was among the most extensively fact-checked political claims of the decade. They are more consistent with the prediction of illusory truth theory: that repeated exposure to the false claim (even in correction contexts) built up its fluency and perceived truth.

Mechanisms in Operation

Several mechanisms documented in Chapter 3 contributed to the persistence of the death panel belief:

1. Illusory truth effect: Thousands of repetitions of the phrase "death panel" in news coverage—whether in support or refutation—built strong fluency associations. By 2012, "death panels" was one of the most frequently occurring phrases in health care policy discourse.

2. Source monitoring errors: For individuals who heard the death panel claim multiple times in multiple contexts, attributing it to its original partisan source became increasingly difficult. The claim became "something I've heard about," stripped of its origin as a partisan attack and acquiring the neutral patina of received knowledge.

3. Continued influence effect: Even for individuals who had heard and accepted a correction, the death panel claim had been integrated into their mental model of the ACA debate. Correcting it left a narrative gap: "So the government won't actually decide who gets care—but then who will determine what expensive treatments are covered?" The unresolved narrative uncertainty created conditions in which the original false claim continued to influence reasoning.

4. Motivated reasoning: For individuals who had already formed negative opinions of the ACA on partisan grounds, the death panel claim was consistent with their prior beliefs and identity, activating identity-protective cognition that made the correction less effective.


Case Analysis: The "Stolen Election" Narrative (2020-Present)

Background

In the months following the 2020 US presidential election, a network of claims emerged asserting that the election had been stolen through widespread fraud. These claims were rejected by election officials of both parties in every contested state, dismissed by over 60 federal and state courts including judges appointed by both Republican and Democratic presidents, rejected by the Department of Justice under William Barr (a political appointee of the incumbent president), and evaluated as false by independent fact-checkers and journalism organizations worldwide.

Despite this comprehensive correction, polling data from subsequent years showed that large percentages of Americans—disproportionately but not exclusively Republican—continued to believe that the election had involved significant fraud sufficient to change the outcome.

Scale of Repetition

The stolen election claim provides perhaps the most extreme test of the illusory truth hypothesis in political discourse. Between November 2020 and January 2021, the various claims comprising the stolen election narrative were repeated by:

  • The sitting president of the United States, across hundreds of social media posts, press conferences, and campaign-style rallies
  • Dozens of members of Congress, in floor speeches and media appearances
  • Hundreds of state-level politicians across contested states
  • A network of media outlets with combined audiences of tens of millions
  • Social media users who shared the claims millions of times

By the time comprehensive corrections were widely available, most Americans had been exposed to the stolen election claim dozens or hundreds of times—through both repetition of the claim and repetition of corrections. The illusory truth effect predicts that this scale of repetition would produce substantial belief, and the polling data are consistent with this prediction.

The Role of Social Identity

The stolen election narrative also illustrates the interaction between illusory truth and motivated reasoning. For strongly Republican-identified voters, the claim that the election had been stolen was consistent with broader narratives about Democratic electoral malpractice and activated strong identity-protective cognition. This interaction is important: the illusory truth effect is not politically uniform. Repetition of a claim that is consistent with one's prior beliefs and identity likely produces larger illusory truth effects than repetition of a claim that is inconsistent with prior beliefs. When fluency, familiarity, and motivated reasoning all push in the same direction, the cumulative effect on belief can be very strong.


Implications for Fact-Checking Practice

The Backfire Problem (Revisited)

The risk of fact-checking backfiring—increasing rather than decreasing false belief—was highlighted in early research by Nyhan and Reifler (2010), who found that corrections of political misinformation sometimes increased false belief among those most motivated to reject the correction. Subsequent research has shown this effect to be far less common than initially claimed (Wood and Porter, 2019, found that corrections generally reduced false belief even among motivated partisans), but the concern remains relevant in specific contexts.

The illusory truth effect represents a distinct but related concern: not that corrections directly cause backfire in motivated individuals, but that the ubiquitous repetition of false claims in correction contexts builds the fluency foundation for sustained false belief over time.

The "Truth Sandwich" Approach

Linguist George Lakoff has proposed the "truth sandwich" as a practical corrective to this problem: lead with the truth, mention the falsehood briefly and only once, then return to and reinforce the truth. This approach directly addresses the illusory truth mechanism by minimizing the number of repetitions of the false claim while maximizing repetitions of the accurate claim.

Evidence from journalism practice and limited experimental testing supports the principle: corrections that lead with accurate information and minimize repetition of the false claim are more effective at reducing false belief than corrections that center the false claim.
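The truth-sandwich structure can be made concrete with a short sketch. The function and example strings below are illustrative assumptions, not part of the original case study; the point is simply that the accurate claim appears twice for every single mention of the false one.

```python
# Schematic "truth sandwich" assembler (illustrative; names and strings
# are assumptions, not from the case study).
# Structure: truth -> one brief mention of the falsehood -> truth reinforced.

def truth_sandwich(truth: str, falsehood: str) -> str:
    return "\n".join([
        f"FACT: {truth}",                                 # lead with the accurate claim
        f"A false claim circulating says: {falsehood}",   # mention the falsehood once
        f"To repeat the facts: {truth}",                  # close by reinforcing the truth
    ])

correction = truth_sandwich(
    truth="The provision funded voluntary end-of-life counseling with a physician.",
    falsehood="the law created panels to decide who receives care",
)
# The false claim appears once; the accurate claim appears twice,
# so repetition (and hence fluency) accrues to the truth, not the falsehood.
```

The design choice to state the truth both first and last matters: the opening frames the story, and the closing is what remains most accessible in memory.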

Pre-Bunking vs. Post-Hoc Correction

Research by Pennycook, Rand, Lewandowsky, and colleagues suggests that pre-emptive debunking—providing people with information about misleading claims before they encounter them, including identification of manipulation techniques—is generally more effective than correcting claims after exposure. This approach, grounded in inoculation theory, has the advantage of avoiding the illusory truth problem: the false claim is introduced only once, in a context where its falseness is immediately and clearly established, and the subsequent encounters with the claim trigger the inoculation rather than adding to its fluency.

A notable large-scale inoculation intervention is the browser game "Bad News" (Roozenbeek & van der Linden, 2019), in which participants role-played as creators of misinformation, learning to identify manipulation techniques including emotional appeals, false experts, and conspiracy rhetoric. Subsequent studies showed that participants who played the game were more accurate at evaluating the credibility of social media content. This approach avoids the illusory truth problem by building cognitive resistance to manipulation techniques rather than centering specific false claims.

Structural Reforms

Beyond technique-level modifications to fact-checking practice, the illusory truth effect implies structural reforms:

Platform design: Social media platforms that allow repeated sharing of content already labeled as misinformation are creating optimal conditions for the illusory truth effect. Platforms could cap the number of times content labeled as misleading can be reshared, reducing its total repetition count.

Amplification asymmetry: Major media organizations face a choice when a prominent political figure makes a false claim: cover it (and thereby amplify it) or ignore it (and cede the information space to partisan sources). Recognizing the illusory truth effect suggests that coverage should be structured to minimize the total number of repetitions of the false claim and maximize the number of repetitions of accurate information.

Prebunking infrastructure: Public investment in inoculation campaigns that preemptively familiarize populations with common manipulation techniques (rather than specific false claims) could provide population-level resistance to novel misinformation as it emerges.


The Correction-Repetition Paradox: A Framework

Based on the research reviewed above, we can articulate a general framework for the correction-repetition paradox:

  1. False claim circulates at high volume and frequency, building fluency and familiarity.
  2. Fact-checks correct the claim, but in doing so must mention the claim, adding to its repetition count.
  3. Correction reaches some audiences and produces immediate reduction in false belief.
  4. Over time, source monitoring errors cause the correction context to be forgotten; the false claim's high familiarity from both original circulation and correction repetitions makes it feel true.
  5. Net effect: False belief is reduced in the short term but may recover over longer timescales, particularly for identity-consistent claims among motivated populations.

This framework implies that corrections, while valuable, are fighting upstream against the fluency advantage built by prior repetition. It suggests that interventions before or during the early spread of a false claim—before the fluency base is established—are likely more effective than corrections after the claim has already circulated widely.
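The dynamics of this framework can be illustrated with a toy simulation. Everything below is an assumption chosen for illustration, not an empirical model: familiarity grows with diminishing returns in exposure count, and memory for the correction context decays over time (the source monitoring failure in step 4), so belief rebounds even when no new exposures occur (step 5).

```python
# Toy model of the correction-repetition paradox (illustrative only).
# Assumed mechanics, not from the case study: perceived truth tracks a
# familiarity score built by repetition, discounted by a "correction tag"
# in memory that fades over time.

def familiarity(exposures: int, gain: float = 0.3) -> float:
    """Familiarity grows with diminishing returns in exposure count."""
    return 1.0 - (1.0 - gain) ** exposures

def perceived_truth(exposures: int, corrected: bool,
                    weeks_since_correction: int,
                    tag_decay: float = 0.9) -> float:
    """Familiarity, discounted by a decaying memory of the correction."""
    fam = familiarity(exposures)
    if not corrected:
        return fam
    tag = tag_decay ** weeks_since_correction   # correction context fades
    return fam * (1.0 - 0.8 * tag)              # fresh correction suppresses belief

# Twenty exposures (claim plus corrections all count toward familiarity):
just_corrected  = perceived_truth(20, corrected=True,  weeks_since_correction=0)
months_later    = perceived_truth(20, corrected=True,  weeks_since_correction=26)
never_corrected = perceived_truth(20, corrected=False, weeks_since_correction=0)
# Belief is low right after the correction but drifts back toward the
# uncorrected level as the correction tag decays -- step 5's regression.
```

Under these assumptions, the short-term benefit of a correction erodes on its own, which is why the framework favors intervening before the fluency base is built.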


Discussion Questions

  1. Given the illusory truth effect, should fact-checkers change the format of their corrections? What would an evidence-based redesign of political fact-checking look like?

  2. Television news programs frequently run segments that feature a political figure making a false claim, followed immediately by a correction. How does the illusory truth effect predict the impact of this format? What changes would reduce the risk of inadvertent false belief amplification?

  3. The truth sandwich approach (lead with the truth, minimize repetition of the false claim) requires that journalists not give equal prominence to a false claim and its correction. Does this raise any legitimate concerns about journalistic fairness? How would you balance these concerns with the cognitive science evidence?

  4. The stolen election narrative spread across a deeply partisan information environment where motivated reasoning and illusory truth interacted. Does this interaction create situations where standard fact-checking approaches are insufficient? What additional interventions would be needed?

  5. Pre-bunking (inoculation before exposure) consistently outperforms post-hoc correction in experimental studies. What are the practical obstacles to implementing large-scale pre-bunking campaigns? What institutions would be best positioned to conduct them?

  6. If the illusory truth effect means that no level of correction can fully counteract widely-circulated false claims, does this have implications for the legal and political accountability of those who originate and spread political misinformation? What remedies—legal, regulatory, or social—might be appropriate?


Key Research Referenced

  • Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107–112. (Original illusory truth demonstration)
  • Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144(5), 993–1002.
  • Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880.
  • Johnson, H. M., & Seifert, C. M. (1994). Sources of the continued influence effect: When misinformation in memory affects later inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1420–1436.
  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.
  • Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes' steadfast factual adherence. Political Behavior, 41(1), 135–163.
  • Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384.