Propaganda, Power, and Persuasion

Part Two: Techniques

Chapter 11: Repetition and the Illusory Truth Effect


Opening: The Untraceable Belief

Sophia Marin had spent most of the previous three hours working through a stack of primary sources for her journalism seminar's capstone assignment. The topic was vaccine misinformation — specifically, how demonstrably false claims about vaccine safety had circulated online despite sustained efforts at correction. She was trying to build a timeline: when specific false claims had first appeared, how they had spread, and what had happened to public belief after fact-checkers debunked them.

At some point she had paused, put down her pen, and sat with an uncomfortable thought.

She knew — knew with complete confidence, the way you know your own name — that she had at some point encountered and absorbed the claim that vaccines caused autism. She could not remember reading an article about it. She could not remember a conversation. She could not identify a source. But the claim had weight in her memory. It felt familiar. It felt, on some level she could not quite locate, like something that had been around long enough to be at least partially credible.

Sophia was not someone who doubted vaccine safety. She understood the science; she had read the retraction of the Wakefield paper; she knew the epidemiological literature well enough to cite it. But she also knew, sitting there, that there was a layer of her cognition where the familiarity of the anti-vaccine claim had registered independently of her evaluation of its truth.

She went to find Prof. Marcus Webb in his office.

"Is it possible," she asked, leaning in the doorway, "to believe something is true — or at least to feel like it might be true — just because you've heard it so many times? Even if you know it's false?"

Webb turned from his desk. He had been expecting a question along these lines from Sophia for a few weeks — she had been circling around it in her seminar comments. He pulled a paper from a shelf and handed it to her.

"That's not just possible," he said. "It's documented. It has a name. And it may be the most important mechanism in the entire propaganda toolkit, because it operates below the level of argument."

The paper was Hasher, Goldstein, and Toppino (1977): "Frequency and the Conference of Referential Validity." It had been in print for nearly fifty years. What it described would change how Sophia thought about her research — and about the internal architecture of her own beliefs.


The Illusory Truth Effect: The Science

The illusory truth effect is the empirically documented phenomenon by which the repetition of a claim increases the probability that it will be judged as true — independently of any additional evidence, reasoning, or argument. It is not a response to better arguments. It is not a response to the accumulation of corroborating evidence. It is a response to familiarity alone.

The foundational study was published in the Journal of Verbal Learning and Verbal Behavior in 1977 by Lynn Hasher, David Goldstein, and Thomas Toppino. The study's design was straightforward. Across three sessions separated by two-week intervals, participants rated a series of plausible general-knowledge statements, a mix of true and false claims whose truth they were unlikely to know for certain, on a scale of 1 to 7 for how confident they were that each statement was true. Some statements appeared in multiple sessions; others appeared only once.

The finding was clear and replicable: statements that appeared in multiple sessions were rated as more valid in later sessions than in the first, regardless of whether they were actually true. The effect held for false statements as reliably as for true ones. Participants' sense of the statements' truthfulness increased with each repetition, not because they had gathered additional evidence but because the statements felt more familiar — easier to process, more fluent, more like things they had encountered before.

The mechanism identified by subsequent research is cognitive fluency — the subjective experience of ease with which information is processed. When a claim is encountered for the first time, processing it requires effort. When it is encountered again, having already been processed once, it requires less effort. This reduction in processing difficulty is experienced as a feeling — a slight sense of ease and familiarity — that the cognitive system interprets, usually automatically and without conscious awareness, as evidence of truth. Things that are true tend to be familiar; familiarity tends to follow from truth. But the cognitive system cannot reliably distinguish between the familiarity that comes from prior encounter with a true claim and the familiarity that comes from prior encounter with a repeated false one.

This misattribution of processing fluency to truth is the core mechanism. It is not a logical error in the sense of a failed inference. It is a feature of how the brain uses experience as evidence for truth — a feature that is generally reliable (familiar things often are true, because true things are what we encounter most often) but that becomes a vulnerability when false claims are deliberately repeated.
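The misattribution can be made concrete with a toy model. The sketch below is purely illustrative and not fitted to any experimental data: it treats a truth rating as a blend of an evidence signal and a fluency signal that grows with each exposure, and the `fluency_weight` mixing parameter is entirely hypothetical.

```python
# Toy model of fluency misattribution (illustrative only; the weights and
# functional forms are assumptions, not fitted to experimental data).

def fluency(exposures: int) -> float:
    """Processing ease rises with repetition, with diminishing returns."""
    return 1.0 - 0.5 ** exposures  # 0 exposures -> 0.0; many -> approaches 1.0

def truth_rating(evidence: float, exposures: int, fluency_weight: float = 0.3) -> float:
    """Confidence rating on a 1-7 scale, echoing Hasher et al.'s procedure.

    `evidence` in [0, 1] stands for the rater's actual knowledge;
    `fluency_weight` is a hypothetical mixing parameter.
    """
    blended = (1 - fluency_weight) * evidence + fluency_weight * fluency(exposures)
    return 1 + 6 * blended

# A claim the rater judges weakly supported (evidence = 0.2) feels more
# true after repetition, even though the evidence never changed:
assert truth_rating(0.2, exposures=3) > truth_rating(0.2, exposures=1)
```

The point of the toy model is the structure, not the numbers: the evidence term is constant across exposures, so any increase in the rating comes entirely from the fluency term.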

The subsequent research program built on Hasher et al. produced several findings of particular importance for understanding propaganda.

The effect holds for obviously false statements. Fazio, Brashier, Payne, and Marsh (2015) — whose research is examined in detail in Research Breakdown 1 below — demonstrated that the illusory truth effect persists even for statements that are patently false and that participants correctly identified as false on first encounter. Participants who knew, on first exposure, that a claim was false showed increased belief in the claim after repeated exposure. This finding is counterintuitive and has significant implications: the effect is not simply a matter of filling in unknown information with familiar guesses. It operates on claims that were already evaluated and rejected.

The effect persists across delays. Studies examining the illusory truth effect over periods of days, weeks, and months have found that the effect persists well beyond the session in which repetition occurred. Repeated exposure at one point in time can influence truth judgments days or weeks later, even when participants have no explicit memory of the original exposure. This persistence across time is particularly significant for propaganda operations that involve repeated exposure over extended periods.

The effect is robust to corrections. This is perhaps the most important and most unsettling finding. Studies examining whether the illusory truth effect can be prevented by telling participants, before or during exposure, that a claim is false have found mixed results — with many studies finding that even explicit correction does not prevent the repetition effect from increasing perceived truth on subsequent encounters. The familiarity that repetition builds appears to operate through a channel that is at least partially independent of explicit belief updating.

The effect influences political beliefs. Research by Pennycook and colleagues has extended the illusory truth findings specifically to politically charged misinformation — the domain of greatest practical significance for propaganda analysis. Their research, examined in detail in Research Breakdown 2 below, found that even a single prior exposure to a false news headline — without any engagement with its content — increased subsequent ratings of the headline's accuracy.

These findings collectively define the illusory truth effect as one of the most powerful and robust mechanisms of propaganda influence identified in the empirical literature. It is powerful because it operates below the level of argument; counterarguments cannot directly engage it. It is robust because it persists across time, across topic domains, and — critically — even when the audience has already evaluated and rejected the claim being repeated.

The neuroscience of the illusory truth effect — while still developing — points to specific brain systems that underlie the fluency-truth link. Research using neuroimaging has found that the medial prefrontal cortex, a region associated with self-referential processing and the evaluation of familiar stimuli, shows differential activation to repeated versus novel information. The activity patterns associated with repeated processing overlap substantially with the activity patterns associated with processing claims that are known to be true — which is why the system so readily misattributes repetition-based familiarity to truth.

The relationship between fluency and aesthetic experience provides an additional lens on the mechanism. Experimental aesthetics research — studies of what makes art, music, and language feel beautiful or satisfying — has consistently found that processing fluency is a predictor of aesthetic pleasure: things that are easier to process tend to feel more attractive, more elegant, more "right." This connection between fluency and the felt sense of rightness is not a coincidence. Both the truth heuristic and the aesthetic judgment are using the same underlying metric — processing ease — as their primary signal. When something feels fluent, it simultaneously feels true and feels good. This is why effectively repeated propaganda slogans often feel not just familiar but aesthetically satisfying: the rhythm of "Ein Volk, Ein Reich, Ein Führer," the clean simplicity of "Make America Great Again," the declarative confidence of "Doubt is our product" — these formulations exploit the fluency-pleasure connection as well as the fluency-truth connection.

This also explains why propaganda slogans are so often designed for the ear as much as the eye. Alliteration, parallel structure, and rhythmic patterns all increase processing fluency by engaging procedural memory systems — the systems we use for learned sequences and motor patterns — in the service of verbal processing. A slogan that falls naturally into a rhythm is processed more easily than one that does not, and that ease translates both into perceived truth and into the kind of emotional resonance that makes the slogan feel significant rather than merely asserted.

For the student of propaganda, this means that aesthetic appeal and rhetorical elegance in political or commercial communication are not evidence of quality — they are potential warning signals. The slogan that feels too right, too natural, too obvious to require examination may have achieved that feeling through deliberate design for fluency rather than through contact with genuine truth.


Goebbels and the Repetition Doctrine

Joseph Goebbels did not have access to cognitive psychology research. The mechanism he was exploiting — cognitive fluency and its misattribution to truth — would not be formally identified until thirty-two years after his death. But Goebbels's practical understanding of repetition as a propaganda technique was sophisticated, systematic, and documented in unusual detail in his own diary entries and in the records of the Reich Ministry of Public Enlightenment and Propaganda, which he directed from 1933 until the final days of the Third Reich.

Goebbels's diaries, which run to thousands of pages and were maintained throughout the Nazi period, record his ongoing management of the German propaganda apparatus with remarkable candor. The diaries reveal a man who thought strategically about the relationship between repetition, familiarity, and belief. "Propaganda has only one object," he wrote in a 1941 entry, "to conquer the masses." The instrument of that conquest was not primarily argument — Goebbels was contemptuous of argument as a tool of mass persuasion — but saturation, repetition, and the relentless simplification of complex realities into repeated slogans.

The slogans of Nazi propaganda were not accidental coinages. They were designed for repetition: short, rhythmically structured, emotionally resonant, and easily reproduced across multiple media simultaneously. "Ein Volk, Ein Reich, Ein Führer" (One People, One Empire, One Leader) was a formulation that could be stamped on posters, broadcast on radio, chanted at rallies, and printed in newspapers, creating a saturation of repetition across the entire information environment. "Kraft durch Freude" (Strength through Joy), the slogan of the leisure and recreation program for German workers, appeared on everything from cruise ship brochures to factory floor posters, associating the Nazi state with positive affect through sheer familiarity. "Die Juden sind unser Unglück" (The Jews are our misfortune) — a phrase adapted from an 1879 essay by the historian Heinrich von Treitschke — was so relentlessly repeated through Nazi media that it became, for millions of Germans, a familiar formulation that carried the weight of settled fact rather than incitement.

The phrase's pre-Nazi history is itself instructive. Treitschke had written it in a different context, in a pamphlet addressing what he characterized as excessive Jewish influence in German intellectual life. The phrase did not originate with Goebbels or Hitler. What Nazi propaganda did was to abstract it from its original intellectual context, strip it of nuance, and repeat it so relentlessly — in newspapers, in Der Stürmer (the antisemitic weekly), in school textbooks, in public speeches — that it achieved the status of a proverb: a piece of received wisdom that felt ancient and collectively endorsed even for those who encountered it in living memory. The illusion of age and collective belief, created by repetition, was central to its propaganda function.

The Volksempfänger — the "people's receiver" — was the most important piece of infrastructure for the Nazi repetition strategy. The device, a technically simple and deliberately inexpensive radio receiver, was produced by German manufacturers under a government program designed to make radio ownership accessible to the entire German population. By 1939, Germany had the highest rate of radio ownership of any country in the world. Sixteen million Volksempfänger had been distributed at subsidized prices.

The strategic purpose of the Volksempfänger was explicit and documented in the propaganda ministry's own records. Radio could reach audiences simultaneously — in their homes, at the dinner table, in the factory cafeteria — in a way that print media could not. It was intimate and immediate. And it enabled Goebbels to deploy the repetition strategy at unprecedented scale: the same key messages, the same slogans, the same emotional framings, broadcast simultaneously to millions of listeners across the country and repeated at regular intervals throughout the broadcast day.

Goebbels issued specific instructions to German media outlets — instructions that were documented in the ministry's records and subsequently analyzed by historians — coordinating the key messages to be repeated across all outlets simultaneously. A message appearing in the morning newspaper, broadcast on the lunchtime radio news, repeated in the afternoon broadcast, and referenced again in the evening paper was encountered by a typical German citizen four or five times in a single day from what appeared to be independent sources. The apparent independence was illusory: all of these outlets were operating under coordinated ministry direction. But the effect on the audience — encountering the same message repeatedly from multiple apparently independent sources — was to create precisely the fluency and familiarity that the illusory truth mechanism would subsequently confirm as truth-conferring.

The saturation strategy had a second function beyond direct truth conferral. It prevented the existence of alternative framings. In an information environment completely controlled by a single source issuing coordinated instructions, there was no space for competing narratives to achieve the familiarity necessary to feel credible. The Nazi propaganda machine did not merely make its preferred reality familiar; it crowded out alternative realities by denying them the repetition that familiarity requires.

Goebbels was aware that repetition alone was not sufficient — that messages repeated too nakedly, without variation and without an affective dimension, would eventually trigger cynicism. His instruction to vary the presentation while preserving the core message reflects a practical understanding of what would later be confirmed in the psychology research: that repetition with slight variation maintains fluency effects while reducing the awareness of repetition that can trigger motivated counter-processing.


Echo Chambers and Algorithmic Repetition

The Nazi propaganda machine required a centralized authority — the Reich Ministry of Public Enlightenment and Propaganda, with Goebbels directing the coordination across all media simultaneously. The contemporary digital information environment can produce equivalent repetition effects without any central coordination at all.

The mechanism is the interaction between confirmation bias and algorithmic content personalization.

Confirmation bias — the documented tendency to seek out, attend to, and remember information that confirms existing beliefs while avoiding or discounting information that challenges them — was described in Chapter 4 as one of the most consequential cognitive biases in the propaganda context. In the pre-digital information environment, confirmation bias operated on the information a person encountered through their choices of newspapers, television channels, and social circles. The range of information available was limited by geography and production costs; most people encountered a relatively similar information diet, and confirmation bias operated at the margins.

Digital platforms transformed this landscape. Algorithmic content delivery systems — the recommendation engines of YouTube, Facebook, TikTok, Twitter/X, and their competitors — observe each user's engagement patterns and use those patterns to predict what content the user is likely to engage with next. Content that aligns with a user's existing beliefs and interests typically generates higher engagement (likes, shares, comments, extended viewing time) than challenging content. The algorithm, optimizing for engagement, therefore delivers more confirming content. The user, receiving confirming content, engages more. The algorithm, receiving more engagement signals, delivers still more confirming content. The feedback loop progressively narrows the information environment to content aligned with the user's existing beliefs.
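The feedback loop can be sketched in a few lines. The model below is a deliberately simplified illustration with assumed engagement rates (60% for confirming content, 40% for challenging content), not a description of any real platform's system; its only purpose is to show the structural point that engagement-optimized serving narrows the feed.

```python
# Deterministic toy model of the engagement feedback loop. The engagement
# rates are assumptions (60% vs 40%), not measurements from any platform;
# the point is the structure, not the numbers.

def confirming_share(rounds: int = 200) -> list:
    """Fraction of feed weight held by confirming content after each round."""
    weights = {"confirming": 1.0, "challenging": 1.0}      # start neutral
    engage_rate = {"confirming": 0.6, "challenging": 0.4}  # assumed rates
    history = []
    for _ in range(rounds):
        total = sum(weights.values())
        for kind in weights:
            served = weights[kind] / total               # share of the feed
            weights[kind] += served * engage_rate[kind]  # engagement feeds back
        history.append(weights["confirming"] / sum(weights.values()))
    return history

shares = confirming_share()
assert shares[-1] > shares[0]    # the feed drifts toward confirming content
assert shares == sorted(shares)  # and the drift never reverses
```

Even with this small and symmetric starting point, the loop is self-reinforcing: whichever content type engages slightly better gains serving weight every round, and the gain compounds.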

The result is an environment in which the same claims are encountered repeatedly, from multiple apparent sources, across multiple sessions over extended periods. This is, structurally, an echo chamber — a term introduced by media scholars to describe information environments that reflect and amplify existing beliefs rather than challenging them. But crucially, from the perspective of the illusory truth effect, it does not matter that the multiple sources are drawing from the same original claim through a process of sharing and algorithmic amplification. What matters, cognitively, is the accumulated exposure.

Research by Gordon Pennycook and colleagues has been particularly important in documenting how platform-mediated repetition produces illusory truth effects in the specific context of false news. Their 2018 paper "Prior Exposure Increases Perceived Accuracy of Fake News" — examined in detail in Research Breakdown 2 below — demonstrated that a single prior exposure to a false headline, without any engagement with its content, was sufficient to increase subsequent accuracy ratings. The effect was not limited to politically aligned content; it occurred across partisan lines.

The false independence problem that algorithmic repetition creates is a specific version of the illusory truth mechanism that is particularly important to understand. When a user encounters the same claim from five apparently independent sources — five different accounts, five different platforms, five different surface presentations — the brain interprets this as corroborating evidence: multiple independent sources arriving at the same conclusion provides stronger evidence than a single source. But if all five sources drew from the same original post through a sharing cascade, the apparent independence is illusory. The corroboration is not real. The evidence is not being multiplied; it is being reflected.
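The difference between apparent and genuine independence is easy to state in code. The post records below are hypothetical; in practice, provenance data is rarely this clean, which is part of what makes the false independence problem so hard to detect.

```python
# Five posts that look like five independent sources. The records and IDs
# are hypothetical; real provenance data is rarely this clean.
posts = [
    {"account": "news_feed_a",      "shared_from": "original_post_123"},
    {"account": "daily_digest",     "shared_from": "original_post_123"},
    {"account": "local_voice",      "shared_from": "original_post_123"},
    {"account": "truth_seeker",     "shared_from": "original_post_123"},
    {"account": "independent_blog", "shared_from": "study_report_456"},
]

apparent_sources = len({p["account"] for p in posts})         # what the reader sees
independent_sources = len({p["shared_from"] for p in posts})  # what the evidence is

# The corroboration is mostly a reflection of one original post:
assert apparent_sources == 5
assert independent_sources == 2
```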

The algorithmic repetition environment also creates what might be called illusory consensus: the sense, generated by exposure to many community members making similar claims, that a false claim is widely believed. The social proof mechanism (Chapter 9) and the illusory truth effect reinforce each other in this environment: repeated exposure makes the claim feel true (illusory truth), and the apparent agreement of many community members makes the claim feel socially validated (false consensus). The combination is particularly resistant to correction because it creates the sense not only that the claim is true but that everyone around you agrees it is true.

The comparison between Goebbels's coordinated repetition strategy and contemporary algorithmic repetition is instructive not only for its parallels but for its differences, because the differences have important implications for how resistance can be organized. Goebbels's system had a central command point: the Reich Ministry issued the coordination instructions; if the Ministry were disrupted or its instructions were leaked, the coordination could be compromised. Authoritarian propaganda systems are vulnerable, in principle, to the disruption of their coordination infrastructure — which is part of why the Allied information strategy in World War II focused partly on exposing the coordination to German audiences, to reduce the apparent independence of the sources they were receiving.

Algorithmic repetition has no such central command point. No single actor decides what false claims will be amplified; the amplification emerges from the aggregate of millions of individual engagement decisions interacting with platform optimization systems. This distributed architecture makes it substantially more resistant to the disruption strategies that work against centralized propaganda systems. There is no instruction to leak, no coordinator to expose, no editorial decision to challenge. The system produces its effects through structural properties — the relationship between engagement optimization and the specific characteristics of emotionally provocative false content — rather than through identifiable decisions by identifiable actors.

This means that resistance to algorithmic repetition requires different strategies than resistance to coordinated propaganda. Exposing the coordination — the journalist's traditional tool — is less effective when there is no coordinator. The more promising approaches focus on the structural level: changing the incentive structure that drives engagement optimization, building users' metacognitive awareness of the repetition effect through education, and creating the kind of first-encounter logging habits that allow individuals to track the difference between genuine convergence of evidence and the false independence of shared amplification.
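A first-encounter log of the kind mentioned above can be as simple as a lookup table keyed by claim. The sketch below is one possible design, not a prescribed tool:

```python
import datetime

# Minimal sketch of a first-encounter log (one possible design, nothing more):
# record where and when a claim was first seen, so that later familiarity can
# be traced to repetition rather than mistaken for independent corroboration.
class ClaimLog:
    def __init__(self):
        self._entries = {}  # claim -> {"first_seen", "source", "exposures"}

    def record(self, claim, source):
        entry = self._entries.get(claim)
        if entry is None:
            entry = {"first_seen": datetime.date.today().isoformat(),
                     "source": source, "exposures": 0}
            self._entries[claim] = entry
        entry["exposures"] += 1
        return entry

log = ClaimLog()
log.record("claim: vaccines cause autism (known false)", "social feed")
entry = log.record("claim: vaccines cause autism (known false)", "forwarded message")

# The claim feels familiar on the second encounter; the log shows why:
assert entry["exposures"] == 2
assert entry["source"] == "social feed"  # the original source, not the repeat
```

The design choice that matters is that the original source is recorded once and never overwritten: when the claim resurfaces, the log answers "where did this familiarity actually come from?" rather than adding the new encounter as fresh evidence.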


Repetition in Commercial Propaganda

The advertising industry has over a century of practical experience with the repetition principle, and much of what was subsequently confirmed in academic research was anticipated in the industry's own practical understanding of how repeated exposure builds brand recognition, product familiarity, and purchasing behavior.

The so-called "effective frequency" theory of advertising — the idea that a consumer must be exposed to an advertisement a specific number of times before it influences behavior — became advertising industry conventional wisdom in the mid-twentieth century. Herbert Krugman's influential 1972 paper on the subject argued for a three-exposure minimum for attitude change. The "rule of seven" — the claim that consumers need seven exposures to a message before taking action — circulated as industry wisdom for decades, with limited empirical foundation but considerable intuitive appeal.

The actual research on advertising repetition suggests a more complex picture. The relationship between exposure frequency and attitude change follows a curve — effects increase with initial exposures, level off, and eventually may decline as the audience becomes habituated or, in some contexts, develops resistance. But the basic principle — that repeated exposure builds familiarity that translates into positive affect and behavioral influence — is well-supported.

The advertising industry developed the most durable forms of this understanding: the jingle and the tagline. Both are designed specifically to exploit fluency and familiarity. A jingle is a short musical phrase associated with a brand that, through repetition in broadcast media, becomes so embedded in memory that hearing the first few notes produces the rest of the sequence automatically. This automatic completion — a product of procedural memory laid down through repetition — means that encountering the brand in a non-advertising context (a product on a store shelf, a conversation about the category) activates the familiar musical phrase and, with it, the positive associations built up through repeated exposure.

Taglines function similarly in the verbal domain. "Just Do It" (Nike). "I'm Lovin' It" (McDonald's). "Taste the Rainbow" (Skittles). These phrases are not arguments for the products they represent. They do not provide evidence or make falsifiable claims. They work through mere exposure and fluency — the familiarity of the phrase transfers positive affect to the brand, and the ease with which the phrase comes to mind creates a sense that the brand is a natural, established, familiar part of the world.

Big Tobacco's advertising strategy is particularly instructive in this context because it illustrates the simultaneous exploitation of repetition and authority. The Marlboro Man campaign — examined in detail in Chapter 22 — was one of the most successful repetition-based brand-building operations in advertising history. Philip Morris ran a version of the campaign for over four decades, repeating the core visual and emotional elements — the rugged cowboy, the open western landscape, the masculine independence — across hundreds of executions. The campaign did not merely build brand recognition; it built, through repetition, a felt association between the Marlboro brand and an entire emotional and identity register. That felt association — the product of accumulated exposure rather than argument — persisted in the cultural consciousness long after tobacco advertising was banned from broadcast media.

Product placement in films and television — a form of advertising in which brand names appear in entertainment content without the explicit presentation of an advertisement — exploits repetition without the markers that would trigger awareness of an advertising encounter. An audience member who sees a character in a film drink a specific brand of cola is exposed to the brand's presence in a context associated with narrative entertainment, not commercial persuasion. The exposure registers without the defensive response that explicit advertising sometimes triggers.


The Correction Paradox

Among the most counterintuitive and practically consequential findings in the illusory truth research is what might be called the correction paradox: the observation that correcting a false claim requires repeating it, and that repeating it — even in the context of an explicit correction — may reinforce the fluency that makes it feel true.

The theoretical basis of this paradox was identified early in the misinformation correction literature. A correction that says "It is FALSE that vaccines cause autism" contains the phrase "vaccines cause autism." A reader who encounters this correction has been exposed to the claim, with its falseness explicitly flagged. But the cognitive processing of "vaccines cause autism" occurs regardless of the truth flag attached to it. The phrase has been processed; its fluency has increased; its familiarity has been established or reinforced.

Research on whether corrections can undo illusory truth effects has produced mixed but sobering results. For claims about which audiences have limited prior knowledge or low confidence, corrections can be effective — the explicit truth flag overrides the fluency heuristic when the heuristic's output is sufficiently uncertain. For claims that have been extensively repeated, and particularly for claims about which audiences have strong prior beliefs or strong emotional investments, corrections are substantially less effective. The fluency the claim has accumulated through prior repetition is not easily undone by a single corrective exposure.

The practical implications for journalism and counter-misinformation efforts are significant. Journalism's traditional approach to false claims has been to report the claim and then debunk it — to give the audience the false information and then correct it. This approach has the effect of giving the false claim additional exposure as part of the correction process.

The truth sandwich, a term developed by cognitive linguist George Lakoff and subsequently popularized by journalism critics including Margaret Sullivan, represents a deliberately designed alternative. The principle is: lead with the truth, mention the falsehood once and briefly (if at all), and return to the truth. Do not repeat the false claim multiple times; do not allow the correction to be structured in a way that foregrounds the falsehood. The formulation "Vaccine safety is supported by overwhelming scientific evidence — contrary to a claim circulating online" is a truth sandwich; "Is it true that vaccines cause autism? No, it is not" is not.
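The structural rule can be expressed as a template. The helper below is an illustration of the format, not a formulation taken from Lakoff or Sullivan:

```python
# Illustrative template for the truth-sandwich structure: truth first,
# the falsehood once and briefly, then the truth again. (This wording is
# our own illustration, not a canonical formulation.)
def truth_sandwich(truth: str, falsehood: str) -> str:
    return (f"{truth} A claim circulating online asserts otherwise "
            f"({falsehood}), but the evidence does not support it. {truth}")

correction = truth_sandwich(
    "Vaccine safety is supported by overwhelming scientific evidence.",
    "that vaccines cause autism",
)
assert correction.startswith("Vaccine safety")          # leads with the truth
assert correction.count("vaccines cause autism") == 1   # falsehood appears once
assert correction.endswith("evidence.")                 # closes with the truth
```

The assertions encode the structural constraints directly: the correction opens and closes with the accurate claim, and the falsehood is repeated exactly once.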

The empirical evidence for the truth sandwich approach is suggestive but contested. Studies specifically testing truth sandwich versus standard debunking formats have found some advantage for the truth sandwich in reducing false belief, particularly for claims with which audiences have had extensive prior exposure. The evidence is not yet strong enough to establish the truth sandwich as definitively superior to all correction approaches in all contexts. But the theoretical basis for its advantage is well-grounded in the illusory truth literature.

The timing problem in corrections also deserves attention. A correction that arrives before a false claim has been extensively repeated has a substantially better chance of preventing the fluency buildup that the illusory truth effect requires. A correction that arrives after months or years of repetition is fighting an uphill battle: the false claim may already have accumulated more fluency — may be more cognitively accessible and familiar — than the corrective truth. This is the situation that Sophia identified in her research: the Wakefield vaccine claim had been circulating, being shared, and being repeated for more than a decade before its systematic correction. The fluency it had built in that period was not erased by the retraction.

This is why the inoculation approach — exposing audiences to the techniques of manipulation before they encounter specific manipulative content — is increasingly preferred over retrospective correction in the counter-misinformation research literature. An audience that understands, in advance, that false claims will be repeated to make them feel familiar is better equipped to resist the fluency-based truth signal than an audience encountering the mechanism for the first time while trying to evaluate a specific claim.

The correction paradox also has implications for the design of media literacy education itself. If teaching media literacy requires discussing specific false claims — explaining how a specific piece of misinformation was constructed and what made it effective — the teaching activity may inadvertently expose students to additional repetitions of the false claim, building fluency even while teaching critical analysis. This is not a reason to avoid teaching with specific examples; the educational benefits substantially outweigh the fluency risk. But it suggests that media literacy education should be attentive to the ratio of false-claim repetition to accurate-claim repetition in its pedagogy, and should consistently follow exposure to false claims with explicit return to the accurate information.

Webb's response to Sophia's question captures the correct pedagogical orientation. He did not begin by applying the illusory truth effect to her specific case; he handed her a paper that explained the mechanism first. This sequence — mechanism before application — lets an inoculating metacognitive frame take hold from the outset. Understanding why repetition creates the feeling of truth allows a student to recognize that feeling when it occurs, and to treat it as a signal for investigation rather than as evidence of truth.

The broader implication for civic education is significant. A population that understands the illusory truth effect — that knows, in a practically applicable way, that familiarity is not evidence — is substantially more resistant to the propaganda technique that may be most ubiquitous in the contemporary information environment. This is not a complex or technically demanding piece of knowledge. It is, as Sophia's conversation with Webb illustrates, the kind of insight that can be transmitted in a single exchange, if that exchange happens before the mechanism has been exploited rather than after.


Repetition and Radicalization

The relationship between repetition and radicalization is one of the least well-understood but most practically consequential aspects of the illusory truth effect in the contemporary information environment.

Online radicalization research has documented a pattern in which individuals who initially explore fringe ideological content in a spirit of curiosity progressively move to more extreme positions through extended exposure within communities where extreme claims are normalized through constant repetition. The mechanism is complex and multi-factorial, but the illusory truth effect is a component: claims that are extremely counterintuitive on first encounter — claims about hidden ruling cabals, claims about racial destiny, claims about systematic persecution of an in-group — become more familiar and more cognitively accessible with each repeated encounter. The cognitive accessibility that repetition builds does not necessarily produce explicit belief endorsement, but it reduces the psychological distance between ordinary cognition and the extreme claim.

Within radicalization communities — online forums, closed social media groups, and encrypted messaging channels — repetition operates at very high intensity. Members of active ideological communities may encounter the same core claims dozens of times per day, from dozens of apparent community members, across extended periods of hours-long daily engagement. The cumulative exposure is vastly higher than anything achievable through broadcast media. And the apparent independence of the repetition sources — each community member appearing to arrive independently at the same conclusion — creates the false consensus effect alongside the illusory truth effect.

The crossover point is the moment at which the community's claims acquire higher fluency than the mainstream account of reality. After sufficient immersion in a radicalization community, the core claims of that community's worldview may be more cognitively accessible — more fluent, more familiar, more instantly available — than the corresponding claims of the broader information environment. At this point, the mainstream account may feel strange, unfamiliar, and cognitively effortful to process, while the community's claims feel natural, obvious, and self-evident. The illusion of truth has been so thoroughly established through repetition that the ordinary world has come to feel unfamiliar.

This has direct implications for exit and deradicalization. A person leaving a radicalization community does not merely leave a social group; they leave an information environment in which the claims they have internalized were constantly reinforced. The information environment they enter — the mainstream — will, at least initially, offer far less frequent repetition of any claims at all, true or false. The cognitive experience of exit can involve a form of discomfort that mimics being in an unfamiliar environment: the deeply repeated claims feel more cognitively natural than their replacements, even when the person has made a conscious commitment to change.

Effective deradicalization programs take account of this dynamic. Simply providing accurate information to exiting individuals is insufficient if that information lacks the repetition necessary to build competing fluency. Sustained engagement over extended periods, providing repeated exposure to accurate claims about the world, is necessary to shift the balance of cognitive accessibility in favor of the truth.


Counter-Repetition: Building Fluency for True Claims

The illusory truth effect presents a challenge for counter-propaganda that is easy to misstate. The challenge is not that false claims are inherently more compelling or more memorable than true ones. Research does not support the view that misinformation is structurally superior to accurate information in the competition for memory and belief. The challenge is that false claims, when strategically repeated, acquire fluency that accurate information — which is not always strategically amplified — may lack.

The solution is not simply to "correct" false claims more loudly or more frequently. The solution is to build competing fluency for accurate claims through strategic repetition — to treat the design of accurate information for memorability and repeatability as a priority, not an afterthought.

Chip and Dan Heath's framework for "sticky" messages — presented in their 2007 book Made to Stick and structured around the acronym SUCCES — provides a useful design checklist for counter-propaganda. The six characteristics of sticky messages are:

Simple: The core message is distilled to a single memorable insight. Not simplified to the point of inaccuracy, but expressed in its most essential and memorable form. Complexity defeats repetition; simplicity enables it.

Unexpected: The message violates an expectation in a way that triggers curiosity and sustained attention. An unexpected message is processed more deeply and remembered more durably than a message that confirms what the audience already expects.

Concrete: The message is grounded in specific, tangible details rather than abstract principles. Concrete details are processed more fluently than abstractions and remembered more reliably.

Credible: The message carries internal or external credibility markers — not manufactured authority, but genuine evidence presented in accessible form. Statistical evidence anchored in human-scale stories is more credible and more memorable than abstract statistics alone.

Emotional: The message engages genuine emotional response — not fear or outrage manufactured for manipulation, but legitimate empathy, pride, or concern proportionate to the actual stakes. Emotional engagement increases processing depth and memory durability.

Story: The message is structured as a narrative, with a character, a challenge, and a resolution. Narrative is the most ancient and most powerful format for human information transmission; stories are processed fluently, retained durably, and repeated naturally.

Applied to counter-propaganda design, the SUCCES framework suggests that accurate information needs to be designed for stickiness — not dumbed down, but crafted for the specific qualities that make messages memorable and repeatable. Public health campaigns that have succeeded against misinformation have typically combined accurate information with these design characteristics: the "truth" anti-smoking campaign's use of concrete, story-based, emotionally resonant content is a case in point, examined in Chapter 36.

The fundamental insight is that counter-repetition is a design problem, not merely a truth problem. Having the truth on your side is necessary but not sufficient. Truth needs to be packaged, designed, and strategically repeated to build the fluency that will allow it to compete with false claims that have benefited from strategic repetition.


The Ethics of Deliberate Repetition

The illusory truth effect raises a genuine ethical question that any communicator who understands it must face: if repetition builds perceived truth independently of evidence, is the deliberate repetition of even accurate claims a form of manipulation?

This question is not merely academic. Governments design public health campaigns around repeated messaging. Advocacy organizations deploy slogans and talking points intended to be repeated across media. Journalists develop standards for repeating key facts in news coverage. Teachers assign readings and repeat key concepts across multiple lessons. Each of these practices deliberately exploits the fluency-truth link — building familiarity with accurate information to increase its perceived credibility. Are these practices ethical?

The philosophical framework for thinking about this question draws on the distinction between autonomy-preserving and autonomy-violating persuasion introduced in Chapter 2. Persuasion that works by providing relevant evidence and sound argument to a rational agent who can evaluate the argument on its merits preserves the agent's epistemic autonomy — their capacity to reach conclusions through their own reasoning. Persuasion that works by bypassing rational evaluation — by exploiting cognitive mechanisms that operate below the level of conscious reasoning — potentially violates that autonomy, because it influences belief through a channel that the agent cannot easily monitor or correct.

By this standard, the deliberate repetition of false claims — to build illusory truth fluency in an audience — is clearly manipulative. It is designed to produce belief by bypassing the evaluative process through which accurate belief-formation works. This is the propagandist's use of repetition: not argument but saturation.

But the case of accurate claims is more complex. Consider a public health agency that repeatedly emphasizes "Vaccines are safe and effective." The claim is true. The agency's intent is to promote public health. The repetition will build fluency that increases perceived truth for the claim. But the same mechanism — cognitive fluency misattributed to truth — is at work as in the propaganda case. The difference is the truth value of the claim and the transparency of the intent.

Several considerations converge on a rough ethical principle for deliberate repetition in democratic communication. First, truth value: repeating true claims exploits the fluency mechanism in a direction that converges with reality; the fluency it builds is warranted by the underlying evidence. Repeating false claims exploits the same mechanism in a direction that diverges from reality. Second, transparency: when a communicator's interest in repeating a claim is disclosed — when the audience understands that a public health campaign has an explicit goal of increasing vaccination rates, for instance — the repetition is not covert. The audience can account for the communicator's interest when evaluating the message. Third, competing fluency: in an information environment where false claims are being strategically repeated to build illusory truth fluency, counter-repetition of accurate claims is not merely a rhetorical choice but a form of epistemic defense. An accurate claim that lacks competitive fluency because it is never repeated is at a systematic disadvantage to false claims that are strategically amplified.

The practical conclusion for Sophia's journalism training — and for anyone engaged in public communication — is that the ethics of repetition depend less on the mechanism itself than on the combination of accuracy, transparency, and proportionality. Repeating an accurate claim transparently, in proportion to the genuine evidence for it, in an information environment where competing false claims are being strategically amplified, is ethically defensible in ways that the propaganda use of repetition is not.

This does not mean that counter-repetition of accurate information is unproblematic. There is a real risk that the deliberate design of "sticky" accurate messages for viral spread can erode the epistemic virtue of proportionality — of calibrating confidence to evidence — in ways that subtly shift the norms of democratic communication toward persuasion by saturation rather than persuasion by argument. Maintaining the distinction between helping truth compete in a manipulated information environment and adopting the manipulative techniques of the environment requires ongoing ethical vigilance that is, ultimately, part of the professional responsibility of anyone who communicates publicly.


Research Breakdown 1: Fazio et al. (2015) — Knowledge Does Not Protect Against Illusory Truth

Lisa Fazio, Nadia Brashier, B. Keith Payne, and Elizabeth Marsh published "Knowledge Does Not Protect Against Illusory Truth" in the Journal of Experimental Psychology: General in 2015. The paper addressed a natural and important objection to the illusory truth effect: surely the effect applies only to unfamiliar claims where audiences have no existing knowledge to anchor their judgments? Surely someone who already knows that a claim is false cannot have their truth judgment shifted upward by mere repetition?

Fazio and colleagues designed a study that directly tested this objection. They selected a set of statements that were demonstrably false but about which participants would be expected to have some prior knowledge — statements like "A sari is the name of the short, sword-like weapon used by knights in the Middle Ages" and "The largest ocean on Earth is the Atlantic Ocean." These were not arcane claims; they were claims that a reasonably well-informed participant would know to be false.

Participants rated these statements' validity across two sessions. Before the second session, some participants received explicit fact-checking feedback on a subset of the statements. Others did not. The key finding: repeated exposure to the false statements increased their perceived validity even for participants who, based on their initial ratings, clearly knew the statements were false. The increase was smaller for clearly wrong statements than for uncertain ones, but it was statistically significant and practically meaningful. And explicit fact-checking feedback did not reliably prevent the repetition effect from increasing perceived truth.

The implications are significant. The illusory truth effect cannot be dismissed as a feature of ignorance that education will automatically address. It operates on claims that educated, knowledgeable audiences have already evaluated and rejected. This means that standard educational approaches — teaching audiences the facts, correcting the record — are insufficient to prevent the illusory truth effect from increasing belief in false claims through repetition. What is needed is not merely knowledge of the correct facts but active resistance to the fluency-based truth signal — a form of metacognitive awareness that the processing ease generated by repetition is not evidence of truth.

The study also found that the illusory truth effect influenced behavioral measures as well as explicit truth ratings. Participants who had been repeatedly exposed to false claims were more likely to choose the false answer on a subsequent knowledge test, even when explicit truth information was available. The effect had penetrated below the level of explicit belief to influence the automatic retrieval of information in a testing context.

For propaganda analysis, Fazio et al.'s finding establishes that the illusory truth effect is not confined to the naive or the undereducated. It is a feature of normal cognitive processing that operates on all audiences, including educated and skeptical ones. The specific magnitude of the effect may vary with knowledge and motivation, but the direction is consistent: repetition increases perceived truth, and that increase is not reliably prevented by prior knowledge or explicit correction.


Research Breakdown 2: Pennycook, Cannon, and Rand (2018) — Prior Exposure Increases Perceived Accuracy of Fake News

Gordon Pennycook, Tyrone Cannon, and David Rand published "Prior Exposure Increases Perceived Accuracy of Fake News" in the Journal of Experimental Psychology: General in 2018. The paper directly applied the illusory truth paradigm to the specific context of social media misinformation — the most practically urgent application of the phenomenon.

The study's design was modeled on the Hasher et al. paradigm but used actual false news headlines from sources identified as fabricating or misrepresenting news content. Participants were shown a series of headlines — some true, some false — across two sessions, and rated each headline's accuracy on a numerical scale.

The central finding replicated and extended the illusory truth effect in this new context: prior exposure to a false headline — even without any engagement with the headline's content, without clicking through, without reading a supporting article — was sufficient to increase subsequent ratings of that headline's accuracy. The effect was observed across both politically congruent false headlines (headlines aligned with the participant's own political views) and politically incongruent ones (headlines at odds with the participant's political views). Partisan alignment moderated the magnitude of the effect — false headlines congruent with political beliefs showed larger effects — but the effect was not confined to partisan alignment.

The implications of this finding for social media platform design are considerable. A user who scrolls through a social media feed without clicking on, liking, or sharing any content is still being exposed to headlines. That exposure — even the purely passive, unengaged scrolling exposure — is sufficient, according to Pennycook et al.'s findings, to increase subsequent truth ratings for false headlines encountered in that scroll. The algorithm that determines what content appears in a user's feed is, through its repetition decisions, shaping the user's truth judgments even when the user is not actively engaging with any of the content.

This finding also has implications for the design of counter-misinformation interventions. Prompts that increase users' engagement with the accuracy of content — that activate analytical thinking about whether a headline is likely to be true — have been found in subsequent Pennycook and Rand research to reduce the sharing of false news headlines. But these prompts must activate accuracy consideration at the moment of exposure, not retrospectively. The illusory truth effect begins building on first exposure; an intervention that arrives after the first encounter is already fighting against the fluency that has been established.

Pennycook and Rand's broader research program has found that the sharing of misinformation is less often a product of deliberate deception and more often a product of inattention — a failure to engage analytical processing at the moment of encounter. People share false headlines not primarily because they want to deceive but because the accuracy of content is not the salient consideration at the moment of sharing decision. Interventions that make accuracy salient — without being accusatory or demanding extended reflection — can meaningfully reduce false news sharing without requiring large changes in platform architecture.


Primary Source Analysis: The Volksempfänger as Repetition Infrastructure

Source: The Volksempfänger (VE 301, introduced 1933), a technical device produced under a program coordinated by the Reich Ministry of Public Enlightenment and Propaganda and manufactured by German electronics companies including Telefunken and Seibt. The device's development, pricing, and distribution were explicitly directed by the propaganda ministry; internal ministry records documenting the strategic purpose are held in the Bundesarchiv, Berlin.

The device and its design: The Volksempfänger (People's Receiver) was deliberately designed to receive only domestic broadcasts. It could receive medium-wave transmissions — the frequencies used by German domestic broadcasters — but not short-wave frequencies, on which foreign broadcasts were transmitted. This was not a cost-cutting measure; it was a deliberate design choice to restrict the information environment. The device could receive propaganda but not counter-propaganda.

The device was sold at a deliberately low price: 76 Reichsmarks for the original VE 301, and just 35 Reichsmarks for the stripped-down Deutscher Kleinempfänger introduced in 1938, at a time when a comparable commercial radio might cost several hundred. The pricing was designed specifically to enable working-class households to afford a radio receiver. By 1939, approximately 70 percent of German households owned a radio, the highest penetration rate in the world at the time.

The strategic purpose: The propaganda ministry's documentation is explicit about the Volksempfänger's purpose. Goebbels declared, in an August 1933 speech to German broadcasters, that radio was "the eighth great power" — more powerful than print because it reached audiences in their homes, simultaneously, with an intimacy that print could not match. The investment in radio infrastructure was an investment in the repetition apparatus: a mechanism for simultaneously delivering the same core messages, in the same emotional register, to the maximum possible number of German households.

The ministry's instruction to German radio broadcasters specified the key messages to be repeated across the broadcast day, coordinated with the print press and cinema newsreel so that the same messages would be encountered by Germans across multiple apparently independent channels on the same day.

Implicit audience: The Volksempfänger was designed for mass reach — specifically for the working-class and lower-middle-class households that were economically marginalized enough that radio had previously been a luxury good. This audience had less access to diverse print media and was therefore more dependent on radio as an information source. The saturation strategy was designed to work with particular effectiveness in this segment of the population.

The omission: The Volksempfänger's technical restriction to domestic frequencies is the crucial omission in its public presentation. The device was marketed as a way for German families to enjoy the benefits of radio culture — music, drama, news — while its design as a propaganda reception device rather than a general-purpose communications technology was not foregrounded. The restriction became relevant in wartime, when listening to foreign broadcasts on any radio became a criminal offense, but the architecture of the Volksempfänger had prefigured this restriction by making most foreign broadcasts technically inaccessible anyway.

Historical significance: The Volksempfänger is among the clearest examples of information infrastructure investment specifically designed to enable propaganda repetition. It was not the propaganda itself; it was the mechanism through which propaganda could be repeatedly delivered to a maximum audience at minimum cost. Understanding it as infrastructure — as an investment in delivery capacity rather than message content — helps illuminate the relationship between technology, control, and the illusory truth effect at population scale.


Debate Framework: Should Journalists Avoid Repeating False Claims Even When Correcting Them?

The truth sandwich principle — that journalists should avoid repeating false claims even in the context of correction — has become one of the most actively debated questions in contemporary journalism ethics. The debate cuts to the core of what journalism is for and how corrections work.

Position A: Corrections must repeat the claim to be comprehensible. The function of a correction is to update audiences who have encountered the false claim. A correction that does not specify what is being corrected leaves audiences unable to connect the correction to the claim it addresses. "Vaccines are safe and effective" tells audiences nothing about the specific false claim circulating online; it cannot function as a correction for audiences who have encountered "vaccines cause autism." To correct effectively, the journalist must name the false claim being corrected. The alternative — a truth sandwich that barely mentions the false claim — may be incomprehensible to audiences who have not yet encountered the misinformation and irrelevant to audiences who have, because it fails to engage directly with the specific claim that has built fluency in their cognitive systems.

Additionally, avoiding the repetition of false claims entirely would make it impossible to cover certain stories at all. The story of the manufactured doubt campaign described in Chapter 10 cannot be told without repeatedly mentioning the false claims that the campaign promoted. The story of Wakefield's fraudulent vaccine paper cannot be told without describing what the paper claimed. Journalism that cannot engage with false claims directly is journalism that cannot hold false claimants accountable.

Position B: The truth sandwich is both more effective and more responsible. Given the evidence from illusory truth research that repetition of a claim increases its perceived truth regardless of the truth flag attached to it, a journalistic practice that leads with the false claim — even to correct it — is contributing to the fluency buildup that makes false claims feel true. The truth sandwich does not require avoiding all mention of the false claim; it requires minimizing repetition of the false claim and maximizing repetition of the accurate information.

This position notes that journalistic practice has historically been shaped around the intuitive model of how correction works — you identify the false claim, you correct it — without accounting for the cognitive mechanism by which familiarity builds truth-perception independently of explicit reasoning. Journalism's failure to account for the illusory truth effect is not simply a journalistic failure; it is a structural feature of how institutions develop practices before the relevant science is available to inform them. The science is now available; journalistic practice should evolve to account for it.

Toward a resolution: The positions are not entirely incompatible. There is a middle ground — which many journalism ethicists are exploring — that involves strategic proportioning: naming the false claim once, briefly, without emotional amplification, and devoting significantly more space and repetition to the accurate information. The debate is less about whether to mention false claims than about the ratio of repetition given to the false claim versus the accurate information, and about the structural placement of each. The truth sandwich is better described as a principle of proportionality — maximize attention to the truth, minimize attention to the falsehood — than as an absolute prohibition on mentioning false claims.


Argument Map: Algorithmic Amplification as Functional Propaganda

Central Claim: Algorithmic amplification of repeated false claims is functionally equivalent to organized propaganda even without a coordinating propagandist.

Supporting Premise 1: Propaganda's defining features include systematic repetition of specific claims to a large audience, creating familiarity that the illusory truth mechanism converts to perceived truth. This definition does not require centralized coordination; it describes an outcome that can be produced by distributed mechanisms.

Supporting Premise 2: Recommendation algorithms systematically amplify content that generates high engagement. False and emotionally provocative content reliably generates higher engagement than accurate and nuanced content. Therefore, recommendation algorithms systematically amplify false and emotionally provocative content.

Supporting Premise 3: The result — the systematic, large-scale repetition of specific false claims across a large audience — produces illusory truth effects equivalent to those documented in coordinated propaganda campaigns.

Objection 1: Propaganda requires intent to deceive; algorithmic amplification is the unintended side effect of optimization for engagement, not a deliberate deception strategy. Therefore, even if the outcome is similar, the mechanism is fundamentally different and should not be categorized as propaganda.

Response to Objection 1: The functional definition of propaganda adopted in Chapter 1 focuses on effect, not intent. A distributed system that systematically produces the cognitive effects of propaganda — increased truth perception for false claims, reduced epistemic diversity, erosion of ability to evaluate claims on evidence — is functionally equivalent to propaganda in its consequences, whatever the intent of its designers. If propaganda is defined by its effects on audiences rather than by the intent of a propagandist, algorithmic amplification qualifies.

Objection 2: The comparison wrongly implies that algorithmic amplification is designed to serve specific political or commercial goals, when in fact it serves only the goal of maximizing engagement with no particular ideological content preference.

Response to Objection 2: The engagement optimization goal is not neutral in its effects. Content that benefits from algorithmic amplification tends to share specific characteristics (emotional provocation, outrage, simplicity, novelty of the extreme) that systematically favor certain kinds of claims over others. The absence of explicit political direction does not mean the amplification is ideologically neutral in its effects.


Action Checklist: First-Encounter Logging

The illusory truth effect operates through accumulated exposure. A first-encounter log is a practical tool for maintaining awareness of when you first encounter a claim, before repetition has built fluency that mimics familiarity with truth.

Step 1: Note the source and date of first encounter. When you encounter a claim you have not seen before — especially a claim with propaganda potential (simple, emotionally provocative, politically charged, health-related) — make a brief note of where you first encountered it and when.

Step 2: Note your initial response. How credible did the claim seem on first encounter? Did it align with or challenge your existing beliefs? What was your initial reaction — skeptical, uncertain, receptive?

Step 3: Track subsequent encounters. When you encounter the same claim again — in a different source, a different context — note it. How many times have you encountered this claim? From how many apparently independent sources?

Step 4: Check whether your sense of the claim's credibility has shifted. After multiple encounters, does the claim feel more familiar and therefore more credible than it did on first encounter? Has your assessment of its truth changed without new evidence? If so, that shift may be the illusory truth effect rather than updated reasoning.

Step 5: Seek the source. For claims encountered multiple times, trace them to their original source. Was the claim generated by a single original post that was amplified and shared? Do the multiple apparent sources represent genuine independent corroboration, or false independence?

Step 6: Apply the lateral reading process. For claims that feel familiar but whose original source you cannot identify, apply lateral reading to evaluate the claim's evidential basis independently of its fluency.
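For readers who keep notes digitally, the six steps above can be sketched as a small data structure. Everything in this sketch is a hypothetical illustration of the logging habit the chapter describes, not a prescribed tool: the class names (`ClaimLog`, `Encounter`) and the 1-to-5 credibility rating are assumptions introduced here.

```python
"""A minimal first-encounter log, sketching Steps 1-5 of the checklist."""
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Encounter:
    source: str       # where the claim appeared (Step 1 / Step 3)
    seen_on: date     # when it appeared
    credibility: int  # your gut rating at that moment, 1 (doubt) to 5 (belief) -- Step 2

@dataclass
class ClaimLog:
    claim: str
    encounters: list = field(default_factory=list)

    def record(self, source: str, seen_on: date, credibility: int) -> None:
        """Log one encounter with the claim (Steps 1-3)."""
        self.encounters.append(Encounter(source, seen_on, credibility))

    def drift(self) -> int:
        """Change in felt credibility since first encounter (Step 4)."""
        if len(self.encounters) < 2:
            return 0
        return self.encounters[-1].credibility - self.encounters[0].credibility

    def apparent_sources(self) -> set:
        """Distinct apparent sources, for the false-independence check (Step 5)."""
        return {e.source for e in self.encounters}

# Hypothetical usage: three encounters with the same claim over a month.
log = ClaimLog("Example claim circulating in my feed")
log.record("Forwarded group message", date(2024, 3, 1), 2)
log.record("Video clip", date(2024, 3, 9), 3)
log.record("Comment thread", date(2024, 4, 2), 4)

# A positive drift with no new evidence is the warning sign the chapter describes.
print(log.drift())                 # 2
print(len(log.apparent_sources())) # 3
```

The point of the sketch is the `drift` check: if felt credibility has risen across encounters while the evidence column stays empty, the shift is a candidate for the illusory truth effect rather than updated reasoning.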


Inoculation Campaign: Technique Identification Matrix — Row 5

This chapter contributes Row 5 to the Technique Identification Matrix: Repetition and the Illusory Truth Effect.

Row 5: Repetition and the Illusory Truth Effect

Core mechanism: Repeated exposure to a claim increases its perceived truth through cognitive fluency — the processing ease that familiarity creates, which the cognitive system misattributes to truth. This effect operates below the level of argument and is resistant to correction.

Warning signal 1: A claim feels familiar and credible, but you cannot identify where or when you first encountered it.

Warning signal 2: Multiple apparently independent sources in your information environment are making the same claim. Consider whether all of these sources may be amplifying the same original post.

Warning signal 3: Your sense of the claim's credibility has increased over time without any new evidence being presented.

Warning signal 4: A claim is expressed in a catchy, highly repeatable format — a slogan, a rhyme, a simple memorable phrase — that is designed for easy transmission.

Inoculation technique: Introduce the first-encounter logging exercise before asking your target community to evaluate any specific claim. Make the metacognitive question — "Have I heard this before, and is that familiarity influencing my assessment?" — habitual.

Example for your campaign: Identify a specific false or misleading claim that has been widely repeated in your target community's media environment. Trace its history: when did it first appear? How many times has it been repeated? From how many apparent sources? Has the apparent credibility of the claim in your community increased over time?

Complete Row 5 by applying the first-encounter logging process retroactively to a specific false or misleading claim circulating in your target community. Reconstruct, as best you can, the repetition history of the claim and its relationship to its apparent credibility in the community.


Sophia's discovery of the illusory truth effect changed how she thought about her journalism seminar project, but it also changed how she thought about her own role as a future journalist. The correction paradox means that how she covers false claims — the ratio of false-claim repetition to accurate-information repetition in her reporting — will have real effects on the epistemic environment her audience inhabits. This is not a comfortable discovery for someone trained in the tradition of balanced, comprehensive reporting. But it is an honest one. The illusory truth effect does not care about journalistic intention. A correction that foregrounds the false claim twelve times builds fluency for the false claim, regardless of whether the journalist wrote it to debunk or to inform.

Webb's challenge to Sophia — and to anyone who communicates publicly about contested facts — is to design for the cognitive reality of the audience rather than for the procedural comfort of the communicator. Leading with truth, minimizing false-claim repetition, and building fluency for accurate information through strategic design and repeated exposure is harder than the traditional correction format. It requires accepting that cognitive mechanisms are real, that they operate independently of intention, and that responsible communication in the age of algorithmic amplification requires knowing and accounting for the illusory truth effect at every step of the journalistic and advocacy process.

That is the work. It is ongoing, imperfect, and necessary.

Chapter 11 is part of Part Two: Techniques. The next chapter examines visual propaganda — symbols, images, and the specific mechanisms through which visual information bypasses analytical processing.