Appendix B: Key Studies and Empirical Findings

This appendix provides annotated summaries of the most important empirical studies, theoretical works, and landmark reports referenced throughout Propaganda, Power, and Persuasion. Entries are organized by topic area rather than by chapter, so that students can use this appendix as a standalone research reference. For each study, the entry includes the full citation, a brief description of the study design or methodology, the key finding, its significance for propaganda studies, and (where relevant) known limitations or subsequent challenges to the findings.

Students wishing to trace how specific studies connect to course arguments should cross-reference the in-text citations and the Further Reading lists at the end of each chapter.


Section 1: Foundational Studies in Propaganda and Persuasion

Lasswell, Harold D. (1927). Propaganda Technique in the World War. New York: Knopf.

Study design: Historical-analytical study drawing on official documents, memoirs, wartime journalism, and government records from the belligerent nations of World War I to reconstruct how propaganda was systematically organized and deployed.

Key finding: Lasswell demonstrated that propaganda was not an incidental byproduct of war but a planned instrument of statecraft. Governments on all sides developed bureaucratic machinery to manufacture consent domestically and demoralize enemies abroad. He identified four core objectives of wartime propaganda: mobilizing hatred against the enemy, preserving the alliances of friendly nations, securing the cooperation of neutral parties, and demoralizing enemy populations.

Significance for propaganda studies: Lasswell established propaganda studies as a legitimate field of political science and introduced the "who says what, in which channel, to whom, with what effect" framework that would anchor communication research for generations. He treated propaganda not as pathology but as technique — a politically neutral tool whose moral valence depended entirely on ends and means.

Limitations: The study is largely descriptive and lacks systematic quantitative evidence. Its focus on elite-level production of propaganda says little about reception or persuasive effectiveness at the individual level.


Bernays, Edward L. (1928). Propaganda. New York: Horace Liveright.

Study design: Practitioner account and theoretical manifesto by the pioneering American public relations consultant, drawing on his own campaign work, Walter Lippmann's account of public opinion, and Gustave Le Bon's crowd psychology.

Key finding: Bernays argued that in a modern mass democracy, the deliberate shaping of public opinion through carefully designed symbolic campaigns — what he would later call the "engineering of consent" — is both inevitable and necessary. He outlined techniques including the use of trusted intermediaries ("third-party authority"), the manufacture of "news events," and the appeal to unconscious desires rather than rational argument.

Significance for propaganda studies: Bernays explicitly collapsed the distinction between commercial advertising, public relations, and political propaganda, revealing them as variations of the same practice. His frank endorsement of elite manipulation as socially beneficial remains one of the most cited illustrations of the "propaganda as system design" perspective. The book anticipated later scholarship on framing, agenda-setting, and the social construction of news.

Limitations: As a practitioner account rather than empirical research, the book offers no controlled evidence for the effectiveness of the techniques Bernays describes. Its theoretical framework uncritically reproduces Le Bon's crowd psychology, which has been substantially revised by subsequent social science.


Hovland, Carl I., Irving L. Janis, and Harold H. Kelley (1953). Communication and Persuasion: Psychological Studies of Opinion Change. New Haven: Yale University Press.

Study design: Compilation and synthesis of experimental research conducted under the Yale Communication Research Program during and after World War II, using controlled laboratory experiments with college student and military populations to test the effects of message characteristics, source credibility, and audience factors on attitude change.

Key finding: Hovland and colleagues produced a systematic taxonomy of persuasion variables. Among their most replicated findings was the "sleeper effect": messages from low-credibility sources initially produce less attitude change than messages from high-credibility sources, but over time this gap diminishes as audiences retain the message content while forgetting the source. Source credibility itself was shown to depend on perceived expertise and trustworthiness, which are separable dimensions.

Significance for propaganda studies: The Yale program established the experimental method as the gold standard for persuasion research and introduced the source-message-audience-channel framework that dominated the field for three decades. The sleeper effect has direct implications for disinformation: propaganda may be more durable than its disreputable origins would suggest.

Limitations: The laboratory setting using college students raises generalizability concerns. Subsequent attempts to replicate the sleeper effect yielded mixed results, with some researchers arguing it requires specific temporal and message-strength conditions to appear reliably.


McGuire, William J. (1961). "Resistance to Persuasion Conferred by Active and Passive Prior Refutation of the Same and Alternative Counterarguments." Journal of Abnormal and Social Psychology, 63(2), 326–332; and McGuire, W. J. (1964). "Inducing Resistance to Persuasion." Advances in Experimental Social Psychology, 1, 191–229.

Study design: A series of controlled laboratory experiments in which participants were exposed to "cultural truisms" (widely accepted beliefs with no prior exposure to counterarguments), then received either supportive information reinforcing the belief or "refutational" pre-treatments that exposed them to weakened counterarguments and rebuttals before presenting an attack message.

Key finding: Participants who received refutational pre-treatment were significantly more resistant to subsequent persuasion attempts than those who received supportive information or no pre-treatment. By analogy to biological immunization, exposure to a weakened form of the attack "inoculated" recipients against the full-strength version. Both active (generating one's own rebuttals) and passive (reading provided rebuttals) forms conferred some resistance.

Significance for propaganda studies: McGuire's inoculation framework became one of the most generative theories in persuasion research and, decades later, in counter-disinformation practice. The core insight — that pre-emptive exposure to weakened manipulation attempts builds psychological resistance — provides the theoretical foundation for prebunking campaigns and media literacy interventions.

Limitations: McGuire's original studies used highly artificial "truism" beliefs (e.g., "everyone should brush their teeth") specifically because they had no prior cultural defenses; it was unclear whether inoculation would work for contested political topics where people already hold strong prior beliefs. Subsequent research (see Section 4) has addressed this systematically.


Section 2: Cognitive Bias and Information Processing

Tversky, Amos, and Daniel Kahneman (1973). "Availability: A Heuristic for Judging Frequency and Probability." Cognitive Psychology, 5(2), 207–232.

Study design: Series of judgment experiments in which participants estimated the frequency of categories of events and words and judged probabilities; the key manipulation was whether examples of a category were easy or difficult to retrieve from memory.

Key finding: People systematically overestimate the frequency or probability of events that are cognitively available — easy to call to mind — regardless of their actual base rate. Vivid, recent, emotionally salient, or personally experienced events are judged as more common or likely than statistically rarer events that are harder to imagine.

Significance for propaganda studies: The availability heuristic explains why propaganda strategies that repeatedly circulate dramatic imagery, emotionally charged anecdotes, and vivid worst-case scenarios can distort audience risk perception without making factually false claims. Saturation coverage of atrocities (real or fabricated) inflates perceived threat; suppression of coverage of beneficial events deflates perceived benefits.

Limitations: Later research has complicated the cognitive mechanism, distinguishing between the availability of content and the ease of the retrieval process itself. The political and motivated reasoning contexts in which propaganda operates add further complexity beyond the original laboratory setting.


Petty, Richard E., and John T. Cacioppo (1986). "The Elaboration Likelihood Model of Persuasion." Advances in Experimental Social Psychology, 19, 123–205.

Study design: Synthesis of approximately a decade of experimental work testing how message recipients process persuasive communications under varying conditions of motivation and cognitive capacity.

Key finding: The ELM proposes two routes to persuasion: the "central route," in which recipients carefully evaluate argument quality, and the "peripheral route," in which they rely on simple heuristics (source attractiveness, consensus cues, message length). High-involvement or high-ability audiences tend toward the central route; distracted, low-involvement, or low-ability audiences tend toward the peripheral route. Attitude changes via the central route are more durable and predictive of behavior.

Significance for propaganda studies: The ELM explains why propaganda often targets distracted, overwhelmed, or low-engagement audiences with peripheral cues rather than logical argument — and why information overload (a documented tactic in Russian "firehose of falsehood" strategies) can degrade central-route processing. It also explains why emotional appeals, celebrity endorsements, and repetition can be effective even when argument quality is poor.

Limitations: The distinction between "central" and "peripheral" processing has been criticized as overly binary. Chaiken's heuristic-systematic model (HSM) offers a parallel framework with some empirical advantages. The ELM was developed primarily in laboratory settings using low-stakes consumer attitude topics.


Cialdini, Robert B. (1984). Influence: The Psychology of Persuasion. New York: William Morrow.

Study design: Cialdini's framework synthesizes three years of field observation inside influence professions (sales, advertising, fundraising, public relations) combined with experimental social psychology literature. The book identifies six foundational influence principles, each supported by both field observation and controlled experiments.

Key finding: Cialdini identified six universal principles of compliance: reciprocity (people repay what they receive), commitment and consistency (people align with prior commitments), social proof (people look to others' behavior under uncertainty), authority (people defer to perceived experts), liking (people comply with those they like), and scarcity (people value what is rare or diminishing). Each principle can be triggered through specific cues, often without conscious awareness.

Significance for propaganda studies: Cialdini's taxonomy provides a checklist for analyzing propaganda and influence operation tactics. Social proof underlies the manufactured consensus technique; authority underlies the use of fabricated experts and scientific-seeming sources; scarcity underlies urgency framing in recruitment and radicalization. The framework has been widely adopted in counter-influence training.

Limitations: The principles were developed largely in WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations, and cross-cultural variation in susceptibility to specific principles is documented. Because the book is a practitioner synthesis rather than a single experimental program, some principles have firmer empirical foundations than others.


Pennycook, Gordon, and David G. Rand (2019). "Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning." Cognition, 188, 39–50.

Study design: Survey experiments with large online samples (via Mechanical Turk and Lucid) presenting participants with politically concordant and discordant true and false news headlines, measuring belief accuracy alongside cognitive style scores (Cognitive Reflection Test) and political identity measures.

Key finding: Contrary to the "motivated reasoning" hypothesis — which predicts that people are better at identifying fake news that contradicts their political opponents — Pennycook and Rand found that analytic thinking was a stronger predictor of news discernment than political concordance across both liberal and conservative samples. Better reasoners were more accurate regardless of whether headlines aligned with their politics.
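
To make the outcome measure concrete, the sketch below shows how "news discernment" is commonly operationalized in this line of work. The input file, column names, and model specification are illustrative assumptions, not the study's actual data or code.

```python
# Minimal sketch of a discernment analysis; file and columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant-headline rating:
#   accuracy_rating: perceived accuracy (e.g., 1-4 scale)
#   is_true:         1 if the headline is factually true
#   crt:             Cognitive Reflection Test score (0-1)
#   concordant:      1 if the headline flatters the participant's politics
ratings = pd.read_csv("ratings.csv")

# The is_true x crt interaction captures whether better reasoners discriminate
# true from false headlines more sharply; the is_true x concordant interaction
# captures motivated reasoning (sharper discernment only for politically
# convenient content). Standard errors are clustered by participant.
model = smf.ols(
    "accuracy_rating ~ is_true * crt + is_true * concordant",
    data=ratings,
).fit(cov_type="cluster", cov_kwds={"groups": ratings["participant"]})
print(model.summary())
```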

Significance for propaganda studies: The finding suggests that susceptibility to disinformation is more a function of cognitive engagement than partisan tribalism. This has important implications for intervention design: approaches that activate analytic thinking (such as accuracy prompts) may be broadly effective rather than requiring ideologically tailored content.

Limitations: The study uses headline-only stimuli in artificial conditions that may not reflect real-world media consumption. The debate with motivated-reasoning theorists remains active; Kahan and colleagues argue that motivated reasoning is most pronounced among the most politically knowledgeable participants.


Pennycook, Gordon, Adam Bear, Evan T. Collins, and David G. Rand (2020). "The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings." Management Science, 66(11), 4944–4957.

Study design: Online experiment in which participants rated the accuracy of news headlines, some of which carried warning labels while others did not. A control condition presented all headlines without labels.

Key finding: Attaching warning labels to a subset of false headlines produced an "implied truth effect" for unlabeled headlines: participants rated unlabeled stories as more accurate in the warning condition than in the no-label control, apparently inferring that anything not labeled must have been reviewed and found accurate.
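
The key contrast can be stated compactly in code. The sketch below is illustrative only: the file and column names are hypothetical, and the published analysis used regression models rather than a bare t-test.

```python
# Sketch of the implied-truth contrast; file and columns are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("headline_ratings.csv")

# The effect concerns UNLABELED FALSE headlines only: do they look more
# accurate when other headlines in the same feed carry warnings?
unlabeled_false = df[(df["is_false"] == 1) & (df["has_warning"] == 0)]
treated = unlabeled_false[unlabeled_false["condition"] == "partial_labels"]
control = unlabeled_false[unlabeled_false["condition"] == "no_labels"]

diff = treated["accuracy_rating"].mean() - control["accuracy_rating"].mean()
t, p = stats.ttest_ind(treated["accuracy_rating"], control["accuracy_rating"])
print(f"implied truth effect: {diff:.3f} (t = {t:.2f}, p = {p:.4f})")
```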

Significance for propaganda studies: The implied truth effect is a critical finding for platform policy. Partial fact-checking coverage — which is inevitable given the volume of online content — may inadvertently lend credibility to the large proportion of false content that is never labeled. It argues for either comprehensive labeling or alternative intervention strategies such as accuracy nudges.

Limitations: The magnitude of the implied truth effect and its real-world significance remain debated. The laboratory setting may overestimate the effect, as real users know that fact-checking is incomplete and may not draw the same inference. Replication in more naturalistic settings has shown smaller but still present effects.


Section 3: Misinformation and Correction Research

Nyhan, Brendan, and Jason Reifler (2010). "When Corrections Fail: The Persistence of Political Misperceptions." Political Behavior, 32(2), 303–330.

Study design: Series of survey experiments presenting participants with mock news articles containing a false political claim (e.g., regarding WMDs in Iraq or the fiscal effects of tax cuts), followed by a factual correction. Belief change was measured before and after the correction, with political identity as a moderating variable.

Key finding: Nyhan and Reifler reported a "backfire effect" in which corrections sometimes strengthened misperceptions among ideologically predisposed believers — the correction prompted defensive processing that reinforced the original false belief. The effect was most pronounced among participants with strong prior commitments to the relevant political position.

Significance for propaganda studies: The backfire effect, if robust, would have devastating implications for correction-based counter-propaganda strategies, suggesting that factual rebuttal might be counterproductive. The finding generated enormous scholarly and popular attention and was widely cited as evidence that "facts don't matter" in political communication.

Limitations: Subsequent large-scale replication attempts (see Wood and Porter, below) have largely failed to reproduce the backfire effect, suggesting it may be a fragile laboratory artifact or limited to very specific topic-identity configurations. Nyhan himself has acknowledged the replication difficulties. The finding should be treated as contextually limited rather than as a general principle.


Wood, Thomas, and Ethan Porter (2019). "The Elusive Backfire Effect: Mass Attitudes' Steadfast Factual Adherence." Political Behavior, 41(1), 135–163.

Study design: Large-scale pre-registered survey experiment (N > 8,000) testing 52 correction instances across a wide range of politically charged factual claims, designed explicitly to replicate and extend Nyhan and Reifler's backfire effect paradigm.
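
The logic of a backfire test is worth spelling out, since "backfire" is a stronger claim than "the correction worked less well." The following sketch, with hypothetical variable names, shows the interaction-style specification such a test implies.

```python
# Sketch of a backfire test; file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("correction_experiment.csv")
#   belief_accuracy: post-treatment agreement with the accurate claim
#   corrected:       1 if the respondent saw the factual correction
#   predisposed:     1 if the false claim flatters the respondent's politics

m = smf.ols("belief_accuracy ~ corrected * predisposed", data=df).fit()

# Backfire is not merely a weaker correction effect among predisposed
# respondents; it requires the total correction effect in that subgroup
# to be NEGATIVE (beliefs pushed away from accuracy).
effect_predisposed = m.params["corrected"] + m.params["corrected:predisposed"]
print("correction effect among the predisposed:", round(effect_predisposed, 3))
```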

Key finding: Across all 52 correction instances, Wood and Porter found no evidence of a general backfire effect. Corrections reliably moved factual beliefs in the direction of accuracy regardless of participants' political identity, though in some cases the belief corrections did not translate into attitude changes on related policy questions.

Significance for propaganda studies: The failure to replicate the backfire effect substantially rehabilitates correction-based strategies. While corrections may not always change minds on policy, they appear to update factual beliefs — a meaningful finding for counter-disinformation work. The study shifted the field from a posture of "corrections are dangerous" to a more nuanced understanding that corrections work factually but face limits at the attitude level.

Limitations: As noted above, corrections updated factual beliefs without always changing policy attitudes, a disconnect between factual and evaluative updating that complicates counter-propaganda work. The pre-registered large-N design is a significant methodological strength, but headline-style stimuli may still not capture the affective intensity of real-world disinformation encounters.


Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook (2012). "Misinformation and Its Correction: Continued Influence and Successful Debiasing." Psychological Science in the Public Interest, 13(3), 106–131. [A companion practical guide, The Debunking Handbook (Cook and Lewandowsky, 2011; updated 2020), is freely available online.]

Study design: Comprehensive review and synthesis of the experimental literature on misinformation correction, identifying the psychological mechanisms responsible for the "continued influence effect" — the persistence of false beliefs even after explicit correction — and deriving evidence-based practical recommendations.

Key finding: Misinformation persists after correction partly because it fills explanatory gaps in people's mental models; when the false belief is negated without a functional replacement, the mind reverts to the familiar content. Effective corrections must therefore provide an alternative explanation rather than simply negating the false claim. The review also identified conditions under which corrections can backfire: when they repeat the myth too prominently, are overly complex, or threaten the recipient's identity.

Significance for propaganda studies: The Debunking Handbook became the most widely cited practical guide for counter-disinformation communication, directly influencing public health, electoral integrity, and platform policy work globally. Its "fact-myth-fallacy" correction structure — leading with the fact, warning of the myth, explaining the fallacy — is now standard practice in science communication.

Limitations: As a narrative review, the handbook synthesizes evidence across heterogeneous contexts; not all recommendations have been independently validated in applied settings. The 2020 edition (Cook, Lewandowsky, et al.) substantially updates and revises some earlier recommendations in light of subsequent evidence.


Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain (2018). "The Science of Fake News." Science, 359(6380), 1094–1096.

Study design: Multidisciplinary review article synthesizing the state of empirical knowledge on the production, distribution, and effects of false news, authored by a cross-institutional team of political scientists, communication scholars, computer scientists, and legal scholars.

Key finding: The review found that false news spreads widely on social media, is difficult for users to identify, and appears to have measurable influence on political beliefs. However, the authors cautioned that large-scale field experiments measuring behavioral effects (voting, civic action) remained sparse at the time of publication, and that the magnitude of population-level effects was uncertain.

Significance for propaganda studies: The Science article established a scholarly consensus — visible to policymakers and the public — that false news is a genuine problem requiring interdisciplinary research and platform intervention. It also set the agenda for subsequent empirical priorities, including the need for data-sharing agreements with platforms.

Limitations: As a short review article, it could not thoroughly assess evidence quality, and some of its conclusions have been revised by subsequent, more rigorous work. The article also preceded the emergence of platform-level studies using internal data, which have complicated the picture of who encounters and is influenced by false news.


Vosoughi, Soroush, Deb Roy, and Sinan Aral (2018). "The Spread of True and False News Online." Science, 359(6380), 1146–1151.

Study design: Large-scale computational study analyzing approximately 126,000 news stories tweeted by about 3 million people between 2006 and 2017, using fact-checking organization ratings (Snopes, PolitiFact, FactCheck.org, etc.) to classify stories as true or false, and measuring spread speed, reach, and network structure.
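
The spread metrics at issue can be made concrete with a small sketch. Assuming retweet cascades reconstructed as trees (the input format here is hypothetical, not the study's data pipeline), depth, size, and maximum breadth are computed roughly as follows.

```python
# Illustrative cascade metrics behind "faster, deeper, broader." Each
# cascade is a tree of (parent, child) retweet edges rooted at the
# original tweet.
import networkx as nx

def cascade_metrics(edges, root):
    g = nx.DiGraph(edges)
    depths = nx.shortest_path_length(g, source=root)
    levels = {}
    for node, d in depths.items():
        levels[d] = levels.get(d, 0) + 1
    return {
        "depth": max(depths.values()),        # longest retweet chain
        "size": g.number_of_nodes(),          # unique tweets in the cascade
        "max_breadth": max(levels.values()),  # widest single level of the tree
    }

# A five-node example cascade rooted at tweet "a":
print(cascade_metrics([("a", "b"), ("a", "c"), ("b", "d"), ("d", "e")], "a"))
# -> {'depth': 3, 'size': 5, 'max_breadth': 2}
```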

Key finding: False news spread significantly faster, deeper, and more broadly than true news on Twitter. False stories were 70% more likely to be retweeted than true stories. The speed advantage for false news was particularly pronounced for political content. Critically, the analysis showed that human users — not bots — were primarily responsible for this asymmetry; false news is more novel and emotionally arousing, which drives human sharing behavior.

Significance for propaganda studies: The MIT study provided the most rigorous large-scale quantitative evidence that false news has a structural advantage on social platforms, rooted in human psychology (novelty, emotional arousal) rather than solely in algorithmic amplification. This challenged the assumption that deplatforming bots would solve the problem and focused attention on the role of ordinary users as vectors of disinformation.

Limitations: The study relies on fact-checker classifications, which cover only a subset of stories and may introduce selection bias toward high-profile false news. The Twitter-specific findings may not generalize to other platforms with different sharing mechanics. The study measures spread, not belief change or behavioral impact.


Section 4: Inoculation Research

Note: McGuire (1961, 1964) is covered in Section 1 as a foundational study.

Compton, Josh (2013). "Inoculation Theory." In J. P. Dillard and L. Shen (Eds.), The SAGE Handbook of Persuasion: Developments in Theory and Practice (2nd ed., pp. 220–236). Thousand Oaks, CA: SAGE.

Study design: Comprehensive literature review synthesizing approximately five decades of inoculation theory research following McGuire's foundational work, assessing what has been learned about the mechanisms, boundary conditions, and applications of inoculation effects.

Key finding: The review confirmed inoculation's robustness across a range of topics, populations, and message types. A key theoretical development was identifying two mechanisms: the refutational preemption component (specific counterarguments) and the threat component (warning that one's belief will be attacked). Both are necessary for full inoculation; threat alone or refutation alone produces weaker effects.

Significance for propaganda studies: Compton's review made inoculation theory accessible for applied researchers and practitioners and clarified which elements of the McGuire paradigm are essential versus incidental. The review also documented inoculation's scalability across media formats, suggesting that pre-broadcast "inoculation messages" could be deployed through mass media, not just interpersonal channels.

Limitations: As a review article, the piece synthesizes a heterogeneous literature and cannot fully assess publication bias. Most inoculation studies through 2013 used laboratory settings with relatively short time gaps between inoculation and attack messages; durability effects in real-world settings remained underexplored.


Van der Linden, Sander, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach (2017). "Inoculating the Public Against Misinformation About Climate Change." Global Challenges, 1(2), 1600008.

Study design: Pre-registered online survey experiment (N = 2,167) in which participants received a "scientific consensus" message about climate change, a well-known denialist misinformation message (the Oregon Petition), or combinations including an inoculation treatment that forewarned participants about the denialist campaign and exposed them to a sample of its misleading tactics before presenting the misinformation.

Key finding: The scientific consensus message significantly increased perceived consensus and public risk concern. The denialist misinformation message significantly reduced these perceptions. When participants received the inoculation message before the denialist content, the reduction in perceived consensus was almost entirely neutralized — the inoculation "protected" the accurate information from being undermined.

Significance for propaganda studies: This study was the first to test inoculation theory in a high-stakes real-world political topic (climate change) with a large, nationally diverse sample, demonstrating that the laboratory paradigm generalizes beyond artificial truisms. It also showed that inoculation does not merely protect existing beliefs but can buffer against the degradation caused by active influence operations.

Limitations: The study used a single exposure in a survey context; real-world inoculation requires repeated exposures to competing messages over time. Political polarization in the sample was not fully analyzed as a moderator, leaving open the question of whether inoculation works equally across the political spectrum on this topic.


Roozenbeek, Jon, and Sander van der Linden (2019). "Fake News Game Confers Psychological Resistance to Online Disinformation." Palgrave Communications, 5, 65.

Study design: Two pre-registered online experiments testing the inoculation effect of the "Bad News" browser game, in which players role-play as disinformation producers and learn six common manipulation techniques (impersonation, emotional manipulation, polarization, conspiracy, discrediting, trolling) before returning to evaluate real fake news content.

Key finding: Participants who played the Bad News game showed significantly improved ability to identify manipulative content in subsequent exposure, compared to control groups. Importantly, this effect held regardless of political ideology, education level, age, and media literacy. The game also did not reduce belief in real news — a key concern about inoculation approaches.

Significance for propaganda studies: Bad News demonstrated that gamified, technique-based inoculation can work at scale in a self-directed format accessible to general populations, not just structured classroom settings. The finding that the technique generalizes across partisan lines is particularly important for counter-disinformation, as many interventions produce asymmetric effects by ideology.

Limitations: The game is a voluntary opt-in platform, which creates significant selection bias; people who choose to play are likely already more interested in media literacy than the general population. Long-term retention of the inoculation effect beyond weeks was not measured in the original study.


Roozenbeek, Jon, Sander van der Linden, Beth Goldberg, Steve Rathje, and Stephan Lewandowsky (2022). "Psychological Inoculation Improves Resilience Against Misinformation on Social Media." Science Advances, 8(34), eabo6254.

Study design: A series of controlled experiments culminating in a large field study conducted in partnership with YouTube, testing whether short "prebunking" video advertisements shown to users before regular content could reduce susceptibility to specific disinformation techniques (false dichotomies, emotional manipulation, ad hominem attacks).

Key finding: Prebunking video ads significantly improved participants' ability to identify and resist specific manipulation techniques when subsequently encountered in authentic social media content. The effects were detectable even in the ecologically valid setting of real platform use and persisted across a diverse range of political topics. Importantly, effects were observed not just on the specific manipulations targeted but on novel examples of the same techniques.

Significance for propaganda studies: This study provides the strongest real-world evidence to date that inoculation-based prebunking is scalable to mass media contexts. The technique-based (rather than topic-specific) approach offers particular promise for generalizing across the enormous variety of disinformation content in circulation.

Limitations: Platform partnerships introduce selection bias in who is shown the advertisements. Effect sizes, while statistically significant, were modest, and translating belief discernment into changed behavior (e.g., reduced sharing) was not directly measured. The studies focused on specific manipulative techniques that could be operationalized in short videos, which may not cover the full landscape of propaganda methods.


Section 5: Social Media and Platform Research

Pariser, Eli (2011). The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.

Study design: Journalistic-analytical work combining personal observation, interviews with platform engineers and executives, and synthesis of available academic and industry research to theorize how algorithmic personalization creates ideologically homogeneous information environments.

Key finding: Pariser argued that search and social media algorithms curate content based on prior behavior, resulting in "filter bubbles" — personalized information environments in which users are systematically shielded from discordant viewpoints and only see content reinforcing their existing beliefs. He warned this would accelerate political polarization and impair democratic deliberation.

Significance for propaganda studies: The filter bubble thesis became one of the most widely cited frameworks for understanding social media's political effects and motivated an enormous body of subsequent research. It established the idea that algorithmic mediation — not just partisan media choice — shapes exposure, which has important implications for how propaganda can be targeted and amplified.

Limitations: Empirical research has substantially challenged the strong filter bubble hypothesis (see Dubois and Blank, below; and subsequent work by Guess, Nyhan, and others). Most users encounter diverse content; the problem of self-selected ideological clustering may be smaller than the thesis suggests. The book predates the empirical infrastructure needed to test the claim rigorously.


Dubois, Elizabeth, and Grant Blank (2018). "The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media." Information, Communication & Society, 21(5), 729–745.

Study design: Survey-based study (N = 2,000, representative UK sample) using a multi-platform media use diary combined with news source tracking to assess the degree to which respondents actually encountered diverse political information in their media diet.

Key finding: The authors found that most people consume news across multiple platforms and source types, including sources with which they politically disagree. The "echo chamber" effect — consuming only politically congenial media — was real but confined to a small minority (roughly 8%) of highly politically engaged respondents. Political interest and diverse media access moderated exposure far more than platform-specific algorithms.
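
The sketch below gives a stylized operationalization of an "echo chamber share" of this kind; the columns and the one-sidedness criterion are illustrative assumptions, not the paper's exact measure.

```python
# Stylized "echo chamber share": the fraction of respondents whose
# reported news diet includes no sources from the opposing side.
import pandas as pd

df = pd.read_csv("media_diet.csv")  # one row per respondent-source pair
# columns: respondent, respondent_lean ("left"/"right"),
#          source_lean ("left"/"right"/"center")

def one_sided(group: pd.DataFrame) -> bool:
    opposing = "right" if group["respondent_lean"].iloc[0] == "left" else "left"
    return bool((group["source_lean"] != opposing).all())

echo_share = df.groupby("respondent").apply(one_sided).mean()
print(f"respondents with a one-sided news diet: {echo_share:.1%}")
```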

Significance for propaganda studies: The study was among the first rigorous empirical challenges to the filter bubble and echo chamber theses, suggesting that disinformation's impact may be more concentrated among highly engaged partisans rather than spreading uniformly through algorithmically isolated populations. This complicates both the threat model and intervention strategies.

Limitations: The UK media context differs meaningfully from the United States in terms of public broadcasting presence and partisan media ecology. The study measured exposure, not processing or influence; encountering diverse sources does not necessarily imply that exposure is equally persuasive across source types.


Benkler, Yochai, Robert Faris, and Hal Roberts (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press.

Study design: Large-scale computational analysis of 4 million political news stories and social media shares during the 2015–2018 period, using network mapping, content analysis, and hyperlink analysis to identify structural differences in how information flows through left-leaning and right-leaning American media ecosystems.
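
A minimal sketch of the hyperlink-network step, with invented example edges rather than the book's data, looks something like this.

```python
# Illustrative hyperlink-network step: which outlets do others treat as
# sources? Edges here are invented examples, not the book's dataset.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("outlet_a", "outlet_b"),  # outlet_a links to outlet_b in a story
    ("outlet_c", "outlet_b"),
    ("outlet_b", "outlet_a"),
    ("outlet_d", "outlet_a"),
])

# In-degree centrality approximates how central an outlet is as a source
# for the rest of the ecosystem; insularity can then be probed by
# comparing within-cluster versus cross-cluster link counts.
for outlet, score in sorted(nx.in_degree_centrality(g).items(),
                            key=lambda kv: -kv[1]):
    print(outlet, round(score, 2))
```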

Key finding: The left-leaning media ecosystem demonstrated a more distributed, centrist-anchored structure in which mainstream outlets constrained the spread of extreme or false content. The right-leaning ecosystem showed a more insular, mutually reinforcing network in which extreme outlets (Breitbart, Infowars) occupied central positions and mainstream conservative outlets followed rather than constrained their agenda. This "partisan asymmetry" — not the internet itself — was the primary driver of disinformation vulnerability.

Significance for propaganda studies: Network Propaganda shifted the analytical frame from "social media causes polarization" to "structural differences in partisan media ecosystems create differential vulnerability." It provided the most comprehensive evidence to that date of systematic partisan asymmetry in disinformation exposure, with important implications for targeted counter-disinformation efforts.

Limitations: The study covers a specific historical period and U.S. context. The characterization of asymmetry has been contested, with critics arguing the methodology overrepresents right-wing extreme content while underweighting left-wing equivalents. Cross-national applicability requires caution.


Pennycook, Gordon, Jonathon McPhetres, Yunhao Zhang, Jackson G. Lu, and David G. Rand (2020). "Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention." Psychological Science, 31(7), 770–780.

Study design: Pre-registered online experiments testing whether a simple "accuracy nudge" — prompting participants to rate the accuracy of a single unrelated headline before evaluating COVID-19 content — improved subsequent discernment between true and false COVID-related stories, measured through accuracy ratings and intentions to share headlines.

Key finding: The accuracy nudge significantly improved news discernment and reduced the rated shareability of false COVID-19 headlines, compared to control conditions. The effect held across partisan lines and did not produce unwanted spillover into over-skepticism of true news. The intervention is brief, cheap, and scalable as a platform design element.

Significance for propaganda studies: The accuracy nudge study provided empirical support for a "top-of-mind" intervention model: because people usually share social media content without thinking carefully about accuracy, simply prompting accuracy consideration can meaningfully improve behavior. This translated directly into platform policy experiments (Twitter tested related interventions at scale in 2021).

Limitations: The gap between "intention to share" (measured) and actual sharing behavior (not directly measured in the original study) remains an important limitation. Sustained effects with repeated nudge exposure — and the risk of adaptation — were not assessed in the initial study.


Section 6: Influence Operations Research

Ferrara, Emilio, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini (2016). "The Rise of Social Bots." Communications of the ACM, 59(7), 96–104.

Study design: Computational analysis of bot detection methods and bot behavior on Twitter, reviewing multiple datasets and developing classification algorithms to identify automated accounts based on behavioral fingerprints (posting frequency, timing, content repetition, network patterns).
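
The behavioral fingerprints described here translate naturally into classifier features. The sketch below computes three such features for a single account; everything is illustrative, and production systems (e.g., Botometer, from the same research group) use far richer feature sets feeding a supervised classifier trained on labeled accounts.

```python
# Illustrative behavioral-fingerprint features for one account.
import numpy as np

def bot_features(timestamps, texts):
    """timestamps: sorted UNIX seconds; texts: the account's tweet texts."""
    ts = np.asarray(timestamps, dtype=float)
    gaps = np.diff(ts)
    span_days = max((ts[-1] - ts[0]) / 86400.0, 1e-9)
    return {
        "tweets_per_day": len(ts) / span_days,          # automation -> high volume
        "gap_cv": gaps.std() / max(gaps.mean(), 1e-9),  # low value -> metronomic timing
        "unique_text_ratio": len(set(texts)) / len(texts),  # low -> copy-paste content
    }

print(bot_features([0, 3600, 7200, 10800],
                   ["buy now", "buy now", "hello", "buy now"]))
```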

Key finding: Social bots had become a substantial and growing presence on major platforms by 2016, estimated at 9–15% of Twitter accounts. Bot accounts systematically amplified political content, targeted trending hashtags, and coordinated to manufacture the appearance of grassroots support for political positions. Bot behavior was often indistinguishable from human behavior to unaided observers.

Significance for propaganda studies: Ferrara et al. provided the foundational computational framework for bot detection and analysis that subsequent influence operations research built upon. The study established that manufactured social proof — simulated popularity and consensus — was technically feasible and actively deployed.

Limitations: Bot detection accuracy depends on evolving behavioral signatures, and sophisticated operators quickly adapt to evade detection. The 9–15% figure has been subject to ongoing methodological debate. Platform API restrictions since 2016 have made large-scale replication and updating difficult.


Howard, Philip N., Samantha Bradshaw, Bence Kollanyi, and Lisa-Maria Neudert (2017). "Junk News and Bots during the U.S. 2016 Presidential Election: What Were Michigan Voters Sharing over Twitter?" Oxford Internet Institute, Computational Propaganda Project, Data Memo 2017.01.

Study design: Computational analysis of Twitter traffic in the critical swing state of Michigan during the final days of the 2016 U.S. presidential election, combining bot detection, junk news classification (using criteria for professionalism, credibility, bias, and sourcing), and network analysis.

Key finding: In the Michigan Twitter conversation, junk news content was shared at rates comparable to or exceeding professionally produced journalism among politically active accounts. Automated bot accounts played a significant role in amplifying junk news. The study found that pro-Trump content in the sample was more likely to originate from or be amplified by bots and junk news sources than pro-Clinton content.

Significance for propaganda studies: The Oxford Internet Institute's Computational Propaganda Project produced some of the first systematic quantitative evidence that junk news and bot amplification were embedded in actual electoral conversations — not merely theoretical threats. It established the methodology for studying computational propaganda in real political events.

Limitations: The study covers a limited time window and a single platform in one U.S. state, limiting generalizability. "Junk news" classification involves judgment calls about credibility that may introduce researcher bias. The causal relationship between bot amplification and voter opinion or behavior was not demonstrated.


DiResta, Renée, Kris Shaffer, Becky Ruppel, David Sullivan, Robert Matney, Ryan Fox, Jonathan Albright, and Ben Johnson (2018). "The Tactics and Tropes of the Internet Research Agency." Prepared for the U.S. Senate Select Committee on Intelligence. New Knowledge, with Columbia University and Canfield Research.

Study design: Forensic analysis of approximately 10 million social media posts produced by the Russian Internet Research Agency (IRA) across Facebook, Instagram, Twitter, YouTube, Reddit, Pinterest, and other platforms between 2015 and 2017, based on data disclosed to Congress by the platforms.

Key finding: The IRA operated a highly sophisticated, segmented influence campaign designed not simply to promote Trump but to inflame existing social divisions across race, religion, immigration, and gun rights. The operation created hundreds of fake community accounts targeting specific U.S. demographic groups, built large authentic-seeming followings before using them for political messaging, and ran coordinated advertising campaigns. Black American communities were disproportionately targeted.

Significance for propaganda studies: The Senate Intelligence Committee report provided the most detailed public accounting of a state-sponsored influence operation ever released, establishing the IRA campaign as the reference case for the study of coordinated inauthentic behavior. It revealed that the primary goal was social division rather than direct electoral influence, and that the operation's scale dwarfed prior assumptions.

Limitations: The analysis was limited to content disclosed by platforms, which may not represent the full operation. Platform disclosure decisions were commercially and politically influenced, raising questions about completeness. Measuring the causal impact on political attitudes or voting behavior was not within scope.


Nimmo, Ben, Camille François, C. Shawn Eib, Léa Ronzaud, and others (2019–2022). Spamouflage Documentation Series. Graphika and Stanford Internet Observatory, various reports.

Study design: Ongoing forensic tracking and attribution of the "Spamouflage Dragon" Chinese-linked influence network, combining platform-disclosed data, open-source intelligence, network analysis, and behavioral fingerprinting across Facebook, Twitter, YouTube, and dozens of smaller platforms.

Key finding: Spamouflage represented one of the highest-volume influence operations ever documented, producing enormous quantities of low-quality content promoting the Chinese government and attacking its critics across dozens of platforms. Despite its scale, the operation was largely ineffective at generating authentic engagement — most content received near-zero interaction from real users. This "engagement gap" suggests that scale and reach do not automatically translate into influence.

Significance for propaganda studies: The Spamouflage documentation challenged the assumption that all state-sponsored influence operations are effective. The consistent finding of near-zero authentic engagement suggests that at least some operations may function more as domestic performance for state principals (demonstrating capability) than as effective foreign propaganda. This has important implications for proportionate policy responses.

Limitations: The operations documented as "failed" may simply be those easiest to detect; more sophisticated operations that achieve higher authentic engagement may remain undiscovered. Attribution to state actors rests on intelligence assessments that are not fully public.


Section 7: Public Health Propaganda Research

Proctor, Robert N. (2011). Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition. Berkeley: University of California Press.

Study design: Historical forensic analysis drawing on 70 million pages of internal tobacco industry documents made public through litigation to reconstruct Big Tobacco's multi-decade campaign to suppress, distort, and manufacture scientific uncertainty about the health effects of smoking.

Key finding: The tobacco industry developed a systematic, well-funded, and coordinated propaganda apparatus that included manufacturing industry-funded science, creating front groups, funding sympathetic researchers, lobbying regulatory agencies, and deliberately cultivating public doubt about settled science for decades after internal documents confirmed the harms of smoking. This was not incidental to the business model — it was the business model's survival strategy.

Significance for propaganda studies: Proctor's work, along with Oreskes and Conway's (below), established Big Tobacco as the paradigm case of "manufactured doubt" — corporate propaganda designed not to persuade but to create sufficient uncertainty to prevent regulatory action. The same playbook has been traced to climate denial, anti-vaccine campaigns, and pharmaceutical industry manipulation of evidence.

Limitations: The study is limited to the documentary record of a single (though enormous) industry. The mechanisms by which the manufactured doubt campaigns actually influenced public opinion, media coverage, and regulatory outcomes, while strongly implied, are difficult to establish causally from documentary evidence alone.


Oreskes, Naomi, and Erik M. Conway (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.

Study design: Historical analysis tracking a specific network of contrarian scientists and industry-funded advocacy organizations across five major scientific controversies: tobacco and lung cancer, the Strategic Defense Initiative, acid rain, the ozone hole, and climate change.

Key finding: The same small network of individuals and organizations — often involving the same people using the same rhetorical techniques — repeatedly manufactured public doubt about settled scientific consensus across decades and industries. The goal was not to produce alternative science but to create the impression that scientific disagreement existed, thereby justifying regulatory delay.

Significance for propaganda studies: Oreskes and Conway provided the first comprehensive cross-industry documentation of the "doubt manufacturing" playbook, demonstrating it as a deliberate, transferable strategy rather than independent episodes of industry misconduct. The "merchants of doubt" model has since been applied to the analysis of anti-vaccine campaigns, nutritional science manipulation, and algorithmic amplification of health misinformation.

Limitations: The analysis focuses on a specific elite network; the broader industry of climate denial and health misinformation has since diversified well beyond this original cohort. The historical focus means the study predates social media as an amplification mechanism for manufactured doubt.


Hotez, Peter J. (2021/2022). "COVID-19 Anti-Vaccine Activism in the Age of Social Media." Nature Reviews Immunology, 21(12), 764–766; and related analysis of excess mortality attributable to vaccine hesitancy.

Study design: Epidemiological analysis combining vaccination rate data, COVID-19 mortality data by U.S. county, and geographic mapping of vaccine hesitancy to estimate excess deaths attributable to vaccine refusal during periods when vaccines were available. The 318,000 figure was derived from modeling by Peterson Institute and related public health analyses.
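
The basic counterfactual arithmetic behind estimates of this kind can be sketched in a few lines. Every number below is a placeholder chosen for illustration, not an input from the analyses cited; the real models work county by county with time-varying vaccination coverage, case rates, and age-specific risk.

```python
# Stylized sketch of attributable-mortality arithmetic. ALL numbers are
# illustrative placeholders, not figures from the cited analyses.
observed_unvaccinated_deaths = 100_000  # deaths among eligible unvaccinated adults
vaccine_effectiveness_vs_death = 0.90   # assumed protection against death

# Counterfactual: had the same people been vaccinated, expected deaths
# shrink by the vaccine's effectiveness against death; the difference is
# the "preventable" share attributed to non-vaccination.
expected_if_vaccinated = observed_unvaccinated_deaths * (1 - vaccine_effectiveness_vs_death)
preventable_deaths = observed_unvaccinated_deaths - expected_if_vaccinated
print(f"preventable deaths (stylized): {preventable_deaths:,.0f}")  # 90,000
```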

Key finding: In the United States alone, an estimated 318,000 preventable COVID-19 deaths between June and December 2021 were attributable to vaccine hesitancy among unvaccinated adults, concentrated in counties with lower vaccination rates. Hotez and colleagues documented that organized anti-vaccine disinformation campaigns — particularly those amplified through partisan media ecosystems — made a measurable contribution to this hesitancy.

Significance for propaganda studies: This is among the most direct empirical links between organized disinformation campaigns and mortality outcomes. It establishes anti-vaccine disinformation not merely as a communication failure but as a public health catastrophe with calculable human cost, raising urgent questions about platform liability and counter-disinformation as a public health priority.

Limitations: Attribution of specific deaths to disinformation rather than vaccine hesitancy that would have existed independently is methodologically difficult. The 318,000 figure is a modeling estimate with uncertainty ranges. Partisan and geographic mortality patterns may reflect multiple confounders beyond media exposure.


Roozenbeek, Jon, Sander van der Linden, and Thomas Nygren (2020). "Prebunking Interventions Based on the Psychological Inoculation Theory Can Result in Accurate Belief Discernment and Reduced Susceptibility to COVID-19 Misinformation." Royal Society Open Science, 7(10), 201199. [Also known as the "Go Viral!" study.]

Study design: Pre-registered online experiment testing a short browser-based game ("Go Viral!") designed to inoculate players against common COVID-19 misinformation techniques — specifically emotional manipulation, false experts, and conspiracy theories — by having players produce misinformation themselves in a simulated environment.

Key finding: Players who completed the Go Viral! game showed significantly improved ability to identify COVID-19 misinformation and lower credibility ratings for false COVID content in subsequent tests, compared to a Tetris-playing control group. The effect held across age groups and was particularly pronounced for participants with lower baseline levels of analytic thinking.

Significance for propaganda studies: The Go Viral! study was one of the first applications of gamified inoculation to a real-time public health crisis with active disinformation campaigns. It demonstrated that rapidly developed, topical inoculation tools can be deployed responsively during a crisis — not only as general-purpose media literacy preparation.

Limitations: The study used self-selected volunteer participants rather than a representative sample. The game was brief (approximately 5 minutes) and the long-term durability of the effect was not assessed. As with Bad News, real-world self-selection into the game creates questions about reaching the most at-risk audiences.


Section 8: Media Literacy Research

Wineburg, Sam, Sarah McGrew, Joel Breakstone, and Teresa Ortega (2016). "Evaluating Information: The Cornerstone of Civic Online Reasoning." Stanford History Education Group, Working Paper.

Study design: Empirical assessment of how students, historians, and professional fact-checkers evaluate online information sources, using think-aloud protocols and click-tracking as participants navigated unfamiliar websites and evaluated social media content.

Key finding: Students and historians relied primarily on "vertical reading" — deeply exploring a single website to assess its credibility before moving on — which proved a poor strategy because sophisticated misinformation sites are designed to appear credible in isolation. Professional fact-checkers used "lateral reading" — immediately opening multiple browser tabs to check what third parties said about a source — and were dramatically faster and more accurate at identifying unreliable sources. The finding held across age groups and formal education levels.

Significance for propaganda studies: The lateral reading finding fundamentally challenged the dominant media literacy pedagogy, which had focused on teaching students to examine internal features of websites (authorship, citations, design). The research informed new instructional approaches, most prominently Mike Caulfield's SIFT method (Stop, Investigate the source, Find better coverage, Trace claims), which has been adopted by media literacy programs internationally.

Limitations: The study's think-aloud protocols are resource-intensive and cannot assess information behavior at scale. The professional fact-checkers were a small, highly specialized group. The research was conducted in the United States and may not generalize to media environments with different platform and news ecosystems.


Ashley, Seth, Adam Maksl, and Stephanie Craft (2013). "Developing a News Media Literacy Scale." Journalism & Mass Communication Educator, 68(1), 7–26.

Study design: Scale development study constructing and validating a psychometric measure of news media literacy drawing on existing media literacy theory, survey pre-testing, and factor analysis with college student samples.
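
One standard step in scale development of this kind is checking internal consistency before factor analysis. The sketch below computes Cronbach's alpha on simulated Likert responses; the data and item count are illustrative, not the study's.

```python
# Cronbach's alpha: internal-consistency reliability for a set of Likert
# items. Simulated data stands in for real survey responses.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                       # shared construct
responses = latent + rng.normal(scale=0.8, size=(200, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high alpha -> coherent scale
```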

Key finding: News media literacy is a multidimensional construct encompassing both motivational (willingness to engage critically) and cognitive (knowledge of media production processes) components. Scales developed with these dimensions outperform simpler "did you use media?" measures in predicting critical evaluation of news. The study provided a validated instrument for subsequent media literacy effect research.

Significance for propaganda studies: The development of psychometrically validated media literacy scales made it possible to conduct rigorous studies assessing whether media literacy education actually changes relevant beliefs and behaviors. This infrastructure is essential for evaluating the effectiveness of counter-propaganda interventions.

Limitations: The scale was developed with college student samples, limiting generalizability. Validated media literacy scales have since proliferated, creating comparability problems across studies. There remains debate about whether measuring media literacy knowledge predicts critical behavior in naturalistic (not laboratory) settings.


Hobbs, Renee (2010). Digital and Media Literacy: A Plan of Action. Washington, DC: The Aspen Institute.

Study design: Policy report and competency framework synthesizing research on digital and media literacy to develop a national agenda for media literacy education in the United States, including identification of core competencies and recommendations for curriculum integration.

Key finding: Hobbs identified five core media literacy competencies: Access (the ability to find and use media), Analyze (critical evaluation of messages), Create (the ability to produce media), Reflect (ethical consideration of one's own media use), and Act (civic engagement through media). She argued that these competencies are inseparable — isolated analysis skills without creative practice do not produce genuine media literacy.

Significance for propaganda studies: The Hobbs framework became one of the most widely adopted models in U.S. media literacy education and provided a structure for connecting classroom media literacy work to counter-disinformation goals. Its emphasis on creation as a component of literacy anticipated research showing that experiencing disinformation production (as in Bad News and Go Viral!) builds resistance.

Limitations: As a policy framework rather than an empirical study, the competency model reflects professional consensus and values as much as experimental evidence. Outcome measurement — whether students who develop these competencies actually resist propaganda more effectively — requires separate empirical work.


Newman, Nic, Richard Fletcher, Anne Schulz, Simge Andı, Craig T. Robertson, and Rasmus Kleis Nielsen (Annual). Reuters Institute Digital News Report. Oxford: Reuters Institute for the Study of Journalism.

Study design: Annual large-scale international survey (typically 40–46 countries, approximately 2,000 respondents per country, conducted by YouGov) measuring news consumption habits, platform use, trust in news media, and attitudes toward journalism, misinformation, and media platforms.

Key finding: Across multiple years, the Reuters Digital News Report has documented declining trust in news media in most Western democracies, widening partisan gaps in trust, growing reliance on social media for news access particularly among younger audiences, and significant variation across national contexts. The report also consistently finds high concern about misinformation among news consumers alongside low confidence in their own ability to identify it.

Significance for propaganda studies: The Reuters Institute survey provides the only systematic comparative international data on news trust, consumption, and media attitudes over time, making it the essential empirical reference for contextualizing country-specific claims about media literacy, disinformation vulnerability, and platform dependence. Its annual publication allows tracking of trends across major global events.

Limitations: As a self-report survey, the Digital News Report is subject to social desirability bias and the gap between reported and actual media behavior. "Trust in news" is a complex construct that may measure different things across respondents and countries. The survey does not directly measure exposure to or belief in specific disinformation.


A Note on Using This Appendix

The studies summarized here represent a snapshot of a rapidly evolving empirical literature. Students should be aware of three ongoing challenges when using this research.

First, effect sizes in persuasion and misinformation research are frequently modest in laboratory settings and smaller still in field settings. Large-scale online experiments and platform partnerships have provided more ecologically valid evidence in recent years, but the translation from laboratory findings to real-world intervention impact remains an active area of inquiry.

Second, the replication crisis in social psychology has affected several foundational findings in this literature — most notably the backfire effect, but also various social priming and ego depletion findings that underpin some media influence models. Students should weight pre-registered, large-sample studies more heavily than small laboratory studies published before 2015.

Third, the influence operations and platform research landscape changes rapidly. The studies documented here were state-of-the-art at their publication dates; the Stanford Internet Observatory, Graphika, Oxford Internet Institute, and related organizations publish updated findings regularly. For the most current evidence on influence operations, disinformation campaigns, and platform policy experiments, students should consult these organizations' ongoing research releases alongside the peer-reviewed literature.

The goal of empirical research in this field is not to produce a final accounting of how propaganda works, but to build the cumulative knowledge base that makes detection, resistance, and effective counter-messaging possible. Each study here contributes a piece of that foundation.


For full bibliographic references organized alphabetically, see Appendix A: Glossary of Key Terms and Concepts. For guidance on locating primary sources, government reports, and platform disclosures, see Appendix C: Primary Sources and Research Guide.