> "The goal isn't to make you believe any one thing. The goal is to make you believe nothing — to make you so confused about what's real that you stop trusting the information environment altogether. Confusion is the product."
In This Chapter
- Opening: The Trail Goes Cold
- 1. Why 2016–2020 Was a Watershed
- 2. The Internet Research Agency: Operation in Full
- 3. The Domestic Disinformation Ecosystem
- 4. The COVID-19 Infodemic
- 5. The Big Lie: Election Denial as Disinformation Campaign
- 6. Platform Response and Its Limits
- 7. Research Breakdown: Guess, Nagler, and Tucker (2019)
- 8. Primary Source Analysis: The IRA's "Heart of Texas" Facebook Page
- 9. Debate Framework: Who Is Responsible for Digital Disinformation?
- 10. Action Checklist: Evaluating Disinformation Campaigns
- 11. Inoculation Campaign: Completing the Historical Grounding Component
- Conclusion: The Confusion Is the Product
Chapter 24: Digital Disinformation — The 2016–2020 Campaigns
"The goal isn't to make you believe any one thing. The goal is to make you believe nothing — to make you so confused about what's real that you stop trusting the information environment altogether. Confusion is the product." — Renée DiResta, research director, Stanford Internet Observatory
Opening: The Trail Goes Cold
It is a Thursday afternoon in October 2024 when Sophia Marin first notices the problem.
She is sitting in the Hartwell Clarion newsroom — a cluttered room above the campus bookstore that still smells of decades-old newsprint despite never having housed a printing press — and she is trying to do something that should be simple: find out where a story came from.
The story is about Marcus Diallo, a junior running for student body president on a platform of expanded mental health services and a new textbook lending program. According to messages circulating in at least four student group chats Sophia has access to, Diallo allegedly received funding from an off-campus political action committee with ties to a state-level political party. The allegation, if true, would likely disqualify him under the student government's campaign finance rules. If false, it is a smear designed to derail a candidate days before the election.
Sophia has been doing journalism for three years. She knows how to trace a story. You find the original source. You verify the document or the witness. You contact the subject for comment. You publish with appropriate qualifications. It is not complicated.
Except she cannot find the original source.
The message in Group Chat A says it came from a post in Group Chat B. The post in Group Chat B links to a tweet that has since been deleted. The tweet quoted a screenshot of what appears to be a financial disclosure document, but Sophia cannot find the original document in any public filing database, and the screenshot does not show metadata, formatting details, or any identifying information that would let her verify its authenticity. She has emailed the student government's elections oversight committee. She has filed a records request. She is waiting.
In the meantime, the story is spreading. By Friday morning, someone has created an Instagram post with a red-bordered graphic — the aesthetic of news urgency — repeating the allegation. By Friday afternoon, the post has 847 likes and has been shared 212 times.
Sophia brings the problem to Professor Webb's office hours. She lays it out: the allegation, the trail, the dead ends. She is expecting methodological advice. What she gets is something else.
Marcus Webb swivels his chair, looks at her for a long moment, and says: "That confusion you're feeling right now — that sensation of reaching for something solid and finding nothing — that's not an accident. That's not a news failure. That's what a successful information operation feels like from the inside. Somebody designed that experience for you. The question isn't how to trace the story back to its source. The question is: what was the point of the story in the first place?"
Sophia looks at him. "To hurt Marcus Diallo?"
"Maybe," Webb says. "Or maybe to make everyone so confused about the allegations that they decide not to vote at all. Or to make supporters of his opponent feel vindicated even without evidence. Or to make people like you — journalists — spend three days chasing a ghost instead of covering the actual policy differences between the candidates." He pauses. "When you can't find the source, that's information. The sourcelessness is part of the design."
This chapter is about that design. Not at the scale of a campus election, but at the scale of a national information environment — the most extensively documented disinformation campaigns in the history of democratic politics, run between 2016 and 2020, in the United States and across the democratic world. We will examine who ran them, how they worked, what they achieved, and why the experience of navigating them felt, to millions of people, exactly like what Sophia felt on that Thursday afternoon in October: reaching for something solid and finding nothing.
Understanding that experience — its design, its execution, and its effects — is not a partisan exercise. The disinformation campaigns of 2016–2020 were documented by Republican and Democratic congressional investigations, by Trump-appointed judges and Obama-appointed judges, by intelligence agencies serving administrations of both parties, and by independent academic researchers whose funding came from sources across the political spectrum. The facts are, to a degree unusual in contemporary political discourse, genuinely not in dispute among people who have examined the evidence.
What remains in genuine dispute — and what we will engage with honestly — is the question of effect: how much did these campaigns actually change political outcomes? That question is harder, and it is where the research is more contested. We will examine the evidence on that question without either overstating the impact (a temptation on one side) or dismissing it as inconsequential (a temptation on the other).
1. Why 2016–2020 Was a Watershed
Every era of communication technology has produced its own disinformation challenges. Chapter 21 traced the Soviet dezinformatsiya operations of the Cold War, which seeded false narratives through foreign newspapers and let them migrate back into Western media. Chapter 22 documented the Big Tobacco campaign that manufactured scientific uncertainty for decades. Chapter 23 examined the domestic propaganda apparatus that shaped American public opinion about civil rights, the Vietnam War, and the War on Terror.
The 2016–2020 period was different in kind, not just in degree. To understand why, it is necessary to specify exactly what was qualitatively new.
Scale through algorithmic amplification. Prior influence operations required direct media access — a journalist to pitch, a newspaper to plant a story in, a broadcaster to reach. Social media eliminated the bottleneck. The Internet Research Agency, operating from a commercial building in St. Petersburg, Russia, reached an estimated 126 million American Facebook users through a combination of organic content, paid advertising, and algorithmic distribution — all without placing a single phone call to a single American journalist. The amplification was performed by the platform itself, which was designed to distribute engaging content as broadly and quickly as possible. The IRA produced content that was engaging; the platform did the rest.
Speed that outran correction. The famous observation, attributed to Mark Twain but almost certainly apocryphal, that "a lie travels halfway around the world while the truth is putting on its shoes" became, in the social media era, empirically testable. Vosoughi, Roy, and Aral (2018) examined the spread of true and false news stories on Twitter from 2006 to 2017. False news spread significantly farther, faster, deeper, and more broadly than true news in all categories of information studied. False political news was the most viral category. True corrections, when they came, typically reached a fraction of the audience exposed to the original false claim. The physics of the information environment actively disadvantaged truth.
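How do researchers turn "farther, faster, deeper, and more broadly" into measurable quantities? Vosoughi, Roy, and Aral analyzed retweet cascades — trees in which the original tweet is the root and each retweet attaches to the account it was retweeted from. The sketch below is not the study's own code; it is a minimal illustration, on invented data, of how two of those cascade properties — depth (how many "generations" of resharers a story passed through) and breadth (the largest number of resharers at any single generation) — can be computed from a list of retweet relationships.

```python
from collections import defaultdict, deque

def cascade_depth_and_breadth(edges, root):
    """Compute depth (longest root-to-leaf path) and maximum breadth
    (largest number of nodes at any one depth) of a retweet cascade.

    edges: list of (parent_user, child_user) retweet relationships.
    root:  the user who posted the original tweet.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth_counts = defaultdict(int)   # depth -> number of accounts at that depth
    max_depth = 0
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        depth_counts[depth] += 1
        max_depth = max(max_depth, depth)
        for child in children[node]:
            queue.append((child, depth + 1))

    return max_depth, max(depth_counts.values())

# Hypothetical cascade: A's tweet is retweeted by B and C; D retweets B; E retweets D.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("D", "E")]
depth, breadth = cascade_depth_and_breadth(edges, root="A")
print(depth, breadth)  # 3 2 — deeper cascades passed through more generations of resharers
```

In the study's terms, false stories produced cascades that were both deeper and broader than true ones — more generations of resharing, and more resharers at each generation.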
Platform architecture not designed for adversarial use. Facebook, Twitter, YouTube, and Instagram were built to connect people with content they found engaging. They were not designed with the assumption that sophisticated state actors would attempt to weaponize their systems. The features that made them functional — algorithmic recommendation, targeted advertising, community group formation, pseudonymous accounts, viral sharing mechanisms — became the attack surface. As Facebook's internal research team acknowledged in documents later made public: the platform's design had systematically failed to anticipate information warfare.
Democratic specificity. The operations of 2016–2020 were not generic influence campaigns. They were specifically targeted at the mechanisms of democratic self-governance: electoral trust, public health compliance, institutional legitimacy. This specificity was not accidental. The documented goal of the IRA — established by the Senate Intelligence Committee and the Mueller investigation — was not to elect any particular candidate but to "sow discord in the U.S. political system, including the 2016 U.S. presidential election." The targets were the sinews of democratic function: faith in elections, trust in public health agencies, willingness to accept institutional authority. This is the authoritarian playbook applied to the information environment.
The domestic ecosystem interaction. Perhaps the most important — and most underemphasized — aspect of the 2016–2020 period is that foreign disinformation did not operate in a vacuum. It operated in an environment already saturated with domestic disinformation, produced by American partisan media outlets, content farms, political operatives, and individual sharers, for reasons ranging from ideological conviction to profit to attention-seeking. The IRA amplified domestic content; domestic producers amplified IRA content; the algorithmic system treated all of it identically, as engagement content to be maximized. The foreign and domestic ecosystems were not separate. They were interacting, mutually reinforcing, and ultimately inseparable in their effects.
The 2016–2020 period did not invent disinformation. What it did was demonstrate, at unprecedented scale and with unprecedented documentation, what disinformation looks like when every prior barrier to its spread — the cost of printing, the gatekeeping of broadcast, the geography of distribution — has been eliminated by a global network designed to maximize the spread of engaging content.
Tariq Hassan, in the seminar discussion that followed Sophia's account, put it precisely: "What's different now isn't that governments and bad actors are lying. They've always lied. What's different is that before, you needed infrastructure to spread the lie. Now the infrastructure is free, it's global, and it's optimized to spread your lie faster than any truth can follow."
2. The Internet Research Agency: Operation in Full
Chapter 16 introduced the Internet Research Agency as a case study in coordinated inauthentic behavior. This chapter examines it in the depth it demands as the book's second anchor example. The IRA operation is the most thoroughly documented foreign disinformation campaign ever run against a democratic country. The documentation comes from the Mueller Report (released in redacted form in April 2019), the Senate Intelligence Committee's five-volume report on Russian interference (released in stages from 2019 to 2020), and from Facebook, Twitter, and Google's own disclosures to Congress. What follows is derived from those primary sources.
2.1 The Organization
The Internet Research Agency (IRA) was a private company, registered in Russia and based at 55 Savushkina Street in St. Petersburg. It was founded in 2013 and was funded and directed by Yevgeny Prigozhin, a Russian oligarch with close ties to Vladimir Putin who would later become famous as the founder of the Wagner Group mercenary company. Prigozhin has been referred to in press and intelligence reports as "Putin's chef" because of his catering and restaurant contracts with the Russian government.
The IRA was not a shadowy basement operation. At its peak, it employed more than four hundred people, organized into departments by function and target demographic. There were separate departments for content production, graphic design, search engine optimization, data analysis, and information technology. Employees worked regular shifts, received regular salaries (which varied from roughly 40,000 to 100,000 rubles per month depending on role), and were subject to performance reviews that evaluated them on metrics including post engagement rates and follower counts. The operation, at its peak around the 2016 election, was budgeted at approximately $1.25 million per month.
The American department — the department focused on U.S. audiences — organized its content production teams by target demographic. There were teams focused on African Americans, teams focused on conservative white Americans, teams focused on Muslims, teams focused on Texas identity, teams focused on immigration. Each team produced content calibrated specifically for its target demographic's existing beliefs, grievances, and community identity markers. Employees in the American department were required to produce content quotas — typically fifty Facebook posts, five Facebook groups, and ten Twitter posts per day. They were penalized for posts that failed to reach engagement thresholds.
The employees of the American department, for the most part, had limited personal knowledge of the United States. The Senate Intelligence Committee documents describe IRA workers who were uncertain about American geography, confused about the specifics of American electoral law, and who relied on Google Maps and travel guides to construct authentic-seeming American voices. And yet the content they produced was, by multiple measures, indistinguishable to its audience from genuine American community content.
2.2 The Documented Goal
A crucial point, which is often misrepresented in both directions: the documented operational goal of the IRA was not to elect Donald Trump. The goal was to maximize American social division and undermine trust in American democratic institutions.
The Senate Intelligence Committee Vol. 2 states this explicitly: the operation "sought to influence the 2016 U.S. presidential election by harming Hillary Clinton's chances and supporting Donald Trump's candidacy," but the primary strategic objective was broader — to "sow discord in the U.S. political system." The IRA ran accounts and content supporting Bernie Sanders. It ran accounts attacking Democratic voters for their lack of enthusiasm. It ran accounts organizing Black voter suppression efforts (encouraging Black voters not to vote for Clinton because she was insufficiently committed to their interests). The goal was not a specific electoral outcome. The goal was dysfunction — a damaged American political system, a public less willing to trust its institutions, a democratic process undermined.
This distinction matters for understanding the operation's design. If the goal had been simply to elect Trump, the IRA would have been a relatively conventional propaganda campaign. Because the goal was division, the IRA operated on all sides simultaneously — amplifying conflict, feeding existing resentments, and making the American political environment less coherent and less trustworthy for everyone.
2.3 The Facebook Operation
The IRA's Facebook operation was, in terms of documented reach, the largest component of the campaign. The Senate Intelligence Committee and Facebook's own disclosures establish the following:
The IRA created 470 Facebook pages and accounts, which collectively posted 80,000 pieces of content between January 2015 and August 2017. Those posts were seen by an estimated 126 million Americans. The IRA also purchased approximately 3,500 Facebook advertisements for a total cost of approximately $100,000, which were seen by approximately 11.4 million Americans. The Facebook operation also included 120 Instagram accounts, which collectively posted approximately 116,000 pieces of content seen by an estimated 116 million Americans.
The specific Facebook communities the IRA built included:
Blacktivist — an account that presented itself as a Black civil rights community and amassed 360,000 followers, making it larger than the official Black Lives Matter Facebook page. Blacktivist content mixed genuine civil rights advocacy with voter suppression messaging, posts encouraging Black voters to boycott the 2016 election, and content designed to maximize outrage and social division.
Being Patriotic — a conservative American identity page that amassed 200,000 followers and produced content focused on immigration, gun rights, and American exceptionalism, calibrated precisely for its target audience's existing beliefs.
United Muslims of America — a page presenting itself as a Muslim American community that produced both genuine pro-Muslim community content and content designed to increase Muslim American political disengagement.
Heart of Texas — a page claiming to represent Texas conservatives and ultimately accumulating 254,000 followers, more than double the size of the official Texas Republican Party Facebook page. We will examine this page in detail in the Primary Source Analysis section.
Army of Jesus — a page targeting evangelical Christian Americans with religious content mixed with political messaging.
The sophistication of these operations lay in their authenticity. The IRA pages did not post obviously false content most of the time. They built genuine communities by posting content that their target audiences found genuinely meaningful — civil rights history, local news, community events, cultural material. The false content — the misleading political posts, the voter suppression messaging, the fabricated or distorted political stories — was introduced into communities that had been built through months of authentic-seeming engagement. By the time the manipulative content arrived, the community members had reason to trust the page. That trust was the product, and it was built deliberately.
2.4 The Instagram Operation
The IRA's Instagram operation received less initial attention than its Facebook operation but was arguably larger in reach. The Senate Intelligence Committee estimated that IRA Instagram content reached approximately 116 million Americans — roughly comparable to the Facebook reach. Instagram's visual format, its lack of the linking structures that allow fact-checkers to trace claim origins, and its younger demographic (compared to Facebook) made it a particularly effective distribution channel for emotional, visually arresting disinformation content.
2.5 The Twitter Operation
Twitter identified 3,841 IRA accounts that collectively posted approximately 10.4 million tweets. The Twitter operation also included automated "bot" accounts that did not produce original content but amplified existing content — including content produced by real American users who had no knowledge they were being amplified by a Russian state influence operation.
The Twitter bots presented a specific challenge: they were indistinguishable from real users. The IRA bots had profile pictures (many stolen from real Americans), posting histories, follower networks, and engagement patterns designed to simulate authentic American Twitter users. Some had been active for years before being activated for the 2016 campaign.
2.6 The Houston Events
Perhaps the most striking demonstration of the IRA's operational ambition was the organization of real-world events through fake accounts. In May 2016, two IRA accounts — "United Muslims of America" and "Heart of Texas" — organized competing rallies in Houston, Texas, at the same location, at the same time. "Heart of Texas" organized a rally to "Stop Islamization of Texas." "United Muslims of America" organized a counter-rally called "Save Islamic Knowledge." Real Americans attended both rallies, on opposing sides, organized by the same Russian government contractors in St. Petersburg, neither side having any knowledge of the operation's origin. The rallies, according to local news accounts, were relatively small — a few dozen people on each side — but the event was documented and became emblematic of the operation's strategy: manufacturing real social conflict from fabricated community identity.
The Houston events illuminate the central logic of the IRA operation. Foreign disinformation does not create social divisions from nothing. The divisions between conservative Texans and Muslim Americans were real, rooted in real policy disputes and real cultural tensions. The IRA did not invent them. What the IRA did was identify genuine fault lines in American society — genuine areas of conflict and mistrust — and apply organizational capacity and resources to inflaming them. The operation exploited real grievances. It made real conflict more intense. But it did not manufacture the conflict's existence.
3. The Domestic Disinformation Ecosystem
The IRA operation is the most thoroughly documented foreign disinformation campaign in history, and its documentation has shaped how the 2016–2020 period is publicly understood. But focusing exclusively on the IRA distorts the actual disinformation environment of that period in an important way: the IRA was a relatively small fraction of the total disinformation circulating in American political discourse.
The domestic disinformation ecosystem — the network of American-produced, American-distributed false and misleading political content — was substantially larger in volume and reach than anything the IRA produced.
3.1 The Partisan Media Ecosystem
The right-wing partisan media ecosystem in 2016 included websites and outlets that routinely published false or misleading political content: Breitbart News, Infowars (Alex Jones's media operation), The Gateway Pundit, True Pundit, and dozens of smaller operations. These outlets had, collectively, enormous reach. Breitbart News in 2016 had approximately 45 million monthly visitors. Infowars claimed, with some plausibility, comparable reach. The Gateway Pundit was consistently among the top twenty-five most-shared political websites on Facebook in 2016.
The content produced by these outlets ranged from aggressive but factually grounded partisan commentary to fabricated news stories with no factual basis. The fabricated content — false claims about Hillary Clinton's health, false accounts of electoral fraud, false stories about Clinton campaign officials — circulated through the same Facebook algorithm as genuine news content, at comparable speeds and with comparable distribution.
This domestic ecosystem interacted with the IRA operation in documented ways. The IRA amplified content produced by domestic partisan media. Domestic media reported on and amplified claims originating in IRA-operated accounts. The algorithm treated the domestic ecosystem and the foreign operation as the same thing: engaging content to be distributed broadly. From the perspective of the platform, there was no difference between a Gateway Pundit false story shared by a real American user and an IRA-produced false story shared by a fake American account. Both were engagement to be optimized.
3.2 The Macedonian Content Farms
Between the IRA's state-directed sophistication and the domestic partisan ecosystem's ideological motivation, there was a third element: profit-motivated content farming with no political agenda.
The most documented example was the network of pro-Trump Facebook pages operated by teenagers in Veles, a small town in North Macedonia. These teenagers — some as young as fifteen — had discovered that pro-Trump content generated enormous Facebook engagement and that Facebook's monetization system paid website owners for traffic generated by Facebook sharing. They had no ideological investment in Trump's candidacy or in American politics. They were running content farms: websites that produced high-volume, emotionally inflammatory content designed to generate maximum Facebook shares, which translated into maximum advertising revenue.
The Macedonian operation was documented in detail by journalists from BuzzFeed News and Wired. Individual teenagers were earning thousands of dollars per month producing content that circulated through the same channels as the IRA operation and the domestic partisan media. When one of the Macedonian content farmers was asked about the content's accuracy, he shrugged: accuracy had nothing to do with it. Engagement was the metric. Outrage was the most engaging emotion. Accurate, nuanced political journalism did not generate outrage. False, inflammatory political content did.
The Macedonian ecosystem illustrates a structural feature of the platform information environment: the algorithmic architecture that advantaged disinformation did not require political intent. It required only content optimized for engagement. Engagement and outrage were correlated. Truth and engagement were not.
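The source-blindness of engagement optimization can be made concrete with a toy ranking function. The sketch below is illustrative only — the weights, field names, and posts are invented, and no real platform's ranking system is this simple — but it shows the structural point: a scorer built solely from predicted engagement signals has no input through which accuracy or authorship could matter.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str              # could be a newsroom, a teenager in Veles, or an IRA sock puppet
    predicted_clicks: float
    predicted_shares: float
    predicted_comments: float
    is_accurate: bool        # known to us for the example; invisible to the ranker below

def engagement_score(post: Post) -> float:
    """Toy feed-ranking score: a weighted sum of predicted engagement signals.
    Note what is absent: nothing about the author's identity or the post's accuracy."""
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

feed = [
    Post("local newspaper", predicted_clicks=0.05, predicted_shares=0.01,
         predicted_comments=0.02, is_accurate=True),
    Post("outrage content farm", predicted_clicks=0.12, predicted_shares=0.08,
         predicted_comments=0.10, is_accurate=False),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{post.author}: score={engagement_score(post):.2f}, accurate={post.is_accurate}")
# The inflammatory, inaccurate post ranks first because it is predicted to engage more.
```

The design choice being illustrated is the one Haugen later described from inside Facebook (Section 6.2): as long as the objective function is engagement, the system optimizes for whatever content maximizes it, whoever produced it and however false it is.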
3.3 The Quantitative Picture: Guess, Nagler, and Tucker
The most important empirical corrective to the dominant narrative about 2016 disinformation came from a study by Andrew Guess, Jonathan Nagler, and Joshua Tucker published in Science Advances in 2019. "Less than you think: Prevalence and predictors of fake news dissemination on Facebook" linked survey responses from a national panel of Americans to the links those respondents had actually shared on Facebook during the 2016 election campaign.
The findings were striking. Only 8.5 percent of panel members shared even one article from a fake news website. Sharing was heavily concentrated among a small number of prolific users, and the demographic most likely to share fake news was older, highly partisan Facebook users — respondents over 65 shared roughly seven times as many fake news articles as the youngest age group — not, as might be assumed, younger social media natives. Companion research on consumption, using web-traffic data from the same period (Guess, Nyhan, and Reifler), found the same concentration on the exposure side and identified Facebook as the primary referral pathway to fake news sites, confirming the platform's amplification role — but it also found that most Americans were not, in fact, consuming significant volumes of fake news.
These findings are important for two reasons. First, they complicate the narrative of a uniformly disinformation-saturated electorate: most Americans neither shared fake news nor consumed much of it. Second, they identify the actual problem with precision: engagement with fake news was highly concentrated among a specific demographic — older, highly partisan Facebook users — who were also, in U.S. politics, among the most influential political participants in terms of voting, volunteering, donating, and driving conversation. The problem was not universal exposure. It was concentrated, high-intensity exposure among a politically consequential population.
Tariq, who had read the study before the seminar, raised a point about his own community: Muslim Americans were among the demographics most heavily targeted by IRA content, yet nothing in the consumption or sharing data marked them as among the heaviest users of fake news. "The targeting," he observed, "doesn't mean the targeting worked. It means someone thought it would work."
4. The COVID-19 Infodemic
If 2016 demonstrated the scale of digital disinformation's reach, 2020 demonstrated its lethality.
When COVID-19 emerged as a global pandemic in early 2020, it entered an information environment that had spent years being optimized for the spread of emotionally engaging, source-obscured content. The result was what the World Health Organization, in February 2020, declared an "infodemic" — "an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it." The WHO noted that the infodemic was spreading "faster than the virus itself."
4.1 The Disinformation Categories
COVID-19 disinformation fell into several distinct categories, each with different origins and mechanisms:
False treatment claims. Early in the pandemic, before evidence-based treatments had been identified, the information vacuum was filled with false claims about effective treatments. Drinking bleach or ingesting disinfectants was promoted, briefly, by some social media accounts following a press conference in which President Trump mused about whether such treatments might work — a statement that was widely and correctly reported as dangerous. Hydroxychloroquine, an antimalarial drug, became the center of a sustained disinformation campaign claiming it as a COVID cure; it was promoted by prominent right-wing media figures, by Trump, and by a network of physicians and politicians who, in some cases, had financial interests in the drug's adoption. Large randomized controlled trials subsequently found that hydroxychloroquine provided no benefit against COVID-19. Ivermectin, an antiparasitic drug effective against certain parasitic infections in both humans and animals, became the most sustained false treatment claim of the pandemic, promoted through a combination of misread preliminary studies, social media amplification, and coordinated advocacy networks. We will examine the ivermectin case in detail in Case Study 2.
Vaccine disinformation. The development of COVID-19 vaccines at unprecedented speed — a result of genuine scientific achievement, massive government investment, and the prior foundational work on mRNA technology — was accompanied by an equally unprecedented disinformation campaign targeting the vaccines. The specific false claims included: that mRNA vaccines altered human DNA (false — mRNA does not enter the cell nucleus and cannot alter DNA); that the vaccines contained microchips for surveillance (false — a claim derived from garbled references to a Bill Gates Foundation grant for novel vaccine delivery systems); that the vaccines caused sterilization (false — a claim that spread particularly rapidly in communities of color given historical reasons for medical mistrust); that the vaccines were responsible for deaths among vaccinated individuals (false — vaccine adverse event reporting systems document reports of adverse events following vaccination, not causation, a distinction systematically misrepresented in disinformation).
The anti-vaccine network that amplified these claims did not emerge from nowhere in 2020. Organizations and networks opposed to vaccines — particularly childhood vaccines — had been operating for decades, predating COVID-19 by many years. The Children's Health Defense, founded by Robert F. Kennedy Jr.; the National Vaccine Information Center; and numerous smaller anti-vaccine social media networks had substantial pre-existing audiences. COVID-19 did not create vaccine disinformation. It activated and massively amplified a pre-existing ecosystem.
Institutional authority attacks. The COVID-19 response became the occasion for sustained, coordinated attacks on public health authorities: the Centers for Disease Control (CDC), the National Institutes of Health (NIH), the World Health Organization, and specifically Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases. These attacks drew on genuine points of institutional vulnerability — the CDC's initial guidance on masks was revised, the WHO's early response was legitimately criticized as insufficiently skeptical of Chinese government claims — to construct a broader narrative of institutional corruption and untrustworthiness. Fauci specifically became the target of disinformation ranging from false claims about his financial connections to vaccine manufacturers to fabricated quotes to death threats.
Origin theories. The question of COVID-19's origin — whether it emerged through natural zoonotic transmission or through some connection to the Wuhan Institute of Virology — ranged, across the 2020 period, from legitimate scientific inquiry (the lab leak hypothesis was taken seriously by credible virologists) to baseless conspiracy (claims that the virus was deliberately engineered as a bioweapon, claims specifically designed to inflame anti-Chinese sentiment). The disinformation ecosystem collapsed this distinction systematically, treating the lab leak question as if it had an obvious answer when in fact it remained genuinely uncertain, and treating the uncertainty as evidence for the most extreme versions of the conspiracy.
4.2 The Foreign Dimension
COVID-19 disinformation was not exclusively domestic. Russian, Chinese, and Iranian state media all promoted different COVID narratives aligned with their specific state interests.
Russian state media (RT, Sputnik) promoted anti-vaccine content aimed at undermining Western vaccination campaigns and amplified false treatment claims that discouraged vaccination. Russian disinformation also targeted the Oxford-AstraZeneca vaccine specifically, promoting false safety concerns — at a time when Russia was promoting its own Sputnik V vaccine as an alternative. A detailed analysis by the EU's East StratCom Task Force documented more than 200 specific COVID disinformation narratives promoted by Russian state media.
Chinese state media promoted origin narratives designed to deflect from the Wuhan origin story, at various times suggesting that COVID-19 originated at a U.S. military base in Fort Detrick, Maryland, or that it had arrived in China via imported frozen food. Chinese state media also promoted the narrative that the U.S. response was uniquely incompetent and that China's authoritarian approach was vindicated.
Iranian state media promoted narratives about COVID-19 as a U.S. biological weapon and discouraged Iranian citizens from accepting Western vaccines.
These foreign operations reinforced, rather than created, the domestic disinformation ecosystem. They added state-level credibility and resources to narratives already circulating in domestic anti-vaccine and anti-institution networks.
4.3 The Mortality Estimate
The most sobering data on COVID-19 disinformation comes from an analysis published by Peter Hotez and colleagues in The Lancet Infectious Diseases (2022). Hotez, a vaccine scientist at the Baylor College of Medicine, estimated that vaccine disinformation led to approximately 318,000 preventable deaths among unvaccinated Americans in the second half of 2021 alone — the period following widespread vaccine availability, when vaccination rates plateaued substantially below the levels necessary for meaningful protection.
The methodology underlying this estimate is transparent and contestable — attributing deaths to a specific cause in a complex pandemic is necessarily an exercise in modeling with significant uncertainty — but the order of magnitude has been endorsed by multiple independent analyses. The Centers for Disease Control's own modeling found that between June and December 2021, vaccination prevented approximately 1.1 million hospitalizations and 235,000 deaths among vaccinated Americans, implying that the decision not to vaccinate had corresponding costs for unvaccinated Americans.
Disinformation, in the COVID context, was not an abstract democratic concern. It was a proximate cause of preventable death. The public health case against disinformation is, in this instance, literally a matter of lives.
4.4 Why COVID Disinformation Was Uniquely Dangerous
COVID-19 disinformation differed from election disinformation in a crucial respect: its targets were not abstract democratic preferences but concrete personal health decisions with immediate, verifiable consequences. When someone votes based on false information, the consequence is an electoral outcome they might not have chosen. When someone declines a vaccine or takes a dangerous treatment based on false information, the consequence can be death — their own, or someone they subsequently infect.
This directness made COVID disinformation simultaneously more dangerous and more difficult to correct. The psychological literature on health risk perception suggests that people's assessments of personal health risk are heavily influenced by social networks and trust relationships — the same networks and trust relationships that disinformation had systematically poisoned in the preceding years. If you had learned, through four years of exposure to disinformation, to mistrust institutional health authorities and to trust your social network's information about elections, you brought those same trust structures to your health decisions. The disinformation campaigns were cumulative in their effect on the epistemic environment.
5. The Big Lie: Election Denial as Disinformation Campaign
The 2020 presidential election produced the most consequential domestic disinformation campaign of the 2016–2020 period: the coordinated effort to promote the false claim that the election had been stolen from Donald Trump through widespread fraud. The campaign, known popularly as "Stop the Steal," culminated in the January 6, 2021 attack on the United States Capitol by a mob of Trump supporters attempting to prevent the certification of the Electoral College results.
5.1 The Claims and Their Factual Status
The "Stop the Steal" campaign advanced numerous specific claims about the 2020 election that have since been adjudicated through multiple independent processes:
Court proceedings. Trump's campaign and allied organizations filed more than 60 lawsuits challenging the election results in multiple states. Courts — including courts with judges appointed by Trump — dismissed or rejected the vast majority of these suits. The reasons cited included lack of standing, lack of evidence, procedural issues, and — in numerous cases — explicit findings that the claims of fraud were not supported by evidence. Sidney Powell, one of the attorneys promoting the most extreme fraud claims, later acknowledged in a defamation lawsuit filed by Dominion Voting Systems that "no reasonable person" would have taken her claims as factual.
Department of Justice investigation. Attorney General William Barr — a Trump appointee who had been among the most loyal members of the administration — stated publicly in December 2020 that the Justice Department had investigated the fraud claims and found no evidence of fraud sufficient to have changed the election result. Barr later testified to the January 6 Committee that he had told Trump directly that the fraud claims were "bullshit."
State certification. Republican officials in the key contested states — Georgia's Secretary of State Brad Raffensperger, Arizona's Governor Doug Ducey — certified the election results and publicly rejected the fraud claims. Raffensperger faced sustained threats and a recorded phone call in which Trump directly pressured him to "find" the votes needed to reverse Georgia's result.
Dominion Voting Systems litigation. Dominion, whose voting machines were the target of specific false claims about algorithmic vote manipulation, sued Fox News, Rudy Giuliani, Sidney Powell, Mike Lindell, and others for defamation. Fox News settled the Dominion lawsuit for $787.5 million without trial. The discovery process in those cases produced internal Fox communications showing that Fox News executives and hosts privately acknowledged the fraud claims were false while continuing to broadcast them.
The factual status of the "Stop the Steal" claims is, among the many contested questions of the 2016–2020 period, the least contested. The claims were rejected by courts, prosecutors, election administrators, and the Trump administration's own officials, most of whom had strong political incentives to find evidence of fraud if it existed.
5.2 How the Disinformation Operated
The Stop the Steal campaign illustrates several principles of disinformation operation that we have examined throughout this textbook:
Repetition as evidence. The fraud claims were not supported by evidence but were repeated with increasing frequency and intensity by high-authority sources. Trump repeated the claims at rallies, on social media, and in White House statements. Republican members of Congress amplified them. Right-wing media broadcast them. The repetition itself created a form of social proof: if so many authority figures are saying this, it must be true.
Complexity as cover. Election systems are genuinely complex — different rules in different states, different counting procedures, different ballot types, different timing for reporting. The Stop the Steal campaign exploited this complexity by producing claims that were technically difficult for ordinary citizens to evaluate and that required detailed knowledge of specific state laws to refute. The burden of proof was effectively reversed: the complexity of the refutation made the refutation less believable than the simple, emotionally resonant claim.
Asymmetric intensity. The people who believed the fraud claims believed them with enormous intensity. The people who knew them to be false — including many Republican officials who had certified the results — often communicated their knowledge quietly, in depositions and private communications, while publicly declining to contradict the president. The asymmetry of intensity between believers and non-believers shaped the information environment dramatically.
January 6 as culmination. The January 6 Committee's Final Report, released in December 2022, documented in detail how the Stop the Steal disinformation campaign contributed directly to the attack on the Capitol. The committee found that Trump and his allies knew the fraud claims were not supported by evidence, continued to promote them, and used them to organize and mobilize the crowd that attacked the Capitol with the explicit stated goal of preventing the certification of the election. The committee referred Trump to the Justice Department on four criminal counts, including incitement of insurrection.
The Stop the Steal campaign distinguishes itself from contested political claims in a specific way: its claims were about verifiable facts — vote counts, legal procedures, audit results — rather than values, interpretations, or policy preferences. The question of whether a particular candidate's healthcare policy is preferable to another's is genuinely contested. The question of whether Joe Biden received more legal votes than Donald Trump in Arizona, Georgia, Pennsylvania, Wisconsin, Michigan, and Nevada is not a matter of interpretation. It was determined by counting, audited by multiple independent processes, and certified by officials of both parties. Election denial was not a contested political claim. It was a disinformation campaign about verifiable facts.
6. Platform Response and Its Limits
The social media platforms' response to the 2016–2020 disinformation period was extensive, belated, and systematically incomplete.
6.1 What Platforms Did
Following the 2016 election revelations, the major platforms undertook significant policy changes:
Account removal. Facebook, Twitter, and other platforms removed the identified IRA accounts and disclosed the totality of the identified IRA content to congressional investigators. Twitter removed 3,841 IRA accounts and disclosed more than 10 million IRA tweets.
Labeling policies. Platforms introduced labeling systems for potentially false content, disputed information, and state-controlled media accounts. Twitter labeled Trump's tweets about the 2020 election as "disputed" beginning in May 2020. Facebook applied similar labels to posts containing false claims about the election. These labels were applied to high-profile accounts but were inconsistently applied to smaller accounts and viral content.
Fact-checking partnerships. Facebook partnered with third-party fact-checking organizations to label false content and reduce its algorithmic distribution. Posts labeled as false received reduced distribution — roughly an 80 percent reduction in traffic according to Facebook's own reporting.
Content moderation expansion. Platforms hired thousands of additional content moderators and expanded their use of automated systems to identify and remove policy-violating content. The expansion was substantial but remained resource-constrained, particularly for non-English content.
Deplatforming. In January 2021, following the January 6 Capitol attack, Twitter permanently banned Trump from the platform. Facebook suspended his account indefinitely (later restored). YouTube suspended his channel. This was the most consequential deplatforming decision in the history of social media.
6.2 The Limits of Platform Response
Each platform action had documented limits:
The reactive nature of most interventions meant that harm was addressed after it had already occurred. IRA accounts were removed after the 2016 election. Election denial labeling was applied in 2020, after four years of election integrity disinformation had already shaped public beliefs. The COVID-19 vaccine disinformation ecosystem was substantially built before platforms' health misinformation policies were updated.
The domestic ecosystem was largely unaddressed. Platform moderation focused on foreign-origin coordinated inauthentic behavior — the IRA model — and was poorly equipped to address organic domestic disinformation production by real American users genuinely expressing their (false) beliefs. The Macedonian content farms were addressed. Breitbart and Gateway Pundit were not removed.
The algorithmic architecture was not fundamentally changed. The recommendation systems that advantaged emotionally engaging content — including disinformation — were modified at the margins but not restructured. The engagement optimization model that created the information environment remained in place. This was not lost on platform critics: Frances Haugen, a former Facebook product manager who became a whistleblower in 2021, testified to the Senate that Facebook's internal research had shown that its algorithm was systematically promoting divisive and harmful content, that this was known internally, and that the company had declined to act because the changes would reduce engagement.
The deplatforming decisions illustrated a different problem: their arbitrariness. Twitter's permanent ban of Trump and Facebook's suspension raised legitimate questions about governance. The decisions were made by private companies according to privately determined criteria, with no meaningful external accountability. If those decisions were correct — if the suspended accounts were genuinely engaged in coordinated incitement — the lack of a governance process was troubling. If the decisions were wrong — if the suspension of a former president was an exercise of partisan corporate power — the lack of governance was catastrophic. Both concerns are legitimate regardless of one's view of the specific decision, because both are about process rather than outcome.
6.3 The Digital Services Act
The most systematic regulatory response to the platform disinformation environment came from the European Union's Digital Services Act (DSA), which entered into full effect in 2024. The DSA requires very large online platforms (those with more than 45 million EU users) to conduct risk assessments of their systems' contributions to information disorder, to make their algorithmic systems available for independent audit, and to provide researchers with access to data necessary for studying their platforms' effects. The DSA does not require platforms to remove specific content but does require them to be transparent about the systems that determine what content reaches users.
The DSA represents the most consequential regulatory effort to address the structural features of platforms — not their specific content decisions, but the algorithmic architecture that shapes what billions of people see — and whether it succeeds in changing the underlying information environment remains, at the time of this writing, an open question.
7. Research Breakdown: Guess, Nagler, and Tucker (2019)
Study: Andrew Guess, Jonathan Nagler, and Joshua Tucker, "Less than you think: Prevalence and predictors of fake news dissemination on Facebook." Science Advances, January 2019.
Research question: How widespread was the sharing of fake news on Facebook during the 2016 U.S. presidential election, and who was most likely to share it?
Method: The researchers fielded a survey to a national online panel and, with respondents' consent, linked their answers to data from the respondents' own Facebook profiles, including the external links they had shared during the campaign. They coded news websites as "fake news" using lists compiled by journalists and fact-checking researchers. They then analyzed which panel members shared links to fake news sites, how often, and what demographic and behavioral characteristics predicted fake news sharing.
Key findings:
Only 8.5 percent of panel members shared even one article from a fake news website during the campaign. This figure was significantly lower than the impression created by aggregate social media data, which had implied much broader circulation. The discrepancy is partly explained by the gap between passive exposure (seeing a link in your feed) and active dissemination (choosing to share it).
Among those who did share fake news, the activity was heavily concentrated: a small minority of prolific sharers accounted for the large majority of fake news links in the data, a pattern made concrete in the short sketch following these findings. The average sharer passed along only a handful of items; the highly engaged minority shared very large amounts.
The strongest predictors of fake news sharing were age and partisanship. Americans over 65 shared roughly seven times as many articles from fake news domains as the youngest age group. Highly partisan Republicans shared more fake news than highly partisan Democrats — an asymmetry partly explained by the overwhelmingly pro-Trump orientation of the fake news supply in 2016. These effects were additive: older, highly partisan Republicans were the highest-sharing demographic.
Companion research using web-traffic data from the same period (Guess, Nyhan, and Reifler) found a parallel pattern on the consumption side: visits to fake news websites were concentrated among a small group of heavy consumers, and Facebook was the primary referral source for those visits, confirming the platform's amplification role.
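What "heavily concentrated" means can be seen in a short calculation. The sketch below computes, for an invented panel, the fraction of all fake-news shares attributable to the top decile of sharers; the data are hypothetical and are not drawn from the study.

```python
def top_decile_share(per_user_counts):
    """Fraction of all fake-news shares attributable to the top 10% of users.

    per_user_counts: one integer per panel member = number of fake-news
    articles that person shared (most entries will be zero).
    """
    total = sum(per_user_counts)
    if total == 0:
        return 0.0
    ranked = sorted(per_user_counts, reverse=True)
    cutoff = max(1, len(ranked) // 10)   # size of the top decile
    return sum(ranked[:cutoff]) / total

# Hypothetical panel of 20 people: most shared nothing, two shared heavily.
counts = [0] * 16 + [1, 1, 6, 12]
print(f"{top_decile_share(counts):.0%} of shares came from the top 10% of users")
# -> 90% with this invented data: two prolific sharers dominate the total
```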
Significance and interpretation:
The Guess et al. study complicates the narrative — widespread in both popular media and some academic literature — that Russian disinformation and fake news "infected" the broad American electorate in 2016. Most Americans neither shared fake news nor were heavily exposed to it in the election's final weeks.
However, "less than you think" is not the same as "negligible." The research documents that a specific demographic — older, highly partisan Facebook users — shared and consumed fake news at rates far above everyone else. This demographic is, in U.S. politics, among the most politically active: older Americans vote at higher rates, donate more, and are more likely to be involved in campaign volunteering and local political organizing. The concentration of fake news engagement in this highly politically active demographic is significant even if it does not represent the universal saturation that some accounts implied.
The study's implications for intervention design are also significant. If the problem were universal exposure, the appropriate response would be universal counter-programming. If instead the problem is highly concentrated, high-intensity engagement among a specific, identifiable demographic, then targeted interventions — prebunking campaigns aimed at older Facebook users, platform-level interventions on the sharing patterns of highly partisan accounts — might be more effective than broad-based media literacy campaigns.
The study also raises a question about what the relevant metric is. Sharing links to fake news sites is one measure of engagement with disinformation. Exposure to misleading headlines in social media feeds — without clicking through or sharing — is another, and one the study did not capture. Exposure to misleading information through channels that are not websites (cable news, talk radio, messaging apps) is another still. The 8.5 percent figure should be understood as an estimate of one specific behavior, not of total exposure to false political information.
Ingrid, in her comments on the study, noted that the age finding was consistent with European research: older Europeans with lower digital literacy were consistently overrepresented among fake news sharers in multiple country studies. "The problem," she observed, "is not the generation that grew up with the internet. It's the generation that had to adapt to it mid-life, and often didn't get the media literacy education that would help."
8. Primary Source Analysis: The IRA's "Heart of Texas" Facebook Page
The IRA's "Heart of Texas" Facebook page provides one of the clearest available case studies in the operational mechanics of coordinated inauthentic behavior. It is documented in detail in the Senate Intelligence Committee Vol. 2, in the Mueller Report, and in Facebook's own disclosures to Congress.
We will apply the five-part anatomy of a propaganda message to the "Heart of Texas" operation.
8.1 Source Analysis
Apparent source: A community of Texas conservatives and patriots, organized through a grassroots Facebook page, sharing content relevant to Texan identity, conservative politics, and Texas independence from overreaching federal government.
Actual source: Russian government contractors employed at the Internet Research Agency at 55 Savushkina Street in St. Petersburg, working in the IRA's American department, organized into a team targeting conservative white Americans.
The gap between apparent and actual source is total. Nothing about the "Heart of Texas" page — its name, its branding, its content, its events — reflected its actual origin. The employees who ran the page had no personal connection to Texas. Several, according to IRA internal documents obtained by investigators, had limited knowledge of American geography and had to research basic facts about Texas in order to produce authentic-seeming content.
The page reached 254,000 followers — more than double the official Texas Republican Party Facebook page — by producing content that Texas conservatives found genuinely meaningful and resonant. Civil War history. Texas sovereignty arguments. Immigration content calibrated to concerns about the Texas-Mexico border. Local community events. The content was accurate enough, and the community feel was genuine enough, that no follower had reason to question the page's authenticity.
8.2 Message Analysis
The "Heart of Texas" message operated on multiple levels simultaneously:
Surface message: Texas is unique, Texans are proud, Texan identity is worth celebrating and defending.
Political message: The federal government poses a threat to Texas's unique identity and sovereignty; immigration (particularly at the Texas-Mexico border) is an existential threat; Texas conservatives are under siege from a hostile national culture.
Strategic message: Texas separatism — the idea that Texas should secede from or operate with substantially greater independence from the federal government — is a legitimate and mainstream political position worth organizing around.
The "Texas independence" framing was the page's most significant strategic contribution. Texas secession is not a mainstream political position; polls have consistently found it supported by small minorities even among Texas conservatives. The "Heart of Texas" page systematically treated it as mainstream and organized around it as a rallying point, contributing to the normalization of a position designed to fragment American national identity.
8.3 Emotional Register
The emotional register of "Heart of Texas" combined pride, threat, and community solidarity in a specific configuration designed to be simultaneously mobilizing and resistant to counter-argument.
Pride was the foundation: celebrating Texas history, Texas identity, Texan exceptionalism. Pride is not an adversarial emotion — it is warm, welcoming, and community-building. New followers arrived at "Heart of Texas" because they found the celebration of Texas identity meaningful and appealing.
Threat was the escalating element: once community pride was established, external threats to that community became emotionally salient. The threat of immigration. The threat of federal overreach. The threat of a national culture that did not respect Texas values. Threat is mobilizing; it creates urgency where pride creates warmth.
Community solidarity was the binding element: the page created genuine community, with regular followers interacting with each other, sharing personal stories, and building relationships. This solidarity was not fabricated — the human interactions on the page were real. Only the organizer was fabricated.
8.4 Implicit Audience
The "Heart of Texas" page was designed for a specific audience: Texas conservatives with strong state identity, who felt that their identity was underrepresented or threatened in the national culture, who were receptive to both states' rights arguments and immigration restriction, and who would find grassroots Texas patriot organizing appealing.
The IRA's American department used audience data — including Facebook's own advertising targeting tools — to reach this demographic with paid advertisements and to grow its organic following among the target audience. The page's content was iteratively refined based on engagement data: posts that generated higher engagement were produced more frequently; posts that underperformed were discontinued.
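For readers comfortable with a little code, the sketch below illustrates the general shape of that engagement-driven feedback loop. It is a hypothetical teaching example in Python; the theme names, thresholds, and numbers are invented, and it is not a reconstruction of the IRA's actual tooling, which has not been published at this level of detail.

```python
import random

# Hypothetical illustration of an engagement-driven content loop.
# Theme names and numbers are invented for teaching purposes; this is
# not a reconstruction of any actual IRA system.

themes = ["state_pride", "border_threat", "federal_overreach", "local_events"]
stats = {t: {"posts": 0, "engagements": 0} for t in themes}

def engagement_rate(theme):
    s = stats[theme]
    return s["engagements"] / s["posts"] if s["posts"] else 0.0

def choose_theme():
    # Try every theme a few times, then favor whatever engages most.
    unexplored = [t for t in themes if stats[t]["posts"] < 3]
    if unexplored:
        return random.choice(unexplored)
    return max(themes, key=engagement_rate)

def record_post(theme, engagements):
    stats[theme]["posts"] += 1
    stats[theme]["engagements"] += engagements

def prune_underperformers(min_rate=50):
    # Discontinue themes whose average engagement per post stays below a floor.
    themes[:] = [t for t in themes
                 if stats[t]["posts"] < 3 or engagement_rate(t) >= min_rate]

# Simulated cycles: post on a theme, observe engagement, adjust the mix.
for _ in range(20):
    if not themes:
        break
    theme = choose_theme()
    record_post(theme, engagements=random.randint(10, 400))
    prune_underperformers()
```

The logic is the same whether the operator is a hostile state, a commercial brand, or a legitimate advocacy group. What distinguishes the IRA case is the fabricated identity behind the loop, not the loop itself.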
8.5 Strategic Omission
The "Heart of Texas" page's most important omissions were:
The page's Russian origin — never disclosed to followers at any point in the page's operation.
The page's strategic purpose — not the celebration of Texas identity but the promotion of social division and institutional distrust among American conservatives.
The inauthenticity of the "community" — the followers were real Texans engaging with each other in genuine ways, but the organizer who had assembled them was not the Texas patriot the page presented.
The Houston rally context — the followers who attended the "Stop Islamization of Texas" rally organized by "Heart of Texas" did not know that the counter-rally they faced that day had been organized by the same organization, running a different page, with the explicit goal of producing a real-world conflict between real Americans.
The "Heart of Texas" operation represents the IRA's operational logic at its most refined: genuine community engagement, built deliberately, in service of a strategic goal its community members never knew existed. The humans were real. The conflict was real. Only the organizer was fake.
9. Debate Framework: Who Is Responsible for Digital Disinformation?
The 2016–2020 disinformation campaigns produced a sustained debate about responsibility that has not been resolved and that reflects genuine disagreements about the nature of the problem. Three major positions in this debate are worth examining with care.
Position A: Platforms Are Responsible
The argument: Facebook, Twitter, YouTube, and other platforms built the architecture that makes disinformation spread faster than truth. They profited from the engagement that disinformation generated. They had internal research demonstrating that their systems were promoting harmful content and declined to act. They are private companies operating for profit, and that profit came, in part, from the scale of disinformation circulation. The deplatforming decisions of January 2021 demonstrate that platforms have the capacity to act decisively when they choose — they simply chose not to act for four years while disinformation scaled. Platforms must be held responsible through a combination of liability reform (removing or modifying Section 230 of the Communications Decency Act, which shields platforms from legal liability for user-generated content), regulatory oversight (algorithmic transparency requirements, risk assessment mandates), and financial accountability (restrictions on advertising revenue earned from disinformation-adjacent content).
The strongest objection: Holding platforms responsible for user-generated content raises significant free speech concerns. The platforms that removed IRA content made editorial decisions about what speech was permissible — decisions that, if wrong, suppress legitimate political discourse. Platform liability for content would likely result in over-removal of controversial but legal speech. The appropriate response to a platform's algorithmic bias is transparency and regulation, not liability.
Position B: States Are Responsible
The argument: The IRA operation was a foreign intelligence operation — an act of information warfare by the Russian state against the United States. Private corporations cannot be expected to counter hostile state intelligence services operating with state resources, state direction, and strategic national interests. Counter-disinformation is a national security function, not a platform moderation function. The United States needs a whole-of-government counter-disinformation capacity, including offensive and defensive information operations capabilities, diplomatic pressure on states that sponsor disinformation operations, and international agreements on information warfare norms. The domestic disinformation ecosystem is a matter of media regulation and political culture, requiring policy responses from democratically accountable governments, not unilateral decisions by unaccountable corporate executives.
The strongest objection: State responsibility for counter-disinformation risks becoming state authority over information, with the attendant risks of government censorship and propaganda in the name of counter-propaganda. The United States government's own history of domestic information operations — documented in Chapters 23 and earlier — gives reason for skepticism about state counter-disinformation authority. The cure may be as dangerous as the disease.
Position C: Distributed Responsibility
The argument: Platforms, states, and citizens each have specific, irreducible responsibilities that cannot be transferred to the others:
Platforms are responsible for their algorithmic architecture: the recommendation systems, the engagement optimization functions, and the advertising targeting tools that enable the disinformation environment. These responsibilities are specific and addressable: algorithmic transparency, risk assessment mandates, and modification of the engagement metrics that advantage inflammatory content are all within the platforms' technical capacity.
States are responsible for counter-disinformation at the level of foreign influence operations, for regulatory frameworks that create accountability without censorship, and for public investment in media literacy education and public media that provide trustworthy information alternatives.
Citizens are responsible for their own epistemic practices — the skills and habits of source verification, lateral reading, and critical evaluation that reduce individual vulnerability to disinformation. These skills are teachable, and the responsibility for teaching them lies with educational institutions.
No single actor can solve the disinformation problem. Platform regulation without media literacy will shift disinformation to unregulated spaces. Media literacy without platform regulation leaves the algorithmic infrastructure in place. Platform regulation and media literacy without state counter-disinformation capacity leave foreign influence operations unopposed.
The challenge: Distributed responsibility can become diffused responsibility, where each actor defers to the others and nothing changes. Distributed responsibility requires coordination mechanisms — regulatory frameworks, public-private partnerships, civil society oversight — that do not currently exist at the necessary scale.
In the Hartwell seminar, Webb presented all three positions without declaring a preference, then asked each student to defend a position they found genuinely compelling. Sophia defended Position C. Tariq defended a modified Position B, arguing that for communities targeted by foreign disinformation — including Arab-American communities that were targeted by IRA content — state-level response was the minimum adequate intervention. Ingrid, drawing on the experience of the EU's Digital Services Act (DSA), defended a regulatory framework she positioned between Positions A and C: mandatory algorithmic transparency as the foundational requirement, with platform liability as a reserve tool for cases of documented failure to act.
10. Action Checklist: Evaluating Disinformation Campaigns
When analyzing a digital disinformation campaign — whether historical or contemporary — the following questions provide a systematic framework for evaluation. A short sketch after the checklist shows one way to record the answers in a fixed, comparable format.
Origin and attribution:
- Can the origin of the campaign be traced to a specific actor or network of actors? What is the evidence for attribution?
- Is the campaign the product of a single actor (a state intelligence operation, a specific media outlet) or a distributed network of actors with different motivations?
- What was the relationship between foreign and domestic components of the disinformation? Did they interact? If so, how?

Organizational structure:
- What resources were required to sustain the campaign? What was the scale of the operation in terms of personnel, budget, and content volume?
- What were the operational goals as distinguished from the stated or apparent goals?
- What metrics was the operation optimizing for (reach, engagement, specific belief change, behavioral change)?

Content analysis:
- What specific claims did the campaign promote? What is the factual status of those claims, and how has that status been established?
- What emotional registers did the campaign employ? How did it combine factual content, misleading framing, and outright false claims?
- What were the strategic omissions — the things the campaign systematically did not say?

Targeting and reach:
- What specific audiences was the campaign designed for? What existing divisions or grievances did it exploit?
- What was the documented reach of the campaign? How was reach measured, and what are the methodological limitations of those measurements?
- What was the relationship between exposure and effect? Did exposure lead to measurable changes in belief or behavior?

Platform and algorithmic context:
- How did platform architecture affect the campaign's spread? Which platform features were exploited?
- How did the campaign interact with the algorithmic recommendation and amplification systems?
- What platform responses occurred, at what point in the campaign, and with what documented effectiveness?

Effects and aftermath:
- What is the documented evidence of the campaign's effects on beliefs, attitudes, or behavior?
- What interventions — platform moderation, fact-checking, media literacy, regulatory — occurred, and what was their effectiveness?
- What precedents did the campaign set for subsequent disinformation operations?
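For students who want to apply the checklist across several campaigns, one option is to record the answers in a fixed structure so that cases remain comparable. The minimal Python sketch below is one possible worksheet format; the class and field names are our own illustration, not a standard drawn from the chapter's sources, and a spreadsheet or shared document with the same headings would serve equally well.

```python
from dataclasses import dataclass, field, asdict
import json

# A hypothetical worksheet mirroring the six dimensions of the checklist.
# Field names are illustrative, not drawn from any official framework.

@dataclass
class CampaignEvaluation:
    campaign_name: str
    origin_and_attribution: dict = field(default_factory=dict)
    organizational_structure: dict = field(default_factory=dict)
    content_analysis: dict = field(default_factory=dict)
    targeting_and_reach: dict = field(default_factory=dict)
    platform_and_algorithmic_context: dict = field(default_factory=dict)
    effects_and_aftermath: dict = field(default_factory=dict)
    open_questions: list = field(default_factory=list)

# Example entry, using the "Heart of Texas" case from Section 8.
heart_of_texas = CampaignEvaluation(
    campaign_name="IRA 'Heart of Texas' Facebook page",
    origin_and_attribution={
        "attributed_actor": "Internet Research Agency (St. Petersburg)",
        "evidence": "Senate Intelligence Committee report Vol. 2; Mueller Report; Facebook disclosures",
    },
    targeting_and_reach={
        "target_audience": "Texas conservatives with strong state identity",
        "documented_reach": "254,000 followers",
    },
    open_questions=["What measurable effect did exposure have on belief or behavior?"],
)

print(json.dumps(asdict(heart_of_texas), indent=2))
```

Filling in the same fields for each campaign you study makes gaps visible: an empty effects_and_aftermath entry is itself a finding, since documented reach is far easier to establish than documented effect.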
11. Inoculation Campaign: Completing the Historical Grounding Component
This is the final chapter of Part 4: Historical Cases, and it completes the Historical Grounding component of the progressive project — the multi-chapter analytical sequence that began in Chapter 19.
11.1 What Historical Grounding Has Covered
Across Chapters 19–24, you have studied six major historical cases of propaganda and disinformation:
- Chapter 19: World War I and the birth of modern propaganda — the Creel Committee, the systematic organization of public opinion as a state function.
- Chapter 20: Totalitarian propaganda — Nazi Germany's Reich Propaganda Machine and Stalinist information control as models of total information environment management.
- Chapter 21: Cold War propaganda — the competing propaganda systems of the U.S. and USSR, dezinformatsiya operations, the use of culture and information as weapons of geopolitical competition.
- Chapter 22: Advertising and consumer culture — the construction of desire and identity as propagandistic functions, the techniques of commercial persuasion.
- Chapter 23: U.S. domestic propaganda — the Committee on Public Information, COINTELPRO, the War on Terror information environment, propaganda in democratic societies.
- Chapter 24: Digital disinformation — the IRA operations, the COVID-19 infodemic, the Stop the Steal campaign, the interaction of foreign and domestic disinformation ecosystems.
Each case study has added to your analytical vocabulary. You have examined the role of medium (print, radio, film, television, social media) in shaping propagandistic possibility. You have examined the techniques that recur across eras: emotional amplification, source fabrication, manufactured consensus, strategic omission, the exploitation of genuine social divisions. You have examined the relationship between propaganda and power: who has the resources to run influence operations, who is targeted, and why.
11.2 The Historical Grounding Summary Assignment
For Part 4 of your final campaign brief, you will write a Historical Grounding Summary of 800–1,200 words that does the following:
Identify the three most relevant historical parallels for your target community's propaganda environment. Your target community is the specific group you have been analyzing throughout the progressive project. The "most relevant" parallels are those whose mechanisms most closely resemble the propaganda techniques your target community currently faces — not necessarily the most famous or dramatic historical cases.
A journalist covering vaccine disinformation, for instance, might find the Big Tobacco "Doubt Is Our Product" campaign (Chapter 22's anchor example) more directly relevant than the IRA operations, because both involve manufactured scientific uncertainty. A student studying political extremism targeting a specific ethnic or religious community might find the IRA's demographic-targeting model most relevant, because the IRA specifically exploited ethnic and religious identity. A student studying authoritarian information control might find the totalitarian propaganda cases most directly applicable.
Explain why each parallel is relevant. Relevance requires more than thematic similarity. You must specify: what specific technique from the historical case is currently deployed in the contemporary propaganda environment you are studying? How has the technique been adapted to contemporary media, and what has that adaptation changed about its operation?
Draw the explicit connection between historical techniques and contemporary propaganda. The historical grounding analysis is not a survey of propaganda history; it is an exercise in applying historical understanding to contemporary analysis. Your summary should be able to say, with specificity: "The technique of [X], documented in [historical case], operates in the contemporary environment through [specific mechanism], as demonstrated by [specific example from your target community's propaganda environment]."
Connect to the analytical frameworks of the course. Your summary should reference the five recurring themes of this textbook — Message/Medium, Truth/Deception spectrum, Us vs. Them, Power/Voice, Resistance/Resilience — and explain how the historical parallels you have chosen illuminate the contemporary operation of these themes in your target community's propaganda environment.
11.3 Moving Forward
The Historical Grounding Summary becomes Part 4 of your final campaign brief. Parts 1–3 — your target community analysis, your propaganda anatomy, and your resistance strategy preliminary notes — should already be in progress from the earlier progressive project assignments. With the Historical Grounding Summary complete, you will have four of the five components of the brief assembled. Part 5, the Inoculation Campaign design, will be completed in Chapter 30.
When you review your Historical Grounding Summary, ask yourself this: Does my analysis explain not just what the historical propaganda operations did, but why they worked on the specific communities they targeted? The answer to that question — the structural and psychological conditions that make a community vulnerable to specific propaganda techniques — is the foundation of effective counter-propaganda. You cannot design an inoculation without understanding the disease.
Conclusion: The Confusion Is the Product
We return to where we began: Sophia, in the Clarion newsroom, tracing a viral claim about Marcus Diallo through dead links, deleted tweets, and unverifiable screenshots. The confusion she experienced — the sensation of reaching for something solid and finding nothing — was not a failure of journalism. It was the operation working as designed.
The disinformation campaigns of 2016–2020 were not primarily in the business of making people believe specific false things. They were in the business of making the information environment itself less navigable — less coherent, less trustworthy, more exhausting to engage with seriously. If the Russian Internet Research Agency made you confused about which Facebook community pages were authentic, it had succeeded. If the Stop the Steal campaign made you uncertain whether election results could be trusted, it had succeeded — even if you ultimately concluded the results were accurate. The goal was not specific belief but diffuse uncertainty.
The response to that strategy cannot be more information alone. The 2016–2020 period produced an enormous volume of fact-checking, corrections, platform labels, and journalistic refutations of false claims. Most of that response reached a fraction of the audience that received the original disinformation. Correction alone is insufficient when the design of the information environment advantages the original false claim.
What is sufficient — or what the evidence suggests comes closest to sufficiency — is a combination of structural change and individual skill. Structural change means addressing the algorithmic architecture that advantages inflammatory, divisive content over accurate, nuanced content. Individual skill means developing the media literacy practices — lateral reading, source verification, awareness of emotional manipulation, and comfort with uncertainty — that reduce vulnerability to disinformation.
Neither is sufficient alone. Both are necessary together. And both require understanding the history we have traced across Part 4: the techniques, the patterns, the mechanisms, and the specific conditions that make propaganda work.
Webb, at the close of the seminar, returned to Sophia's question. She had asked, at the end of their first conversation, what she should do about the Marcus Diallo story. Should she publish the allegation with the caveat that she could not verify it? Should she wait and lose the story to less careful outlets? Should she publish a meta-story about the disinformation itself — about the sourceless, viral allegation — rather than the allegation's content?
Webb's answer: "Publish the story of the trail going cold. That is the story. The story is not 'candidate received illegal funding' — you can't verify that. The story is 'unverifiable allegation circulates in student group chats, source cannot be traced.' You tell your readers what you know: the allegation exists, it's spreading, you cannot find its origin, and here's what you did to try." He paused. "That's not the exciting story. It's the accurate one. And right now, the accurate story about a lot of what's happening in the information environment is that someone is trying to make you confused. Naming that is not a failure. It is the work."
Chapter 24 of 40 — Part 4: Historical Cases
Next: Chapter 25 — Propaganda in Authoritarian Contexts (Part 5: Mechanisms and Techniques)