Chapter 31: State-Sponsored Disinformation and Information Warfare

Learning Objectives

By the end of this chapter, students will be able to:

  1. Define information warfare and distinguish it from related concepts including psychological operations, influence operations, and cognitive warfare.
  2. Trace the historical lineage of state-sponsored disinformation from Soviet "active measures" to contemporary digital operations.
  3. Analyze the doctrinal foundations, institutional structures, and operational tactics of Russian, Chinese, Iranian, and other state information warfare programs.
  4. Identify the specific tactics employed in modern influence operations: troll farms, fake personas, hack-and-leak operations, and narrative laundering.
  5. Evaluate Western counter-measures including the EU East StratCom Task Force, NATO Strategic Communications Centre, and the "Helsinki model" of prebunking.
  6. Critically assess the challenges of attribution in influence operations and the implications of those challenges for policy responses.
  7. Recognize the blurring of domestic and foreign information operations and understand the ethical and legal complexities this creates.

Section 31.1: Information Warfare Defined

The Full Spectrum of Influence

"Information warfare" is an umbrella term broad enough to encompass activities ranging from the publication of a misleading press release to the destruction of critical infrastructure through cyberattack. To reason clearly about the subject, students must master a vocabulary that is itself contested — different governments, militaries, and researchers use these terms in overlapping and sometimes contradictory ways. What follows is a working taxonomy derived from the convergent usage of NATO doctrine, US Department of Defense publications, and academic literature.

Information operations (IO) are coordinated efforts to shape the information environment of a target audience — whether adversary governments, foreign publics, domestic populations, or combinations thereof. Information operations include both offensive actions (degrading the adversary's ability to make sound decisions) and defensive actions (protecting one's own information environment). The US Department of Defense defines IO as "the integrated employment... of electronic warfare, computer network operations, psychological operations, military deception, and operations security." This definition reveals how broad the category is: information operations span the spectrum from hacking an enemy's communication networks to spreading rumors among a civilian population.

Influence operations are a subset of information operations focused specifically on shaping human beliefs, attitudes, and behaviors rather than disrupting technical systems. An influence operation seeks to persuade: to make target audiences believe something true or false, to amplify existing sentiments, or to suppress particular viewpoints. Influence operations may be overt (attributed to their actual source, as in public diplomacy or international broadcasting) or covert (disguising their true origin, as in the creation of fake grassroots movements, or "astroturfing"). The covert variant is what most people mean when they use the term "disinformation campaign."

Psychological operations (PSYOP) is a military term for actions specifically designed to influence the emotions, motives, reasoning, and ultimately the behavior of foreign target audiences. PSYOP has a long military history — leaflet drops over enemy lines, loudspeaker broadcasts, planted rumors — and its contemporary digital descendants include targeted social media campaigns, fabricated quotes from public officials, and coordinated harassment of dissidents. The US military rebranded "PSYOP" as "Military Information Support Operations" (MISO) in 2010, partly because the older term had acquired pejorative associations, though "PSYOP" remains in wide use.

Cognitive warfare is the newest and most expansive concept in this vocabulary. NATO's Innovation Hub defines it as "the weaponization of brain science" — operations that seek not merely to change what people believe but to undermine their very capacity to form rational beliefs. Cognitive warfare exploits known psychological vulnerabilities: cognitive biases, emotional reasoning, social identity dynamics, and epistemic dependence on trusted authorities. Where propaganda tries to win arguments, cognitive warfare tries to make argument itself seem impossible or futile — to produce a generalized sense that truth is unknowable and that all information is equally suspect. This "epistemic chaos" strategy has been identified as a signature of contemporary Russian information operations.

Hybrid warfare is a broader strategic concept that integrates military and non-military means, including information operations, in ways that blur the distinction between war and peace. Russia's operations in Ukraine beginning in 2014 are frequently cited as the paradigmatic example: unmarked soldiers ("little green men"), information operations denying Russian involvement, economic coercion, and support for separatist movements were combined in a way that evaded clear classification as war and complicated NATO's collective defense obligations.

The Distinction That Matters: Disinformation vs. Misinformation vs. Malinformation

State-sponsored information operations overwhelmingly deal in disinformation — false or misleading information that is deliberately created and spread to deceive. This distinguishes state-sponsored operations from the broader ecosystem of misinformation (false information spread without intent to deceive, as when someone shares an inaccurate article in good faith) and malinformation (true information weaponized to harm, as when a government leaks genuine compromising information about an opponent). Understanding these distinctions matters because they imply different countermeasures: combating disinformation requires addressing both the supply (the creators) and the demand (the audiences who find it persuasive); combating malinformation requires grappling with privacy and the ethics of legitimate disclosure.

Callout Box: The "Information Environment" as a Strategic Domain

Contemporary military doctrine treats the information environment as a warfighting domain alongside land, sea, air, space, and cyberspace. This conceptual shift has profound implications. It means that shaping how populations understand reality — their governments, their adversaries, their own history — is considered a legitimate and indeed essential military and intelligence activity. It also means that private platforms, media organizations, universities, and civil society institutions are now frontlines in an ongoing conflict they did not choose and may not even recognize.


Section 31.2: Soviet Active Measures — Dezinformatsiya as a Cold War Weapon

The KGB Playbook

The Soviet Union operated what is arguably history's most extensive and sophisticated peacetime influence operation, organized under the concept of aktivnyye meropriyatiya — "active measures." The term encompassed a wide range of operations conducted by the KGB's Service A (active measures) and later the International Information Department of the Communist Party Central Committee. Active measures included:

  • Disinformation (dezinformatsiya): Fabrication and dissemination of false documents, forged letters, and fabricated quotes attributed to real officials.
  • Forgeries: Creation of counterfeit official documents, including fabricated US government directives, military orders, and diplomatic cables.
  • Covert media placement: Planting stories in foreign newspapers, often through paid agents or "agents of influence" among journalists.
  • Front organizations: Creation and funding of organizations that appeared to be genuine civil society groups but were secretly controlled by Soviet intelligence — peace movements, anti-nuclear groups, Third World solidarity organizations.
  • Agent cultivation: Identifying and developing long-term relationships with influential individuals in target countries who could be used to spread Soviet narratives without being identified as Soviet agents.
  • Kompromat: Gathering or fabricating compromising information about adversary officials for use in blackmail or public exposure.

The KGB's active measures apparatus was not a marginal sideshow; it was a central strategic instrument. At its height, the KGB employed an estimated 15,000 officers in its First Chief Directorate (foreign intelligence), with Service A — the active measures unit — numbering in the hundreds. The annual budget for active measures operations ran to hundreds of millions of dollars in Cold War terms.

Yuri Bezmenov's Account

Perhaps the most vivid insider account of the Soviet active measures apparatus comes from Yuri Bezmenov (also known by his pen name Tomas Schuman), a KGB officer who defected to Canada in 1970. In interviews and lectures given throughout the 1980s — most notably a 1984 interview with G. Edward Griffin — Bezmenov described a four-stage model of what he called "ideological subversion":

  1. Demoralization (15-20 years): A long-term process of corrupting the moral fiber and ideological coherence of the target society. This involves promoting nihilism, relativism, and distrust of institutions, and supporting radical movements of both left and right. Bezmenov argued that this phase was essentially complete in the United States by the 1980s.

  2. Destabilization (2-5 years): Rapid exploitation of demoralized conditions to attack key structural elements of the target society — economy, defense, foreign relations. During this phase, crises are manufactured or amplified.

  3. Crisis (6 weeks): A sudden violent change in the power structure of the target country, brought about through manufactured crisis conditions.

  4. Normalization: Once the power structure has been changed, the new arrangement is presented as "normal" — echoing Soviet use of the term "normalization" for the suppression of the Prague Spring.

Bezmenov's framework must be evaluated critically: he was a defector with evident ideological motivations, and his account is pitched at a popular rather than scholarly audience. Nevertheless, his description of the patient, long-term character of Soviet subversion — focused on gradually shifting the epistemological and moral foundations of target societies rather than on specific discrete operations — has proven remarkably resonant in analyses of contemporary Russian information warfare.

Operation INFEKTION: The AIDS Disinformation Campaign

The most consequential and best-documented Soviet active measures operation of the Cold War was Operation INFEKTION (also called Operation DENVER), a multi-year campaign beginning in 1983 that claimed the HIV/AIDS virus had been created by the United States military at Fort Detrick, Maryland, as a biological weapon.

The operation began with a letter published in the Indian newspaper Patriot in July 1983, signed by an anonymous "American scientist," claiming that AIDS was the result of experiments at Fort Detrick. The letter received little initial attention. Two years later, in October 1985, the Soviet newspaper Literaturnaya Gazeta amplified the claim, and from there it spread rapidly through Soviet and Soviet-aligned media globally. By 1987, the story had appeared in more than 80 countries and been translated into 30 languages. A widely distributed pamphlet titled "AIDS: Its Nature and Origin" by Jakob Segal, a retired East German biologist, provided pseudoscientific scaffolding for the claim.

The Soviet government admitted its role only after Mikhail Gorbachev's glasnost reforms made continued deception politically costly. In 1987, at the urging of US officials who presented evidence of Soviet orchestration, the Soviet Union formally acknowledged the campaign and agreed to discontinue it. By then, however, the narrative had achieved genuine independent circulation: a 1992 survey found that 15% of African Americans believed HIV had been created by the government as a weapon to harm Black people, a figure that has remained stubbornly persistent.

Operation INFEKTION is a paradigm case for several reasons:

  • It demonstrates the long-incubation strategy: a narrative planted quietly, then amplified years later through credible-seeming intermediaries.
  • It shows the power of scientific-sounding disinformation: the claim was sophisticated enough to require scientific debunking, not merely political denial.
  • It reveals the resilience of planted narratives: once a disinformation campaign achieves genuine circulation, withdrawing it becomes impossible.
  • It anticipates COVID-19 lab-leak discourse: the structural parallels between INFEKTION and the 2020-era narrative claiming COVID-19 was engineered at the Wuhan Institute of Virology are striking and have been extensively analyzed by intelligence historians.

Callout Box: Dezinformatsiya vs. American Propaganda

Students sometimes ask whether the United States engaged in comparable operations during the Cold War. The answer is yes, though with important distinctions. The CIA operated Radio Free Europe and Radio Liberty, funded anti-communist intellectual and cultural organizations (sometimes without their knowledge), and conducted numerous covert propaganda operations. The Church Committee investigations of 1975-1976 documented extensive domestic surveillance and foreign influence operations. The key analytical distinction is between operations that spread false information (disinformation, which both sides engaged in) and operations that spread accurate information through undisclosed channels (covert propaganda, also practiced by both sides). This parallel does not excuse Soviet or Russian disinformation, but it complicates any simple narrative of one side as purely malign and the other as purely innocent.


Section 31.3: Russia's Modern Information Warfare

The Gerasimov Doctrine: Misread and Actual

Few documents have been more consequentially misread in Western security discourse than a February 2013 article in the Russian military journal Voyenno-promyshlennyi kur'er by General Valery Gerasimov, Chief of the General Staff of the Russian Armed Forces. In the West, "the Gerasimov Doctrine" became shorthand for a Russian strategic framework that treated information operations, economic warfare, and political subversion as primary instruments of modern war — a doctrine supposedly explaining Russian operations in Ukraine and beyond.

The problem is that Gerasimov wrote no such doctrine. As military analyst Mark Galeotti — who coined the phrase "Gerasimov Doctrine" — has publicly acknowledged and lamented, his original 2013 commentary on the Gerasimov article was a misreading. Gerasimov's article was actually a descriptive analysis of what Russia saw happening in Western operations — the Arab Spring, the Libya intervention — rather than a prescriptive Russian strategic plan. Gerasimov was describing what he believed the West was doing, not announcing what Russia intended to do.

This matters because the "Gerasimov Doctrine" framing encouraged analysts to see Russian information operations as flowing from a coherent, centrally planned grand strategy, when the actual evidence suggests something more opportunistic, chaotic, and improvised. Russian information operations are better understood as the product of multiple competing institutions — the FSB, GRU, SVR, Presidential Administration, state media — with overlapping mandates, internal rivalries, and variable coordination than as the execution of a single master plan.

RT (Russia Today) as an Influence Tool

RT (originally Russia Today, rebranded in 2009) is Russia's flagship state-funded international broadcasting network, operating in English, Spanish, Arabic, German, and French. RT's mission has evolved considerably since its 2005 founding. Initially conceived as a soft-power instrument to promote a positive image of Russia abroad — analogous to BBC World Service or France 24 — RT increasingly shifted toward an adversarial posture toward Western governments, promoting narratives of Western hypocrisy, democratic dysfunction, and imperial overreach.

RT's editorial approach has been analyzed extensively. Key characteristics include:

  • False balance: RT gives significant airtime to fringe voices — anti-vaccine advocates, 9/11 conspiracy theorists, anti-NATO activists — presenting them as legitimate counterpoints to mainstream consensus.
  • Whataboutism: Systematic deflection of criticism of Russian actions by pointing to comparable (or allegedly comparable) Western actions. ("You say Russia interfered in elections — what about CIA operations in Latin America?")
  • Amplification of domestic divisions: RT coverage of American social issues — police violence, economic inequality, racial conflict — disproportionately emphasizes conflict and dysfunction.
  • Plausible deniability: RT frequently frames its output as "just asking questions" or presenting "perspectives you won't hear in the mainstream media," maintaining the appearance of journalistic inquiry while promoting specific narratives.

RT's actual viewership in Western countries is modest — it consistently ranks far behind BBC, CNN, and other mainstream international broadcasters in audience size. Its strategic value lies less in direct persuasion of large audiences than in agenda-setting: creating content that can be picked up and amplified by domestic partisan media, fringe websites, and social media users.

The St. Petersburg Internet Research Agency

The most operationally detailed picture of Russian state-sponsored disinformation emerged from the US investigation into 2016 election interference. The Internet Research Agency (IRA), based in St. Petersburg and funded by Yevgeny Prigozhin's Concord Management and Consulting company (with close ties to the Kremlin), operated what the Mueller Report described as a "sweeping and systematic" influence operation targeting American public opinion.

The IRA was founded around 2013 and initially focused on Russian domestic audiences before pivoting heavily toward the United States and other Western countries beginning around 2014-2015. Its operations included:

  • Maintaining hundreds of fake American personas on Facebook, Twitter, Instagram, YouTube, Tumblr, and other platforms
  • Creating and managing Facebook groups that attracted hundreds of thousands of genuine American followers (Black Lives Matter-themed groups, gun rights groups, immigration groups)
  • Purchasing more than $100,000 in targeted Facebook advertising
  • Organizing real-world political events in the United States, including competing rallies in the same city on the same day
  • Creating content specifically designed to suppress turnout among likely Democratic voters, particularly Black voters

The IRA's targeting was remarkably sophisticated. Internal communications (obtained through the Mueller investigation and Senate Intelligence Committee review) reveal that IRA operatives were given detailed target audience profiles and instructed to study American culture, news, and political debates intensively. Operatives worked regular shifts, had productivity quotas (a certain number of posts, comments, and reposts per day), and had their content reviewed by supervisors.

The Firehose of Falsehood

In 2016, RAND Corporation analysts Christopher Paul and Miriam Matthews published an influential analysis of Russian propaganda, titled "The Russian 'Firehose of Falsehood' Propaganda Model." The analysis identified four key characteristics that distinguish contemporary Russian information operations from traditional propaganda:

  1. High volume: Russian state media and affiliated operations produce an enormous quantity of content — far more than any fact-checking or debunking operation can efficiently respond to.
  2. Multi-channel delivery: Content is delivered simultaneously through RT, Sputnik, social media, online comment sections, fringe websites, and other channels, creating an impression of ubiquity.
  3. Rapid, continuous, and repetitive: Rather than a single carefully crafted narrative, Russian operations push multiple, often contradictory narratives simultaneously, with rapid updating.
  4. Unconstrained by consistency or truth: Russian information operations are willing to make claims that directly contradict one another, relying on volume and speed to overwhelm rather than persuade.

The RAND analysts drew on psychological research showing that repetition increases perceived credibility — the "illusory truth effect" — and that epistemic confusion (making people unsure what to believe) can be as strategically valuable as persuasion to specific false beliefs. The firehose strategy aims not primarily to convince but to exhaust: to make critical evaluation so cognitively taxing that audiences retreat to tribal heuristics and partisan information sources.


Section 31.4: China's Information Operations

The "50 Cent Army" (Wumao)

China's domestic information operations are anchored by the wumao dang, the "50-cent party" or "50-cent army," so named for the rumored payment of half a yuan (five mao) per post. Research by Gary King, Jennifer Pan, and Margaret Roberts (published in the American Political Science Review in 2017) drew on leaked Chinese government documents to estimate the operation's actual scale: roughly 448 million fabricated social media posts per year, devoted primarily to cheerleading for the Communist Party and the Chinese state rather than argumentation. The wumao's primary function is not to win arguments but to drown them out: to overwhelm genuinely critical voices with a flood of pro-government content that makes participation in online political discussion seem futile.

Importantly, King, Pan, and Roberts found that wumao posts generally avoid direct engagement with political arguments critical of the Chinese government. Rather than defending government actions, they change the subject: posting cheerleading content, patriotic slogans, and entertainment-adjacent material that shifts the conversation away from politically sensitive topics. This "strategic distraction" model is distinct from the argumentation model typical of Western propaganda.

Overseas Influence Operations Targeting Diaspora

China's overseas information operations differ from Russia's primarily in their target audiences. While Russian operations are aimed broadly at Western democratic publics, China's overseas operations are more specifically targeted at:

  1. Chinese diaspora communities: Operations designed to maintain loyalty to Beijing, suppress support for Taiwanese independence, Hong Kong democracy, and Uyghur and Tibetan rights, and monitor dissidents abroad. The CCP's United Front Work Department (UFWD) coordinates much of this activity through Chinese-language media outlets, community associations, and WeChat group networks.

  2. Host country political systems: Influence operations targeting political figures, academic institutions, and think tanks in Australia, Canada, the United States, and Europe. The Australian Strategic Policy Institute (ASPI) has documented extensive Chinese government operations targeting Australian politicians and Chinese-Australian communities.

  3. Global South audiences: Coordinated narrative campaigns promoting China's development model, Belt and Road Initiative, and COVID-19 response through state-owned Xinhua News Agency, China Global Television Network (CGTN), China Radio International, and networks of affiliated local media outlets.

Taiwan Strait Narratives

Taiwan represents perhaps the most active and consistent focus of Chinese information operations. The CCP's information objectives regarding Taiwan include:

  • Undermining confidence in Taiwan's democratic institutions
  • Promoting narratives favorable to "peaceful reunification"
  • Discouraging US and allied military commitment to Taiwan's defense
  • Suppressing international recognition of Taiwan's de facto independence

Taiwan's government and civil society have developed relatively sophisticated responses, including a dedicated military cyber command, close cooperation between government and civil-society fact-checking organizations (notably the Cofacts platform and the Taiwan FactCheck Center), and systematic media literacy education. Taiwan has become something of a laboratory for democratic resilience to information operations, and its experiences are closely studied by other democracies.

Xinjiang Narrative Management

The Chinese government has invested heavily in shaping global narratives regarding its internment of Uyghur and other Turkic Muslim populations in Xinjiang. Key tactics include:

  • Producing and distributing "documentary" content presenting Xinjiang camps as voluntary vocational training centers
  • Recruiting foreign journalists and influencers for supervised tours of Xinjiang facilities
  • Coordinating diplomatic pushback through the UN and other multilateral forums
  • Suppressing academic and journalistic research through institutional pressure, visa denials, and threats to researchers' contacts inside China

The Xinjiang information operation has been partially effective in shaping discourse in some Global South countries, where China's development assistance and diplomatic relationships provide significant leverage over media coverage.


Section 31.5: Iran and Other State Actors

Iran's Influence Operations

Iran's information operations have received significantly less Western analytical attention than Russia's or China's, despite being extensive and consequential. Iran's influence operations are coordinated primarily through the Islamic Revolutionary Guard Corps (IRGC) and the Ministry of Intelligence and Security (MOIS), with additional activity by state media including Press TV and Al-Alam.

The most extensively documented Iranian influence operation is the Liberty Front Press network, first exposed by the cybersecurity firm FireEye in August 2018, with follow-on analysis by the Atlantic Council's Digital Forensic Research Lab. Liberty Front Press created a network of fake news websites, social media accounts, and email newsletters that appeared to originate in various countries (including the United States, United Kingdom, and Middle Eastern nations) but promoted Iranian government narratives. Facebook, Twitter, and Google removed the network in August 2018, one of the first major platform enforcement actions specifically against Iranian state-sponsored influence operations.

Iran's influence operations serve several distinct strategic objectives:

  • Regional hegemony: Promoting narratives favorable to Iranian influence in Lebanon, Iraq, Syria, Yemen, and other parts of the Middle East
  • Anti-Israel and anti-American narratives: Promoted across diverse audiences globally
  • Domestic legitimacy: Suppressing and discrediting opposition voices both domestically and among Iranian diaspora communities
  • Nuclear diplomacy support: Shaping perceptions of Iran's nuclear program and international negotiations

Saudi Arabia's Information Operations

Saudi Arabia's information operations are closely tied to its rivalry with Iran and its management of domestic political opposition, and they drew intense scrutiny following the 2018 assassination of journalist Jamal Khashoggi. Researchers at Oxford's Computational Propaganda Project and the DFRLab have documented Saudi state-linked networks engaged in:

  • Promoting pro-government narratives on Twitter (a platform historically significant in Arab political discourse)
  • Discrediting opposition voices, particularly those associated with the Muslim Brotherhood or Qatar
  • Building favorable coverage of Saudi economic reforms (Vision 2030) and cultural liberalization initiatives
  • Conducting targeted harassment campaigns against critics of the Saudi royal family

North Korea

North Korea's information operations are primarily defensive — focused on suppressing information flow into North Korea and managing the narrative about North Korea in external media — rather than offensive influence campaigns targeting foreign democracies. However, North Korea does conduct sophisticated cyber operations, including the Lazarus Group's widespread hacking activities, which have an information warfare dimension in their use of stolen data and their capacity to disrupt media organizations that publish critical coverage.

Smaller State Actors

The toolkit of state-sponsored influence operations is not exclusive to great powers. Research has documented influence operations by Ethiopia, Venezuela, the United Arab Emirates, and numerous other states, demonstrating that digital technology has dramatically lowered the barriers to entry for information warfare. A state with a modest budget and some technically skilled operatives can now conduct influence operations that would have required the resources of a superpower's intelligence apparatus thirty years ago.


Section 31.6: The Tactics Toolkit

Troll Farms

Troll farms are organizations — often semi-commercial, with the characteristics of a call center or content production house — that employ workers to create and manage fake online personas, produce content, and engage in coordinated online activity designed to shape public discourse. The IRA's St. Petersburg operations are the best-documented example, but troll farms have been documented in numerous countries including China, Iran, Ethiopia, and Mexico.

A typical troll farm operation involves:

  • Creation of "sockpuppet" accounts with elaborate backstories, authentic-seeming profile pictures (often generated using AI), and years of relatively normal social media activity before being "activated" for operational purposes
  • Coordination of activity to create the appearance of organic public opinion — coordinated posting, liking, sharing, and commenting that makes specific viewpoints appear widely held
  • Platform arbitrage — using platforms with weaker moderation (Telegram, Gab, certain subreddits) to develop narratives before attempting to move them to larger platforms

Fake Personas and Synthetic Identities

Modern influence operations extensively use synthetic personas — entirely fabricated online identities whose biographical details, photographs, and digital histories are manufactured. The widespread availability of generative adversarial network (GAN) technology since approximately 2018 has made it trivially easy to produce photorealistic faces of people who do not exist, eliminating one of the previous key methods for detecting fake accounts (reverse image search). AI-generated text has further lowered the cost of maintaining convincing personas at scale.

Hack-and-Leak Operations

Hack-and-leak combines cybersecurity intrusion with information operations: state-sponsored hackers obtain private communications or documents from adversary organizations and selectively release them (often through intermediaries to maintain deniability) to maximum political effect. The two most consequential hack-and-leak operations in recent history were:

  • The 2016 DNC and Podesta hacks, with materials released through WikiLeaks
  • The 2017 "Macron Leaks" operation, in which fabricated and genuine documents from the En Marche campaign were released 44 hours before France's presidential election (see Chapter 32)

Hack-and-leak is particularly effective because it combines the credibility of genuine documents with the selectivity of disinformation: authentic materials are released selectively, stripped of context, or mixed with fabrications, leaving the target struggling to rebut the fabricated elements without appearing to deny the genuine ones, even when most of the released material is accurate.

Narrative Laundering: The Fringe-to-Mainstream Pipeline

Narrative laundering — also called the "fringe-to-mainstream pipeline" — is the process by which narratives that originate in state-sponsored or fringe-extremist sources acquire mainstream credibility through a series of intermediate amplification steps. The typical pathway:

  1. A claim is originated or amplified by a state-sponsored source (Russian state media, a troll farm account)
  2. The claim is picked up by domestic fringe websites and discussion forums (4chan, extremist subreddits, partisan blogs)
  3. Partisan aggregator websites and hyperpartisan Facebook pages amplify the claim to larger but still politically self-selected audiences
  4. Mainstream journalists, investigating the phenomenon of the claim's viral spread, write about it — which further amplifies it
  5. The claim enters mainstream political discourse, often through politicians or media figures who reference it without full sourcing

Renee DiResta and colleagues at the Stanford Internet Observatory have extensively documented this pipeline. A key finding is that the pipeline can operate regardless of whether individual amplifiers at each stage know they are transmitting foreign disinformation — the structural dynamics of the information ecosystem can launder state-sponsored narratives without any coordinating conspiracy among domestic amplifiers.

Strategic Amplification of Domestic Divisions

Perhaps the most important operational insight from analyzing state-sponsored disinformation is that its primary target is not belief in specific false claims but the amplification and radicalization of existing domestic divisions. Russian, Chinese, and Iranian information operations do not typically introduce foreign narratives into target societies; they identify, amplify, and accelerate narratives that are already present in the target society's information ecosystem.

The IRA's operations in the United States, for example, were most active around authentic American social conflicts — police violence, immigration, racial inequality — rather than purely fabricated issues. The operation worked because it could attach itself to genuine grievances and amplify them in ways that deepened polarization. This means that any response to state-sponsored disinformation that focuses exclusively on the foreign element will miss the most important part of the problem: the domestic social and political conditions that make societies vulnerable to this kind of manipulation.


Section 31.7: NATO and Western Counter-Measures

East StratCom Task Force

The East StratCom Task Force was established by the European External Action Service (EEAS) in 2015, following Russia's disinformation campaigns around the Ukraine conflict. The Task Force operates the EUvsDisinfo database, which has catalogued thousands of individual cases of pro-Kremlin disinformation since 2015, providing analysis, source tracking, and debunking resources. The Task Force also produces the "Disinformation Review" newsletter and conducts media briefings.

East StratCom represents a substantial institutional commitment to systematic counter-disinformation, but it operates under real constraints: a small staff, a limited budget, and complex political dynamics among EU member states (several of which have governments with varying degrees of sympathy for Russian narratives). Its effectiveness lies primarily in building a documented record and raising awareness rather than in real-time intervention.

NATO StratCom COE

The NATO Strategic Communications Centre of Excellence (StratCom COE), based in Riga, Latvia, is an accredited NATO Centre that conducts research on strategic communications challenges facing the Alliance. Its research portfolio includes analysis of Russian information operations, social media manipulation, hybrid warfare, and counter-disinformation methodologies. The StratCom COE's published research has been highly influential in NATO policy and in the broader academic field.

A key StratCom COE research contribution has been measuring the scale of coordinated inauthentic behavior on social platforms. In a 2019 experiment, researchers purchased bot services targeting NATO-related content and found it cheap and easy to acquire fake followers, fake likes, and coordinated amplification from commercial providers — demonstrating that the infrastructure of influence operations is widely available commercially, not the exclusive province of state intelligence services.

The Helsinki Model: Prebunking and Media Literacy

Finland has emerged as something of a model for societal resilience to information operations, sometimes called the "Helsinki model." Several elements distinguish Finland's approach:

  1. Media literacy in schools: Finland introduced comprehensive media literacy education into its national curriculum, beginning in primary school. Students are explicitly taught to identify manipulation tactics, evaluate sources, and understand how emotional responses can be exploited by bad-faith communicators.

  2. Cross-sector coordination: Finland's approach involves close coordination between government, media, academia, and civil society — not a purely government-led program, which would risk the appearance of state propaganda.

  3. Historical memory: Finland's history of Soviet pressure and the concept of "Finlandization" (a smaller state's accommodation of a powerful neighbor's demands in order to preserve formal independence) gives Finnish society a cultural inoculation against Russian influence operations that other countries lack.

  4. Prebunking over debunking: Rather than focusing primarily on fact-checking specific false claims (which risks amplifying them), Finland's approach emphasizes inoculation theory — teaching people to recognize manipulation tactics in general, so that they are resistant to specific applications of those tactics when they encounter them.

Research by Sander van der Linden and colleagues has demonstrated that prebunking — exposing people to weakened forms of manipulative techniques with explicit explanation of the manipulation — produces significantly more durable resistance than debunking after the fact.


Section 31.8: The Domestic-Foreign Blurring

When Foreign Operations Amplify Domestic Grievances

One of the most analytically challenging aspects of contemporary information warfare is the degree to which foreign influence operations and domestic political activity have become intertwined in ways that resist clean separation. The IRA's operations in the United States were most effective not as standalone propaganda but as amplifiers of existing domestic partisan media and political movements. Research by Joshua Tucker, Andrew Guess, and colleagues found that the IRA's Twitter activity, while extensive, reached primarily audiences that were already highly politically engaged — people whose prior beliefs made them receptive to the amplified content.

This raises a fundamental question: if foreign information operations achieve their effects primarily through domestic amplification — through real American citizens sharing, discussing, and acting on content that originated in or was boosted by Russian troll farms — where does foreign interference end and domestic political activity begin? There is no clean answer. The same content, shared by a Russian operative and then by a genuine American conservative, has the same effect on its ultimate audience regardless of its origin.

The Question of Who Is Responsible

The domestic-foreign blurring creates genuine legal, ethical, and analytical difficulties:

Legally: US law prohibits foreign nationals from spending money to influence US elections (Federal Election Campaign Act) and requires agents of foreign governments to register under the Foreign Agents Registration Act (FARA). But when foreign content is amplified by domestic actors who are unaware of its origin, no law is technically broken by the domestic amplifiers.

Analytically: Attribution of specific effects to foreign vs. domestic sources is genuinely difficult. Studies of the 2016 election (notably the Twitter analyses by Tucker and colleagues discussed above) found that exposure to Russian IRA content was actually quite limited among most Americans, concentrated among heavy news consumers who were already highly partisan. This finding complicates narratives that attribute significant electoral effects to foreign operations.

Ethically: The domestic-foreign blurring raises difficult questions about political responsibility. Domestic political actors who knowingly or unknowingly amplify foreign disinformation bear some moral responsibility for the effects, even if they cannot easily be held legally liable. The distinction between "knowingly" and "unknowingly" is itself contested: some evidence suggests that certain domestic political actors were aware they were amplifying foreign content and found it strategically useful.


Section 31.9: Attribution Challenges

Technical Attribution

Attributing an influence operation to a specific state actor is technically and politically difficult in ways that systematically favor the attacker. Technical evidence for attribution includes:

  - Infrastructure analysis: Tracing the servers, domain registrations, and network infrastructure used in an operation to known state-linked or state-controlled assets
  - Language and cultural analysis: Identifying linguistic patterns, cultural references, or operational errors (use of Cyrillic characters, timezone patterns in posting activity, idiomatic expressions) that suggest specific national origins
  - TTP matching: Comparing the "tactics, techniques, and procedures" (TTPs) of an operation to the documented TTPs of known state actors
  - Financial tracing: Following money flows from identified operations back to state-linked funding sources
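One of the signals listed above, timezone patterns in posting activity, can be sketched concretely: if an account's posts (recorded in UTC) cluster in a consistent daily band, shifting that band onto plausible local working hours yields candidate operator timezones. The posting hours and working-hours window below are invented for illustration; this is a weak signal that analysts combine with others, never conclusive alone.

```python
def candidate_timezones(post_hours_utc, work_start=8, work_end=18):
    """Score each UTC offset by how many posts would fall inside
    local working hours under that offset. The highest-scoring
    offsets are candidate operator timezones."""
    scores = {}
    for offset in range(-12, 13):
        local = [(h + offset) % 24 for h in post_hours_utc]
        scores[offset] = sum(work_start <= h < work_end for h in local)
    best = max(scores.values())
    return sorted(o for o, s in scores.items() if s == best)

# Hypothetical posting hours (UTC) clustered around 05:00-14:00 UTC,
# consistent with an 08:00-17:00 workday in UTC+3 (e.g. Moscow time)
hours = [5, 6, 6, 7, 8, 9, 10, 11, 12, 13, 13, 14]
print(candidate_timezones(hours))
# → [3]
```

Operational-hours analysis of this kind featured in several public attribution reports on troll farm activity, precisely because salaried workers posting on a factory schedule behave differently from organic users.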

None of these methods provides conclusive proof individually, and even in combination they establish probable attribution rather than certainty. Attribution is also subject to false flag operations — operations deliberately designed to appear to originate from a different actor.
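TTP matching, the third signal in the list above, can likewise be sketched as set overlap between an operation's observed techniques and documented actor profiles. The technique labels and actor profiles below are invented placeholders, not real threat intelligence; production systems use structured taxonomies and weighted scoring rather than raw set similarity.

```python
def jaccard(a, b):
    """Jaccard similarity between two technique sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_actors(observed, profiles):
    """Rank documented actor profiles by overlap with the observed
    operation's tactics, techniques, and procedures (TTPs)."""
    return sorted(
        ((name, jaccard(observed, ttps)) for name, ttps in profiles.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical TTP labels and actor profiles -- illustrative only
profiles = {
    "actor_x": {"gan_avatars", "hashtag_flooding", "persona_farming"},
    "actor_y": {"hack_and_leak", "forged_documents", "cutout_release"},
}
observed = {"gan_avatars", "persona_farming", "platform_arbitrage"}
print(rank_actors(observed, profiles))
# → [('actor_x', 0.5), ('actor_y', 0.0)]
```

Note how the sketch embodies the caveat in the text: a high score indicates resemblance to a documented actor, which a false-flag operation can deliberately manufacture by imitating that actor's known TTPs.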

The Standards of Evidence Debate

A significant tension exists between the evidentiary standards appropriate for different purposes:

Intelligence community standards allow for attribution based on classified evidence that cannot be publicly disclosed, producing "high confidence" assessments that cannot be publicly defended. The January 2017 Intelligence Community Assessment (ICA) attributing 2016 election interference to Russia operated in this framework — the most sensitive evidence was classified and therefore inaccessible to public verification.

Academic and journalistic standards require publicly verifiable evidence and appropriate acknowledgment of uncertainty. The impressive work of researchers at the Stanford Internet Observatory, Atlantic Council's Digital Forensic Research Lab, and Oxford's Computational Propaganda Project operates at this standard, producing detailed public analyses of influence operations with explicit documentation of methods and evidence.

Legal standards — required for criminal prosecution or civil liability — are the most demanding, requiring evidence admissible in court and proof beyond reasonable doubt (in criminal proceedings). The Mueller indictments of IRA and GRU operatives were legally sound but are effectively unenforceable since the indicted parties are in Russia.

Policy standards — the standard appropriate for government action in response to foreign interference — are contested. How much confidence is needed to justify sanctions, diplomatic expulsions, or counter-operations? Different actors answer this question differently, and the answer has significant strategic implications: too high a bar incentivizes sophisticated attackers to maintain plausible deniability; too low a bar risks unjust attribution and escalation.

Callout Box: The "Definitive Attribution" Fallacy

Public discourse frequently demands "definitive proof" before accepting attribution of influence operations to state actors. This demand, while understandable, misunderstands how intelligence and forensic analysis work. Attribution is always probabilistic, based on the convergence of multiple evidence streams, none individually conclusive. Demanding certainty that can never be achieved functions as a practical strategy for indefinitely deferring political response. The appropriate question is not "is attribution certain?" but "is the available evidence sufficient to justify the proposed response, given its costs and risks?"


Key Terms

Active measures (aktivnyye meropriyatiya): Soviet/Russian term for influence operations including disinformation, forgeries, front organizations, and agent cultivation, intended to shape foreign political environments.

Cognitive warfare: Operations designed to undermine the target population's capacity for rational belief formation, rather than merely to promote specific false beliefs.

Dezinformatsiya: Russian term for disinformation; the deliberate creation and spread of false information to deceive adversaries.

Firehose of Falsehood: RAND Corporation term for a propaganda strategy characterized by high volume, multi-channel delivery, and indifference to consistency or truth.

Hack-and-leak: An influence operation combining cybersecurity intrusion with selective public release of stolen materials for maximum political effect.

Hybrid warfare: A strategic approach integrating military and non-military means — including information operations — in ways that blur distinctions between war and peace.

Information operations (IO): Coordinated efforts to shape the information environment of target audiences, including both offensive and defensive activities.

Narrative laundering: The process by which narratives originating in fringe or state-sponsored sources acquire mainstream credibility through intermediate amplification steps.

Prebunking: The strategy of exposing audiences to weakened forms of manipulative techniques with explicit explanation, before they encounter actual manipulative content, as a form of psychological inoculation.

Troll farm: An organization employing workers to create and manage fake online personas and produce coordinated content designed to shape public discourse.

Wumao (50-cent army): Chinese state-organized network of individuals producing pro-government social media content, named for the alleged payment per post.


Discussion Questions

  1. Yuri Bezmenov described a four-stage model of Soviet subversion — demoralization, destabilization, crisis, and normalization — beginning with the "demoralization" of target societies. To what extent does this framework accurately describe the strategic logic of contemporary Russian information operations? What does it get right and what might it miss?

  2. The "Firehose of Falsehood" strategy deliberately prioritizes volume and speed over consistency and truth. How should democratic societies and media institutions adapt their information processing and verification practices in response to this strategy?

  3. The line between state-sponsored foreign disinformation and domestic political speech has become increasingly difficult to draw. Should we prioritize enforcement against clearly foreign-origin content, even if this means accepting significant manipulation of domestic discourse? Or should the focus be on the content itself regardless of origin?

  4. China's wumao operations primarily aim to "drown out" political conversation rather than to win arguments. How does this "strategic distraction" model differ from the Russian "strategic confusion" model, and what different countermeasures might be appropriate for each?

  5. Attribution of influence operations is always probabilistic, not certain. What evidentiary standard should governments apply before taking diplomatic or other action in response to suspected foreign information operations? Who should set that standard?

  6. The "Helsinki model" of societal resilience combines media literacy education, cross-sector coordination, and a culture shaped by historical experience of foreign pressure. Which elements are transferable to other democratic contexts, and which are specific to Finland's unique situation?

  7. Operation INFEKTION (the AIDS disinformation campaign) and contemporary COVID-19 lab-leak discourse share structural features. Does this parallel mean that all lab-leak discourse is disinformation? How should we distinguish legitimate scientific inquiry from disinformation that uses scientific-sounding claims?


Summary

State-sponsored disinformation and information warfare represent one of the defining strategic challenges of the digital age. This chapter has traced the history from Soviet active measures — sophisticated, institutionalized, and consequential — through the contemporary operations of Russia, China, Iran, and smaller state actors. The tactics have evolved dramatically with digital technology: troll farms, AI-generated fake personas, coordinated inauthentic behavior on social platforms, and hack-and-leak operations have replaced the Cold War's forged documents and covert media placements. But the underlying strategic logic — divide, demoralize, confuse, undermine trust — has deep roots.

Several key insights should guide students going forward: First, the most effective information operations do not introduce foreign ideas into target societies but amplify existing domestic tensions. Second, attribution is always probabilistic, and the demand for certainty is itself a strategy for avoiding response. Third, the distinction between foreign operations and domestic political activity has become genuinely blurred, complicating both legal and analytical responses. Fourth, effective countermeasures require addressing both the supply of disinformation and the social and political conditions that make it effective.

The chapter on election interference (Chapter 32) extends this analysis to the specific domain of democratic processes, where the stakes — the integrity of the mechanisms through which citizens exercise collective self-governance — are especially high.