Case Study 11.2: Operation Secondary Infektion — Russian Disinformation Across 30 Countries

Overview

In June 2020, the EU DisinfoLab and the Stanford Internet Observatory jointly published a comprehensive investigation into what they termed "Operation Secondary Infektion" — one of the most ambitious, sustained, and methodologically sophisticated state-linked disinformation campaigns ever documented. The operation had been running since at least 2014, deployed across approximately 30 countries and 19 languages, and involved the production of thousands of pieces of fabricated and manipulated content targeting audiences from Ukraine to France, from the United Kingdom to the United States.

Unlike many documented influence operations that relied primarily on social media amplification through inauthentic accounts, Secondary Infektion's primary technique was the fabrication and attempted laundering of content into legitimate media outlets — creating false documents, staging fake personas to pitch fabricated stories to real journalists, and attempting to inject disinformation into established information channels.

Secondary Infektion is named after the Soviet-era Operation INFEKTION — the KGB-originated campaign in the 1980s that spread the claim that the US government had created the AIDS virus. The "Secondary" designation reflects its nature as a successor to and evolution of Cold War disinformation techniques.


Learning Objectives for This Case Study

By analyzing this case study, students will be able to:

  1. Apply the full information disorder taxonomy to a documented state-linked disinformation campaign
  2. Analyze the strategic logic of disinformation designed to undermine democratic institutions
  3. Evaluate the methodological approach used by researchers to identify and attribute information operations
  4. Understand how Cold War disinformation techniques evolved for the digital environment
  5. Consider the policy implications of state-sponsored disinformation for democratic governance


Background: The EU DisinfoLab Investigation

Research Methodology

The EU DisinfoLab and Stanford Internet Observatory investigation used a combination of:

Technical analysis: Examining network metadata, posting patterns, shared IP addresses, coordinated account behavior, and platform-specific technical signals that identify inauthentic behavior.

Content analysis: Systematic review of content for tell-tale markers of manufactured origin: distinctive translation patterns suggesting non-native speakers, copy-paste errors revealing shared content templates, metadata indicating batch creation of documents, and narrative patterns inconsistent with genuine grassroots origin.

Open source intelligence (OSINT): Cross-referencing identified content against other documented information operations, tracking the movement of specific narratives across platforms and countries, and identifying connections between seemingly unrelated content clusters.

Human source reporting: Interviewing journalists who had been targeted by the operation's attempts to seed fabricated stories in legitimate outlets.
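The behavioral signals described above (coordinated posting times, duplicated content templates) lend themselves to simple detection heuristics. The following stdlib-only sketch is an illustration of the general technique, not the investigators' actual tooling; the field names, time window, and threshold are assumptions. It flags account pairs that repeatedly post identical text within seconds of each other:

```python
from collections import defaultdict
from itertools import combinations

def find_coordinated_accounts(posts, window_seconds=60, min_shared=3):
    """Flag account pairs that repeatedly post identical text within a
    short time window, a crude signal of coordinated behavior."""
    # Group posts by exact text so near-simultaneous duplicates surface.
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"]].append((post["account"], post["timestamp"]))

    # Count, for each account pair, how often they co-posted the same text.
    pair_hits = defaultdict(int)
    for copies in by_text.values():
        for (a1, t1), (a2, t2) in combinations(copies, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                pair_hits[tuple(sorted((a1, a2)))] += 1

    return {pair for pair, hits in pair_hits.items() if hits >= min_shared}

# Toy data: two accounts repeatedly mirror each other within seconds.
posts = [
    {"account": "acct_a", "text": "Leaked memo reveals secret plan", "timestamp": 100},
    {"account": "acct_b", "text": "Leaked memo reveals secret plan", "timestamp": 130},
    {"account": "acct_a", "text": "EU on brink of collapse", "timestamp": 500},
    {"account": "acct_b", "text": "EU on brink of collapse", "timestamp": 510},
    {"account": "acct_a", "text": "Hidden NATO agenda exposed", "timestamp": 900},
    {"account": "acct_b", "text": "Hidden NATO agenda exposed", "timestamp": 905},
    {"account": "acct_c", "text": "Local weather update", "timestamp": 901},
]
print(find_coordinated_accounts(posts))  # {('acct_a', 'acct_b')}
```

Real detection pipelines combine many such weak signals (shared infrastructure, metadata trails, linguistic markers) rather than relying on any single heuristic.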

The investigation ultimately identified more than 2,500 unique pieces of content linked to the operation, across at least 30 countries and 19 languages, a scale unprecedented among publicly documented influence operations.

Key Research Findings

The core findings of the investigation were:

  1. Longevity: The operation had been running since at least 2014, predating and overlapping with the better-known Internet Research Agency operations identified in the Mueller investigation. This longevity suggests a sustained organizational commitment, not a reactive or opportunistic operation.

  2. Geographic scope: Secondary Infektion targeted audiences in Eastern Europe (Ukraine, Belarus, Poland, Baltic states), Western Europe (Germany, France, UK, Spain, Italy), and North America. The geographic breadth suggests strategic rather than purely tactical intent — the goal was not to influence a single election but to undermine democratic institutions broadly.

  3. Media laundering as primary technique: Unlike the IRA's primary reliance on social media amplification, Secondary Infektion's signature technique was attempting to inject content into legitimate media. The operation created fake documents, staged fake online personas to pitch fabricated stories to journalists, and attempted to plant content in mainstream outlets where it would carry the credibility of established media.

  4. Low amplification success: Despite the operation's scale and sophistication, most of its content achieved very little organic amplification. The vast majority of Secondary Infektion content was shared rarely or not at all. This finding has significant theoretical implications for how we understand the impact of state-sponsored disinformation.

  5. Attribution signals: While the investigation did not identify specific individuals responsible for the operation, multiple technical and content markers pointed to a Russia-linked origin. These included Russian-language source documents visible in metadata, similarities to other documented Russian information operations, and narrative alignment with known Russian foreign policy objectives.


Taxonomy Analysis

Primary Category: Disinformation

Secondary Infektion is unambiguously an example of disinformation at the macro level. The operation was:

  • Deliberately designed to deceive: Its core techniques involved fabricating documents, creating false personas, and constructing false narratives. This was not misinformation (accidental false content) but deliberately crafted deception.

  • Strategically motivated: The operation served identifiable state-level interests — undermining Western unity, weakening support for Ukraine, delegitimizing the EU and NATO, and creating social division in target countries. These are the objectives of a strategic disinformation operation, not opportunistic misinformation.

  • Organized and sustained: The operation's longevity (six-plus years) and geographic breadth required organizational infrastructure (funding, personnel, coordination) on a scale consistent only with state-level actors or comparably resourced organizations.

Applying the Seven-Type Taxonomy

Secondary Infektion's content portfolio included examples of most of the seven content types, deployed in different contexts and for different target audiences.

Type 3: Imposter Content (Primary technique)

The signature technique of Secondary Infektion was imposter content — fabricating documents and content purporting to come from legitimate sources:

  • Fabricated diplomatic documents: The operation created fake diplomatic correspondence, intelligence reports, and policy documents attributed to real government agencies, international organizations, and political figures. These documents were then pitched to journalists and posted to obscure websites where they might be discovered by researchers or journalists seeking primary sources.

  • Impersonation of media outlets: The operation created content mimicking the visual style and brand identity of established news organizations, including creating fake websites with names similar to legitimate outlets.

  • False academic and expert commentary: The operation created fake op-eds and commentary attributed to real academics and policy experts, submitted to legitimate outlets for publication. In several documented cases, these submissions were accepted before the fraud was detected.

Type 4: Fabricated Content

Much of Secondary Infektion's content was entirely fabricated:

  • Invented quotations attributed to real politicians, including fabricated statements about Ukraine, the EU, and NATO
  • False claims about military activities, troop movements, and diplomatic agreements
  • Invented "leaked" documents purporting to reveal hidden agendas of Western governments and institutions

The fabrication was often technically sophisticated: documents were formatted to match genuine official communications, complete with authentic-looking letterheads, signature blocks, and official language registers.

Type 5: False Context

Genuine events and real statements were frequently stripped of their context and repackaged to support false narratives:

  • Genuine footage of military exercises was repurposed as evidence of aggressive military posturing
  • Real statements by politicians, taken from different contexts, were assembled to suggest positions they had never held
  • Accurate statistics about political support or economic conditions were reframed with false interpretive context

Type 2: Misleading Content

The operation's narrative construction frequently relied on genuinely accurate information — real tensions, real disagreements, real grievances — presented in ways that amplified conflict and division:

  • Real political disagreements within EU member states about asylum policy, defense spending, or trade relations were presented as evidence of impending EU collapse
  • Genuine corruption scandals in target countries were amplified and framed to delegitimize not just the specific corrupt officials but democratic institutions broadly
  • Real crime statistics were presented without demographic or comparative context to stoke fears about specific minority or immigrant communities

Type 7: False Connection

Headlines and framing elements frequently made claims not supported by the linked content:

  • Misleading titles for fabricated documents that implied far more dramatic revelations than the documents actually contained
  • Clickbait-style summaries of genuine events that exaggerated their significance or implications

Strategic Logic of the Operation

The "Fire Hose of Falsehood" Approach

Secondary Infektion illustrates the "fire hose of falsehood" strategy identified by RAND Corporation researchers Christopher Paul and Miriam Matthews, and described in similar terms by Pomerantsev and Weiss: not a strategy of creating specific false narratives to be believed, but a strategy of saturating the information environment with so many competing claims that audiences lose confidence in the possibility of knowing truth at all.

The operation's breadth — 30 countries, 19 languages, thousands of content pieces — was not driven by the expectation that each piece would go viral and change minds. Most pieces did not go viral. The strategic logic was different:

  1. Creating the appearance of widespread organic concern: Multiple apparently independent content sources raising similar concerns create the impression of genuine grassroots dissatisfaction, even if each individual piece reaches a small audience.

  2. Laundering through legitimate channels: If even a small proportion of fabricated content successfully penetrates legitimate media outlets, that content acquires the credibility of those outlets and reaches their audiences.

  3. Providing ammunition: Content that does not go viral organically can still provide material for sympathetic political actors, partisan commentators, and ideological communities who amplify it within their networks.

  4. Eroding trust in information environments: Even failed disinformation — content that is identified and debunked — contributes to information environment degradation if the existence of the operation is publicized in ways that generate generalized suspicion about all online content.

Targeting Vulnerable Narratives

Secondary Infektion demonstrated sophisticated understanding of existing political tensions in target countries, consistently targeting narratives where genuine grievances or genuine uncertainties could be exploited:

Ukraine: The operation heavily targeted Ukrainian domestic politics, amplifying genuine corruption concerns and regional tensions to delegitimize the post-Maidan government and fracture support for the EU Association Agreement.

Germany: Content targeted tensions over refugee policy, exploiting genuine public debate about the 2015-2016 refugee crisis to amplify anti-EU and anti-government sentiment.

France: Content targeted the 2017 presidential election, amplifying concerns about Emmanuel Macron's ties to financial institutions and creating false documents attributed to his campaign.

United Kingdom: Content exploited Brexit debates, simultaneously targeting both Leave and Remain audiences with content designed to maximize mutual contempt and delegitimize the referendum process.

This targeting sophistication — adapting content to the specific vulnerabilities of each target country — distinguishes Secondary Infektion from more blunt disinformation operations that apply identical content across different contexts.


Attribution and Evidence

What the Evidence Shows

The EU DisinfoLab investigation was careful to make its attribution claims proportionate to its evidence. The investigation found:

Strong evidence of coordinated, inauthentic origin: The technical and content markers of the operation — shared writing patterns, coordinated posting times, metadata trails, common infrastructure — clearly established that the content was produced by an organized entity rather than independent actors.

Strong alignment with Russian foreign policy objectives: The operation's target selection, narrative content, and timing consistently aligned with known Russian foreign policy interests and talking points. No other state actor had both the motivation and apparent capability for this operation.

Insufficient evidence for direct Kremlin attribution: The investigation did not find direct evidence linking specific Russian government agencies or individuals to the operation. This is characteristic of professional intelligence operations designed for plausible deniability.

Similarities to documented Russian operations: Multiple technical and content markers were shared with other documented Russian information operations, including the Internet Research Agency's social media campaigns and the GRU's hack-and-leak operations.

The investigation concluded that the operation was "very likely" Russia-linked, while acknowledging that this conclusion rested on circumstantial evidence — strong circumstantial evidence, but not the direct documentary evidence that would enable unambiguous attribution.
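One of the content markers cited above, shared writing patterns across supposedly independent documents, can be approximated with character n-gram ("shingle") similarity. This is an illustrative sketch of the general technique, not the investigation's actual method; the shingle size and example texts are assumptions:

```python
def char_shingles(text, n=5):
    """Set of overlapping character n-grams; robust to small edits."""
    t = " ".join(text.lower().split())  # normalize case and whitespace
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Two "independent" documents sharing a template versus an unrelated one.
doc_a = "A leaked diplomatic cable reveals secret plans to divide Europe."
doc_b = "A leaked diplomatic cable reveals secret plans to divide the alliance."
doc_c = "The local council approved new funding for public libraries."

sim_ab = jaccard(char_shingles(doc_a), char_shingles(doc_b))
sim_ac = jaccard(char_shingles(doc_a), char_shingles(doc_c))
print(f"templated pair: {sim_ab:.2f}, unrelated pair: {sim_ac:.2f}")
```

High similarity between items posted by unconnected personas is, like any single signal, only suggestive; attribution claims rest on the convergence of many such markers.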

The Attribution Problem

Secondary Infektion illustrates the fundamental challenge of attributing disinformation operations in a world where states design operations for maximum deniability:

Plausible deniability by design: Professional intelligence operations use layers of proxies — shell companies, cutout organizations, foreign freelancers, compromised accounts — specifically to make attribution difficult.

Dual-use infrastructure: Technical infrastructure used for disinformation operations is often shared with legitimate uses, making infrastructure attribution ambiguous.

Mimicry of organic content: Operations that successfully mimic organic grassroots content are designed to be indistinguishable from genuine independent actors.

Legal evidentiary standards: The standard of evidence required for legal attribution (criminal prosecution) is far higher than the standard acceptable for journalistic or academic attribution. Most publicly available evidence, while compelling for research conclusions, would not meet legal evidentiary standards.


Evolution from Cold War Techniques

Continuities with Soviet Active Measures

Secondary Infektion represents the digital evolution of Soviet "active measures" (aktivnyye meropriyatiya), a broad category of covert influence operations that included fabrication of documents, creation of front organizations, manipulation of foreign media, and targeted character assassination.

The KGB's Department A (formerly Department D) produced fabricated documents attributed to Western governments throughout the Cold War. Among the most documented:

  • Operation INFEKTION (1983-1987): The KGB planted a story in an Indian newspaper claiming that the US military had created the AIDS virus through biological weapons research at Fort Detrick, Maryland. The story spread globally, was broadcast on Soviet state media, and was believed by millions. Versions of this narrative survive and circulate on the internet today.

  • The "Gehlen Organization" forgeries: KGB forgeries purporting to show that the BND (West German intelligence) was riddled with ex-Nazi war criminals, designed to undermine West Germany's credibility as a NATO partner.

  • The Campaign to Discredit the Neutron Bomb: A coordinated campaign in the late 1970s using front organizations, fabricated documents, and sympathetic Western academics to oppose NATO's deployment of neutron bomb technology.

Discontinuities — What Changed in the Digital Age

Several features of Secondary Infektion reflect genuine innovations enabled by the digital information environment:

Scale and speed: The internet allows the simultaneous targeting of multiple countries, in multiple languages, with customized content, at a speed impossible with Cold War methods. What would have required years of patient intelligence work can now be accomplished in weeks.

Cost reduction: Digital content production and distribution is vastly cheaper than physical document production, print media, or broadcast operations. This democratizes influence operations — even modestly resourced actors can run sophisticated campaigns.

Self-amplification possibilities: Social media platforms can carry disinformation content to audiences of millions without requiring state media distribution infrastructure. The operation relied on ordinary users to serve as its distribution network.

Persistent accessibility: Digital content, once created, is essentially permanent and can be re-surfaced years later. Cold War disinformation depended on its moment; digital disinformation can be rediscovered and recirculated in new contexts indefinitely.

Reaction time collapse: The speed of digital information means that disinformation can enter the information environment and reach large audiences before researchers, journalists, or platforms can respond. Cold War active measures could be countered over weeks or months; digital disinformation may need to be countered in hours.


Policy Implications

What the Case Reveals About Intervention Challenges

Secondary Infektion's low amplification success — the fact that most content reached small audiences — raises important questions about impact and intervention priorities. The case reveals:

  1. Volume and persistence may matter more than any single piece: An operation that produces thousands of low-reach content pieces may have cumulative effects on the information environment that exceed the effects of a single high-reach piece. Current research is inadequate to assess cumulative effects.

  2. The media laundering technique exploits journalistic norms: The operation's most effective technique — pitching fabricated content to legitimate journalists — exploits the journalistic norm of publishing authentic-seeming primary source material. Standard media literacy education (teaching readers to evaluate content) does not address the vulnerability of journalists who are the intended targets.

  3. International coordination is necessary: A campaign targeting 30 countries across 19 languages cannot be effectively countered by any single national government, platform, or researcher. The investigation itself was a cross-institutional, international collaboration — illustrating the kind of response that is needed.

  4. Attribution difficulty limits legal responses: Effective legal and diplomatic responses require confident attribution that current evidence standards often cannot provide. Investment in attribution capabilities — including classified government intelligence sharing with researchers — may be necessary for effective policy responses.

Recommendations from the Investigation

The EU DisinfoLab investigation concluded with several policy recommendations:

  • Platform transparency requirements: Platforms should be required to disclose takedown actions and share data with researchers to enable systematic study of information operations.
  • Researcher access: Governments and international institutions should create frameworks for appropriate researcher access to platform data, enabling the kind of investigation that produced this report to be conducted more systematically.
  • Media industry coordination: News organizations should share information about attempts to seed fabricated content, enabling earlier detection of coordinated operations.
  • Attribution capability investment: Government intelligence agencies with greater access to classified evidence should develop clearer frameworks for sharing attribution assessments with the public, enabling democratic deliberation about responses.

Discussion Questions

  1. Secondary Infektion achieved relatively low organic amplification — most content reached few people. Does this mean the operation was a failure? How might low-reach content still contribute to strategic goals?

  2. The operation's signature technique was attempting to launder fabricated content through legitimate media. What changes to journalistic practice or verification standards might reduce this vulnerability? Are there costs to such changes for journalistic freedom and efficiency?

  3. The case illustrates that state-sponsored disinformation exploits genuine grievances and real political tensions. Does this mean that addressing the underlying grievances — corruption, inequality, policy failures — is ultimately more important than countering disinformation directly? Or can disinformation be countered independently of its underlying conditions?

  4. Attribution to a state actor is established at the level of "very likely" but not proven with legal certainty. What threshold of evidence should be required before governments respond diplomatically or legally to state-sponsored disinformation? Who should make this judgment?

  5. The operation ran for at least six years before comprehensive public reporting. What does this longevity suggest about the adequacy of current monitoring and detection capabilities? What investment in what capabilities would you prioritize to reduce this detection gap?


Technical Analysis Using the Chapter Code

Using the code/case-study-code.py file, explore:

  1. Content classification: Apply the multi-class classifier to sample texts that reflect each of the seven content types present in Secondary Infektion.
  2. Intent signal analysis: Use the intent-harm analyzer to evaluate the rhetorical markers present in example Secondary Infektion content compared to genuine news content.
  3. Network visualization: The visualization code demonstrates how coordinated inauthentic behavior can be detected through posting pattern analysis.
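Because code/case-study-code.py is not reproduced in this case study, the sketch below illustrates the kind of intent-signal analysis item 2 describes. The marker lists and scoring are illustrative assumptions for demonstration, not the chapter's actual implementation or a validated lexicon:

```python
# Illustrative rhetorical markers; these tuples are demonstration
# assumptions, not a validated lexicon from the chapter code.
URGENCY = ("breaking", "urgent", "act now", "before it's deleted")
SENSATION = ("shocking", "exposed", "secret", "leaked", "they don't want")
HEDGED_SOURCING = ("reportedly", "sources say", "it is claimed")

def intent_signals(text):
    """Count crude rhetorical markers per category in a piece of text."""
    t = text.lower()

    def count(markers):
        return sum(1 for m in markers if m in t)

    return {
        "urgency": count(URGENCY),
        "sensationalism": count(SENSATION),
        "hedged_sourcing": count(HEDGED_SOURCING),
    }

fabricated = ("BREAKING: leaked secret memo EXPOSED. Sources say this is "
              "the document they don't want you to see.")
genuine = ("The foreign ministry published its annual report on defense "
           "spending on Tuesday.")

print(intent_signals(fabricated))
print(intent_signals(genuine))
```

Production classifiers use learned features rather than hand-built keyword lists, but the underlying idea is the same: manipulative content tends to concentrate urgency, sensationalism, and unverifiable sourcing cues that genuine reporting largely avoids.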

Sources and Further Reading

  • EU DisinfoLab and Stanford Internet Observatory (2020). "Secondary Infektion." Full report available at eu.disinfolab.eu
  • Nimmo, Ben, et al. (2020). "Secondary Infektion: A Playbook of Russian Disinformation." Stanford Internet Observatory.
  • Pomerantsev, Peter, and Michael Weiss (2014). "The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money." Institute of Modern Russia.
  • Galeotti, Mark (2019). We Need to Talk About Putin. Penguin. (On Russian active measures strategy)
  • Thomas, Timothy L. (2004). "Russia's Reflexive Control Theory and the Military." Journal of Slavic Military Studies.
  • Rid, Thomas (2020). Active Measures: The Secret History of Disinformation and Political Warfare. Farrar, Straus and Giroux.