Case Study 9.2: Bot Networks and Manufactured Twitter Trends — The IRA's Social Media Operations


Overview

Between approximately 2013 and 2018, the Russian Internet Research Agency (IRA) conducted an influence operation that became the subject of what the U.S. Senate Intelligence Committee described as "the most comprehensive investigation to date of foreign interference in the 2016 election." Across Facebook, Instagram, Twitter, YouTube, Reddit, and a range of smaller platforms, IRA operatives — working in shifts, following organizational charts, with approved budgets and measurable performance targets — systematically constructed fake American communities, manufactured apparent social consensus, and injected fabricated social proof signals into the information environment surrounding the 2016 U.S. presidential election and the subsequent political period.

This case study focuses primarily on the Twitter and social media bot network dimensions of the IRA's operation, using the documented evidence to analyze how manufactured social proof works at digital scale.


Organizational Context: The "Troll Factory"

The IRA was not an improvised operation of individual hackers; it was a professionally organized company with a corporate structure, a human resources function, a marketing department, and documented budgets. Investigative reporting by Fontanka.ru (a St. Petersburg outlet), and subsequently by The New York Times and other outlets, later corroborated by the Senate Intelligence Committee investigation, documented that the IRA employed hundreds of people working in shifts around the clock.

Employees were organized into thematic departments corresponding to the American communities they were tasked with infiltrating: a department for conservative content, one for Black American content, one for Muslim American content, one for LGBT content. Each department had style guides, content quotas, and performance metrics. A content creator working the "conservative" desk was expected to post a certain number of Facebook updates per shift, generate a minimum engagement level, and follow style guidelines for the voice and persona they were managing.

The division-of-labor structure meant that the IRA was not generating crude, easily detectable foreign-sounding content. It was systematically producing culturally fluent American content, calibrated to each target audience's existing concerns and cultural references, at high volume and with operational security procedures designed to prevent detection of the Russian origin.


The Twitter Bot Network: Documented Scale and Mechanics

Twitter's cooperation with Senate investigators resulted in the platform providing detailed data about IRA-associated accounts. The disclosed dataset, later published in full by Twitter and analyzed by researchers including Emilio Ferrara at the University of Southern California's Information Sciences Institute, consisted of 3,841 accounts directly linked to the IRA, which had collectively posted approximately 10 million tweets.

But the directly attributed IRA accounts were the visible surface of a larger operation. The IRA's explicit strategy included the creation of "decoy" accounts — accounts that appeared to be ordinary American users — that would amplify IRA-created content through retweeting and liking, creating the appearance of organic spread by regular Americans. This amplification network is harder to quantify precisely because many of its accounts were deleted before they could be fully catalogued.

The social proof mechanism operated through platform features designed to surface popular content:

Trending topics: Twitter's trending algorithm identifies topics receiving a disproportionate volume of engagement within a specific time window. A coordinated network of accounts all tweeting the same hashtag simultaneously can cause it to trend — to be elevated into the trending sidebar visible to all users in a geographic area or nationally. When a topic trends, it receives additional organic attention from users who interpret trending status as evidence that the topic is genuinely important and widely discussed. The IRA specifically targeted trending topic creation as a technique, coordinating simultaneous hashtag deployment across its account network.
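The volume-in-a-time-window logic described above can be illustrated with a toy sketch. Twitter's actual trending algorithm is proprietary; the function below, with invented parameter names and thresholds, only models the core idea that a hashtag surfaces when its recent volume is both large and far above its own historical rate — which is exactly the condition a coordinated burst of simultaneous posting is designed to satisfy.

```python
from collections import Counter
from datetime import datetime, timedelta

def trending_candidates(tweets, now, window=timedelta(minutes=15),
                        spike_ratio=5.0, min_count=20):
    """Toy model of trend surfacing. `tweets` is a list of
    (timestamp, hashtag) pairs. A hashtag is a candidate when its
    volume in the current window is large in absolute terms AND a
    multiple of its average volume in earlier windows."""
    recent = Counter(tag for ts, tag in tweets if ts > now - window)
    prior = Counter(tag for ts, tag in tweets if ts <= now - window)
    span = now - min(ts for ts, _ in tweets)
    n_prior_windows = max(1, (span - window) // window)
    candidates = []
    for tag, count in recent.items():
        baseline = prior[tag] / n_prior_windows  # avg volume per window
        if count >= min_count and count > spike_ratio * max(baseline, 1):
            candidates.append(tag)
    return candidates
```

Under this model, a hashtag with a steady organic volume never trips the spike condition, while a few dozen accounts tweeting the same tag inside one window will — which is why coordinated simultaneous deployment was such an efficient technique.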

Follower counts and retweet metrics: An account with a large follower count appears authoritative. A tweet with many retweets appears widely endorsed. IRA-operated accounts accrued followers through coordinated following operations (following many accounts in hopes of follow-backs) and through mutual amplification within the IRA's own network. When an IRA account's tweet accumulated thousands of retweets — many from other IRA accounts — the retweet count was visible to every user who encountered the tweet and functioned as social proof of the tweet's significance.

Engagement pods within the IRA network: Researchers identified coordination signatures in the IRA's Twitter network: accounts that consistently retweeted each other, followed each other in coordinated waves, and posted similar content with small variations simultaneously. These signatures revealed that what appeared to be organic spread of popular content was in fact the coordinated performance of popularity across a closed network.
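One of the coordination signatures researchers looked for — accounts that consistently amplify the same sources — can be sketched as a pairwise overlap check. This is an illustrative simplification with invented names and thresholds, not any platform's actual detection pipeline: it flags pairs of accounts whose sets of retweeted sources are nearly identical, a pattern rare among organic users but characteristic of a closed amplification network.

```python
from itertools import combinations

def coamplification_pairs(retweets, threshold=0.8):
    """Toy coordination signature. `retweets` maps an account name to
    the set of source accounts it has retweeted. Pairs whose retweet
    targets overlap heavily (Jaccard similarity >= threshold) are
    flagged as possible members of one coordinated network."""
    flagged = []
    for a, b in combinations(sorted(retweets), 2):
        inter = len(retweets[a] & retweets[b])
        union = len(retweets[a] | retweets[b])
        if union and inter / union >= threshold:
            flagged.append((a, b))
    return flagged
```

In practice researchers combined many such signals (timing, content similarity, follow patterns) rather than relying on any single overlap metric, since organic fan communities can also share retweet targets.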


The Facebook Operations: Manufacturing Community

The IRA's Facebook operations, documented in the Senate Intelligence Committee's five-volume report and in academic analysis by researchers at Oxford's Computational Propaganda Project, illustrate a more sophisticated form of manufactured social proof than simple bot amplification.

Rather than merely inflating engagement metrics, the Facebook operations manufactured communities — entities that thousands of real Americans joined and participated in, believing themselves to be part of genuine American civic organizations.

"Blacktivist" presented itself as a Black Lives Matter-affiliated page dedicated to documenting police violence and amplifying Black political organizing. By late 2016, it had approximately 360,000 followers — a larger following than the official Black Lives Matter Facebook page. Its content was culturally fluent and engaged real events; its origin was concealed. For its followers, the page represented what they understood to be an authentic community voice. When they saw its posts, the 360,000 co-followers provided social proof that this was a significant, well-supported community organization.

"United Muslims of America" followed an analogous structure targeting Muslim American communities. With approximately 350,000 followers, it created the appearance of a large, organized Muslim American advocacy presence, producing content that spoke to real concerns about Islamophobia and civil rights while serving the IRA's destabilization objectives.

"Being Patriotic" targeted conservative audiences, accumulating approximately 200,000 followers, and produced content aligned with right-wing political positions including support for Donald Trump, opposition to immigration, and amplification of cultural grievance narratives.

"Secured Borders" built a following around anti-immigration content, functioning as an apparent grassroots organization while operated from St. Petersburg.

Each of these pages functioned as a manufactured community — not merely as a source of false content but as a false source of community belonging, shared identity, and social validation. A Black American user who followed Blacktivist and shared its content was not merely passing along a piece of information; they were expressing their community identity and providing social proof for the content to their own followers. The manufacturing operated at multiple levels: the page itself was fake, but the engagement it generated from real followers was real, and the social proof that engagement provided was functionally genuine from the perspective of any downstream user who encountered it.


The Destabilization Objective

Senate Intelligence Committee analysis concluded that the IRA's operations were not primarily aimed at shifting individual voters' electoral preferences toward specific candidates. The operation was more sophisticated and more systemic than that.

The primary objectives, as the Committee's analysis reconstructed them, included:

Amplifying existing divisions: The IRA did not create American political polarization; it identified existing fault lines — racial tensions, immigration conflict, partisan animosity — and poured resources into amplifying them. By simultaneously operating pages targeting conservative and liberal audiences with maximally divisive content, the IRA was increasing the intensity of conflict rather than redirecting political preference.

Suppressing Black voter turnout: Several IRA-operated pages targeting Black American audiences produced content specifically discouraging political participation — arguing that neither major party represented Black interests, that voting was futile, that protest outside the electoral system was the only legitimate response. The Senate Intelligence Committee found this voter suppression objective to be one of the operation's most consistent goals.

Undermining institutional trust: IRA content systematically produced narratives challenging the legitimacy of American democratic institutions, election integrity, media credibility, and government authority. The goal was not to persuade audiences of any particular affirmative position but to destroy the shared epistemic foundation on which democratic deliberation depends.

These objectives illuminate why manufactured social proof, rather than manufactured facts, was the operation's primary instrument. Facts can be checked and corrected. Social proof — the apparent evidence that millions of people in specific communities hold certain views, that certain narratives are dominant, that certain institutions are distrusted — shapes the information environment in ways that content correction cannot easily address. Even after Blacktivist was revealed as IRA-operated, the damage its years of operation had done to the social fabric could not simply be undone.


Platform Detection and the Arms Race

Twitter's detection of IRA accounts relied on behavioral analysis rather than content analysis — it was the signatures of coordination and automation, not the content of the posts, that allowed accounts to be identified and removed. These signatures included:

  • Temporal clustering: many accounts in a network posting the same content within minutes of each other
  • Infrastructure traces: multiple accounts created from the same IP address ranges or using the same device fingerprints
  • Behavioral consistency: accounts that posted only political content, never personal content, at volumes and hours inconsistent with human behavior
  • Network topology: accounts that followed each other in coordinated waves and amplified each other exclusively
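The first of these signatures, temporal clustering, lends itself to a compact sketch. The function below is a toy detector with invented parameter names, not a description of Twitter's real tooling: it flags any text posted by several distinct accounts within one short window, which is the characteristic footprint of shift workers pushing identical content across a network.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def temporal_clusters(posts, window=timedelta(minutes=5), min_accounts=3):
    """Toy temporal-clustering detector. `posts` is a list of
    (timestamp, account, text) tuples. A text is flagged when at least
    `min_accounts` distinct accounts posted it within one `window`."""
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()  # chronological order
        for i in range(len(items)):
            # distinct accounts posting this text inside the window
            accounts = {acc for ts, acc in items
                        if items[i][0] <= ts <= items[i][0] + window}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

This also shows why the IRA's later countermeasures — randomized posting rhythms and small textual variations — were effective: jittering timestamps beyond the window, or perturbing the text so exact-match grouping fails, defeats exactly this kind of naive detector and forces defenders toward fuzzier (and noisier) similarity measures.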

The IRA adapted to these detection methods over time. Later phases of the operation showed increased variation in account behavior — personal content was mixed in, posting rhythms were randomized, infrastructure was routed through VPNs. This adaptation illustrates the structural dynamic: detection capabilities improve and evasion techniques improve in response, in an ongoing arms race that neither side definitively wins.

Meta's Coordinated Inauthentic Behavior removal reports — published quarterly and documenting operations removed from Facebook and Instagram — show that as of 2023, the company was identifying and removing CIB networks from dozens of countries on a regular basis. The scale of ongoing operations suggests that the IRA's operations, while the most publicly documented, were not anomalous; they established a model that has been widely adopted by state and non-state actors.


Analytical Conclusions

Three analytical conclusions emerge from this case study.

First, scale is the mechanism. The IRA's social proof operations were not primarily effective because they created compelling false content; much of the content was fairly ordinary political material. They were effective because of scale — the sheer volume of accounts, followers, and engagement they could generate created the appearance of widespread organic sentiment that individual account evaluation would not produce. A single IRA account is not convincing. A network of thousands of IRA accounts that have together accumulated millions of followers, and that coordinate to create trending topics, is a plausible simulation of a genuine popular movement.

Second, source concealment is the architecture. Every element of the IRA's social proof operations depended on concealment of origin. Remove the concealment and the social proof value collapses: a Kremlin-funded page called "Blacktivist" that discloses its origin would not accumulate 360,000 American followers or function as a social proof signal for Black American community opinion. The entire edifice is contingent on the audience not knowing what it is looking at.

Third, the damage to epistemic infrastructure outlasts the operation. Once it became widely known that IRA-operated pages had infiltrated Black American, Muslim American, and conservative social media communities, the knowledge that any apparently community-based page might be foreign-operated made it harder to trust genuine community pages. The IRA's operations produced a "liar's dividend" in the social proof domain: even authentic community organizations faced increased skepticism, because the existence of sophisticated fakes made the authenticity of any specific organization harder to verify. This epistemic contamination effect — distrust spreading beyond the actual contaminated material to undermine trust in the whole domain — is among the most enduring harms of large-scale manufactured social proof operations.


Discussion Questions

  1. The IRA operated separate departments for different American demographic communities, with different cultural style guides for each. What does this organizational structure tell us about the IRA's understanding of how social proof functions differently across distinct communities?

  2. The Senate Intelligence Committee found that Blacktivist had more followers than the official Black Lives Matter page by late 2016. What are the implications of this finding for how we understand the relationship between manufactured and genuine community social proof online?

  3. The IRA's apparent goal was not primarily electoral persuasion but destabilization of institutional trust and amplification of existing divisions. Why might manufactured social proof be particularly well-suited to a destabilization objective compared to a persuasion objective?

  4. Professor Marcus Webb, who covered influence operations as an investigative journalist before entering academia, observes that "the most dangerous disinformation attaches itself to real grievances." Apply this observation to the IRA's Blacktivist operation: what real grievances did it exploit, and what are the implications for distinguishing genuine community advocacy from manufactured community advocacy?

  5. The "liar's dividend" — increased general skepticism about online community authenticity as a consequence of discovering widespread manipulation — could have significant effects on genuine civic and political organizing online. How should genuine community organizations respond to this environment? What steps could they take to establish authenticity in a context where authenticity is contested?


Case Study 9.2 is paired with Case Study 9.1 (Tobacco Industry Astroturfing). For the full historical analysis of the IRA's operations, see Chapter 24 (Digital Disinformation: The 2016–2020 Campaigns). For the platform architecture that enables these operations, see Chapters 16 and 17.