In This Chapter
- Overview
- Learning Objectives
- 17.1 The Psychology of Social Proof
- 17.2 Social Proof Signals on Social Media Platforms
- 17.3 Manufactured Consensus: Engineering Fake Social Proof
- 17.4 The Muchnik Experiment: Direct Evidence of Social Influence Bias
- 17.5 Social Proof and Political Misinformation
- 17.6 The Virality Cascade and Manufactured Organic Consensus
- Voices from the Field
- SIDEBAR: Maya and the Social Proof of Viral Content
- SIDEBAR: The Velocity Media Trending Algorithm Audit
- 17.7 The Influencer Economy and Purchased Social Proof
- 17.8 Like Count Removal: A Natural Experiment
- Summary
- Discussion Questions
Chapter 17: Social Proof and Manufactured Consensus
Overview
When you are unsure what to do, you watch what others do. When you do not know what to believe, you look at what others believe. This heuristic — copy the crowd when you lack information — is one of the most ancient and reliable shortcuts available to social animals. It is so fundamental to human cognition that the psychologist Robert Cialdini, in his landmark 1984 book Influence: The Psychology of Persuasion, identified it as one of the six primary principles of social influence, calling it "social proof." What is popular is probably good. What many people believe is probably true. What strangers confidently do in an unfamiliar situation is probably the right thing to do.
Social proof is not a weakness or an error. For most of human evolutionary history, it was an indispensable navigational tool. In a world of imperfect information, the behavior of others is valuable data. The crowd's choices reflect accumulated knowledge that you may not have. Copying the crowd is often faster and more accurate than reasoning from scratch. Social proof is, in the right conditions, a form of distributed intelligence.
The problem, as with loss aversion and most other cognitive shortcuts examined in this book, arises when the environments in which our heuristics evolved differ radically from the environments in which we now apply them. Social media platforms have constructed environments in which the signals of social proof — like counts, share counts, follower counts, trending lists, view tallies — can be manufactured, manipulated, and algorithmically amplified to an extent that would have been inconceivable in the face-to-face communities where social proof first served us. In these environments, the heuristic that once reflected distributed intelligence becomes a lever for manufactured consensus: the engineered appearance of agreement that may reflect nothing real at all.
This chapter examines social proof as a psychological phenomenon and as a design principle. We begin with Cialdini's foundational account and its evolutionary basis. We then examine how social media platforms deploy social proof signals — like counts, follower counts, trending algorithms — and how these signals can be and have been manufactured, inflated, and exploited. We look at the specific dynamics of how manufactured social proof affects political information, how virality cascades create the appearance of organic consensus, and what the research literature tells us about the actual effects of social proof signals on information processing and belief formation. We conclude with an examination of Instagram's like count removal experiment as a natural experiment in what happens when a major social proof signal is taken away.
Learning Objectives
Upon completing this chapter, readers will be able to:
- Define social proof and explain its psychological basis using Cialdini's framework and evolutionary psychology.
- Analyze how social media platforms operationalize social proof through quantified engagement signals (likes, shares, followers, trending).
- Explain the concept of manufactured consensus and identify the mechanisms by which it is created, including bot accounts, coordinated inauthentic behavior, and engagement pods.
- Describe the Muchnik et al. (2013) experiment and explain why its findings are significant for understanding social influence on digital platforms.
- Analyze the relationship between social proof and political misinformation, including how high-engagement falsehoods exploit the social proof heuristic.
- Evaluate the Instagram like count removal experiment as a case study in the effects and limits of social proof signal modification.
- Apply the concept of "virality cascade" to explain how algorithmic amplification creates the appearance of organic consensus.
17.1 The Psychology of Social Proof
Cialdini's Six Principles and the Role of Social Proof
Robert Cialdini's Influence, first published in 1984 and updated in subsequent editions, represents one of the most influential attempts to systematically catalogue the psychological mechanisms of persuasion. Cialdini, a social psychologist who spent years embedding himself in sales organizations, advertising agencies, and fundraising operations to study influence in practice, identified six fundamental principles: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity.
Social proof — the tendency to look to others' behavior and beliefs as a guide for one's own — occupies a central position in Cialdini's framework. He described it as functioning most powerfully under two conditions: uncertainty (when we do not know what to do or think) and similarity (when the people we observe are similar to ourselves). Under conditions of uncertainty, we are most vulnerable to the influence of others' apparent choices. And when the others whose choices we observe seem similar to us, we weight their behavior more heavily, assuming their situation and therefore their information is relevant to our own.
Cialdini's examples of social proof in action range from the mundane (people joining long restaurant queues because the queue signals quality) to the potentially deadly (the "copycat suicide" phenomenon, in which prominent media coverage of suicides is followed by measurable increases in suicide rates: even behavior in extremis is shaped by the observed behavior of others). The unifying principle is that observed consensus, real or apparent, is a powerful force on belief and action.
The Evolutionary Basis: Distributed Intelligence and Its Limits
The evolutionary case for social proof is compelling. In environments of genuine information scarcity, the behavior of others provides data that an individual could not easily replicate through independent observation. If you are new to a territory and you see that others gather food at a particular location, their behavior reveals information about food availability that you would otherwise have to discover through costly trial and error. If you are in an ambiguous social situation and you see others behave with deference toward a particular individual, their behavior reveals something about that individual's status that you did not know. Social proof is, in these conditions, a form of free information about a world you do not fully understand.
This logic has been formalized in information theory as the concept of "information cascades" — situations in which people rationally discard their own private information in favor of the public signals provided by others' choices. If I know that 1,000 people have read an article and found it valuable enough to share, their collective judgment may rationally outweigh my own initial skepticism. The 1,000 readers have collectively accumulated more information about the article's value than I have as an individual. Deferring to their judgment may be the epistemically correct thing to do.
The problem is that this logic depends on two assumptions that social media environments systematically violate. First, it assumes that others' apparent choices reflect genuine, independent assessments. When likes can be purchased, when bot accounts can be created at scale, when engagement pods coordinate artificial amplification, the crowd's apparent judgment does not reflect independent assessment — it reflects manipulation. The information that social proof was supposed to extract is not there to be extracted. Second, it assumes that "the crowd" is a meaningful epistemic community — that many people independently evaluating something will collectively converge on accurate assessments. When a platform's algorithm amplifies certain content regardless of its truth value, the "crowd" that appears to be endorsing it may not have assessed it at all; it may have simply received it first because of algorithmic selection.
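The cascade logic, and the way it can lock onto a wrong answer, can be made concrete with a deliberately naive simulation (all parameters here are hypothetical, chosen for illustration): each agent receives a private signal that is correct 70% of the time, but treats every earlier public choice as an independent vote and follows the apparent majority.

```python
import random

def simulate_cascade(n_agents=100, signal_accuracy=0.7, true_state=1, seed=0):
    """Deliberately naive information-cascade model: each agent gets a
    noisy private signal about the true state, but treats every earlier
    public choice as an independent vote and follows the majority."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: correct with probability `signal_accuracy`.
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        votes_for_1 = choices.count(1) + (1 if signal == 1 else 0)
        votes_for_0 = choices.count(0) + (1 if signal == 0 else 0)
        if votes_for_1 > votes_for_0:
            choices.append(1)
        elif votes_for_0 > votes_for_1:
            choices.append(0)
        else:
            choices.append(signal)  # exact tie: follow own private signal
    return choices
```

In this toy model, once the public margin reaches two, every later agent ignores their own signal entirely, and a nontrivial fraction of runs (roughly one in six at these settings) converges on the wrong state even though every individual signal was 70% accurate. The crowd converges, but not because it aggregated everyone's information.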
Social Proof in Offline vs. Online Environments
The quantitative nature of social proof signals on digital platforms represents a fundamental departure from the qualitative social proof available in offline environments. In a physical bookstore, you observe that a book is displayed prominently at the front — a social proof signal, but one modulated by the bookstore's editorial judgment and limited in its precision. On Amazon, you see that a book has 47,293 reviews with a 4.7-star average — a quantified social proof signal of extraordinary apparent precision.
This quantification creates several important effects. First, it makes social proof signals much more legible and comparable: a post with 100,000 likes is obviously more popular than a post with 1,000 likes in a way that "many people liked this" versus "some people liked this" is not. Second, it creates opportunities for gaming that did not exist in analog environments — you cannot easily manufacture 100,000 genuine-seeming in-person endorsements, but you can purchase 100,000 bot likes. Third, it may amplify the psychological weight of social proof, because the precision of the number carries an implied certainty that qualitative signals do not.
17.2 Social Proof Signals on Social Media Platforms
Like Counts: The Universal Currency of Digital Approval
The "like" button is one of the most consequential interface elements in the history of digital design. Introduced by Facebook in 2009 (after an earlier version called the "Awesome" button) and rapidly adopted by virtually every major social platform, the like mechanic created a universal quantified signal of social approval attached to every piece of content on the platform.
Before the like button, engagement with content was qualitative and labor-intensive: you had to write a comment, send a message, or explicitly share something to signal your engagement with it. The like button created a single-action signal that could be deployed in under a second, aggregated across all users who encountered a piece of content, and displayed as a count that serves as a visible proxy for quality, importance, or truth.
The like count operates as social proof in a straightforward way: a post with 50,000 likes appears to have been endorsed by 50,000 people, and under the social proof heuristic each of those endorsements is a data point. When you are evaluating whether to read, believe, or share a piece of content, the count provides a quick shortcut: this many people thought it was worth engaging with. The post's apparent credibility is thus amplified by its engagement count before you have read a word of it.
Follower Counts: Social Proof for Credibility
Follower counts serve as a social proof signal for the credibility and importance of the account holder rather than for specific pieces of content. An account with 10 million followers signals, through the social proof heuristic, that 10 million people have judged this person's content worth following — and that their opinions, endorsements, and claims therefore carry collective authority. An account with 500 followers does not carry the same social weight, regardless of the quality of its content.
The problem is that follower counts, like like counts, can be and routinely are purchased or otherwise inflated. The market for purchased followers emerged almost as soon as follower counts became a significant social proof signal. By the early 2010s, the follower count had become a currency in the influencer economy — a metric that determined advertising rates, brand partnership opportunities, and perceived credibility — and the market for fake followers had grown correspondingly.
Research by social media analytics firms has consistently found that large proportions of the follower counts of prominent accounts consist of bot accounts or inactive profiles that were sold as followers. Estimates have varied widely, but analyses using bot-detection algorithms have suggested that meaningful percentages — in some cases exceeding 50% — of the follower counts of prominent accounts may be non-genuine. When a person with 5 million followers speaks, the social proof signal suggests 5 million human endorsements; the reality may be far fewer.
Share Counts and Virality as Social Proof
Share counts represent a more active form of social proof than likes — sharing is a more effortful action than liking, and the decision to share typically reflects greater engagement with and endorsement of the content. Research consistently finds that users treat share counts as stronger social proof signals than like counts: a post that 10,000 people have shared carries more apparent endorsement than a post that 10,000 people have liked, because sharing requires more active choice.
This stronger social proof signal makes share counts a particularly potent driver of information spread. Content with high share counts appears highly credible through the social proof heuristic, which drives more engagement, which further amplifies the share count, which further increases apparent credibility. This positive feedback loop is the mechanism behind virality cascades — self-reinforcing cycles of engagement in which early apparent popularity drives subsequent actual popularity in a way that decouples viral spread from the genuine independent evaluation of the content's quality.
Trending Algorithms: Institutionalizing Social Proof
Trending topics, trending hashtags, and "Most Popular" or "Top Stories" designations represent social proof signals that platforms have institutionalized into their core interface design. When Twitter displays "trending topics" or when YouTube displays "trending videos," it is not merely reporting what users are choosing to engage with — it is presenting a platform-curated, algorithmically amplified selection of what appears most popular, and thereby reinforcing the apparent popularity of that selection.
This creates a circular logic that is one of the most important features of trending algorithms: trending content appears important because it is trending; it attracts additional attention and engagement because it appears important; this additional engagement confirms and extends its trending status. The trending designation is both a social proof signal and a self-fulfilling prophecy. Content that achieves trending status through early algorithmic amplification appears to have organic consensus — but the consensus may have been amplified into existence rather than discovered.
17.3 Manufactured Consensus: Engineering Fake Social Proof
The 2012 Instagram Bot Problem
In 2012, Instagram was a two-year-old application with rapid user growth and a growing community of influencers — users whose aesthetic content had attracted large followings. The platform had not yet developed sophisticated tools for detecting or removing fake accounts, and the market for purchased Instagram followers was already well-established.
The fake follower ecosystem worked simply: companies would sell packages of Instagram followers — typically bot accounts or recycled accounts with profile photos and some minimal activity — that would follow a target account on command. A user could purchase 10,000 "followers" for a modest fee and instantly appear to be significantly more popular than they were. Brands seeking to partner with influencers used follower counts as a proxy for reach and credibility; purchasing followers was therefore an investment in apparent credibility that could be monetized through brand partnerships.
The problem continued to grow, and in December 2014 Instagram took a significant public action: a mass deletion of fake accounts that came to be known as the "Instagram Rapture." High-profile accounts saw their follower counts drop dramatically overnight; Justin Bieber reportedly lost roughly 3.5 million followers, and companies and media organizations lost hundreds of thousands. The event provided a brief, dramatic glimpse of the extent to which social proof signals on the platform had been manufactured.
The purge did not solve the problem. Fake account creation adapted to detection methods, and the market for purchased engagement continued to evolve: from simple follower purchases to "engagement pods" (coordinated groups of real or semi-real accounts that mutually like and comment on each other's content to boost apparent engagement), to sophisticated bot networks capable of passing more stringent authenticity tests. The fundamental market incentive, that quantified social proof signals translate directly into economic value for influencers and into apparent credibility for everyone, remained unchanged.
Coordinated Inauthentic Behavior: Manufacturing Political Consensus
The scale of social proof manipulation that emerged in political contexts during the mid-2010s exceeded what had been seen in influencer marketing and represented a genuinely new threat to democratic information environments. Coordinated inauthentic behavior — the use of networks of accounts, often including bot accounts, to create the appearance of organic consensus around particular political positions — became a major instrument of political influence operations.
The mechanism is straightforward in principle: a political actor wants to create the impression that a particular message, candidate, or viewpoint has broad popular support. Rather than trying to persuade individual users one at a time, the actor creates hundreds or thousands of accounts that appear to be ordinary users and coordinates these accounts to post, share, like, and comment on content in ways that artificially inflate its apparent popularity. The social proof signals — high like counts, high share counts, trending hashtags — suggest to genuine users that the content has broad organic support. This apparent consensus influences real users' assessments of the content's credibility and, potentially, their own political views.
Research on coordinated inauthentic behavior has documented its use by state actors, political campaigns, commercial interest groups, and ideological movements. The Internet Research Agency, a Russian organization, used coordinated inauthentic behavior at significant scale on American social media platforms during the 2016 U.S. presidential election. Their activity involved creating thousands of accounts, building genuine-seeming follower bases over time, and using these accounts to amplify particular political content and create the appearance of grassroots American support for positions that served Russian strategic interests.
Engagement Pods and the Authenticity Problem
Engagement pods represent a more sophisticated form of social proof manufacturing that exploits the distinction between bot accounts and real human activity. An engagement pod is a group of real users — often influencers or content creators — who agree to systematically like, comment on, and share each other's content whenever a group member posts. The engagement is generated by real human accounts with genuine activity histories, making it much harder for platform detection algorithms to identify as inauthentic.
Engagement pod activity creates social proof signals that are technically genuine — real humans did perform these actions — but that are socially fraudulent, because the engagement does not reflect genuine independent assessment of the content's quality or interest. A post that receives 1,000 likes through an engagement pod may appear to have 1,000 independent endorsements; it actually has 1,000 coordinated endorsements from people who liked it as part of a reciprocal arrangement rather than because they found it independently valuable.
The engagement pod phenomenon reveals the limits of any detection approach based on identifying bot accounts: when the fake social proof is generated by real humans in coordinated arrangements, technical detection is inadequate. The problem is structural, not merely technical.
17.4 The Muchnik Experiment: Direct Evidence of Social Influence Bias
The Science of Social Influence on Digital Platforms
In 2013, researchers Lev Muchnik, Sinan Aral, and Sean J. Taylor published a paper in Science that provided some of the most compelling direct experimental evidence for the power of social proof in digital contexts. Their study, conducted in collaboration with a social news aggregation website, involved randomly assigning newly submitted posts (user comments, in the study's design) to receive an artificial up-vote at the moment of posting, before any genuine user engagement, and then tracking the subsequent voting history of each post.
The findings were striking. Posts randomly assigned an artificial initial upvote accumulated, over the following months, roughly 25% more genuine upvotes than control posts that received no artificial vote. A single artificially added vote, an initial false signal of social proof, created a significant and lasting bias in genuine user engagement.
The study demonstrated the mechanism of information cascades in action: early social proof signals (initial upvotes) influenced subsequent users' assessments of the post, causing them to upvote it at higher rates than they would have otherwise. Those additional upvotes further increased the post's apparent popularity, attracting still more engagement. An initial false signal created a self-reinforcing spiral of genuine engagement based on the appearance of consensus rather than the reality of independent assessment.
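The compounding dynamic can be sketched in a toy herding model (a sketch only: these are invented parameters, not Muchnik et al.'s platform or design). Each arriving viewer's probability of upvoting rises with the post's currently visible score, and half the posts are randomly "treated" with one fake seed vote:

```python
import random

def simulate_post(quality, initial_boost, n_viewers=150, herd=0.05, rng=None):
    """One post under a toy herding model: each arriving viewer upvotes
    with probability = latent quality + herd * visible score, capped at
    0.9. `initial_boost` is the artificial seed vote (0 or 1)."""
    rng = rng or random.Random()
    visible = initial_boost   # score displayed to viewers
    genuine = 0               # genuine upvotes only
    for _ in range(n_viewers):
        if rng.random() < min(0.9, quality + herd * visible):
            visible += 1
            genuine += 1
    return genuine

def run_experiment(n_pairs=1000, seed=42):
    """Randomized treatment (one fake early upvote) vs. matched control."""
    rng = random.Random(seed)
    treated = control = 0.0
    for _ in range(n_pairs):
        q = rng.uniform(0.01, 0.05)  # identical latent quality in both arms
        treated += simulate_post(q, initial_boost=1, rng=rng)
        control += simulate_post(q, initial_boost=0, rng=rng)
    return treated / n_pairs, control / n_pairs
```

With these invented parameters, the treated arm ends with substantially more genuine upvotes than the control arm despite identical latent quality: the single fake vote changes when takeoff happens, and everything downstream compounds.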
Implications for Information Quality
The Muchnik et al. findings have profound implications for the quality of information that achieves prominence on social media platforms. If early social proof signals significantly bias subsequent engagement, then the apparent popularity of content on social platforms reflects, in part, the randomness of early engagement and the influence of any initial boost — whether that boost was genuine, purchased, or algorithmically provided.
This means that the "wisdom of crowds" rationale for treating social proof signals as proxies for quality is undermined at the structural level. The crowd's apparent collective judgment is not an independent aggregate of individual assessments; it is a sequential cascade in which early assessments have outsized influence on all subsequent assessments. The crowd is not wise in the way the social proof heuristic assumes; it is a sequential process that systematically amplifies early signals, whether those signals are accurate or not.
For political information specifically, this finding is especially concerning. If early upvotes — whether generated organically, purchased, or provided by coordinated inauthentic behavior — create lasting social proof biases in content engagement, then the apparent popularity of political content on social platforms is systematically corruptible. A well-resourced actor who can ensure that particular content receives early algorithmic amplification or early social proof signals can bias the apparent consensus of an entire platform around that content.
17.5 Social Proof and Political Misinformation
Why High-Engagement Misinformation Appears Credible
The social proof heuristic creates a specific vulnerability to misinformation that is particularly pronounced in fast-moving information environments. When users encounter a piece of content that makes a claim they cannot easily verify — a political allegation, a public health claim, a disputed historical assertion — they rely on social proof signals to assess its credibility. A post with high engagement appears credible because many people apparently found it credible enough to engage with. This appearance of credibility makes the user more likely to accept the claim, share the content, and update their beliefs accordingly.
Misinformation often outperforms accurate information on social proof metrics for reasons documented extensively in the research literature. False or emotionally inflammatory content tends to generate stronger emotional responses (outrage, fear, moral condemnation) than accurate but mundane reporting. Stronger emotional responses drive higher engagement rates. Higher engagement rates generate stronger social proof signals. The result is a systematic tendency for false and emotionally inflammatory content to achieve better social proof metrics than accurate content — making it appear more credible through the very heuristic that users rely on to assess credibility.
Research by Vosoughi, Roy, and Aral, published in Science in 2018, found that false news stories spread significantly faster and further on Twitter than true news stories, reaching more users and penetrating more deeply into the network. The researchers found that the emotional novelty of false stories — their tendency to be more surprising, more emotionally provocative, and more morally charged than true stories — was the primary driver of this differential spread. Social proof mechanics amplified this differential: false stories that generated high early engagement attracted further engagement through social proof dynamics, while accurate but less emotionally provocative stories did not.
The Facebook News Feed and Political Polarization
The interaction between social proof signals and algorithmic amplification created conditions in the mid-2010s that significantly shaped the political information environment in multiple countries. Facebook's News Feed algorithm, which determines what content users see based on predicted engagement, effectively used social proof signals (likes, comments, shares) as proxies for content quality and interest. Content that generated high engagement — including content that generated high engagement because it triggered outrage, fear, or partisan validation — was amplified to additional users, generating additional engagement, which triggered additional amplification.
Internal Facebook research, portions of which became public through whistleblower disclosures, documented that the company's algorithm was amplifying content that users themselves reported finding harmful, divisive, or misleading at higher rates than accurate, less emotionally inflammatory content. The social proof signals that the algorithm used as proxies for quality (engagement counts) did not reliably track quality. They tracked emotional provocation, which is not the same thing.
The Facebook News Feed case illustrates a structural property of engagement-optimizing algorithms: when an algorithm uses social proof signals (engagement counts) to determine what content to amplify, it systematically favors content that exploits the social proof heuristic. Content designed to trigger social proof cascades — by generating immediate emotional engagement that produces high early signal — receives algorithmic preference over content that is accurate, important, or genuinely interesting but that does not trigger immediate emotional response.
17.6 The Virality Cascade and Manufactured Organic Consensus
How Algorithmic Amplification Creates Apparent Consensus
A virality cascade occurs when the apparent popularity of content drives additional engagement, which further increases apparent popularity, which drives additional engagement — a positive feedback loop in which early prominence compounds into wide apparent consensus. On social media platforms, virality cascades are enabled and accelerated by algorithmic amplification: platforms that surface trending or high-engagement content to additional users create the conditions for cascades that individual organic sharing could not produce alone.
The resulting apparent consensus — the impression that enormous numbers of people are independently engaging with and endorsing particular content — is partially manufactured, in the sense that it would not exist without the algorithmic amplification that creates the cascade conditions. Content that achieves viral status on social platforms often does so not because millions of users independently discovered and endorsed it, but because an algorithm placed it in front of millions of users after a relatively small number of early engagements signaled its potential for high engagement.
This manufactured quality of viral consensus is rarely legible to users consuming the content. From the user's perspective, content that has been viewed 10 million times appears to have been independently chosen by 10 million people — a powerful social proof signal. The algorithmic mechanism that concentrated those views does not appear in the social proof signal; only the outcome (the view count) is visible. The appearance of organic consensus is maintained even though the consensus was partly manufactured by the platform's own amplification choices.
The Circular Logic of Trending
Trending systems institutionalize the virality cascade logic into explicit platform features. A hashtag or topic that begins trending does so because it has achieved elevated engagement levels — it is more popular than usual. But the trending designation itself increases its engagement, because users who see it trending are more likely to investigate, engage with, and share it. The trending designation is simultaneously a consequence of popularity and a cause of further popularity.
This circularity means that trending status can be gamed: if a coordinated group of accounts can push a hashtag or topic to trending status through coordinated posting, the trending designation then attracts genuine organic engagement that confirms and extends the trending status. The manufactured initial signal becomes the seed of genuine subsequent consensus, making it nearly impossible to retrospectively distinguish the genuine from the manufactured.
Political operatives, marketing agencies, and activist groups have all developed "trending engineering" capabilities — tactics for coordinating posting and engagement to artificially achieve trending status and thereby trigger genuine virality cascades. The technical difficulty of this approach has decreased as tools for coordinating social media activity have become more sophisticated and widely available.
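The circularity described above can be reduced to a deterministic sketch (all numbers invented, not any real platform's algorithm): a topic trends in a given hour if its posts in the previous hour reached a threshold, and trending placement multiplies subsequent organic posting through extra exposure.

```python
def simulate_topic(organic_rate, fake_burst, threshold=100, boost=5.0, hours=12):
    """Deterministic sketch of trending circularity: a topic trends in
    hour t if its posts in hour t-1 reached `threshold`; the trending
    designation multiplies organic posting by `boost` via exposure."""
    history = []
    posts_last_hour = organic_rate + fake_burst  # hour zero: optional coordinated burst
    for _ in range(hours):
        trending = posts_last_hour >= threshold
        posts_last_hour = organic_rate * (boost if trending else 1.0)
        history.append((trending, posts_last_hour))
    return history
```

In this sketch, a topic with 30 genuine posts per hour never trends on its own, while the same topic seeded with a one-hour burst of 80 coordinated posts crosses the threshold and, on exposure boost alone, sustains enough genuine posting to stay trending indefinitely after the fake accounts go silent. The manufactured seed becomes self-sustaining genuine consensus.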
Voices from the Field
"The like count was supposed to tell you something true. It was supposed to reflect how many people genuinely valued this piece of content. But from the beginning, it was gameable. And because it was gameable, it got gamed. By the time we understood the full scope of the problem, the fake social proof was so thoroughly mixed with the genuine social proof that we couldn't separate them. The number didn't mean what we thought it meant. We kept using it as though it did."
— Anonymous former Facebook data scientist, interviewed for this book
"What Cialdini called social proof is a beautiful evolutionary mechanism for navigating an uncertain world. The problem with social media is that it takes this beautiful mechanism and weaponizes it. It creates the appearance of consensus without the reality. And our brains can't tell the difference. We evolved for environments where the appearance of consensus was a reliable signal that consensus was real. We didn't evolve for environments where someone could buy 100,000 likes for forty dollars."
— Dr. Robert Cialdini, author of Influence, in conversation at a technology ethics symposium
SIDEBAR: Maya and the Social Proof of Viral Content
Maya is spending Sunday evening scrolling through her Instagram feed when she encounters a video that has 2.3 million views and 180,000 likes. The video makes a dramatic claim about a supplement that has allegedly been shown in clinical trials to improve teenage concentration and reduce anxiety. Several students from what appears to be a university campus are enthusiastically endorsing it in the video. The comment section shows hundreds of comments from users who say they have tried it and had positive experiences.
Maya does not know how to evaluate clinical trial claims. She does not know how to distinguish real testimonials from purchased ones. She does not know that the "university students" in the video are paid actors, that the like count includes purchased engagement from a campaign that bought 80,000 initial likes to trigger organic social proof dynamics, that the positive comments include a significant proportion posted by accounts in an engagement pod, or that the "clinical trial" referenced in the video does not support the claims being made.
What Maya knows is this: 2.3 million people watched this. 180,000 people liked it. Hundreds of people in the comments say it worked. The video's apparent social proof — the aggregated signals of millions of apparent endorsements — does the persuasive work that no single argument could do. Maya shares it to her story and tags her friend Priya: "Have you seen this? Apparently it actually works."
Maya is not stupid. She is operating exactly as human beings are designed to operate: using social consensus as a shortcut to evaluate claims she cannot easily assess independently. The platform has engineered an environment in which the social consensus she is reading has been partly purchased, partly manufactured, and partly algorithmically amplified into existence. The gap between the appearance of consensus and the reality of it is invisible to her, and the platform has done nothing to make it visible.
SIDEBAR: The Velocity Media Trending Algorithm Audit
When Velocity Media's engineering team audited the trending algorithm six months after launch, the results surprised some members of the product team and confirmed the concerns Dr. Aisha Johnson had been raising since the feature was proposed.
The audit found that content achieving trending status fell disproportionately into three categories: (1) content that triggered high-intensity emotional responses (outrage, fear, moral shock) in the first two hours after posting; (2) content from accounts with large follower counts, which had inherent social proof advantages because their early engagement was higher; and (3) content that had received early coordinated engagement, including from accounts whose activity patterns suggested pod behavior.
The third finding was the most troubling. Approximately 12% of content that achieved trending status in a sampled month showed early engagement patterns consistent with coordinated inauthentic behavior — rapid, simultaneous engagement from accounts with similar behavioral profiles. This coordinated early engagement had pushed the content into the trending algorithm's threshold, where organic social proof dynamics then took over.
Dr. Johnson's memo on the audit findings was direct: "What we are calling a 'trending' algorithm is, in significant part, a mechanism that amplifies content whose early engagement was engineered rather than organic. We are institutionalizing manufactured consensus as a core platform feature. I recommend suspending the trending feature pending implementation of more robust detection and a fundamental redesign of the trending criteria."
Marcus Webb's response emphasized user engagement: "Trending content drives 34% of daily active engagement. We can improve detection without killing the feature. Let's fix the detection, not the mechanic."
Sarah Chen ordered a phased improvement to bot and pod detection within the trending algorithm, maintaining the feature while upgrading its integrity safeguards. Dr. Johnson noted her continuing concerns in the project record, adding: "Better detection of inauthentic behavior addresses the manipulation problem but not the structural problem. The structural problem is that engagement metrics are not good proxies for content quality, and using them as such will systematically amplify emotionally provocative content over accurate content, regardless of whether the engagement is authentic."
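The "rapid, simultaneous engagement" signal from the audit can be illustrated with a minimal burst-detection heuristic. The window size, burst fraction, and sample data below are invented for illustration; a real detection system would combine many behavioral features (account age, follower graphs, posting history) rather than timing alone.

```python
def flag_coordinated_burst(engagement_times, window_s=60,
                           burst_fraction=0.5, min_events=20):
    """Flag a post whose early engagement is suspiciously bursty.

    engagement_times: seconds-since-posting for each early engagement event.
    Returns True if any window of `window_s` seconds contains more than
    `burst_fraction` of all early engagements. Thresholds are illustrative,
    not tuned against real data.
    """
    events = sorted(engagement_times)
    if len(events) < min_events:
        return False  # too little data to judge
    for i, start in enumerate(events):
        # count events falling inside [start, start + window_s)
        in_window = sum(1 for t in events[i:] if t < start + window_s)
        if in_window / len(events) > burst_fraction:
            return True
    return False

# Organic-looking pattern: 40 engagements spread over the first hour.
organic = [t * 90 for t in range(40)]  # one roughly every 90 seconds
# Pod-like pattern: 30 of 40 engagements land within the same minute.
podded = [5 + t for t in range(30)] + [t * 90 for t in range(10)]

print(flag_coordinated_burst(organic))  # spread out: not flagged
print(flag_coordinated_burst(podded))   # bursty: flagged
```

A timing heuristic like this catches naive coordination but, as Section 17.3 notes, pods adapt by staggering their activity, which is why detection alone does not resolve Dr. Johnson's structural objection.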
17.7 The Influencer Economy and Purchased Social Proof
How Follower Counts Became a Financial Asset
The emergence of the influencer economy — in which individuals with large social media followings are paid by brands to promote products and services — created a direct financial incentive for follower count inflation that has profoundly distorted social proof signals across all major platforms. When follower count determines advertising rates — when 1 million followers translates into a specific dollar amount for a brand partnership — the incentive to purchase followers is economically straightforward: purchased followers cost substantially less than the advertising revenue they unlock.
The influencer marketing industry has developed various mechanisms for auditing influencer authenticity — analyzing engagement rates (the ratio of likes and comments to followers), examining follower quality (account activity, posting history, follower-to-following ratios), and using platform analytics tools to identify unusual growth patterns. But the purchased follower industry has adapted to each of these detection methods, offering increasingly sophisticated "high-quality" followers whose behavioral profiles are designed to evade detection.
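The engagement-rate check described above is simple to compute in principle. The sketch below uses invented threshold bands and a hypothetical account; real audits benchmark rates against peers of similar follower size rather than fixed cutoffs.

```python
def engagement_rate(likes, comments, followers):
    """Engagement rate: interactions per follower on a single post."""
    if followers <= 0:
        raise ValueError("followers must be positive")
    return (likes + comments) / followers

def audit_flags(posts, followers, low=0.005, high=0.20):
    """Return coarse authenticity flags for an account's recent posts.

    A rate far below `low` can indicate inflated (purchased) followers
    who never engage; a rate far above `high` can indicate pod or
    purchased engagement. Both thresholds are illustrative, not
    industry standards.
    """
    flags = []
    for likes, comments in posts:
        rate = engagement_rate(likes, comments, followers)
        if rate < low:
            flags.append(("possible_fake_followers", rate))
        elif rate > high:
            flags.append(("possible_fake_engagement", rate))
        else:
            flags.append(("in_range", rate))
    return flags

# Hypothetical account: 1,000,000 followers, three recent posts
# given as (likes, comments) pairs.
posts = [(1200, 80), (350000, 12000), (25000, 900)]
for flag, rate in audit_flags(posts, followers=1_000_000):
    print(flag, round(rate, 4))
```

The arms race described above plays out precisely because checks like this are easy to reverse-engineer: once auditors look at engagement rates, follower vendors sell engagement to match, which is why "high-quality" purchased followers are designed to land inside the plausible band.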
This arms race between authenticity detection and authenticity simulation has left the influencer marketing industry in a state of chronic uncertainty about the genuine value of social proof signals. Brands pay for apparent reach and apparent social proof; the gap between apparent and genuine reach is unknown and varies enormously. The social proof that consumers observe when they see an influencer with millions of followers endorsing a product has been systematically corrupted by the market incentives that follower counts created.
The Deeper Problem: Authentic Engagement Pods
Even when follower counts are genuine — built through authentic audience development over time — the social proof they represent may be distorted by engagement dynamics that do not reflect genuine audience interest. Engagement pods, as discussed in Section 17.3, allow influencers to maintain high engagement rates through coordinated mutual amplification even when their genuine audience engagement declines. An influencer with 500,000 genuine followers and mediocre organic engagement can maintain artificially high engagement metrics through pod membership, preserving the social proof signal that their content is highly valued even when genuine audience interest is declining.
This means that social proof signals in the influencer economy are unreliable at multiple levels: follower counts may be inflated, engagement rates may be artificially maintained, and the comments and testimonials that appear in the comment sections of influencer posts may include significant proportions of pod-generated or paid activity. The consumer who relies on influencer social proof signals to assess product quality or credibility is operating on information that has been systematically adulterated.
17.8 Like Count Removal: A Natural Experiment
Instagram's 2019 Test and Its Implications
In 2019, Instagram conducted a significant experiment in several countries including Canada, Australia, Brazil, Ireland, Italy, Japan, and New Zealand: it removed visible like counts from posts in users' feeds. Individual users could still see their own like counts, but the public-facing count — the social proof signal visible to everyone — was hidden. The experiment, described by Instagram as an effort to reduce social comparison and its negative effects on user wellbeing, effectively removed one of the platform's primary social proof signals from public view.
The results of the experiment were mixed and revealing in ways that illuminate the multiple functions that like counts serve simultaneously. Mental health researchers studying the effects of the change reported some positive effects: users who had previously reported feeling evaluated or judged by their like counts showed reduced social comparison anxiety in the test markets. Teenagers who participated in user research described feeling less pressure to post content calibrated to maximize likes when the like count was not publicly visible.
But the results also revealed the degree to which like counts had become integrated into users' core experience of the platform. Some users reported feeling disoriented without like counts, losing a navigational tool they used to assess what content was worth spending time on. Content creators — particularly those who depended on like counts as performance metrics for their work and as evidence of value for brand partnerships — strongly opposed the change, arguing that hidden like counts made it harder to benchmark their performance against other creators and to demonstrate their value to advertising partners.
What Happened Next
Instagram ultimately did not implement the like count removal globally. After the test period, it offered users the option to hide like counts on their own posts and on others' posts in their feed — making the change opt-in rather than default. This outcome is instructive: the platform discovered that removing a social proof signal, even one with documented negative psychological effects, created significant user experience disruption and creator economy resistance that made a mandatory removal politically untenable.
The Instagram like count experiment also generated significant academic and journalistic analysis that illuminated the paradoxical role of social proof signals in user experience. Users who claimed to find like counts stressful also used them as navigational tools. Users who said they would prefer not to be judged by their like count also used others' like counts to assess the value of content. The social proof signal was simultaneously a source of harm and a source of information — and removing it produced both relief and disorientation.
The experiment's partial implementation — an opt-in rather than default change — also revealed the limits of user choice as a solution to dark pattern harms. Making like count hiding optional means that only users who proactively understand the psychological mechanism of like count-driven social comparison and who choose to act on that understanding will benefit from the change. The users most harmed by social proof signals — those with the lowest resilience to social comparison, typically younger and more psychologically vulnerable users — are least likely to opt into protections they may not fully understand.
Summary
Social proof — the tendency to treat others' behavior and beliefs as a guide for our own — is an evolutionarily ancient and generally adaptive heuristic that serves as a form of distributed intelligence in environments of genuine information scarcity. On social media platforms, this heuristic is systematically exploited through quantified engagement signals: like counts, share counts, follower counts, trending designations, and view tallies that present the appearance of social consensus regardless of whether genuine, independent consensus underlies them.
The manufactured consensus problem operates at multiple levels simultaneously. Bot accounts and purchased engagement directly inflate social proof signals. Coordinated inauthentic behavior creates the appearance of organic political consensus around manufactured positions. Engagement pods produce authentic-seeming social proof from coordinated rather than independent assessments. And virality cascades — positive feedback loops in which apparent popularity drives additional engagement that further increases apparent popularity — create the impression of organic consensus even when the original signal was algorithmically generated or artificially seeded.
The Muchnik et al. (2013) experiment provides the most direct evidence that early social proof signals create significant and lasting biases in subsequent content engagement — that the crowd's apparent collective judgment is a sequential cascade that amplifies early signals, not an independent aggregate of genuine assessments. This finding has particularly serious implications for political information, where high-engagement misinformation exploits social proof to appear credible, and for trending algorithms that institutionalize the amplification of emotionally provocative content over accurate content.
Instagram's like count removal experiment reveals the complexity of addressing social proof harms: even well-designed interventions face user experience resistance, creator economy opposition, and the limits of opt-in solutions for harms that disproportionately affect the most vulnerable users. The social proof problem on social media platforms is not a technical glitch that can be fixed by better detection; it is a structural consequence of using quantified engagement signals as proxies for content quality and credibility.
Discussion Questions
- Cialdini identified social proof as one of six fundamental principles of social influence. How does the digital quantification of social proof signals — transforming "many people endorsed this" into a precise number — change the psychological dynamics of social proof? Does quantification make social proof more or less reliable as an epistemic guide?
- The Muchnik et al. (2013) experiment found that an artificial initial upvote created significant lasting bias in genuine user engagement. What are the implications of this finding for the design of social media platforms? Should platforms randomize the display order of initial engagement signals to prevent early social proof biases from compounding?
- Coordinated inauthentic behavior creates manufactured social proof that is designed to be indistinguishable from organic consensus. At what point does the existence of such manipulation undermine the epistemic value of social proof signals on social media platforms? Is there a threshold of manipulation at which social proof signals become net harms?
- The Instagram like count removal experiment revealed that users simultaneously found like counts stressful and used them as navigational tools. How should platform designers respond to features that serve both beneficial and harmful functions? Is an opt-in solution adequate for harms that disproportionately affect the most vulnerable users?
- The Velocity Media audit found that approximately 12% of content achieving trending status showed early engagement patterns consistent with coordinated inauthentic behavior. At what percentage does the presence of inauthentic engagement in a trending system warrant suspending the feature? How should this threshold be determined?
- Misinformation research consistently finds that false content outperforms true content on engagement metrics because false content is more emotionally provocative. If engagement metrics are systematically biased toward emotionally provocative content, can any engagement-optimizing algorithm avoid amplifying misinformation? What would a non-engagement-optimizing algorithmic approach look like?
- The chapter describes the influencer marketing industry as existing in "chronic uncertainty" about the genuine value of social proof signals, given the prevalence of purchased followers and engagement pods. Should regulators require influencers and platforms to provide verified engagement metrics that have been audited for authenticity? What would such a regulatory framework look like?