Case Study 17-2: Instagram Hiding Like Counts — What Happened When a Platform Removed Its Core Social Proof Signal
Background
In July 2019, Instagram began one of the most significant natural experiments in the history of social media platform design: it removed visible like counts from posts in the feeds of users in Canada, Australia, Brazil, Ireland, Italy, Japan, and New Zealand. Individual account holders could still see how many likes their own posts received, but the public-facing count — the number displayed beneath each post for all users to see — was hidden. Users visiting a post would see that it had likes but not how many.
The experiment was, from Instagram's stated perspective, an effort to address documented harms associated with social comparison and social proof dynamics on the platform. Instagram CEO Adam Mosseri described the goal as creating "a less pressurized environment where people feel comfortable expressing themselves." Multiple mental health researchers and advocacy organizations had spent years documenting the association between like count anxiety and negative psychological outcomes, particularly among adolescent users. The like count removal was Instagram's most concrete response to this body of evidence.
The experiment attracted extraordinary attention from users, creators, mental health researchers, platform design scholars, and industry observers. Over the subsequent months and into 2020, it produced a substantial body of evidence — including platform-commissioned research, independent academic studies, qualitative user research, and creator community feedback — that illuminated the multiple conflicting functions that like counts serve and the complexity of removing a social proof signal that had become deeply integrated into the platform's social and economic architecture.
The outcome — a partial implementation in the form of an opt-in feature rather than a global mandatory change — tells us as much as the experiment itself. Understanding why a platform with documented evidence of like-count-related harm chose a minimal rather than comprehensive response requires understanding the multiple stakeholders whose interests were affected by the change.
Timeline
2012-2013: Research by Ethan Kross and colleagues finds that Facebook use — particularly passive consumption of others' content — is associated with decreased life satisfaction and increased feelings of envy. Like counts are identified as a mechanism of social comparison that drives these effects.
2015-2017: Multiple large-scale studies link Instagram-specific engagement with social comparison and negative body image among teenage girls. Researchers identify like counts and follower counts as primary mechanisms through which social comparison operates on the platform.
2018: The Royal Society for Public Health in the United Kingdom publishes "Status of Mind," a report rating Instagram as the social media platform most harmful to young people's mental health, with like count-driven social comparison identified as a central mechanism.
February 2019: Instagram CEO Adam Mosseri signals publicly that the company is exploring changes to like count visibility, citing mental health concerns.
May 2019: Instagram begins the like count removal test in Canada. The feature is described as experimental and limited to one market, with plans to expand and evaluate.
July 2019: The experiment expands to six additional countries: Australia, Brazil, Ireland, Italy, Japan, and New Zealand. User reactions vary sharply: many users express relief and describe reduced anxiety; creator communities and marketing professionals express concern about losing performance metrics.
September 2019: The first independent qualitative research on the experiment is published, drawing on interviews with Instagram users in test markets. Findings are mixed: users report reduced social comparison anxiety but also describe using other signals (follower counts, comment counts, Explore page features) as proxies for the absent like counts.
October 2019: The experiment expands to the United States for a subset of users. Coverage increases significantly in U.S. media.
November 2019: A coalition of social media marketing agencies and influencer marketing companies publishes an open letter opposing the like count removal, arguing that it would damage the influencer economy by removing a primary metric for assessing campaign performance.
January 2020: Instagram releases limited results from the experiment, indicating that mental health benefits were observed among users who reported feeling most pressured by like counts before the change, but that overall user engagement effects were mixed and that creator satisfaction had declined in test markets.
May 2020: The COVID-19 pandemic significantly changes the experimental environment, as lockdowns dramatically alter social media use patterns. The pandemic period is recognized as a confound for any clear before-after comparison.
2021: Instagram announces that the like count change will not be implemented as a global mandatory change. Instead, users will have the option to hide like counts on their own posts and to hide like counts on other users' posts in their feed. The opt-in framing is described as respecting user choice.
2022-2024: Research assessing the like count removal experiment in retrospect finds that the opt-in implementation reached relatively small proportions of users — primarily those with existing mental health literacy who were already motivated to manage their social comparison behavior. The users most harmed by like count anxiety — adolescents with high social comparison orientation — were among the least likely to use the opt-in feature.
What the Research Found
Mental Health Benefits Among High-Risk Users
The most consistent finding across multiple studies examining the like count removal experiment was that the users who had previously reported the most distress related to like counts — adolescents, users with high social comparison orientation, and users with low self-esteem — reported meaningful reductions in like count anxiety and social comparison pressure in test markets. For these users, the removal of the public like count removed a specific trigger for social comparison: the ability to observe that others' posts received significantly more likes than their own, and to interpret this discrepancy as evidence of relative social worth.
Research using experience sampling methodology found that the frequency of spontaneous social comparison thoughts declined significantly for high-social-comparison users in test markets relative to control groups. This finding is consistent with the social proof framework: when the quantified social proof signal (like count) is absent, the specific trigger for social comparison it provided is absent. Users still engaged in social comparison through other channels, but the most salient and immediate trigger was removed.
The Navigation Problem
A finding that complicated the straightforward "remove like counts, reduce harm" narrative was what researchers called the "navigation problem." Many users — particularly those who used Instagram as a discovery platform for products, services, content creators, and public information — reported using like counts as a navigational tool: a quick signal for assessing whether content was worth spending time on. Without like counts, these users reported spending more time on content that proved less valuable and experiencing difficulty quickly assessing the credibility or quality of posts they encountered.
This finding captures a genuine tension in social proof signal design. Like counts are harmful when used for social comparison (evaluating one's own relative social worth); they may be genuinely useful when used for content curation (quickly assessing which of many available content options is worth engaging with). These two uses are not cleanly separable in practice — the same user uses like counts for both purposes, sometimes within a single session. Removing the signal removes both the harmful and the beneficial use simultaneously.
The navigation problem was particularly pronounced for users who followed accounts in categories where like counts had served as credibility signals — health, financial advice, political commentary, product recommendations. These users had used like counts as a quick quality heuristic, and removing them created genuine epistemic uncertainty about content credibility. The social proof that like counts provided was not purely harmful; it served real functions even when those functions were epistemically imperfect.
Creator Economy Resistance
The most organized and politically potent opposition to the like count removal came from the influencer and creator economy. For creators who relied on like counts as the primary metric for assessing the performance of their content and for demonstrating value to brand partners, the like count removal was an economic threat. Without visible like counts, creators had reduced ability to benchmark their performance against other creators, to demonstrate campaign performance to advertisers, and to self-assess which types of content their audiences valued most.
The creator community's objection reveals a dimension of social proof that the mental health research frame had not fully addressed: like counts serve not only as social proof signals for audiences but as performance metrics for creators. Removing the audience-facing social proof signal also removed the creator-facing performance feedback loop. These are distinct functions that happen to be served by the same interface element.
Creator opposition to the like count removal was also partly self-interested in ways that should be acknowledged: creators whose value to brands is measured by engagement metrics have incentives to preserve those metrics regardless of the effects on their audiences. But the objection was not entirely self-interested — some creators genuinely used like counts as feedback mechanisms for understanding what content resonated with their audiences, and the removal of public like counts disrupted this feedback in ways that were not purely about advertising value.
The Substitution Problem
Several studies found that users in test markets partially substituted other social proof signals for the absent like count. Follower counts became more salient as a credibility indicator when like counts were removed; comment counts were more actively examined as proxies for engagement; the Explore page's curation was used more heavily as a signal of content quality. This substitution phenomenon is important because it demonstrates that social proof-seeking behavior does not simply disappear when one signal is removed — users find alternative signals.
The substitution finding suggests that the removal of individual social proof signals may have limited effectiveness in reducing social proof-driven harm if alternative signals remain available and if the underlying social comparison motivation is not addressed. A user whose primary concern is where they stand in the social hierarchy relative to their peers will find ways to make that comparison regardless of which specific metrics are available. The harm may be reduced when the most salient and immediately actionable comparison signal is removed, but it is unlikely to be eliminated.
Analysis: Why Instagram Chose the Opt-In Path
The Multi-Stakeholder Problem
Instagram's decision to implement like count hiding as an opt-in feature rather than a mandatory global change reflected a genuine multi-stakeholder problem. The evidence on mental health benefits was real but not universal — the benefits were concentrated among high-social-comparison users, while neutral or negative effects were experienced by others. The creator economy's opposition was organized and financially significant. Advertisers and brands that relied on like count visibility for campaign assessment had their own stake in the decision. And users themselves were divided, with some expressing strong preferences for restored like counts.
In this context, an opt-in solution has a surface plausibility as a compromise: users who want to hide like counts for mental health reasons can do so; users who want to see like counts for navigational or professional reasons can do so. The individual user's choice is respected.
The problem with this framing is that it applies market-liberal choice logic to a harm that is not symmetrically distributed among users with equal ability to protect themselves. The users most harmed by like count anxiety — adolescents with high social comparison orientation, users experiencing mental health difficulties, users who lack digital literacy about how social proof signals are manufactured — are precisely the users least likely to (a) know that the opt-in feature exists, (b) understand the mechanism by which like counts drive harm, and (c) have the self-awareness and executive function to use a protective feature proactively.
This is a recurring problem in platform harm reduction: voluntary protective features systematically underprotect the most vulnerable users because vulnerability and protective self-advocacy are inversely correlated.
The Economic Logic of Minimal Intervention
A more cynical but not entirely unfair reading of Instagram's decision is that the like count is a core driver of the engagement dynamics that the platform's business model depends on. Like counts drive content creation (creators post more when like counts are visible as performance feedback), drive content consumption (users engage more when like counts signal what is worth engaging with), and drive social comparison anxiety (which itself drives return visits as users check their like counts and monitor their social standing). Removing like counts entirely would remove a significant driver of platform engagement at a moment when Instagram was competing intensively with TikTok for user attention.
The opt-in implementation preserves all of these engagement drivers for the majority of users who do not opt in, while providing a visible, communicable response to mental health critics that allows the company to say it has taken action. This is not necessarily cynical bad faith — it may be the rational corporate response to genuinely conflicting stakeholder interests — but it does illustrate the structural difficulty of expecting platforms to take strong action against features that drive their core business metrics.
Discussion Questions
- The Instagram experiment found mental health benefits concentrated among high-social-comparison users. Does this finding support or undermine the argument for a mandatory global change? Is it appropriate to impose a mandatory change that benefits some users while inconveniencing others?
- The "navigation problem" revealed that like counts serve both harmful (social comparison) and beneficial (content curation) functions simultaneously. How should platform designers respond to features that are simultaneously useful and harmful? Is there a design solution that preserves the navigational function while eliminating the social comparison harm?
- Creator community opposition to like count removal was partly self-interested and partly legitimate. How should platform design decisions weigh creator interests — which are professional and economic — against user wellbeing interests — which are psychological and developmental? Are these interests always in conflict?
- The substitution finding suggests that removing one social proof signal may cause users to rely more heavily on alternative social proof signals. Does this finding argue for removing all social proof signals simultaneously rather than one at a time? Or does it argue that individual signal removal is ineffective and that underlying social comparison motivation should be addressed instead?
- Instagram chose an opt-in rather than opt-out implementation, describing this as respecting user choice. Apply the concept of choice architecture from behavioral economics to evaluate this decision: is opt-in or opt-out the appropriate default for a feature designed to reduce psychological harm, and why?
What This Means for Users
The Instagram like count experiment provides several practical lessons for users navigating social media platforms with social proof signals:
The feature exists — use it if you want to: Instagram's opt-in like count hiding is available to users who want to use it. If you find that checking like counts on your own or others' posts produces feelings of comparison, inadequacy, or anxiety, the feature is worth trying. The research suggests it helps most for users who actively experience like count-related distress.
Notice your own signal substitution: If you hide like counts on your feed but find yourself paying more attention to follower counts or comment volumes, you have observed the substitution phenomenon in yourself. This is worth noticing: social comparison motivation, not the specific signal, is the underlying driver. The signal removal may help, but addressing the comparison motivation directly — through awareness, through deliberate practice of comparing yourself to your past self rather than others, through limiting time on comparison-triggering content — may be more effective.
For parents and educators: The research consistently finds that adolescents with high social comparison orientation are most harmed by visible like counts and least likely to proactively use protective features. If you are a parent or educator working with teenagers, it is worth explicitly discussing the like count hiding feature and the mechanisms by which like counts drive social comparison. Awareness of the mechanism provides some protective distance from its effects.
The number does not mean what it seems: Even visible like counts that have not been explicitly purchased have been shaped by algorithmic amplification, the Muchnik-effect cascade dynamic, and various forms of engagement gaming. A count of 47,000 likes on a post you encounter is not evidence of 47,000 independent, genuine endorsements. Understanding this changes what the number means as a signal of quality or credibility — it becomes, at best, a noisy proxy for popularity, not a measure of worth.
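The cascade dynamic referenced above (after Muchnik and colleagues' 2013 social influence experiment) can be illustrated with a toy simulation. This is a deliberately simplified herding model whose probabilities and herding weight are invented for illustration, not a reconstruction of the original study or of Instagram's ranking systems: each viewer's chance of liking a post rises with the count already on display, so a modest artificial seed of early likes compounds into a lead over an otherwise identical post.

```python
import math
import random

def simulate_likes(seed_likes, viewers=10_000, base_p=0.02, herd_w=0.08):
    """Toy herding model (illustrative parameters): each viewer likes
    with a probability that rises with the like count already on
    display, capped at 0.5."""
    rng = random.Random(42)  # same sequence of viewer draws for every run
    likes = seed_likes
    for _ in range(viewers):
        p = min(0.5, base_p + herd_w * math.log1p(likes) / math.log1p(viewers))
        if rng.random() < p:
            likes += 1
    return likes

# Identical content shown to identical viewers; only the starting
# (possibly manufactured) like count differs.
organic = simulate_likes(seed_likes=0)
boosted = simulate_likes(seed_likes=30)
print(organic, boosted)  # the seeded post always finishes ahead
```

Because both runs reuse the same sequence of random viewer draws, the seeded post can never fall behind the organic one; any viewer who likes the low-count post would also like the high-count post. The point is qualitative, not quantitative: early visible social proof partly manufactures later social proof, which is why a displayed count overstates the number of independent endorsements behind it.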