Case Study 4.2: Dunning-Kruger and COVID-19 Misinformation — Who Spreads It and Why

Overview

The COVID-19 pandemic, which emerged in late 2019 and dominated global public health for the following years, produced an unprecedented volume of misinformation — a phenomenon the World Health Organization termed the "infodemic." Claims about the virus's origins, prevention, treatment, severity, and the nature of public health responses circulated at massive scale across social media platforms, often faster than health authorities could respond. This case study examines the role of the Dunning-Kruger effect and related calibration failures in COVID-19 misinformation production and spread, while also attending to the other cognitive and structural factors that amplified the infodemic.


The COVID-19 Infodemic: Scale and Character

The Information Environment

The World Health Organization declared COVID-19 a global health emergency in January 2020 and formally characterized the simultaneous outbreak of misinformation as an "infodemic" — an overabundance of information, accurate and otherwise, that makes it difficult to identify trustworthy health guidance. The scale of COVID-19 misinformation was genuinely unprecedented:

In early 2020, researchers at the Reuters Institute found that 88% of sampled COVID-19 misinformation claims circulated on social media platforms. A systematic review by Islam et al. (2020) identified 2,311 reports of COVID-19 misinformation in 25 languages across 87 countries in the first three months of the pandemic. And earlier MIT research on Twitter had documented that false news reaches audiences up to six times faster than accurate news, a dynamic that repeated itself with COVID-related content.

The character of COVID-19 misinformation was varied:

  • Treatment claims: false cures ranging from hydroxychloroquine and ivermectin to injecting disinfectant
  • Prevention myths: unmasked social distancing; vitamin D/C megadosing; garlic
  • Origin theories: laboratory escape, bioweapon claims, 5G towers
  • Vaccine misinformation: mRNA-based genetic modification, tracking chips, infertility claims
  • Severity denial: "it's just the flu," herd immunity without vaccination, inflated death toll claims
  • Statistical misinformation: misrepresentation of case fatality rates, recovery statistics, vaccine efficacy
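The last category, statistical misinformation, often turned on denominators: conflating the case fatality rate (deaths divided by confirmed cases) with the infection fatality rate (deaths divided by all infections, detected or not). A toy calculation with hypothetical numbers shows how the same outbreak can be spun as "5% deadly" or "0.5% deadly":

```python
# Toy illustration with hypothetical numbers: the same outbreak yields very
# different "fatality rates" depending on the denominator used.

def case_fatality_rate(deaths, confirmed_cases):
    """Deaths divided by *confirmed* cases; inflated when testing is scarce."""
    return deaths / confirmed_cases

def infection_fatality_rate(deaths, total_infections):
    """Deaths divided by *all* infections, including undetected mild cases."""
    return deaths / total_infections

deaths = 500
confirmed = 10_000               # cases found by (limited) testing
estimated_infections = 100_000   # true infections, mostly mild or undetected

cfr = case_fatality_rate(deaths, confirmed)
ifr = infection_fatality_rate(deaths, estimated_infections)
print(f"CFR: {cfr:.1%}, IFR: {ifr:.1%}")  # CFR: 5.0%, IFR: 0.5%
```

Quoting whichever number suits the argument, without naming the denominator, is a standard move in statistical misrepresentation.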


The Dunning-Kruger Dynamic in COVID Misinformation

Who Spreads COVID Misinformation?

Several empirical studies have examined the relationship between COVID-19 knowledge, confidence, and willingness to share misinformation during the pandemic.

Roozenbeek et al. (2020) surveyed 9,816 participants across the United States and United Kingdom, measuring actual COVID-19 knowledge (factual accuracy), self-assessed knowledge, and susceptibility to COVID misinformation. Their findings were consistent with a Dunning-Kruger pattern:

  • Individuals who scored lower on objective COVID knowledge measures were more likely to rate false COVID claims as accurate
  • These same low-knowledge individuals were not systematically less confident in their health information assessments
  • The gap between objective knowledge and self-assessed competence was largest among low-knowledge individuals — they overestimated their ability to evaluate COVID health claims
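That third finding, the gap between objective and self-assessed competence widening at the low end, is the signature Kruger-Dunning quartile analysis. It can be sketched with synthetic data (illustrative only, not the study's actual numbers):

```python
import random
import statistics

# Synthetic sketch of the classic Kruger-Dunning quartile analysis
# (illustrative data, NOT the Roozenbeek et al. survey): each participant has
# an objective knowledge percentile and a self-estimate; self-estimates
# cluster near "above average" regardless of actual skill.
random.seed(1)

participants = []
for _ in range(1000):
    actual = random.uniform(0, 100)                     # objective percentile
    noise = random.gauss(0, 10)
    perceived = min(100.0, max(0.0, 65 + 0.2 * (actual - 50) + noise))
    participants.append((actual, perceived))

participants.sort(key=lambda p: p[0])                   # sort by actual skill

for q in range(4):                                      # bottom to top quartile
    group = participants[q * 250:(q + 1) * 250]
    actual_mean = statistics.mean(a for a, _ in group)
    perceived_mean = statistics.mean(p for _, p in group)
    gap = perceived_mean - actual_mean                  # > 0 means overestimation
    print(f"Q{q + 1}: actual={actual_mean:5.1f} perceived={perceived_mean:5.1f} gap={gap:+6.1f}")
```

The gap column reproduces the qualitative pattern: large overestimation in the bottom quartile, shrinking to mild underestimation at the top.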

Plohl and Musil (2021) examined the relationship between analytical thinking, COVID conspiracy beliefs, and self-rated COVID knowledge. They found that lower analytical thinking predicted greater conspiracy belief, but that this relationship was partially mediated by overconfidence — individuals with lower analytical skills were more confident in their (incorrect) assessments than their accuracy warranted.

Allington et al. (2021) studied social media use and health misinformation beliefs in the UK during the pandemic. They found that individuals who reported spending more time on social media — and who tended to rely on social media as a primary news source — showed greater belief in health misinformation. Critically, this group also showed higher subjective confidence in their COVID information — a pattern consistent with the Dunning-Kruger prediction that those with less reliable information sources may be less aware of the limits of their knowledge.

The "Research" Pattern in COVID Misinformation Sharing

One of the most consequential applications of the Dunning-Kruger framework to COVID misinformation involves what researchers have called the "I did my own research" phenomenon. Social media platforms — particularly Facebook groups, YouTube comment sections, and Twitter threads — saw an explosion of individuals presenting themselves as having independently investigated COVID topics and arrived at conclusions differing from official public health guidance.

The Dunning-Kruger effect predicts exactly this pattern: individuals who lack the domain expertise to recognize the limitations of their information-gathering and reasoning appear, to themselves, to have conducted rigorous research. They lack the background knowledge necessary to:

  1. Identify cherry-picked or statistically misrepresented data
  2. Evaluate the quality of non-peer-reviewed sources
  3. Understand the significance of sample sizes and confidence intervals
  4. Recognize the difference between preprints and peer-reviewed findings
  5. Contextualize findings within the broader body of research

Without these skills, the subjective experience of having "looked into it" feels identical to the experience of genuine careful investigation. The confident self-presentation that results is not deliberate deception but a predictable consequence of metacognitive incompetence — not knowing what one doesn't know.
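The third skill above, reading sample sizes and confidence intervals, is the kind of background that changes how a single study reads. A rough sketch using the standard normal-approximation formula (the observed proportion and sample sizes are hypothetical):

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """Approximate 95% confidence-interval half-width for an observed
    proportion p measured on n subjects (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A claim like "60% of patients improved" means very different things
# depending on how many patients were behind it:
p_observed = 0.60
for n in (20, 200, 2000):
    hw = ci_halfwidth(p_observed, n)
    print(f"n={n:4d}: 60% give or take {hw:.1%}")
```

With n=20 the interval spans roughly plus-or-minus 21 percentage points, which is compatible with the treatment doing almost nothing; a reader without this skill sees only "60% improved."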

Domain-Specific Dunning-Kruger Effects

Research by Pennycook and Rand (2020) demonstrated that overconfidence in one's ability to evaluate health information is not simply a function of general cognitive ability but is domain-specific. Someone highly capable and well-calibrated in their professional domain (say, an accountant or an engineer) may show Dunning-Kruger patterns in the unfamiliar domain of epidemiology or virology. This helps explain why COVID misinformation was not spread exclusively by low-education individuals — it was also spread by educated professionals who were highly confident in their unfamiliar-domain assessments.

A particularly consequential example involved physicians and other healthcare professionals who shared COVID misinformation. Surveys found that medical professionals in specialties unrelated to infectious disease or public health were sometimes as susceptible to COVID misinformation as non-professionals, while being more likely to be perceived as credible sources. The formal credentials of an orthopedic surgeon do not confer expertise in epidemiology, but both the surgeon and their audience may overestimate the generalizability of domain expertise.


Other Cognitive Biases Contributing to the Infodemic

Availability Heuristic

In the early pandemic, the COVID-19 death toll was made highly available by wall-to-wall media coverage, producing availability-driven overestimates of personal risk, with consequences for compliance behavior. Later, as the acute crisis phase passed and other news displaced COVID coverage, deaths became less available, potentially producing underestimates of ongoing risk. The availability heuristic thus contributed to widely varying risk perceptions that tracked media coverage cycles rather than actual mortality trends.

Proportionality Bias

The COVID-19 pandemic was one of the most consequential global events in a generation. Proportionality bias predicts that people will expect such a major event to have a correspondingly major, intentional, organized cause. The persistent conspiracy theories about COVID-19 origins, from laboratory engineering to bioweapon deployment, are consistent with this prediction. Natural zoonotic spillover, while scientifically well supported as a mechanism, does not feel "large" in the proportionality sense; a laboratory accident or deliberate bioweapon release supplies a cause that feels proportionate to a global pandemic that killed millions.

Importantly, proportionality bias does not resolve the scientific question of COVID-19's origin — that remains subject to ongoing investigation. But it does predict that conspiracy explanations will be psychologically appealing regardless of the evidential status of the natural-origin hypothesis.

Confirmation Bias and Echo Chambers

COVID-19 misinformation spread through social networks that were already structured by existing identities and beliefs. In the United States, COVID skepticism — about mask effectiveness, vaccine safety, the severity of the virus — became rapidly aligned with political identity. Once this alignment occurred, confirmation bias ensured that individuals preferentially sought and accepted information consistent with their group's position, while discounting contradictory evidence from official sources.

Analyses of platform data showed that COVID-19 health misinformation clustered in Facebook groups and Twitter communities defined by other anti-establishment interests: vaccine skepticism, alternative medicine, political conspiracy communities. New members of these communities encountered concentrated COVID misinformation, which confirmation bias then made them more likely to accept and share.

In-Group/Out-Group Dynamics and Source Credibility

The WHO, CDC, NIH, and major health systems were perceived by significant portions of the population as out-group authorities — institutions associated with cultural and political positions that conflicted with their identity. The perceived credibility of these institutions collapsed for segments of the population during the pandemic, partly through motivated reasoning and identity-protective cognition.

Meanwhile, in-group authorities — community leaders, alternative health influencers, politically aligned media figures — gained credibility as COVID information sources for these populations, even when their claims directly contradicted scientific consensus. This pattern directly reflects Kahan's cultural cognition framework: factual beliefs about COVID aligned with cultural identity dimensions, not with access to accurate information.


The Problem of Well-Credentialed COVID Misinformation

The "Great Barrington Declaration" Case

In October 2020, a small group of epidemiologists and public health researchers published the "Great Barrington Declaration," which argued for a "focused protection" approach to COVID-19 — allowing the virus to spread through lower-risk populations while protecting the elderly and vulnerable — as an alternative to broad lockdowns. The declaration gathered hundreds of thousands of signatures, including many claiming to be medical professionals.

The declaration was subsequently criticized by the majority of the epidemiological and public health scientific community for misrepresenting herd immunity dynamics, overestimating the ability to protect vulnerable populations in practice, and understating long COVID and other non-mortality consequences. It was rejected by the WHO and major public health agencies.

From a Dunning-Kruger perspective, the Great Barrington Declaration represents a more complex case than simple lay misinformation: it was produced by genuine epidemiologists and attracted signatures from medical professionals. However, it also illustrates overconfidence in a specific analytical approach (herd immunity modeling) at the expense of broader public health expertise about implementation challenges.

The case also illustrates what researchers have called epistemic privilege claims: the use of credentials to claim authority in adjacent but distinct domains. Signatories who were physicians in non-public health specialties claimed epidemiological expertise that their training did not specifically confer.


Implications for Public Health Communication

Acknowledging Uncertainty Without Creating False Equivalence

One of the distinctive challenges of COVID-19 communication was genuine scientific uncertainty, particularly in the early months. The science was evolving rapidly: mask guidance changed, models of transmission were updated, vaccine efficacy data accumulated progressively. Misinformation producers exploited this, treating any acknowledged uncertainty as evidence that mainstream scientific guidance was no more reliable than alternative claims.

Effective communication of genuine uncertainty without creating false equivalence requires:

  • Clearly distinguishing between areas of high confidence (COVID-19 is real and deadly; vaccines are effective) and areas of genuine uncertainty (precise timing of variant emergence; long-term immunity duration)
  • Explaining why scientific consensus changes with new evidence (as a sign of scientific health, not unreliability)
  • Providing probabilistic, calibrated estimates rather than false certainty

Reaching Overconfident Information Consumers

The Dunning-Kruger dynamics in COVID misinformation spread suggest that conventional health communication — providing accurate information to correct false beliefs — faces a fundamental obstacle: individuals who are most confidently engaged in spreading misinformation are least likely to recognize their need for updated information. They feel they already know.

Strategies that may be more effective:

Inoculation/prebunking: Building general resistance to misinformation manipulation techniques, rather than correcting specific false claims, may be more effective for individuals whose overconfidence makes them unreceptive to corrections.

Motivational interviewing approaches: Communication techniques that begin by acknowledging the person's concerns and exploring their reasoning, rather than immediately contradicting their views, reduce defensive reactivity and may open more space for accurate information.

Credentialing that matches the domain: When corrections come from sources who are specifically credentialed in the relevant domain (virologists correcting virology claims, epidemiologists correcting epidemiology claims), they are more likely to be respected by individuals whose overconfidence is partly based on perceived expertise.

Calibration-building activities: Directly addressing metacognitive calibration — building accurate self-assessment of the limits of one's knowledge — may be more effective than domain-specific corrections for reducing Dunning-Kruger-driven sharing of misinformation.
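One such calibration-building exercise can be sketched concretely: elicit per-item confidence alongside answers, then report the gap between mean confidence and actual accuracy. The quiz data and function name below are illustrative, not drawn from any cited study:

```python
# Sketch of a calibration-feedback exercise (hypothetical data): a person
# answers true/false items AND states confidence in each answer, then sees
# how far their mean confidence exceeds their actual accuracy.

def calibration_gap(responses):
    """responses: list of (confidence in [0.5, 1.0], was_correct).
    Returns mean confidence minus accuracy; > 0 indicates overconfidence."""
    mean_conf = sum(c for c, _ in responses) / len(responses)
    accuracy = sum(1 for _, ok in responses if ok) / len(responses)
    return mean_conf - accuracy

# Hypothetical quiz: high stated confidence (~0.88) but only 4 of 8 correct.
quiz = [(0.9, True), (0.9, False), (0.8, False), (0.95, True),
        (0.85, False), (0.9, True), (0.8, False), (0.9, True)]
print(f"overconfidence: {calibration_gap(quiz):+.2f}")
```

Seeing a concrete positive gap is the feedback mechanism: it targets the metacognitive miscalibration itself rather than any particular false belief.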


Discussion Questions

  1. The COVID-19 pandemic saw both lay individuals and credentialed professionals spreading misinformation. How does the Dunning-Kruger framework apply differently to these two groups? What does each case imply for communication strategy?

  2. COVID-19 public health guidance genuinely changed over the pandemic as scientific evidence accumulated (mask guidance, understanding of transmission, vaccine dosing intervals). How should public health communicators handle genuine uncertainty while preventing it from being exploited to create false equivalence with misinformation?

  3. Social media platforms took unprecedented steps to moderate COVID-19 health misinformation, removing content and labeling false claims. What does the cognitive science in this chapter and Chapter 3 suggest about the effectiveness and potential unintended consequences of these moderation strategies?

  4. The alignment of COVID skepticism with political identity created classic identity-protective cognition dynamics. What strategies — communication, platform design, community-based — might reduce this alignment and make COVID health information less identity-threatening for politically skeptical audiences?

  5. Research on the "infodemic" suggests that having more information available is not sufficient to improve health decision-making — the information environment itself needs to be designed to support good decisions. What principles for information environment design does the cognitive science of this chapter suggest?

  6. The Dunning-Kruger effect predicts that individuals who most need calibration correction are least likely to recognize that need. What institutional mechanisms (outside of individual cognitive improvement) might compensate for this problem in the context of public health communication?


Key Research Referenced

  • Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L., Recchia, G., ... & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199.
  • Islam, M. S., Sarkar, T., Khan, S. H., Mostofa Kamal, A. H., Murshid Hasan, S. M., Kabir, A., ... & Seale, H. (2020). COVID-19-related infodemic and its impact on public health: A global social media analysis. The American Journal of Tropical Medicine and Hygiene, 103(4), 1621–1629.
  • Plohl, N., & Musil, B. (2021). Modeling compliance with COVID-19 prevention guidelines: The critical role of trust in science. Psychology, Health & Medicine, 26(1), 1–12.
  • Allington, D., Duffy, B., Wessely, S., Dhavan, N., & Rubin, J. (2021). Health-protective behaviour, social media usage and conspiracy belief during the COVID-19 public health emergency. Psychological Medicine, 51(10), 1763–1769.
  • Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytical thinking. Journal of Personality, 88(2), 185–200.
  • Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science, 31(7), 770–780.