Case Study 32.1: PolitiFact, Snopes, and the Professionalization of Fact-Checking


Overview

Two organizations define the modern fact-checking landscape for most American readers: PolitiFact, which professionalized the political fact-check in American journalism, and Snopes, which established viral claim verification as a distinct genre of digital journalism. The two emerged from different contexts, developed different methodologies, and serve different functions in the information ecosystem — yet both are routinely cited, criticized, and used as models. Examining them together reveals both the possibilities and the persistent tensions in the professionalization of fact-checking.


Background: Different Origins, Convergent Purpose

PolitiFact was created by St. Petersburg Times (later Tampa Bay Times) journalist Bill Adair and launched in August 2007, during the early days of the 2008 presidential primary season. Its founding premise was direct: political claims were increasingly difficult for citizens to evaluate independently, and a dedicated journalistic operation applying consistent standards could provide a public service by doing so systematically.

The innovation PolitiFact introduced was not the fact-check itself — newspapers had been checking political claims for decades — but the rating scale. The Truth-o-Meter, with its six categories (True, Mostly True, Half True, Mostly False, False, and Pants on Fire), turned the outcome of a fact-check into something shareable, memorable, and visually distinctive. A "Pants on Fire" rating conveyed not only that a claim was false but that it was so obviously false as to be almost satirical. The scale was immediately controversial for precisely this reason: it encoded editorial judgment within a quasi-scientific presentation.

PolitiFact won the Pulitzer Prize for National Reporting in 2009, which significantly legitimized the fact-checking enterprise as serious journalism. The organization subsequently expanded through a state network model, licensing its brand and methodology to local news organizations across the country, and was acquired by the Poynter Institute in 2018. As of 2023, it remains one of the most frequently cited fact-checking organizations in American political discourse.

Snopes has an entirely different origin. Founded in 1994 by David and Barbara Mikkelson as a hobby project to investigate urban legends and folklore, Snopes predates modern political fact-checking by nearly a decade. Its early subjects were chain emails, urban myths, and online rumors — the pre-social-media viral content of the early internet. The Mikkelsons developed a distinctive methodology: find the claim's origin, trace its transmission history, locate the evidence (or absence of evidence) for the core assertion, and provide a narrative explanation that is accessible to non-specialists.

When social media transformed the information environment after 2007, Snopes's skill set became newly relevant. The viral misinformation that flooded Facebook and Twitter was not so different, structurally, from the chain emails Snopes had been debunking for years. By the 2010s, Snopes had become a primary reference for ordinary internet users trying to evaluate viral claims. Its narrative format — more explanatory and less categorical than PolitiFact — proved well-suited to the complexity of internet-era misinformation, which often involved true facts assembled in false contexts.


Comparative Methodology

The methodological differences between PolitiFact and Snopes are instructive.

PolitiFact's focus is on claims made by identifiable political figures in public contexts — speeches, debates, television appearances, official documents. Its claim selection is driven by political salience: it investigates claims made by the most prominent political figures about the most consequential political issues. This focus means PolitiFact rarely covers viral social media misinformation that does not originate with a specific named political figure. Its value is in creating an accountability record for political speech.

Snopes's focus is on viral claims regardless of origin. Its subjects include claims made by political figures, but also anonymous viral posts, misidentified images, false health information, and entertainment myths. Snopes's methodology is less categorical than PolitiFact's — it frequently concludes that a claim is "Mixture" (containing both true and false elements) or "Unproven" (plausible but not documentable) — because the viral claims it investigates often resist simple true/false resolution.

PolitiFact's rating scale creates visibility and shareability but at the cost of precision. A claim rated "Mostly True" and a claim rated "Half True" may be separated by a complex judgment that the rating scale collapses. Snopes's narrative approach preserves more of the complexity but requires more reader engagement to be useful.

Both organizations explicitly commit to the International Fact-Checking Network (IFCN) principles. Both have published corrections. Both have been criticized for specific fact-checks by partisans across the political spectrum. The partisan criticism itself is in some ways a sign of functioning non-partisanship: an organization that attracted criticism from only one side would give more reason for concern.


Controversy and Challenge

Neither organization has escaped significant controversy.

PolitiFact has faced sustained criticism about the consistency of its rating methodology. A 2013 study by George Mason University's Center for Media and Public Affairs found that PolitiFact rated Republican statements as "False" or "Mostly False" at a higher rate than Democratic statements. PolitiFact's defenders argued this reflected an actual asymmetry in the rate of false claims being made during the Obama administration. Critics argued the selection and rating methodology was inconsistent. The debate has never been fully resolved, in part because it is methodologically very difficult to determine whether a rating difference reflects actual differences in claim accuracy or differences in the evaluator's judgment.

More significantly, critics have argued that PolitiFact's rating scale applies categorical judgments to claims that exist on a continuous spectrum, and that the assignment of categories is more art than science. Experienced fact-checkers evaluating the same claim can and do assign different ratings. The scale's apparent precision obscures genuine evaluative ambiguity.

Snopes faced a different type of challenge in 2016–2017. A bitter dispute between co-founders David and Barbara Mikkelson during their divorce, combined with questions about the company's finances and the freelance fact-checker compensation model the organization had adopted, produced a public controversy about organizational governance. For critics looking for reasons to question Snopes's credibility, the internal dispute provided ammunition. The controversy subsided, and Snopes retained its IFCN certification, but it illustrated how quickly institutional credibility can become contested in a polarized information environment.

Both organizations have also grappled with the scale challenge in the social media era. The volume of viral claims requiring evaluation during major political events — elections, public health crises, social movements — far exceeds the capacity of any professional organization. Both have developed partnerships with social media platforms to insert fact-check labels into the feed, with mixed results. The label approach partially addresses the reach problem (people see the label without seeking out the full check) but creates new concerns about content moderation authority.


The Professionalization Question

The emergence of PolitiFact and Snopes, and of fact-checking organizations more broadly, represents a significant professionalization of a function that was previously either invisible (internal newspaper checking) or informal (public skepticism). This professionalization has had identifiable benefits and costs.

Benefits of professionalization: Systematic claim selection ensures that some of the most consequential false claims receive rigorous investigation. The archival record created by organizations like PolitiFact provides a resource for journalists, historians, and researchers. The development of professional standards — IFCN certification, explicit methodology, public corrections policies — creates a basis for accountability and improvement. The mere existence of credentialed fact-checking organizations creates an accountability norm that affects political speech even when no specific claim is being checked.

Costs of professionalization: Professionalization creates institutional identities that become targets for political attack. When a fact-checking organization becomes an institution with a staff, a budget, and a brand, it also becomes something political actors can campaign against. The ability to dismiss "mainstream fact-checkers" as an institutional bloc is partly a product of the professionalization that concentrated fact-checking authority in a small number of recognizable organizations.

Professionalization also concentrates the credentialing function in ways that may not be optimal. The IFCN certification process is valuable but represents a single standard-setting body whose own governance and funding deserve the same scrutiny it applies to others. Poynter, which hosts the IFCN, is itself a journalism education institution with its own institutional relationships and perspectives.


Analysis Questions

1. PolitiFact's Truth-o-Meter and Snopes's narrative format represent different solutions to the communication challenge of conveying fact-check results to a mass audience. What are the specific tradeoffs of each approach? Under what circumstances is each more or less appropriate?

2. The partisan criticism both organizations receive is often cited as evidence of their non-partisanship. Is this reasoning sound? Under what conditions would symmetrical criticism from both sides not be evidence of genuine non-partisanship?

3. The case study describes professionalization as having both benefits and costs. What specifically makes fact-checking as an institutional form — with brands, budgets, and certifications — both more and less effective than individual, informal fact-checking by skilled journalists embedded in specific beats?

4. The challenges Snopes faced in 2016–2017 over organizational governance were largely separate from the quality of its fact-checking. But they affected its credibility with some audiences. What does this suggest about the relationship between institutional credibility and methodological credibility? Can an organization with governance problems still produce good fact-checks?

5. If you were designing a fact-checking organization from scratch, what would you do differently from both PolitiFact and Snopes? How would your design address the volume problem, the partisan credibility problem, and the framing problem?


End of Case Study 32.1