Case Study 19.1: PolitiFact's Truth-O-Meter — Methodology, Critiques, and Impact
Overview
PolitiFact, launched in August 2007 by the Tampa Bay Times (then the St. Petersburg Times), introduced the Truth-O-Meter to American political journalism as a memorable, shareable system for rating the accuracy of political claims. What began as a feature within a regional newspaper became one of the most recognized institutions in American political life — and one of the most contested. This case study examines how the Truth-O-Meter works, documents representative examples of its application, analyzes the systematic critiques it has attracted, and assesses its influence on the fact-checking field and on political behavior.
Background: The Emergence of PolitiFact
The political environment of 2007 was marked by deep frustration with accuracy in political discourse. The preceding years had seen high-profile cases of political misinformation with serious consequences — claims about weapons of mass destruction in Iraq, disputes about the accuracy of claims made during the 2000 and 2004 presidential campaigns, and increasingly heated controversy about accuracy in political advertising. Conventional political journalism offered readers fact-checks embedded within campaign coverage stories, but these lacked systematic methodology, transparency about evidence, and a memorable summary format.
Bill Adair, then bureau chief of the Tampa Bay Times' Washington bureau, conceived PolitiFact as a dedicated fact-checking operation that would apply systematic methodology to political claims and present findings in a format accessible to general audiences. The Truth-O-Meter was central to this design from the beginning: Adair recognized that readers needed a quick, memorable summary they could share and reference, not just a long narrative they might not read to its conclusion.
PolitiFact's Pulitzer Prize for National Reporting in 2009, awarded for its fact-checking coverage of the 2008 presidential election campaign, conferred institutional legitimacy and attracted national attention. The organization subsequently franchised its model to partner organizations in multiple U.S. states and expanded its national coverage substantially.
The Truth-O-Meter: How It Works
The Six Categories
The Truth-O-Meter employs six categories arranged along a continuum from most accurate to least:
True: The statement is accurate and there is nothing significant missing. This rating is reserved for claims that are entirely accurate in all their components and that do not involve meaningful misleading omissions. True ratings require that the claim be correct in its literal meaning, appropriately contextualized, and without significant misleading framing.
Mostly True: The statement is accurate but needs clarification or additional information. A "Mostly True" rating indicates that the core of the claim is correct but that it is missing important context, uses slightly imprecise figures, or would be more accurately stated in a qualified form.
Half True: The statement is partially accurate but leaves out important details or takes things out of context. A "Half True" rating indicates that the claim contains both accurate and inaccurate elements, or that the technically accurate statement creates a misleading impression.
Mostly False: The statement contains an element of truth but ignores critical facts that would give a different impression. A "Mostly False" rating indicates that the claim's fundamental assertion is not supported by evidence, even if some peripheral element is accurate.
False: The statement is not accurate. A "False" rating indicates that the claim's core assertion is incorrect and that the speaker cannot reasonably claim they were mistaken.
Pants on Fire: The statement is not accurate and makes a ridiculous claim. The "Pants on Fire" label — derived from the playground taunt "Liar, liar, pants on fire" — is reserved for claims that are not only false but additionally outrageous or absurd in character.
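For readers who analyze fact-check data, the six ratings are best treated as an ordinal scale rather than six independent labels. A minimal modeling sketch in Python (the class name and numeric values are illustrative assumptions, not an official PolitiFact encoding):

```python
from enum import IntEnum

class TruthOMeter(IntEnum):
    """PolitiFact's six ratings as an ordered scale.

    Illustrative sketch: the name and the integer values are
    assumptions for analysis purposes, not PolitiFact's own encoding.
    """
    PANTS_ON_FIRE = 0
    FALSE = 1
    MOSTLY_FALSE = 2
    HALF_TRUE = 3
    MOSTLY_TRUE = 4
    TRUE = 5

# IntEnum makes ordinal comparisons along the continuum natural,
# e.g. when sorting or aggregating ratings in an analysis.
assert TruthOMeter.MOSTLY_FALSE < TruthOMeter.HALF_TRUE
assert len(TruthOMeter) == 6
```

Treating the scale as ordinal matters: nothing guarantees that the distance between adjacent categories is uniform, so averaging the integer values is a modeling choice, not a given.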
The Editorial Process
The Truth-O-Meter rating is not assigned by a single journalist unilaterally. PolitiFact's published methodology describes a multi-step editorial process:
- A staff journalist identifies and researches a claim, gathering evidence from primary sources, expert interviews, and documentary materials.
- The staff journalist writes a draft fact-check and proposes a rating.
- An editor reviews the draft and the proposed rating, discussing with the journalist and potentially requesting additional research.
- The final rating is determined collaboratively between the journalist and editor, with senior editors involved for consequential decisions.
- The fact-check is published with all source links and the journalist's byline.
PolitiFact has acknowledged that rating decisions are not purely mechanical — the same evidence base might support adjacent ratings depending on editorial judgment about how to weigh context, omission, and framing. This acknowledgment of subjectivity is, in one sense, honest about the nature of fact-checking; in another sense, it is the feature that critics most often seize upon.
Documented Examples
Example 1: The Affordable Care Act and Plan Retention
In December 2013, PolitiFact named as its "Lie of the Year" a claim President Barack Obama had repeated throughout the ACA debate: "If you like your health care plan, you can keep it."
The fact-checking history of this claim is illustrative of how ratings can evolve as additional evidence emerges. PolitiFact had rated the claim "Half True" multiple times between 2008 and 2012, based on the reasoning that most people would be able to keep their plans, that the exceptions involved noncompliant plans, and that the ACA was designed to improve rather than displace existing coverage. When the ACA's implementation in 2013 revealed that several million Americans with individually purchased insurance plans received cancellation notices, PolitiFact revised its rating to "False" and ultimately selected the claim as its "Lie of the Year."
The choice was immediately contested. Some critics argued that Obama's claim was a reasonable simplification rather than a deliberate lie. Others argued that "Lie of the Year" carried a moral accusation (intentional deception) that "False" does not. Supporters of the original "Lie of the Year" designation argued that the claim had been made repeatedly with full knowledge of the exceptions, which rose to the level of intentional misleading.
This case illustrates several features of Truth-O-Meter methodology: the willingness to update ratings as evidence changes; the subjective judgment involved in distinguishing "Half True" from "False"; and the difficulty of specifying criteria for "Lie of the Year" that avoid the accusation of political motivation.
Example 2: Crime Statistics
Claims about crime rates have been among the most commonly fact-checked categories, and the Truth-O-Meter's application to them illustrates the "missing context" problem.
In 2015, claims circulated that crime in the United States was at a decades-long high, attributing the trend to various policy failures. PolitiFact's fact-checks rated such claims "False" or "Mostly False," documenting that FBI crime statistics showed violent crime rates at multi-decade lows, while noting that some cities had experienced recent upticks and that homicide rates showed more complex patterns than the simple "all-time high" narrative.
The same underlying data produced different fact-check verdicts depending on the specificity of the claim being checked. A broad claim that "crime is at an all-time high" received a "False" rating. A more specific claim that "homicide rates increased in several major cities in 2015" received a "Mostly True" rating. This demonstrates how the Truth-O-Meter must be applied to specific claim formulations — changing a single word can change the appropriate rating — and why claim characterization in the setup paragraph of a fact-check is consequential.
Example 3: Economic Claims and Complex Data
Economic claims frequently receive "Half True" or "Mostly True" ratings that reflect genuine data complexity rather than deliberate deception. A common example involves claims about job creation. When politicians claim credit for job creation numbers, fact-checkers must address multiple layers: Are the figures accurate? Does the trend predate the politician's policies? Do the policies plausibly have the effects claimed? Are part-time jobs included? What time period is cited?
A fact-check of the claim "We created 4 million jobs under my administration" might yield a "Mostly True" rating if the figure is accurate but the time period was chosen to show the administration in the best light, or if the figure includes part-time jobs when full-time job creation was more modest. The Truth-O-Meter's "Half True" and "Mostly True" categories are doing significant work in these cases — communicating a judgment that requires reading the full narrative to understand.
Systematic Critiques
The Selection Bias Critique
The most empirically tractable criticism of PolitiFact is that its claim selection is systematically biased, leading to unrepresentative coverage that makes one party appear less accurate than the other even if both parties' politicians make similar numbers of false statements.
Several academic analyses of PolitiFact's rating database have found that Republican politicians receive "False" or "Pants on Fire" ratings at higher rates than Democratic politicians. PolitiFact and its defenders have offered two responses: first, that the selection reflects checking claims by prominence and significance rather than by party, and that Republican politicians during the periods studied made more check-worthy factual claims; second, that the rating differences reflect genuine differences in accuracy, not selection bias. Critics respond that it is methodologically very difficult to distinguish these explanations without data on the full population of potential claims from which PolitiFact selected — data that doesn't exist.
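The identification problem can be illustrated with a toy simulation: if a checker samples dubious-sounding claims from one party more aggressively, the observed share of "False" ratings diverges by party even when the underlying false-claim rate is identical. Every number below is hypothetical, chosen only to make the mechanism visible.

```python
import random

random.seed(0)

FALSE_RATE = 0.30  # hypothetical: BOTH parties' claims are false 30% of the time

def observed_false_share(p_check_false, p_check_true, n_claims=100_000):
    """Share of *checked* claims that are false, under a selection rule
    that may sample false-sounding claims more aggressively than true ones."""
    checked = false_checked = 0
    for _ in range(n_claims):
        is_false = random.random() < FALSE_RATE
        p_select = p_check_false if is_false else p_check_true
        if random.random() < p_select:
            checked += 1
            false_checked += is_false
    return false_checked / checked

# Party A: its false claims are six times as likely to be checked as its true ones.
share_a = observed_false_share(p_check_false=0.6, p_check_true=0.1)
# Party B: its false claims are only twice as likely to be checked.
share_b = observed_false_share(p_check_false=0.2, p_check_true=0.1)

# Identical underlying accuracy, very different observed rating mixes.
print(f"Party A false share among checked claims: {share_a:.2f}")  # ~0.72
print(f"Party B false share among checked claims: {share_b:.2f}")  # ~0.46
```

Without data on the full population of claims each party made, the observed disparity cannot distinguish biased selection from a genuine accuracy gap, which is precisely the critics' point.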
A related methodological critique, advanced by critics such as the journalist Mark Hemingway, argues that PolitiFact sometimes selects for checking claims that are technically true but that PolitiFact finds objectionable in framing or implication, then uses the "Half True" or "Mostly False" ratings to penalize the framing rather than a genuine factual error. This critique is difficult to evaluate systematically because the distinction between penalizing false framing and penalizing misleading true claims is precisely the kind of judgment call that lies at the heart of fact-checking.
The "Objectivity Norm" Critique
A different critique, articulated from the left rather than the right, argues that the Truth-O-Meter's commitment to appearing nonpartisan leads to false equivalence — rating claims from both parties at similar rates even when one party makes demonstrably more false claims. The "both sides" norm in journalism, critics argue, pressures fact-checkers to find comparable numbers of false statements from each party, creating artificial balance.
This critique is essentially the inverse of the selection bias critique from the right: where conservatives argue that fact-checkers are biased against them, progressives argue that fact-checkers are insufficiently rigorous in holding conservatives accountable because they are committed to appearing balanced. The symmetry of these critiques itself tells us something: it reflects the hostile media effect in action, with partisans on both sides perceiving the same institution as biased against their side.
The Subjectivity Critique
Perhaps the deepest methodological critique is that the Truth-O-Meter's rating decisions are irreducibly subjective in ways that the methodology's procedural apparatus (multi-step editorial review, documented sources, published reasoning) cannot fully address. The same evidence base can legitimately support adjacent ratings — "Half True" versus "Mostly False," for instance — depending on how the evaluator weights different considerations. Two rigorous, good-faith fact-checkers reviewing the same claim can reasonably reach different Truth-O-Meter ratings.
Research by Uscinski and Butler (2013) asked subjects to apply PolitiFact's stated criteria to sample claims and found substantial inter-rater disagreement, suggesting that the criteria, as specified, do not uniquely determine ratings. This finding does not necessarily impugn PolitiFact's ratings — it may reflect the inherent complexity of fact-checking rather than sloppiness in PolitiFact's methodology — but it does challenge any claim that Truth-O-Meter ratings are the uniquely correct answer that a transparent verification process produces.
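Inter-rater disagreement of this kind is conventionally quantified with chance-corrected agreement statistics such as Cohen's kappa. A self-contained sketch using invented ratings (the two rater lists are hypothetical, not data from the study):

```python
from collections import Counter

# Hypothetical ratings from two fact-checkers applying the same
# criteria to ten sample claims (invented for illustration).
rater_1 = ["True", "Half True", "False", "Mostly True", "Half True",
           "False", "Mostly False", "True", "Half True", "Mostly False"]
rater_2 = ["True", "Mostly True", "False", "Mostly True", "Mostly False",
           "False", "Half True", "Mostly True", "Half True", "False"]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    p_chance = sum(counts_a[c] * counts_b[c] for c in set(a) | set(b)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

kappa = cohens_kappa(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.38 here: only modest agreement
```

A kappa near 1 would mean the criteria pin down ratings almost uniquely; values in the 0.2-0.4 range, as in this invented example, are the signature of criteria that leave substantial room for judgment. Because the Truth-O-Meter scale is ordinal, a weighted kappa that penalizes "True" versus "False" disagreements more heavily than adjacent-category splits would be the more faithful statistic; plain kappa is shown for brevity.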
The Narrative vs. Label Problem
PolitiFact's methodology requires readers to read the full fact-check narrative to understand the evidence behind a rating. But research on media consumption suggests that most users who encounter fact-check labels do not read the full narrative. They see the label — "Pants on Fire" — and draw conclusions from the label alone, without engaging with the evidentiary reasoning. This creates a gap between the full, epistemically defensible fact-check and the simplified communication that actually reaches most audiences.
FactCheck.org's decision not to use a rating label reflects a different judgment about this trade-off: better to forgo the communicative efficiency of labels and ensure that readers engage with the full evidence base, even if this means fewer people will process the fact-check. Neither approach is obviously superior, and the choice reflects different theories about how fact-checking should function within the information ecosystem.
The Partisan Perception Problem
Studies of how partisans respond to PolitiFact consistently find that partisan identity moderates perceived credibility. Participants presented with PolitiFact ratings favorable to their party's politicians rate PolitiFact as more credible and more accurate than participants presented with PolitiFact ratings unfavorable to their party's politicians — even when the same participants are shown the same underlying evidence.
This finding, robust across multiple studies, creates a structural challenge: PolitiFact's credibility with any given partisan audience depends partly on the distribution of its recent ratings. If PolitiFact rates multiple prominent figures from one party as "False" in a given period, supporters of that party will tend to discount subsequent PolitiFact ratings; if it rates prominent figures from the other party unfavorably, that party's supporters will discount its ratings in turn. The credibility PolitiFact needs to be effective is thus partly constituted by the very ratings it produces — a circular problem inherent to the institution.
Impact on Political Behavior
PolitiFact's impact on politician behavior is difficult to measure directly, but several lines of evidence are suggestive. Journalists and political operatives have noted that the existence of PolitiFact and similar organizations has made political campaigns more attentive to the verifiability of specific factual claims. Research comparing U.S. states where PolitiFact's franchise operations are active with states where they are not suggests some deterrence effect. And the introduction of the "Bottomless Pinocchio" designation by the Washington Post represents an institutional response to the problem of fact-checked claims that continue to be repeated — acknowledging that fact-checking deters some repetition but not all.
The most consequential impact may be on political journalism more broadly. PolitiFact's success with the Truth-O-Meter model prompted many news organizations to add fact-checking functions, created a genre of political journalism that holds individual claims accountable rather than focusing solely on narratives, and established the norm that specific factual assertions in political discourse are appropriate subjects for systematic public verification.
Conclusions
PolitiFact's Truth-O-Meter is simultaneously a genuine methodological innovation and a permanent target of methodological criticism. The innovation lies in creating a memorable, systematic, transparent approach to political fact-checking that became broadly influential — the Truth-O-Meter model has been adapted by dozens of organizations worldwide. The permanent criticism reflects facts about political fact-checking that cannot be engineered away: the claim selection problem cannot be solved without resources to check all claims; the subjectivity of adjacent-category judgment calls cannot be eliminated by procedure; and the partisan perception problem is structural, not a product of PolitiFact's specific decisions.
A sophisticated consumer of PolitiFact's work understands these limitations while still finding substantial value in the organization's output: the documented evidence behind each fact-check, the transparency about sources and reasoning, the systematic attention to accuracy in political discourse, and the public record of which specific claims have been assessed and found wanting. The Truth-O-Meter is not an oracle — it is a useful institution with identified limitations, best used alongside other information sources rather than as a sole authority.
Discussion Questions
- PolitiFact evolved from a "Half True" rating of Obama's "keep your plan" statement to selecting it as "Lie of the Year." What does this evolution suggest about how truth-values of political claims can change as evidence accumulates? Is this appropriate methodological flexibility or inconsistency?
- The partisan perception problem means PolitiFact is perceived as biased by partisans on both sides. What institutional design changes could reduce this perception problem without abandoning the commitment to nonpartisanship?
- Would fact-checking be more effective if the Truth-O-Meter used only three categories (True, Mixed, False) rather than six? What would be gained and lost?
- PolitiFact designates a "Lie of the Year." What criteria should determine this designation, and how does moral language ("Lie") interact with the epistemically neutral language ("False") of routine fact-checking?
- If research shows that most users who see a Truth-O-Meter label do not read the accompanying narrative, what are the implications for PolitiFact's methodology and communication strategy?