Case Study 26.1: The ODA Fact-Check — Tracing a False Claim Through the Garza-Whitfield Information Environment

Background

OpenDemocracy Analytics (ODA) operates as a non-partisan civic data organization focused on election integrity and information quality in competitive congressional and state legislative races. Executive director Adaeze Nwosu (39) leads a team that includes data journalist Sam Harding (35, they/them), who specializes in applying computational methods to political information tracking.

In October, five weeks before Election Day in the competitive Garza-Whitfield congressional race, ODA's tiplines received a screenshot claiming that Representative Elena Garza had voted to defund local police departments. The claim was false: no such vote existed. But its spread was rapid, reaching an estimated 72,000 unique accounts before ODA's fact-check was published at 4 p.m. on the day it emerged.

This case study examines ODA's complete fact-checking and tracking workflow, the editorial decisions Adaeze made during the process, and what the data revealed about how the false claim spread — and why the correction fell short.


Part I: The Claim Arrives

The screenshot appeared to show a tweet from a verified account associated with the Garza campaign, reading: "I'm proud of my vote to reallocate police funding toward community services. Our district deserves a different kind of public safety." The language was designed to be plausible — it used terminology consistent with progressive policy discourse and was formatted to mimic a real tweet.

Sam's first task was provenance tracing. Three immediate steps:

Step 1: Primary source verification. Sam pulled Garza's full voting record from GovTrack and the Congressional Record for the past three legislative sessions. No vote matching the claim's description appeared. Sam also checked the FEC database to ensure no campaign finance disclosures referenced this policy position.

Step 2: Image forensics. The screenshot showed visual inconsistencies: the font weight in the display name was slightly heavier than the body text, and the verification checkmark was offset by two pixels from its standard position. Sam ran the image through Google Reverse Image Search — no prior appearances. The account handle in the screenshot did not match any verified account in Twitter's search function.

Step 3: Campaign contact. Sam emailed and called the Garza campaign communications director. Within 45 minutes, the campaign confirmed the screenshot was fabricated: the account shown did not exist, and the vote it described had never occurred.
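The image-forensics pass in Step 2 can be partly automated. Reverse-image lookup rests on perceptual hashing, and a minimal sketch of the core idea is a difference hash (dHash) computed over an already-downscaled grayscale pixel grid. This is an illustrative toy, not what Google Reverse Image Search actually runs; a production workflow would use a library such as ImageHash on properly resized images.

```python
def dhash_bits(pixels):
    """Difference hash: one bit per adjacent-pixel pair in each row,
    1 when the left pixel is brighter than its right neighbor.
    `pixels` is a grayscale grid (rows of 0-255 values), assumed
    already downscaled (classically to 9x8 for a 64-bit hash)."""
    return [int(row[i] > row[i + 1])
            for row in pixels
            for i in range(len(row) - 1)]

def hamming_distance(a, b):
    """Count differing bits; a small distance suggests the same
    underlying image despite re-encoding or minor edits."""
    return sum(x != y for x, y in zip(a, b))
```

A fabricated screenshot built from a real tweet template would hash close to the original template; a near-zero distance against a corpus of known screenshots is a lead for an analyst, not proof of fabrication.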

At this point, Sam had sufficient verification for a fact-check. But before proceeding, Adaeze reviewed the tracking data.


Part II: The Spread Tracking

Sam had been monitoring the screenshot's diffusion in parallel with the verification process. Using CrowdTangle for Facebook and a social media monitoring dashboard for Twitter/X, they documented the spread pattern:

Origin: A now-deleted account with 1,200 followers, created 47 days before the screenshot appeared (a common indicator of an account created specifically for an influence operation).

First major amplification: A local conservative radio host with 38,000 followers shared the screenshot with the comment "Garza shows her true colors. This is who she is." This single share drove a significant cascade.

Secondary amplification: Three Facebook groups with a combined membership of approximately 120,000 accounted for the bulk of the screenshot's Facebook reach. All three groups were local to the congressional district and focused on public safety issues.

Velocity pattern: The screenshot spread most rapidly between hours 2 and 5, consistent with the radio host's share occurring during morning drive time. By hour 6, the spread rate was declining, consistent with organic content half-lives observed in previous ODA tracking work.
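A velocity pattern like this one, peaking in hours 2 through 5 and then decaying, comes from bucketing share timestamps by hours elapsed since the content first appeared. A minimal sketch of that bucketing, assuming timestamps have already been extracted from the monitoring tools (the actual CrowdTangle export format differs):

```python
from collections import Counter
from datetime import datetime

def hourly_share_counts(timestamps):
    """Bucket share timestamps into hours elapsed since the first share."""
    origin = min(timestamps)
    buckets = Counter(int((t - origin).total_seconds() // 3600)
                      for t in timestamps)
    # Dense list so quiet hours show up as explicit zeros.
    return [buckets.get(h, 0) for h in range(max(buckets) + 1)]

def peak_hour(counts):
    """Hour index with the highest share velocity."""
    return max(range(len(counts)), key=lambda h: counts[h])
```

Plotting the resulting counts is what makes a morning-drive-time spike, and the subsequent organic decay, visible at a glance.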

Coordination indicators: Sam identified a cluster of 43 accounts that shared the screenshot within a 12-minute window — all accounts created within the past 60 days, all with fewer than 500 followers, with minimal prior posting activity. This pattern is consistent with coordinated inauthentic behavior, though Sam noted it was not conclusive.
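A cluster like the 43 accounts Sam found can be surfaced automatically: filter shares down to accounts matching a low-history profile, then slide a time window over their share timestamps and keep the largest cluster. The sketch below uses illustrative thresholds (the specific cutoffs are assumptions for this example, not ODA's actual criteria):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Share:
    account_id: str
    shared_at: datetime
    account_age_days: int
    followers: int
    prior_posts: int

def flag_coordinated_cluster(shares, window_minutes=12, max_age_days=60,
                             max_followers=500, max_prior_posts=10,
                             min_cluster=20):
    """Return the largest group of low-history accounts that all shared
    within one sliding time window, or [] if it falls below min_cluster.
    A large cluster is a lead on coordination, not proof of it."""
    # Keep only accounts matching the suspicious low-history profile.
    suspects = sorted(
        (s for s in shares
         if s.account_age_days <= max_age_days
         and s.followers <= max_followers
         and s.prior_posts <= max_prior_posts),
        key=lambda s: s.shared_at,
    )
    window = timedelta(minutes=window_minutes)
    best = []
    # Slide the window across the sorted timestamps; keep the biggest cluster.
    for i, start in enumerate(suspects):
        cluster = [s for s in suspects[i:]
                   if s.shared_at - start.shared_at <= window]
        if len(cluster) > len(best):
            best = cluster
    return best if len(best) >= min_cluster else []
```

As Sam's own caveat suggests, this kind of detector only narrows the field; attribution of coordination requires corroborating evidence beyond timing and account-profile similarity.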

At 2:30 p.m., Adaeze convened a brief team discussion. The false claim had been circulating for approximately five hours. Their tracking showed approximately 38,000 unique accounts had encountered it. The verification was complete.


Part III: The Editorial Decision Point

Adaeze faced a decision with two competing considerations.

On one hand, ODA's mission was to correct misinformation. The claim was demonstrably false, the verification was complete, and the potential for electoral harm was real: the false claim implied a policy position that could affect voter decisions in a competitive race.

On the other hand, there was a legitimate amplification concern. ODA's fact-check would be read by civic media organizations, picked up by regional news aggregators, and syndicated through its partner network. Publishing the fact-check would inevitably bring the false claim to the attention of people who had not yet seen it.

She applied ODA's threshold policy: to avoid amplification, claims that have reached fewer than 5,000 accounts are not fact-checked. This claim had already reached an estimated 38,000 accounts, so the threshold was not the issue.

Her second consideration: the corrections design. Adaeze instructed Sam to lead with the accurate information — Garza's actual voting record on public safety — before ever quoting the false claim. The structure: (1) accurate statement of Garza's record, (2) statement that a fabricated screenshot is circulating, (3) clear demonstration that the screenshot is false (voting record evidence, image forensics, campaign confirmation), (4) call to action for readers to share the correction.

The fact-check was published at 4:00 p.m.


Part IV: The Correction's Performance

Sam tracked the correction's spread for the following 48 hours. The results were sobering.

The correction was shared approximately 2,800 times in the first 24 hours. The original false claim had been shared approximately 12,000 times in its first four hours alone. By the 48-hour mark, the correction had reached an estimated 18,500 unique accounts. The false claim had reached approximately 74,000 unique accounts by the same point.

The correction-to-exposure ratio at 48 hours: 0.25. For every four people who had encountered the false claim, one had encountered the correction.
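The ratio itself is a simple quotient, but it is worth noting what it assumes: expressing correction reach as a fraction of claim reach implicitly treats every correction viewer as someone who also saw the false claim, so it is an upper bound on the true overlap. A minimal sketch:

```python
def correction_to_exposure_ratio(correction_reach, claim_reach):
    """Correction reach as a fraction of false-claim reach.
    This is an upper bound on audience overlap: it assumes everyone
    who saw the correction had also seen the claim."""
    if claim_reach <= 0:
        raise ValueError("claim_reach must be positive")
    return correction_reach / claim_reach
```

With the 48-hour figures from the case, 18,500 / 74,000 gives exactly 0.25, and the audience analysis that follows suggests the true overlap was lower still.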

Sam's analysis identified a structural problem in the correction's distribution: ODA's primary audience was civic technology professionals, journalists, and engaged civic participants, predominantly people who were already following the race closely, had high prior knowledge, and were least susceptible to the original false claim. The correction was reaching the wrong audience.

Two local TV affiliates cited ODA's fact-check in follow-up coverage, which expanded the correction's reach. But both segments framed the story as "controversy" rather than "fabrication," giving the original false claim further airtime in the process.


Part V: The Longer Arc

ODA's 30-day post-election tracking found that the false defund-the-police claim continued to circulate at measurable rates for 18 days after the fact-check. For several weeks, social media searches for Representative Garza's name surfaced the original screenshot above the correction in organic results.

Adaeze concluded her analysis with a question that she raised at ODA's next advisory board meeting: "We did everything right. Complete verification. Excellent corrections design. Fast turnaround. And we still reached only one in four of the people who saw the false claim. What would it take to actually solve this problem?"

The board's answer: probably not a better fact-check.


Discussion Questions

  1. Evaluate ODA's verification workflow. Were there any steps Sam could have taken to verify faster without sacrificing accuracy? What are the trade-offs?

  2. Adaeze chose to lead with accurate information rather than repeating the false claim first. Based on the corrections research in Section 26.4, was this the right call? What alternatives might she have considered?

  3. The 0.25 correction-to-exposure ratio reflects a structural problem with ODA's audience. What distribution strategies could ODA employ to reach a more targeted audience — specifically, the people who saw the original false claim? What are the practical constraints on each strategy?

  4. The two local TV affiliates covered ODA's fact-check as a "controversy" story, giving the false claim additional airtime. How should civic organizations and fact-checkers think about media partnerships that risk this outcome?

  5. Sam identified a cluster of 43 accounts with coordinated behavior as indicators (but not proof) of an influence operation. What evidentiary standard should ODA apply before making a public claim that coordination occurred? What are the risks of under- and over-claiming coordination?

  6. Adaeze's conclusion was that a better fact-check wouldn't solve the problem. What structural or institutional changes would actually address the 0.25 ratio? Who would need to implement them?