Case Study 12-1: Polling a Polarized State — Meridian's Nonresponse Challenge

The Situation

It is eight weeks before election day. Meridian Research Group has just completed its second-wave survey of the Garza-Whitfield race, and Dr. Vivian Park is not satisfied with the numbers: not because they're bad, but because she's not sure she trusts them.

The topline results show Garza ahead 48%-44%, with 8% undecided — a 4-point Democratic lead. On the surface, this is a reasonable number for a state where Democrats hold a slight registration edge. But Vivian has been polling long enough to know that reasonable-looking numbers can conceal serious methodological problems. And right now, there is one problem she can't shake: partisan differential nonresponse.


Background: The 2020 Lesson

In 2020, Meridian had polled a similar purple Sun Belt state and shown the Democratic Senate candidate ahead by 5 points in their final survey. The actual result was a 1.5-point Democratic win — much closer than they had shown. Internal postmortem analysis pointed to a likely culprit: Republican voters had been systematically underrepresented in their surveys, probably because the enthusiasm asymmetry of 2020 (highly engaged Democrats, less enthusiastic Republicans in some markets) was driving differential nonresponse.

The problem with nonresponse correction is circular: you can try to weight your sample to match the expected partisan composition of the electorate, but to do that, you need to know what the partisan composition of the electorate is — which is itself uncertain until after the election.

Vivian had developed a workaround that she called "historical anchoring": weighting the survey's partisan composition to match the partisan composition of validated voters in the three most recent comparable elections in that state, with more recent elections counted more heavily. It wasn't perfect, but it was better than weighting solely to registration rolls.
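
Mechanically, the anchor is just a recency-weighted average of past validated-voter compositions. A minimal sketch in Python, with both the past compositions and the recency weights invented for illustration (the case specifies neither):

    # Hypothetical validated-voter compositions (D, R, I) for the three most
    # recent comparable elections, most recent first. The recency weights are
    # an assumption; the case does not specify them.
    past = [(0.39, 0.37, 0.24), (0.37, 0.39, 0.24), (0.38, 0.38, 0.24)]
    recency = (0.5, 0.3, 0.2)

    baseline = tuple(
        round(sum(w * comp[i] for w, comp in zip(recency, past)), 3)
        for i in range(3)
    )
    print(baseline)  # (0.382, 0.378, 0.24) -- roughly the 38/38/24 anchor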


The Current Problem

The second-wave survey had been conducted by phone (both landline and cell), supplemented by an online panel of registered voters. Response rates were 12% for landline, 8% for cell, and 29% for the online panel. The combined effective sample size was 847.

When Carlos Mendez ran the partisan composition check, the numbers were telling:

  • Self-identified Democrats: 42% of the sample
  • Self-identified Republicans: 34% of the sample
  • Self-identified Independents: 24% of the sample

Compared to the historical anchoring baseline — which suggested the expected composition of likely voters should be approximately 38% Democrat, 38% Republican, 24% Independent — Democrats were overrepresented by 4 points and Republicans underrepresented by 4 points.

"That's a lot," Carlos told Vivian. "If we weight back to the baseline, Garza's lead shrinks from 4 points to... probably about 1 point."

"Probably," Vivian replied, and Carlos could hear the skepticism in the single word. "Or the baseline is wrong and the state has genuinely moved Democratic. Or Republican voters are systematically hiding their preference. Or some combination."


Three Competing Hypotheses

Vivian laid out the diagnostic problem clearly in a team meeting with Carlos and Trish McGovern, Meridian's field director.

Hypothesis 1: Differential Nonresponse (Enthusiasm Gap)

The most likely explanation: Republican voters are currently less motivated to participate in surveys. This might reflect a temporary enthusiasm asymmetry — perhaps Democrats are more engaged right now due to recent political events. If true, the correction is to weight back toward the expected partisan baseline, which would narrow Garza's apparent lead.

Hypothesis 2: Real Electorate Composition Change

An alternative: the composition of the likely voter electorate has genuinely shifted Democratic since the last comparable election. New voter registrations have been skewing Democratic in the metro area; some older Republican voters have moved out of state; recent events (a high-profile fight over reproductive rights legislation in the state) may have motivated more Democratic-leaning irregular voters to consider participating. If this is true, weighting back to the historical baseline would introduce bias in the other direction — correcting a real change as if it were an error.

Hypothesis 3: Social Desirability Bias

A third possibility: Republican voters are participating in surveys at normal rates but systematically underreporting their preference for Whitfield, perhaps because they perceive survey interviewers as aligned with media organizations they distrust. If true, the problem is not in the sample composition but in the response behavior of Republican-identifying respondents who are in the sample.

Trish noted a clue from the field data: the partisan composition gap was larger in the phone subsample (where live interviewers were present) than in the online panel (where respondents answered alone). This was consistent with Hypothesis 3: social desirability effects, including reluctance to report a Republican identification at all, would be stronger with a live interviewer on the line.
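
One quick way to gauge whether a mode gap like this exceeds sampling noise is a pooled two-proportion z-test on the Republican share by mode. In the sketch below, the subsample sizes and shares are invented (the case reports only the combined n of 847), chosen so the blended Republican share comes out near the observed 34%:

    import math

    # Hypothetical mode split of the 847 completes and Republican shares
    n_phone, p_phone = 500, 0.31   # 31% Republican in the phone subsample
    n_web, p_web = 347, 0.38       # 38% Republican in the online panel

    # Pooled two-proportion z-test for the difference in Republican share
    pooled = (n_phone * p_phone + n_web * p_web) / (n_phone + n_web)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_phone + 1 / n_web))
    z = (p_phone - p_web) / se
    print(round(z, 2))  # ~ -2.12: a gap this large is unlikely to be noise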


The Methodological Decision

After two hours of debate, Vivian made a call: they would apply partial nonresponse correction, weighting the partisan composition halfway between the current sample composition and the historical baseline. This was explicitly a judgment call — not a mechanical application of a rule.
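
In weighting terms, the halfway correction simply retargets the weights at the midpoint of the observed and expected compositions; a short sketch of that reading (with the illustrative crosstabs used earlier, this midpoint target yields roughly Garza +2.4, close to the ~2.5 reported below):

    observed = {"D": 0.42, "R": 0.34, "I": 0.24}
    expected = {"D": 0.38, "R": 0.38, "I": 0.24}

    # Midpoint target composition, then weight to it as before
    halfway = {p: (observed[p] + expected[p]) / 2 for p in observed}
    print({p: round(s, 2) for p, s in halfway.items()})   # D 0.40, R 0.36, I 0.24

    partial_weights = {p: halfway[p] / observed[p] for p in observed}
    print({p: round(w, 3) for p, w in partial_weights.items()})  # D 0.952, R 1.059, I 1.0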

"We're acknowledging uncertainty rather than pretending we've solved it," she told the team. "We'll report the raw numbers, the fully corrected numbers, and the halfway-corrected numbers, with a transparency note about the methodology."

After applying the partial correction, Garza's lead was approximately 2.5 points — splitting the difference between the 4-point raw lead and the ~1-point fully corrected lead.

They also ran a "lean-only" analysis: removing undecideds and renormalizing among respondents who expressed a preference. The lean-only topline showed Garza at 52%, Whitfield at 48%, broadly in line with the partially corrected estimate once sampling error is accounted for.
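
The lean-only figures follow directly from the raw topline; a quick check:

    # Drop the 8% undecided and renormalize among decided respondents
    garza, whitfield = 48, 44
    decided = garza + whitfield
    print(round(100 * garza / decided), round(100 * whitfield / decided))  # 52 48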

The final report would show: Garza +2.5 (±3.5 at 95% confidence), with a methodology note explaining the nonresponse correction approach and its uncertainty.
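
The ±3.5 figure is consistent with the textbook 95% margin of error for n = 847 inflated by a modest design effect from weighting; the design effect below is an assumption, since the case does not say how Meridian computed its interval. (Strictly, the interval applies to each candidate's share; the uncertainty on the lead itself is roughly twice as large.)

    import math

    n = 847
    moe_srs = 1.96 * math.sqrt(0.25 / n)      # worst case, p = 0.5
    print(round(100 * moe_srs, 2))            # ~3.37 points before design effect

    deff = 1.08                               # assumed penalty from weighting
    moe = 1.96 * math.sqrt(0.25 * deff / n)
    print(round(100 * moe, 2))                # ~3.50 points, matching the report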


The Jake Rourke Reaction

Two days after Meridian's results were published, Jake Rourke called a reporter he trusted at a major national newspaper. "Meridian's poll has problems," he said, not for attribution. "They're oversampling Democrats. The real number is Whitfield within 1. We see it in our own data."

When the reporter called Meridian for comment, Vivian issued a one-paragraph statement explaining the methodology and noting that they had published three estimates (raw, partially corrected, and fully corrected) precisely to be transparent about the uncertainty involved.

"We're not hiding the methodological challenge," she wrote. "We're the only firm in this race that has published all three estimates. The appropriate response to uncertainty is transparency, not false precision."


Discussion Questions

1. The Circular Problem

Vivian describes partisan nonresponse correction as circular: to correct the sample, you need to know the electorate's composition, but the composition is what you're trying to measure. Is there any methodological escape from this circularity? What data sources or analytical approaches might provide a more objective benchmark for expected electorate composition?

2. Hypothesis Discrimination

The three competing hypotheses (differential nonresponse, real composition change, social desirability bias) have somewhat different empirical signatures. Using the clue Trish provided (the phone-online discrepancy), design an additional diagnostic test that could help discriminate between at least two of these hypotheses. What would each hypothesis predict about patterns in your diagnostic data?

3. The Transparency Defense

Vivian's response to Jake Rourke's criticism was to point to the transparency of her methodology: "We published all three estimates." Is transparency by itself a sufficient response to the methodological criticism? What are the limits of transparency as a methodological defense?

4. Partial Correction as a Policy

Vivian's "halfway correction" was explicitly a judgment call. Evaluate this decision. What are the assumptions embedded in choosing to weight halfway between the raw and fully corrected estimates? Under what conditions would this decision be better or worse than either extreme?

5. Publication and Polarization

Jake Rourke's response to the Meridian poll — accusing it of partisan bias rather than engaging with the methodological substance — reflects the politicization of polling in a polarized environment. What responsibility do campaigns have for engaging honestly with polling methodology rather than simply dismissing unfavorable polls as biased? What responsibility do media organizations have for adjudicating such disputes?