Case Study 7.1: The Question That Moved the Race — Wording Effects in the Garza-Whitfield Survey
Setup
Two weeks after the Meridian team finalized the Garza-Whitfield questionnaire, Carlos Mendez ran into a problem during data analysis. He was cross-checking their results against a competing poll released by a local newspaper the same week. Both polls were conducted at roughly the same time, from the same target population (likely voters in the state), and with similar sample sizes. But the headline horse-race numbers were remarkably different:
- Meridian poll: Garza 49%, Whitfield 44%, Undecided 7%
- Newspaper poll: Garza 44%, Whitfield 47%, Undecided 9%
That's an 8-point swing in the margin between two polls fielded within days of each other. For a race expected to be close, the difference between "Garza up 5" and "Whitfield up 3" was not academic — it was the difference between two completely different strategic narratives.
Carlos pulled the newspaper's methodology statement and located their horse-race question.
Newspaper version: "If the Senate election were held today, would you vote for Republican Tom Whitfield, Democrat Maria Garza, or someone else?"
Meridian version: "If the election for U.S. Senate were held today, and the candidates were [ROTATE: Maria Garza, the Democrat / Tom Whitfield, the Republican], for whom would you vote?"
Carlos stared at the two questions. They seemed almost identical. He walked down to Vivian's office.
Vivian's Diagnosis
Vivian identified three differences immediately.
Difference 1: Candidate order. The newspaper question always listed Whitfield first; Meridian rotated candidate order randomly across respondents. In telephone surveys, recency effects dominate: the candidate heard last may pick up a slight advantage. Always naming Whitfield first meant Garza was always heard last, so if anything the newspaper's fixed order should have helped Garza, not hurt her.
Difference 2: "Someone else" as a response option. The newspaper question explicitly offered "someone else" as an option; Meridian's did not (though Meridian had "Undecided/Not sure" and "Refused"). The explicit "someone else" option legitimizes a non-major-party choice in a way that the standard two-candidate frame does not. In this race, there is a Libertarian candidate polling at roughly 3-4%. The newspaper's "someone else" option may be capturing more of that support explicitly, while Meridian respondents who lean Libertarian are being forced to choose between the two major candidates or go undecided.
Difference 3: Framing context. The newspaper's stem led with the party label and named "Republican Tom Whitfield" before Garza, subtly establishing Whitfield as the reference-point candidate. In a state where Republicans have held the seat for twelve years, naming the Republican first is consistent with "incumbent framing" and may have given Whitfield a slight status-quo advantage.
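Rotation like Meridian's is mechanical enough to sketch in code. The Python below is illustrative, not Meridian's actual instrument: it randomizes candidate order per respondent and tallies how often each ordering occurs.

```python
import random

# Candidate descriptions as they appear in Meridian's question stem.
CANDIDATES = [
    "Maria Garza, the Democrat",
    "Tom Whitfield, the Republican",
]

def horse_race_item(rng: random.Random) -> str:
    """Build the horse-race question with candidate order randomized
    per respondent, mirroring Meridian's [ROTATE] instruction."""
    order = CANDIDATES[:]
    rng.shuffle(order)
    return (
        "If the election for U.S. Senate were held today, and the "
        f"candidates were {order[0]} and {order[1]}, "
        "for whom would you vote?"
    )

# Over many respondents each ordering appears about half the time, so any
# recency (heard-last) advantage cancels out of the topline.
rng = random.Random(11)
garza_first = 0
for _ in range(10_000):
    q = horse_race_item(rng)
    if q.find("Maria Garza") < q.find("Tom Whitfield"):
        garza_first += 1
```

The design choice is simply that whatever response-order effect exists, randomization spreads it evenly across both candidates instead of loading it onto one.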
"How big are these effects, collectively?" Carlos asked.
"That's the key question," Vivian said. "Individually, each of these is probably 1-2 points. Combined — and you'd need a direct experiment to know — they could plausibly explain most of the 8-point margin difference."
"But we don't know which poll is right."
"We can't know without an experimental comparison or actual election results. What we can know is that the question wording differences are large enough to plausibly explain the discrepancy — which means this is not a 'real' 8-point difference in voter preference. It's a measurement artifact."
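A toy simulation makes "measurement artifact" concrete: the same electorate, asked with and without an explicit "someone else" option, returns different toplines. All preference shares below are invented for illustration; they are not estimates from either poll.

```python
import random

def simulate(n: int, offer_someone_else: bool, seed: int = 7) -> dict:
    """Simulate one poll of the same toy electorate. Assumed (invented)
    shares: 46% Garza, 43% Whitfield, 4% Libertarian leaners, 7% undecided."""
    rng = random.Random(seed)
    tallies = {"Garza": 0, "Whitfield": 0, "Other": 0, "Undecided": 0}
    for _ in range(n):
        r = rng.random()
        if r < 0.46:
            pick = "Garza"
        elif r < 0.89:
            pick = "Whitfield"
        elif r < 0.93:
            # Libertarian leaners: with an explicit "someone else" option
            # they take it; without one they split between Whitfield and
            # undecided (an assumed behavior, for illustration).
            if offer_someone_else:
                pick = "Other"
            else:
                pick = rng.choice(["Whitfield", "Undecided"])
        else:
            pick = "Undecided"
        tallies[pick] += 1
    return {k: v / n for k, v in tallies.items()}

with_option = simulate(100_000, offer_someone_else=True)
without_option = simulate(100_000, offer_someone_else=False)
```

In this toy electorate the Garza-Whitfield margin moves by roughly two points purely because of the response options offered, with no change in anyone's underlying preference.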
Trish's Field Intelligence
Trish McGovern had been tracking early-vote data and county-by-county comparisons from past cycles. She came to the meeting with a different angle.
"The question wording differences matter," she said, "but so does the sample composition. Pull their demographics."
The newspaper had not released detailed methodology, but their published crosstabs showed that their sample was 62% white non-Hispanic, compared to Meridian's 58%. In a race where Garza's coalition depended heavily on Latino voter support, a 4-point difference in white representation in the sample could shift the top-line by 2-3 points even with identical question wording.
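Trish's claim is easy to sanity-check: the topline shift from composition is approximately the composition difference times the gap in subgroup support. The subgroup support levels below are illustrative assumptions, not crosstab values from either poll.

```python
def garza_topline(white_share: float,
                  garza_white: float = 0.30,
                  garza_nonwhite: float = 0.75) -> float:
    """Garza's overall support as a weighted average of her (assumed)
    support among white and nonwhite likely voters."""
    return white_share * garza_white + (1 - white_share) * garza_nonwhite

meridian = garza_topline(0.58)   # 58% white non-Hispanic sample
newspaper = garza_topline(0.62)  # 62% white non-Hispanic sample

# The shift is exactly (composition difference) x (subgroup support gap):
# 0.04 * (0.75 - 0.30) = 0.018, i.e. 1.8 points.
shift = meridian - newspaper
```

With this assumed 45-point subgroup gap, the 4-point composition difference alone moves Garza's topline by about 1.8 points; a wider gap pushes that into the 2-3 point range Trish describes.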
"We weighted to the voter file," Carlos said. "Did they?"
"The story says they 'adjusted for demographic representation,'" Trish read from her phone. "That could mean anything. It could mean they raked on three variables. It could mean they eyeballed it."
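"Raked on three variables" refers to raking, or iterative proportional fitting: respondent weights are rescaled until the sample's weighted marginals match population targets on each weighting variable in turn. A minimal sketch, with invented variables and targets:

```python
import numpy as np

def rake(sample, targets, max_iter=100, tol=1e-10):
    """Iterative proportional fitting: rescale weights until the weighted
    marginal of every variable matches its population target. Assumes each
    target level actually appears in the sample."""
    w = np.ones(len(sample))
    for _ in range(max_iter):
        worst = 0.0
        for var, shares in targets.items():
            total = w.sum()  # fixed before adjusting this variable's levels
            for level, share in shares.items():
                mask = np.array([r[var] == level for r in sample])
                factor = share * total / w[mask].sum()
                w[mask] *= factor
                worst = max(worst, abs(factor - 1.0))
        if worst < tol:  # all marginals hold simultaneously
            break
    return w / w.mean()

# A 62%-white sample (like the newspaper's) raked to a 58% target,
# with the sex marginal held at 50/50. Composition is invented.
sample = (
    [{"race": "white", "sex": "F"}] * 31
    + [{"race": "white", "sex": "M"}] * 31
    + [{"race": "nonwhite", "sex": "F"}] * 19
    + [{"race": "nonwhite", "sex": "M"}] * 19
)
targets = {
    "race": {"white": 0.58, "nonwhite": 0.42},
    "sex": {"F": 0.50, "M": 0.50},
}
w = rake(sample, targets)
white_share = w[:62].sum() / w.sum()  # weighted white share after raking
```

The point of Trish's skepticism survives the sketch: "adjusted for demographic representation" could mean this procedure on three variables, or something far cruder, and the published story does not say which.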
The Meridian team's conclusion: the newspaper poll's divergent result was probably a combination of question wording differences (particularly the "someone else" option and candidate order), sampling differences (higher white non-Hispanic share, different likely-voter screen), and possibly weighting differences. No single factor was definitive, but the convergence of methodological differences was more than enough to explain the gap.
The Correction
Meridian flagged the comparison in their client memo to the Garza campaign:
"A competing poll released this week shows a substantially different result from our survey. Analysis of the methodology suggests multiple sources of divergence, none of which reflect a genuine change in voter preferences. We maintain confidence in our result, which uses candidate randomization, no 'someone else' prompt, and demographic weighting to the voter file. Clients should not interpret the between-poll gap as evidence of a 'real' shift in the race."
Nadia Osei sent back a one-line reply: "Which poll do you think is closer to the truth?"
Vivian's reply: "We believe our methodology is more defensible on the specific points identified. But the honest answer is that we won't know which is closer to the truth until November. This is why we prefer to track the race across multiple surveys rather than treating any single poll as definitive."
Discussion Questions
- The newspaper poll did not publish its full questionnaire. What specific information would you need from their methodology statement to fully evaluate the sources of the discrepancy?
- Design a split-sample experiment that could directly test the effect of including a "someone else" option versus not including one on the horse-race estimate. What would the ideal sample size be, and how would you analyze the results?
- Candidate order rotation is standard practice at Meridian but was not used by the newspaper. Is this a minor methodological quibble or a serious problem? Under what conditions would candidate order matter most?
- Nadia Osei is a sophisticated user of data — she runs the Garza campaign's analytics operation. How would you explain the concept of "measurement artifact" to a sophisticated but non-methodologist client without undermining their confidence in your results?
- If both polls are conducted again three weeks later and show similar divergence, what would you conclude? What if they converge to the same number?