Case Study 20.2: Meridian's 2022 Postmortem — What the Numbers Were Really Saying

Background

This case study follows Meridian Research Group's internal process after a midterm cycle in which their state-level model overestimated Republican performance by an average of 3.7 points in the Senate races they polled. The postmortem is presented as a reconstructed internal document — the kind of frank assessment that most firms conduct privately but rarely publish. It is based on the methodological patterns identified in real post-2022 industry reviews, mapped onto the Meridian narrative framework.

The Pre-Election Forecast

In the weeks before the 2022 midterms, Meridian had published polling in six competitive Senate races. Their final estimates and the actual results:

State     Meridian Final (R - D)    Actual (R - D)    Error
State A   +3.2 (R)                  -3.5 (D won)      6.7 pts, overestimated R
State B   +1.1 (R)                  -4.2 (D won)      5.3 pts, overestimated R
State C   +4.8 (R)                  +1.2 (R won)      3.6 pts, overestimated R
State D   -1.0 (D)                  -6.1 (D won)      5.1 pts, overestimated R
State E   +0.5 (R)                  -3.3 (D won)      3.8 pts, overestimated R
State F   +2.1 (R)                  +4.5 (R won)      2.4 pts, overestimated D

Note: Negative numbers indicate a Democratic lead.

All but one error ran in the same direction: Meridian overestimated Republicans (or underestimated Democrats) in five of the six states. The mean absolute error was 4.5 points. The stated margin of error in each poll was ±3.1 points.
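The summary statistics quoted here can be reproduced directly from the table (a short Python check; the state values are copied from above):

```python
# Error statistics from the results table (poll margin, actual margin; R - D)
polls = {
    "A": (3.2, -3.5), "B": (1.1, -4.2), "C": (4.8, 1.2),
    "D": (-1.0, -6.1), "E": (0.5, -3.3), "F": (2.1, 4.5),
}

errors = {s: poll - actual for s, (poll, actual) in polls.items()}
mae = sum(abs(e) for e in errors.values()) / len(errors)
mean_signed = sum(errors.values()) / len(errors)  # positive = Republicans overestimated
n_exceed = sum(abs(e) > 3.1 for e in errors.values())  # vs. the stated MOE

print(f"Mean absolute error: {mae:.1f} pts")                      # 4.5
print(f"Mean signed error (pro-R): {mean_signed:+.1f} pts")       # +3.7
print(f"Errors exceeding the stated +/-3.1 MOE: {n_exceed} of 6") # 5 of 6
```

The positive signed mean confirms the directional pattern: the misses do not cancel out the way random sampling error would.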

The Postmortem Document

The following is excerpted from Meridian's internal postmortem report, authored by Dr. Vivian Park with contributions from Carlos Mendez (data analysis) and Trish McGovern (field operations).


MERIDIAN RESEARCH GROUP — INTERNAL POSTMORTEM
2022 Midterm Senate Polling Review
Confidential — Internal Distribution Only

Prepared by: Dr. Vivian Park
Analyst: Carlos Mendez
Field Review: Trish McGovern


Section 1: Summary Finding

Meridian's 2022 Senate polling missed the final margins by a mean absolute error of 4.5 percentage points, with the error running in the same direction (overestimating Republicans) in five of six races polled. The stated margin of error (±3.1 points) captured sampling variance only; five of the six errors exceeded this range, and the errors were directional rather than random.

Primary Conclusion: The errors are inconsistent with random sampling variance. They are consistent with one or more systematic bias mechanisms. We identify three likely contributors and assess the relative weight of each.


Section 2: Candidate Mechanisms

Mechanism A: Overcorrection for 2020 Republican Nonresponse

Following the 2020 cycle, Meridian implemented more aggressive weighting adjustments for Republican nonresponse, including partial weighting on recalled 2020 presidential vote. Internal analysis by Carlos Mendez (Appendix B, this document) suggests that this adjustment may have been too aggressive: we overweighted 2020 Trump voters relative to their actual share of the 2022 electorate.

The underlying issue: 2022 is not 2020. Recalled vote weights calibrated to match the 2020 electorate assume that the partisan composition of the electorate is stable from cycle to cycle. In 2022, several factors shifted the composition of likely voters relative to 2020: elevated turnout among younger voters (particularly motivated by Dobbs), differential enthusiasm between parties, and meaningful ticket-splitting in several states we polled. By over-adjusting toward 2020's partisan balance, we may have overrepresented Trump 2020 voters relative to their share of 2022 likely voters.

Estimated contribution to error: approximately 1.5–2.0 points.
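A minimal sketch of this mechanism, with invented numbers rather than Meridian's actual parameters: over-weighting recalled-2020 Trump voters by even one percentage point shifts the topline noticeably, assuming recalled-Trump voters split 90R-10D in 2022 and recalled-Biden voters split 8R-92D.

```python
# Toy illustration (invented shares and vote splits, two-party vote assumed)

def margin(trump_share):
    """R minus D margin (pts) given the recalled-Trump share of the electorate."""
    rep = trump_share * 0.90 + (1 - trump_share) * 0.08
    return 100 * (2 * rep - 1)

weighted_to_2020 = 0.46  # recalled-Trump share after 2020-calibrated weighting
actual_2022 = 0.45       # recalled-Trump share of the actual 2022 electorate

bias = margin(weighted_to_2020) - margin(actual_2022)
print(f"Poll: {margin(weighted_to_2020):+.1f}  Truth: {margin(actual_2022):+.1f}  Bias: {bias:+.1f} pts")
```

With these assumed splits, each percentage point of over-weighting adds roughly 1.6 points of pro-Republican bias, in line with the 1.5–2.0 point estimate.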

Mechanism B: Failure to Model Dobbs Effect on Likely Voter Screen

Our likely voter model is calibrated to historical turnout patterns by demographic group. Historically, older and white voters turn out at higher rates than younger and non-white voters; this pattern is built into the likely voter weight. In 2022, however, the Dobbs decision may have substantially altered differential turnout rates — specifically by increasing turnout propensity among younger voters, women, and college-educated suburban voters who were heavily motivated by abortion rights.

Carlos Mendez ran a retrospective analysis (Appendix C) comparing our pre-election likely voter estimates to validated vote files released three months post-election. In three of the six states we polled, college-educated women under 45 turned out at rates 6–9 percentage points higher than our likely voter model predicted. This group broke heavily Democratic in 2022.

Our likely voter model, calibrated to cycles in which abortion was a Republican-mobilizing rather than Democratic-mobilizing issue, did not capture this shift. This is a textbook case of a fundamentals assumption failing because a structural feature of the political environment changed.

Estimated contribution to error: approximately 2.0–2.5 points.
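A comparable toy calculation (again with invented numbers, not Meridian's parameters) shows how a turnout miss for one Democratic-leaning group propagates to the likely-voter margin:

```python
# group: (share of registered voters, R-D margin, modeled turnout, actual turnout)
groups = {
    "college_women_u45": (0.12, -30.0, 0.45, 0.52),  # turnout under-predicted by 7 pts
    "everyone_else":     (0.88,  +3.0, 0.55, 0.55),
}

def lv_margin(turnout_index):
    """Electorate-wide R-D margin under a given turnout column (2=modeled, 3=actual)."""
    votes = {g: v[0] * v[turnout_index] for g, v in groups.items()}
    total = sum(votes.values())
    return sum((votes[g] / total) * groups[g][1] for g in groups)

modeled = lv_margin(2)
actual = lv_margin(3)
print(f"Modeled margin: {modeled:+.2f}  With actual turnout: {actual:+.2f}")
print(f"Pro-Republican bias from the turnout miss: {modeled - actual:+.2f} pts")
```

A single group of this size contributes only about half a point in this sketch; similar misses across several overlapping Democratic-leaning groups (younger voters generally, college-educated suburbanites) would compound toward the 2.0–2.5 point estimate.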

Mechanism C: Early Vote and Mail Modeling

In three states, we conducted our final poll approximately 10 days before Election Day. By that point, approximately 15–20 percent of the eventual vote had already been cast by mail. Our survey design treated all respondents symmetrically, whether they had already voted or intended to vote on Election Day. Post-election data suggests that early/mail voters in 2022 broke substantially more Democratic than Election Day voters, a pattern that differed from 2020 (when mail voting was more evenly distributed due to pandemic-related adoption by both parties).

If early voters were disproportionately Democratic and had already cast their ballots before our final field dates, our samples may have understated the committed Democratic share of the actual electorate: those votes were already banked, yet our likely voter screen treated the respondents behind them no differently from uncommitted Election Day intenders.

Estimated contribution to error: approximately 0.5–1.0 points.
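A rough sketch, with hypothetical numbers, of how stratifying by vote mode changes the estimate: anchor the already-banked early/mail vote to returns data, and use poll responses only for the Election Day-intent electorate.

```python
# Hypothetical inputs, not Meridian's actual figures
early_share = 0.18    # fraction of the total vote already cast at final field date
early_margin = -12.0  # R-D margin among early/mail voters (from returns, party reg)
eday_margin = +4.0    # R-D margin among Election Day-intent respondents

stratified = early_share * early_margin + (1 - early_share) * eday_margin

# If the sample instead captures early voters at only 10% (under-representation),
# a naive pooled estimate overstates Republicans:
naive = 0.10 * early_margin + 0.90 * eday_margin

print(f"Mode-stratified estimate: {stratified:+.1f}")  # +1.1
print(f"Naive pooled estimate:    {naive:+.1f}")       # +2.4
```

Under these assumptions, under-representing already-voted Democrats by eight points of vote share adds more than a point of pro-Republican bias.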


Section 3: What We Got Right

In the interest of balanced analysis, it is worth noting what worked. State F — the one race where Meridian's estimate was closest to correct — was the race where Trish's field team had access to validated vote file data showing unusually high early Republican vote totals in rural precincts. This led us to apply additional weighting toward rural, low-education voters that we did not apply in the other five states. The resulting estimate was within 2.4 points, our most accurate.

This suggests that the additional adjustment was directionally correct, not that it fully corrected the underlying methodology. We applied it inconsistently — only in State F, where we had unusually good field intelligence. The question going forward is whether we can routinize this kind of adjustment across all states.


Section 4: Recommended Methodological Changes

  1. Suspend automatic recalled-vote weighting calibrated to 2020. Replace with a cycle-specific assessment of the partisan composition of the expected electorate, drawing on early vote data, voter registration trends, and expert political judgment about expected shifts.

  2. Develop a structural event adjustment for likely voter modeling. When a high-salience structural event (a major court ruling, a significant candidate development) may alter the differential turnout patterns built into the likely voter model, flag this explicitly and present a range of estimates under different turnout scenarios.

  3. Stratify analysis by vote mode (early/mail vs. Election Day intent). Weight early and mail voters separately from intended Election Day voters and apply mode-specific vote preference patterns from available early vote data.

  4. Report effective sample size alongside nominal sample size. Our stated margins of error have been based on nominal samples. Going forward, we will report effective sample sizes after weighting and calculate margins of error accordingly.

  5. Conduct a mandatory mid-cycle methodology review after the primary season in each cycle, updating likely voter calibrations and weighting schemes to reflect observed primary turnout patterns.
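Recommendation 4's effective sample size can be computed with the standard Kish approximation, n_eff = (Σw)² / Σw², and the margin of error recalculated from it. The weight distribution below is a hypothetical example, not Meridian's actual weighting scheme:

```python
import math

def effective_n(weights):
    """Kish effective sample size: (sum of weights)^2 / sum of squared weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

def moe(n, p=0.5, z=1.96):
    """Margin of error (pts) for a proportion at 95% confidence."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 1,000 respondents, half up-weighted 2x by nonresponse corrections
weights = [1.0] * 500 + [2.0] * 500
n_eff = effective_n(weights)
print(f"Nominal n = {len(weights)}, effective n = {n_eff:.0f}")
print(f"Nominal MOE = +/-{moe(len(weights)):.1f} pts, weighted MOE = +/-{moe(n_eff):.1f} pts")
```

Even this modest weighting scheme cuts the effective sample from 1,000 to 900 and widens the margin of error from ±3.1 to ±3.3 points; more aggressive weighting widens it further.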


Section 5: Concluding Observations

The 2022 miss is painful but instructive. The error was systematic, directional, and — with the benefit of hindsight — traceable to specific, identifiable decisions: overcorrection for 2020 Republican nonresponse, a likely voter model that did not account for structural shifts driven by Dobbs, and insufficient attention to vote mode dynamics.

None of these were unforeseeable in principle. The Dobbs decision was handed down in June 2022. We had more than four months to assess its potential impact on turnout dynamics and did not act on that assessment in our likely voter model. This is the failure we most need to avoid repeating: the failure to update our priors when new, high-quality information suggests that the structural assumptions of our models may no longer hold.

We told our clients the model was doing one thing. It was doing something slightly different. We owe them — and ourselves — complete transparency about what happened and a clear account of what we are changing.

— Vivian Park, Principal Investigator


Discussion Questions

1. Section 4 recommends "suspending automatic recalled-vote weighting calibrated to 2020." What are the trade-offs of this recommendation? If you don't weight on recalled vote, how do you address partisan nonresponse bias?

2. The postmortem identifies the Dobbs decision as a "structural event" that the likely voter model failed to incorporate. What process would you design to identify structural events that warrant likely voter model adjustment mid-cycle? Who in the organization should be responsible for this assessment?

3. Vivian argues that the Dobbs-driven turnout shift was "not unforeseeable in principle." What does it mean to say something was foreseeable in principle but was not incorporated into a model? What organizational or incentive structures might prevent known information from being acted upon?

4. The postmortem is described as "confidential — internal distribution only." Should firms be required to publish postmortem analyses? What are the arguments for and against mandatory public disclosure of polling methodology assessments?

5. Compare Meridian's 2022 situation (overestimating Republicans in a Democratic-overperformance cycle) to the 2016 and 2020 situations (underestimating Republicans). What does it mean for the polling industry's credibility that the direction of the systematic error reversed between cycles?