
Learning Objectives

  • Explain the core insight of fundamentals forecasting: that structural factors predict elections better than most observers expect
  • Identify and evaluate the major economic variables used in election forecasting models
  • Describe the Time for Change model and its components
  • Analyze why incumbency advantages have changed over time
  • Distinguish between what fundamentals models can and cannot explain
  • Critically evaluate the family of structural forecasting models and their limitations

Chapter 18: Fundamentals Models: The Economy, Incumbency, and Structure

Imagine you are handed a single number: the annualized growth rate of real disposable personal income per capita in the first two quarters of an election year. Nothing else — no polling data, no candidate information, no campaign events. Just that one economic statistic. How accurately could you predict the popular vote outcome in the presidential election?

The disturbing answer, for those who believe campaigns are the central drama of democratic politics, is: quite well. Models built on this and similar economic variables — constructed months before Election Day, before a single debate has occurred, before a dollar of campaign advertising has aired — routinely predict election outcomes within 2-3 percentage points of the actual result.

This is the central insight of fundamentals forecasting, and it is one of the most important and underappreciated findings in modern political science. Elections are not simply won and lost in the heat of campaigns. They are shaped, to a substantial degree, by underlying structural conditions that constrain what campaigns can accomplish. The economy, the incumbent president's approval ratings, how long the president's party has held the White House — these factors predict outcomes with a reliability that should humble anyone who thinks they know which debate moment or which ad buy will determine the winner.

This chapter examines how fundamentals models work, what they measure, why they work, and — crucially — what they cannot tell us.

18.1 The Core Insight: Structure Shapes Elections

The intellectual lineage of fundamentals forecasting begins with the observation that voters, in the aggregate, behave rationally in a specific sense: they hold the incumbent party accountable for economic performance. If the economy is growing, incomes are rising, and people feel financially secure, they tend to reward the party in power. If the economy is contracting, inflation is high, or unemployment is rising, they tend to punish the incumbent.

This pattern is sometimes called retrospective voting: voters evaluate the incumbent's performance and vote accordingly, rather than choosing on the basis of policy platforms or candidate characteristics. The American political scientist V.O. Key articulated this insight in the 1960s, describing the electorate as "a rational god of vengeance and reward." Voters may not know much about policy detail, but they know whether their economic situation has improved, and they reward or punish accordingly.

The political science literature has refined and formalized this insight into a family of structural or fundamentals models — statistical models that use measurable structural conditions to predict election outcomes. These models have several defining characteristics:

  • They are estimated well before Election Day, typically using data available by early summer of an election year
  • They do not use polling data as a primary input (some hybrid models add polls later)
  • They predict the popular vote share or seat share, not individual-race outcomes
  • They are evaluated on out-of-sample prediction: how well they predict elections they weren't calibrated on

The accuracy of these models, assessed across multiple election cycles, is remarkable — and it raises profound questions about what campaigns actually accomplish.
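
To make "out-of-sample" concrete, here is a minimal sketch of a leave-one-out evaluation: fit the model on every election except one, predict the held-out election, and repeat. The data below are synthetic placeholders — the coefficients and noise level are assumptions for illustration, not estimates from real elections; only the procedure matters.

```python
# Leave-one-out evaluation for a small-N structural model.
# All data here are synthetic; the point is the procedure.
import numpy as np

rng = np.random.default_rng(0)

n = 18                                  # roughly the post-WWII sample size
gdp = rng.normal(2.5, 2.0, n)           # hypothetical Q2 GDP growth (%)
approval = rng.normal(0.0, 15.0, n)     # hypothetical June net approval
vote = 48.0 + 0.5 * gdp + 0.1 * approval + rng.normal(0.0, 2.0, n)

X = np.column_stack([np.ones(n), gdp, approval])

errors = []
for i in range(n):
    keep = np.arange(n) != i
    # Fit OLS on every election except election i...
    beta, *_ = np.linalg.lstsq(X[keep], vote[keep], rcond=None)
    # ...then predict the one election the model has never seen.
    errors.append(vote[i] - X[i] @ beta)

print(f"Leave-one-out RMSE: {np.sqrt(np.mean(np.square(errors))):.2f} points")
```

With only a handful of observations, this kind of held-out error is the honest measure of a structural model's accuracy, not its fit to the elections it was calibrated on.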

💡 Intuition: Why Structure Matters

Think of the structural factors as setting the playing field. If the incumbent party enters an election with a 6-point headwind — because the economy is weak and the president is unpopular — even an excellent campaign may not be enough to overcome that disadvantage. Conversely, with a 6-point tailwind, even a mediocre campaign can win. The structure determines the range of plausible outcomes; the campaign determines where within that range the result falls. Most campaigns stay within the range that structure defines.

18.2 The Economy: Which Indicators Matter Most?

Economic voting is the bedrock of fundamentals forecasting. But "the economy" is not a single variable — it's a complex of indicators that vary in what they measure, when they're measured, and how voters experience them. Different models use different economic inputs, and understanding why requires thinking about what voters actually perceive.

Real Disposable Personal Income Growth

The single most powerful economic predictor in the presidential forecasting literature is growth in real disposable personal income (RDPI) per capita — essentially, how much purchasing power the average person's income has grown after inflation and taxes.

This measure, used prominently by economist Douglas Hibbs in his "Bread and Peace" model, captures something specific about how voters experience economic conditions: not the headline GDP number, which is abstract, but the concrete experience of having more or less money to spend. When income is growing, voters feel prosperous and credit the incumbent. When income is stagnant or falling, they feel squeezed and punish the incumbent.

Hibbs's model, using only RDPI growth (and casualties from unpopular foreign wars), correctly predicted the direction of presidential popular vote outcomes in most post-WWII elections. The simplicity is almost scandalous: one economic number does most of the predictive work.

GDP Growth

Most forecasting models use some measure of gross domestic product (GDP) growth, typically focusing on the election year. The Abramowitz Time for Change model uses the growth rate of real GDP in the second quarter of the election year — the most recently available economic data at the point when the model is typically published.

The second-quarter focus reflects a theory about voter psychology: voters weight recent economic conditions more heavily than older ones. An economy that grew strongly in 2022 but has been stagnant in 2024 will be evaluated on its current state, not its historical performance. This is consistent with the research finding that economic effects on vote share "decay" rapidly — the most recent quarters matter far more than earlier ones.

Inflation

The relationship between inflation and voting is more complex than that between income growth and voting. High inflation clearly hurts incumbents — the incumbent Democrats lost the House in the 2022 midterms, which coincided with historically high inflation (though, as discussed later in this chapter, their losses fell well short of what structural forecasts predicted). But inflation is difficult to incorporate into forecasting models in a simple linear way:

  • Inflation is experienced differently by different income groups (lower-income voters spend more of their budget on food and energy, where inflation is most volatile)
  • Inflation can coexist with low unemployment and rising wages, creating cross-cutting signals
  • Voter psychology about inflation may be nonlinear (moderate inflation is tolerated; high inflation is acutely punishing)

Some models include inflation as a separate variable; others argue that RDPI growth implicitly captures inflation's effect by measuring real (inflation-adjusted) purchasing power.

Unemployment

Despite its prominence in political discourse, unemployment is a weaker predictor of election outcomes than income growth or GDP growth. The paradox is that unemployment affects a relatively small fraction of the electorate at any given time; most voters are employed and evaluate the economy through income and prices rather than job loss. Rising unemployment signals economic distress, but the signal is attenuated because most voters aren't directly experiencing it.

Some models include unemployment change (the direction matters more than the level); others exclude it as redundant with income growth.

📊 Real-World Application: Economic Timing and the 2020 Election

The 2020 election presented a dramatic challenge for fundamentals models. The COVID-19 pandemic caused the deepest quarterly GDP contraction in modern history in Q2 2020, followed by an equally dramatic rebound in Q3. What should models use? The terrible Q2 number, which was available at model-publication time? The recovering Q3 number, which suggested the economic shock was transient? And how do you handle an economic disruption that voters broadly attributed to a global pandemic rather than to the incumbent's economic management?

Most models using Q2 GDP data suggested a Trump loss larger than what actually occurred, illustrating a broader truth: economic indicators are most predictive when voters actually attribute economic conditions to the incumbent. When blame is diffuse (a pandemic, a global supply shock), the economic signal is weaker.

18.3 Presidential Approval Ratings as a Forecasting Input

Presidential approval ratings are a natural addition to fundamentals models: they summarize voters' overall assessment of the incumbent's performance, incorporating economic, foreign policy, and other considerations.

The logic is intuitive. A president with 55% approval ratings going into Election Day is much better positioned than one with 42% approval, holding everything else constant. Approval ratings capture the holistic voter judgment that the economic measures approximate only partially.

Alan Abramowitz's Time for Change model — one of the most celebrated and most scrutinized fundamentals models — includes presidential net approval (approval minus disapproval) in June of the election year as a key input. Abramowitz chose June as the reference point because it's when the model is typically published, long before the campaign's final sprint. The June approval rating thus serves as a mid-year report card that voters carry into the fall.

Approval as an Aggregate of Individual Assessments

What makes presidential approval a useful forecasting variable is that it already aggregates enormous amounts of information: voters' economic assessments, their foreign policy judgments, their personal evaluations of the president's character and competence. Rather than trying to include all these factors separately, approval ratings summarize them into a single composite score.

The limitation is that approval ratings are themselves a form of polling — they're subject to the same measurement challenges as horse race polls. If there's a systematic measurement problem with how approval is estimated, it flows directly into the fundamentals model.

The Ceiling on Presidential Approval

Modern presidents typically have approval ratings between 40% and 55%, with relatively rare excursions outside that range. This compression limits the variance explained by approval ratings. In an era of deep partisan polarization, presidents' approval ratings are heavily anchored by partisan identification: most Democrats will approve of a Democratic president; most Republicans won't; and the moveable middle is smaller than it used to be.

This polarization-induced compression is part of why more recent fundamentals models have struggled somewhat more than models estimated on data from less polarized eras.

⚠️ Common Pitfall: Approval Ratings Are Not Thermostats

A common misconception is that high approval ratings "cause" election victories in some direct sense. In reality, both approval ratings and vote shares are effects of underlying conditions — the economy, foreign affairs, incumbent performance. Approval ratings are a useful summary statistic of those conditions, but they're not an independent lever. A president can't simply will their approval rating to rise; it moves in response to events and performance.

18.4 Incumbency: The Advantage and Its Decline

Incumbency has long been recognized as a structural advantage in American elections. Incumbent presidents, senators, and representatives typically run for re-election with the advantage of name recognition, access to the levers of government, established donor networks, and the ability to take official actions that serve constituent interests. Challengers must build all of this from scratch.

The incumbency advantage has been measured rigorously in congressional elections through the technique of comparing an incumbent's vote share to what a generic same-party candidate would expect in the same district — the "sophomore surge" or "retirement slump" method. Through most of the postwar period, this advantage was estimated at 5-8 percentage points for House incumbents.

The Declining Incumbency Advantage

Since the 1990s, however, the incumbency advantage has been declining, particularly in federal elections. Several forces have driven this:

Partisan sorting: As voters sort more completely into the two parties based on ideological identity, straight-ticket voting has increased. Voters who once split their ticket — supporting a popular local incumbent from the other party — now vote the party label almost exclusively. This reduces the space in which incumbency can operate.

Nationalization of elections: Congressional and Senate elections have become more referenda on the president and national conditions than evaluations of local incumbents. The local constituent service model, in which incumbents built personal coalitions that transcended party, has weakened substantially.

Media fragmentation: Incumbents historically maintained advantages partly through superior access to local media — being quoted in local papers, appearing on local TV. As local media has declined and national media has fragmented, this advantage has eroded.

The empirical evidence for declining incumbency advantage is now substantial: estimates put the current House incumbency advantage at roughly 2-4 percentage points, down from the 5-8 point estimates of the 1970s-80s.

Incumbency at the Presidential Level

Presidential incumbency operates differently from congressional incumbency. First-term presidents running for re-election have a complicated advantage: they have the visibility, resources, and governing record that come with the office, but they also own whatever has happened on their watch — economic conditions, foreign policy outcomes, domestic crises.

The historical record is instructive: since WWII, incumbents seeking re-election have won most of the time, but incumbents facing bad economic conditions or very low approval ratings have been defeated. Gerald Ford in 1976, Jimmy Carter in 1980, George H.W. Bush in 1992, and Donald Trump in 2020 all lost as incumbents, in each case under unfavorable fundamentals.

18.5 The Time for Change Model

The most famous fundamentals model in American political science is Alan Abramowitz's Time for Change (TFC) model, first published in 1988 and updated after each election cycle. The model predicts the incumbent party's popular vote share in presidential elections using three variables:

  1. Real GDP growth in Q2 of the election year: Captures recent economic momentum
  2. Presidential net approval in June of the election year: Captures the incumbent's political standing
  3. A first-term incumbency dummy variable: Codes whether a first-term president is running for re-election (coded 1) or a non-incumbent from the incumbent party is running (coded 0)

The model is fit on data from all post-WWII presidential elections, and Abramowitz publishes its prediction — with a confidence interval — typically in summer of each election year.

The results are striking. The TFC model's out-of-sample predictions — estimates for elections after the data period used to fit the model — have generally been within 2-3 percentage points of actual results. In several cycles, the model has predicted the winner before any significant campaign events occurred.

Why Three Variables?

The parsimony of the TFC model is a feature, not a bug. With only a few decades of presidential elections, the data set is small: somewhere between fifteen and eighteen observations, depending on how far back you go. With even a handful of additional variables, the model would overfit — finding patterns in the data that are specific to the sample rather than genuinely structural.

Abramowitz's three-variable choice captures the core story: how the economy is doing right now (Q2 GDP), how the incumbent is being evaluated overall (net approval), and whether the incumbent has the extra benefit of personally seeking re-election versus being a new candidate from an incumbent party (the first-term incumbency variable).

The "Time for Change" Insight

The name of the model reflects an important finding embedded in the incumbency variable: there is a strong tendency for voters to "want a change" after two terms of the same party in the White House. The coefficient on the first-term incumbency variable is large and statistically significant: a first-term incumbent running for re-election is predicted to do substantially better than a candidate from an incumbent party running for a third consecutive term.

The intuition is that party fatigue compounds over terms. Two terms of one party can accumulate grievances, policy failures, and stagnation that make voters receptive to a change — independent of current economic conditions or approval ratings. Ronald Reagan in 1984 (a first-term incumbent) was in a much better structural position than George H.W. Bush in 1992 (seeking a fourth consecutive Republican term) or Al Gore in 2000 (seeking a third consecutive Democratic term), even if the current economic conditions had been identical.

📊 Real-World Application: The TFC Model in Recent Elections

The TFC model has had a mixed but generally respectable recent record:

  • 2012: Model predicted Obama +2.2; actual Obama +3.9. Off by 1.7 points.
  • 2016: The raw model — with no first-term incumbent running and the time-for-change variable working against the incumbent party — pointed to a narrow Republican popular-vote win; Abramowitz's polarization-adjusted reading implied a much closer race. Actual result: Clinton +2.1 in the popular vote, alongside an Electoral College loss that the model does not attempt to predict.
  • 2020: Model struggled with COVID disruption of economic data; various versions predicted Biden wins of different magnitudes.

The model's strength is in predicting the direction and approximate magnitude of the popular vote, not in capturing Electoral College dynamics or the specific circumstances of any given election.

🔴 Critical Thinking: What Does Model Accuracy Mean?

When a fundamentals model says "Democrat +3.2" and the actual result is "Democrat +3.8," we call that a success. But think carefully about what the model is and isn't doing. It's predicting an aggregate national number from a handful of variables. It's not telling you which states flip, which Senate races are affected, what the down-ballot consequences are, or whether there are important events in the final weeks that the model can't anticipate. Success at the national popular vote level doesn't mean the model has "explained" the election — it means it found a statistical regularity that has persisted across multiple cycles.

18.6 The Model Family Tree: Other Structural Forecasters

Abramowitz's TFC model is the most prominent, but it's one of a large family of fundamentals models developed by political scientists. Understanding the broader family illuminates both the robustness of the structural approach and its limits.

Douglas Hibbs — The Bread and Peace Model

Douglas Hibbs's model is in some ways the purest expression of the economic voting hypothesis. Using only weighted average RDPI growth (with higher weight on more recent quarters) and military casualties in unpopular wars, Hibbs has predicted presidential popular vote shares with impressive accuracy.

The simplicity of Hibbs's approach is philosophically significant: it suggests that two things dominate how Americans vote for president — their pocketbook assessment and their assessment of foreign military adventurism. Everything else — candidate quality, campaign strategy, debate performance — is noise around the fundamental signal.
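
The recency-weighted construction is easy to make concrete. The sketch below uses a simple geometric decay weight; the decay parameter and the quarterly growth figures are illustrative assumptions, not Hibbs's published weights or data.

```python
# Illustrative recency-weighted average of quarterly RDPI growth, in
# the spirit of the Bread and Peace model. Decay rate and quarterly
# figures are assumptions for illustration only.
import numpy as np

def weighted_rdpi_growth(quarterly_growth, decay=0.9):
    """Average quarterly RDPI growth rates, weighting recent quarters
    more heavily. quarterly_growth is ordered oldest -> newest."""
    q = np.asarray(quarterly_growth, dtype=float)
    # The most recent quarter gets weight 1, the one before gets
    # `decay`, the one before that decay**2, and so on.
    weights = decay ** np.arange(len(q) - 1, -1, -1)
    return float(np.sum(weights * q) / np.sum(weights))

# Hypothetical term: weak early quarters, stronger recent ones.
growth = [0.5, 0.8, 1.0, 1.4, 1.9, 2.2, 2.6, 3.0]
print(f"Recency-weighted RDPI growth: {weighted_rdpi_growth(growth):.2f}%")
```

Because recent quarters dominate the weighted average, this construction encodes the same "decay" finding noted earlier: voters judge the economy they are living in now, not the one they lived in two years ago.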

Michael Lewis-Beck and Colleagues — The Abridged Model

Lewis-Beck and various collaborators have developed models that include consumer confidence and economic perceptions as well as objective economic measures. Their work emphasizes the subjective dimension of economic voting: voters respond to how they feel about the economy, which is related to but distinct from how the economy is actually performing.

John Sides and Lynn Vavreck — The Air War and Ground War

In their analysis of the 2012 election, The Gamble, Sides and Vavreck offer an important integration of fundamentals with campaign analysis. Their argument: the fundamentals set the expected outcome; campaigns matter, but they matter mostly at the margins, and well-matched campaigns cancel each other out. Only when one side has a significant resource or messaging advantage do campaign effects move the needle beyond the fundamentals range.

This synthesis is perhaps the most balanced view: fundamentals set the frame, campaigns determine where within the frame the result falls.

Thomas Holbrook — The Comprehensive Model

Thomas Holbrook has developed a "reaction function" model that explicitly models how polling data responds to fundamental conditions over the course of the campaign. His insight: polls start closer to the fundamentals-implied outcome at the beginning of the campaign season, then sometimes diverge as campaign events shift attention, then converge back toward the fundamentals equilibrium as Election Day approaches. The final polls are thus more informative than summer polls, but the fundamentals-implied outcome is always a gravitational center.

18.7 The Generic Ballot and the Congressional Environment

Presidential fundamentals models have congressional analogues, though the prediction target is different: not a single national vote share but the net change in House or Senate seats. The structural inputs are similar — economic conditions, presidential approval — but additional variables are added to account for the specific geometry of congressional elections.

The Generic Ballot

The generic ballot — "If the election for Congress were held today, would you vote for the Republican or Democratic candidate in your district?" — serves as a structural input in many congressional forecasting models. Unlike individual race polls, the generic ballot measures the national political environment: how favorable or unfavorable is the overall climate for each party?

Generic ballot readings are available throughout the election cycle and tracked in aggregated form by the same organizations that aggregate presidential polls. In elections with strong national waves, the generic ballot moves dramatically; in more neutral environments, it hovers near 50-50.

Seats in Play and the "Exposure" Problem

Congressional fundamentals models also account for how many seats each party is "defending" — the geometry of which members are running in which districts. In a Senate election, only one-third of senators are on the ballot; the specific mix of seats up for election can disadvantage one party or another independent of the national environment.

The concept of seats in play captures this: if Democrats are defending fifteen seats in competitive territory and Republicans are defending five, Democrats face structural disadvantage even in a neutral national environment. This geometric exposure interacts with the national wave to produce the final seat change.

🌍 Global Perspective: Comparative Fundamentals Forecasting

Economic voting is not uniquely American. Political scientists have developed analogous fundamentals models for elections in the United Kingdom, Germany, France, Canada, Australia, and many other democracies. The general finding is that economic conditions predict incumbent party vote share across contexts, though the specific mechanisms and variables differ.

In parliamentary systems, the fundamentals often predict whether the incumbent party retains government rather than its precise vote share. In systems with more than two major parties, the modeling challenge is more complex: the incumbent may lose vote share but still form a government if the opposition is fragmented. The American two-party simplicity makes fundamentals modeling more tractable than in many other democracies.

18.8 What Fundamentals Models Cannot Capture

Given the predictive power of fundamentals models, it's tempting to ask: if we can predict the election from a few structural variables, what do campaigns actually do? What role does candidate quality play? What about October surprises?

The honest answer from the fundamentals literature is nuanced: these things matter, but they matter less than most observers believe, and much of their effect is already reflected, however imperfectly, in the structural variables by the time Election Day arrives.

Candidate Quality

Fundamentals models treat candidates as interchangeable: the model predicts what any Republican or any Democrat would do in given structural conditions. But some candidates are clearly better or worse than others. The 2022 Senate elections provided a vivid illustration: Republican candidates in Georgia, Pennsylvania, Arizona, and Nevada — many of them weaker by traditional standards — significantly underperformed what structural conditions would have predicted for a generic Republican.

Most fundamentals models don't include candidate quality because it's hard to measure objectively before an election. Some researchers have tried to use observable proxies — whether a candidate has held elective office before, their fundraising in early quarters — but these remain noisy approximations.

Black Swan Events

Fundamentals models, by definition, can't incorporate events that haven't happened yet and can't be predicted from historical regularities. A major terrorist attack, a financial crisis, a health scandal — these can shift elections in ways that no structural model could anticipate.

The key question is whether such events are large enough to overwhelm the fundamentals. The historical record suggests that most events — even ones that seem momentous during the campaign — don't shift outcomes much. But some events do. The 2008 financial crisis is a case where a structural shock materialized during the campaign season and may have been decisive in a race where structural conditions already favored the Democrats.

Local Factors

National fundamentals models predict national outcomes. Individual races — a Senate race in a particular state, a congressional district — are influenced by local factors that national models can't capture: the quality of the candidates, local economic conditions that differ from national trends, state-specific issues, and idiosyncratic events.

This is why fundamentals models work better for predicting presidential popular vote totals than for predicting individual Senate races or House seats. The further you get from the national aggregate, the more local noise overwhelms the structural signal.

⚠️ Common Pitfall: The Ecological Fallacy in Fundamentals Models

It's tempting to take a national fundamentals model and apply its logic to individual races: "The structural environment favors Democrats by 3 points nationally, so this specific Senate race should shift 3 points toward Democrats." This is a version of the ecological fallacy — applying aggregate relationships to individual cases. The relationship between national conditions and individual race outcomes is mediated by local factors that the national model doesn't capture.

18.9 Applying Fundamentals to the Garza-Whitfield Race

The Garza-Whitfield Senate race illustrates both the power and the limits of fundamentals analysis applied below the presidential level.

At the national level, the structural environment in the election year was modestly favorable for Democrats: the incumbent president's approval rating was around 48%, real income growth was positive but modest at around 1.5% annualized, and it was not a "time for change" year in the Abramowitz sense (the incumbent party was completing only its first term). These conditions suggested a neutral-to-slightly-favorable national environment for Democrats.

But applying that national signal to the Garza-Whitfield race required additional translation. The state had presidential results within 3 points in both 2020 and 2024, so it was structurally competitive. Garza's own approval ratings in the state were moderately positive, suggesting some incumbency advantage. Whitfield's favorability was high among Republican base voters but weak with Latino voters and independents.

Nadia Osei had built a state-level structural model as a complement to her polling tracking. Her model used:

  • State-level presidential approval (for the incumbent president)
  • State economic conditions (not just national numbers)
  • Garza's personal approval in the state
  • Historical Senate results in the state over the last six cycles
  • State demographic change (growing Latino electorate)

The model output: Garza favored by approximately 4-6 points, with substantial uncertainty. The structural inputs alone couldn't narrow the range much more than that.

"The fundamentals tell us where the gravitational center is," Nadia told her team. "The polls tell us whether we're at the center or somewhere off it. Right now, the polls and the fundamentals are saying the same thing — Garza up 3-5. That convergence is the most confidence-inspiring situation you can be in."

🔗 Connection: Fundamentals + Polls = FiveThirtyEight "Polls-Plus"

The "polls-plus" framework we mentioned in Chapter 17 is essentially a formal integration of fundamentals analysis with polling aggregation. The fundamentals provide an anchor — an estimate of where the race "should" be based on structural conditions — and the polls provide information about whether the race has deviated from that anchor. Early in the campaign, when polls are volatile and sparse, the model weights the fundamentals heavily. As Election Day approaches and reliable polls accumulate, the model shifts weight toward the polls. This is how sophisticated forecasting models use both tools.

18.10 Prediction vs. Explanation: A Critical Distinction

The existence of accurate fundamentals models raises a profound question about causation and explanation. If structural factors predict election outcomes, does that mean they cause outcomes? And does it mean that campaigns, candidate quality, and political events don't matter?

Here we encounter the central theme of prediction versus explanation. A model can be predictively powerful — accurately forecasting election results months in advance — without explaining the mechanisms by which those results come about.

The fundamentals models are making a different claim than a causal story would: they're saying that certain observable variables (economic growth, approval ratings, incumbency status) are strong statistical predictors of vote shares. They're not saying that voters consciously evaluate these variables and update their vote choices accordingly. The causal chain from "economy grows" to "incumbents win" runs through individual voter decisions, media coverage, candidate strategies, and many other mediating factors.

What fundamentals models capture, at their best, is a statistical regularity that has held across multiple elections. The challenge is that statistical regularities estimated on small samples (fifteen presidential elections) can be fragile: a few unusual elections can substantially change the estimated coefficients, and elections that violate the regularity are hard to distinguish in advance from elections that will follow it.

The fundamentals literature is most confident about the directional prediction (which party wins) and less confident about the exact magnitude. As forecasting tools, fundamentals models set reasonable priors that should be updated as additional information arrives — polls, late-breaking events, down-ballot dynamics.

🔵 Debate: Do Campaigns Matter?

The fundamentals literature is sometimes read as implying that campaigns don't matter — that the outcome is largely determined before the campaign begins. Scholars like John Sides and Lynn Vavreck push back vigorously: campaigns matter, they argue, but they matter mostly by "priming" voters to think about conditions the fundamentals have already established. Well-matched campaigns cancel out; poorly matched ones create deviations from the fundamentals prediction. The debate is not resolved, but the weight of evidence suggests the truth is somewhere between "campaigns are decisive" and "campaigns are irrelevant."

18.11 International Analogues: Comparative Forecasting Models

The fundamentals approach has been applied productively beyond American elections. Several comparative findings are worth noting.

United Kingdom: Economic conditions predict incumbent party vote share in British elections, with some evidence that subjective economic perceptions (how voters feel about the economy) are more predictive than objective measures. The 2019 Conservative landslide illustrates how Brexit complicated normal economic voting: the Conservatives won despite a middling economic record because they offered resolution of the dominant non-economic issue.

Germany: Economic voting is present but weaker than in the United States, partly because coalition governments diffuse responsibility — voters aren't sure which party to credit or blame for economic conditions.

France: The two-round presidential system creates complications for fundamentals forecasting: the first round selects candidates who enter the runoff, making the structural prediction more about who advances than who ultimately wins.

Canada: Canadian fundamentals models show similar patterns to the American case, with economic conditions and incumbent approval being the strongest predictors of government changes.

The comparative literature broadly confirms the economic voting hypothesis across democracies, while finding that the specific variables and their magnitudes vary by institutional context.

18.12 The Full Time for Change Model: Working With the Equation

For analysts who want to apply the TFC model directly rather than simply read its outputs, here is a more detailed treatment of working with the model.

Abramowitz's published version of the model, estimated across post-WWII presidential elections, yields regression coefficients that have been approximately stable across multiple updates. A representative specification looks like:

Incumbent Party Vote Share = 48.0 + 0.54(Q2 GDP) + 0.10(June Net Approval) + 2.50(First-Term Incumbent)

Where:

  • Q2 GDP is the annualized growth rate of real GDP in the second quarter of the election year (in percentage points)
  • June Net Approval is the incumbent president's approval minus disapproval in June polling (in percentage points, so +10 means approve exceeds disapprove by 10 points)
  • First-Term Incumbent is 1 if a first-term president is personally on the ballot seeking re-election, and 0 otherwise (a nominee from the incumbent party who is not a first-term incumbent — including, historically, a president seeking a third term — is coded 0)
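
For readers who want to compute the point estimate directly, here is a minimal implementation of the representative specification above. The coefficients are the illustrative values from the equation in the text, not Abramowitz's exact published estimates.

```python
# Minimal implementation of the representative Time for Change
# specification above. Coefficients are the illustrative values from
# the text, not the exact published estimates.

def tfc_vote_share(q2_gdp, june_net_approval, first_term_incumbent):
    """Predicted incumbent-party share of the popular vote (%).

    q2_gdp               -- annualized real Q2 GDP growth (pct points)
    june_net_approval    -- approval minus disapproval in June (pct points)
    first_term_incumbent -- 1 if a first-term president is on the ballot
    """
    return (48.0
            + 0.54 * q2_gdp
            + 0.10 * june_net_approval
            + 2.50 * first_term_incumbent)
```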

Interpreting the Coefficients

The intercept (48.0): This represents the baseline vote share for an incumbent-party candidate when all other variables are zero — essentially a neutral economic environment, net approval of zero, and no first-term incumbent running. 48% is below the 50% threshold for winning, reflecting the modest structural disadvantage the incumbent party's candidate faces at baseline — the "time for change" penalty.

The GDP coefficient (0.54): For each percentage point of Q2 GDP growth, the incumbent party is predicted to gain about 0.54 percentage points in vote share. In a year with 4% Q2 GDP growth, this contributes about 2.2 points. In a year with -2% GDP (recession), this subtracts about 1.1 points from the incumbent's expected total.

The net approval coefficient (0.10): For each percentage point of net presidential approval, the incumbent party gains about 0.1 points. A president with +20 net approval (55% approve, 35% disapprove) gains 2 points from this variable relative to a president at net zero. This coefficient is smaller than the GDP coefficient, but approval ratings vary over a wider range, so the practical impact can be comparable.

The incumbency coefficient (2.50): A first-term president personally on the ballot is predicted to perform about 2.5 percentage points better than a non-incumbent from the same party would in the same economic and approval environment. This is a substantial advantage, reflecting the personal incumbency effect above and beyond partisan identification.

Worked Example: The 2024 Context

In a hypothetical scenario approximating conditions at the time of the 2024 election:

  • Q2 GDP growth: approximately +3.0%
  • June net approval (Biden): approximately -14 (negative net approval)
  • First-term incumbent: 1 (Biden, a first-term president personally seeking re-election)

Predicted Biden vote share: 48.0 + 0.54(3.0) + 0.10(-14) + 2.50(1) = 48.0 + 1.62 - 1.40 + 2.50 = 50.72

This would predict a narrow Biden win — though real applications require using the actual published coefficients and accounting for the model's standard error.
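
Using the tfc_vote_share sketch defined above, the worked example and an approximate interval (based on the roughly 3-point standard error discussed in the pitfall below) can be computed directly:

```python
# Reproducing the worked example, with the model's rough 3-point
# standard error turned into an approximate 95% interval.
# tfc_vote_share is defined in the sketch above.
point = tfc_vote_share(q2_gdp=3.0, june_net_approval=-14,
                       first_term_incumbent=1)
se = 3.0  # approximate forecast standard error, per the text
print(f"Point estimate: {point:.2f}%")
print(f"~95% interval:  {point - 2*se:.2f}% to {point + 2*se:.2f}%")
# -> Point estimate: 50.72%; interval roughly 44.7% to 56.7%
```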

⚠️ Common Pitfall: Treating Model Output as a Point Estimate

The model equation produces a point estimate, but that estimate comes with a standard error of roughly 3 percentage points. A prediction of 50.72% should always be accompanied by a confidence interval: something like 44.7% to 56.7% (±2 standard errors). Political analysts who cite fundamentals model outputs without confidence intervals are presenting false precision.

18.13 When the Fundamentals Don't Align: Mixed Signals and Analytical Judgment

The cleanest applications of fundamentals models come when all the structural inputs point in the same direction: a strong economy, a popular president, a first-term incumbent running for re-election. These conditions produce a confident structural prediction of an incumbent advantage.

The harder cases arise when structural inputs conflict. What do you do when the economy is strong but the president is unpopular? When the president is popular but the economy is weak? When the structural environment should produce a wave, but candidate quality differences seem likely to counteract it?

The 2022 Midterms: A Case of Mixed Structural Signals

The 2022 midterm elections presented a genuinely complicated structural picture:

  • The generic ballot showed Republicans ahead by 2-4 points — suggesting a favorable Republican environment
  • But Biden's approval rating, while low (approximately 42%), was not historically disastrous for a first-term midterm
  • Inflation was at 40-year highs, a strong negative signal for the incumbent party
  • But unemployment was extremely low, a strong positive signal
  • The Supreme Court's Dobbs decision had energized Democratic voters, particularly women — a structural shift with no close historical analogue

These conflicting signals made 2022 a genuinely hard case for structural models. Most predicted a substantial Republican wave (historical patterns for first-term midterms with low presidential approval suggested losses of 20-30 House seats). The actual result was Republicans gaining 9 seats — a net gain, but far short of the historical comparison group.

The lesson: when structural signals conflict, the model's uncertainty increases substantially. Analysts should widen their confidence intervals and become more agnostic about direction and magnitude in mixed-signal environments.

Analytical Judgment in the Face of Structural Uncertainty

When structural models give mixed signals, good analysts apply several additional checks:

Historical analogues: Find the 3-5 elections in the historical record most similar to the current structural environment. What was the range of outcomes in those elections? This provides a historically grounded confidence interval.

Weight the strongest predictors: Not all structural variables are equally reliable. Real income growth and presidential approval have stronger empirical track records than most other variables. In a mixed signal environment, weight these more heavily.

Scenario-weight the conflicting signals: Rather than trying to combine conflicting signals into a single average, explicitly model scenarios (a minimal sketch of this arithmetic follows this list). "If inflation dominates voter evaluations, the Republican wins by 5+ points. If Dobbs energizes Democratic turnout and dominates, the race is essentially tied. What do I think is the probability of each scenario?"

Stay epistemically humble: When structural signals conflict, the right answer is often genuine uncertainty — a wide confidence interval that honestly represents what the data supports. Claiming more confidence than the conflicting signals justify is a common error.
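
The scenario-weighting check lends itself to a few lines of arithmetic. In the sketch below, the scenario margins and probabilities are hypothetical; the point is that the scenario-weighted mean comes with a between-scenario spread that should widen the reported uncertainty.

```python
# Hypothetical scenario-weighting for a mixed-signal cycle. Margins
# and probabilities are assumptions for illustration only.
scenarios = [
    # (description, Democratic margin if scenario holds, probability)
    ("inflation dominates voter evaluations", -5.0, 0.45),
    ("Dobbs/turnout dominates",                0.0, 0.35),
    ("signals roughly offset",                -2.0, 0.20),
]

mean = sum(m * p for _, m, p in scenarios)
# Between-scenario variance: how much the answer depends on which
# scenario turns out to be true.
var = sum(p * (m - mean) ** 2 for _, m, p in scenarios)
print(f"Scenario-weighted margin: D{mean:+.1f}")
print(f"Between-scenario spread (std dev): {var ** 0.5:.1f} pts")
```

The between-scenario spread is exactly the extra uncertainty the "epistemically humble" check above asks analysts to report.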

18.14 Seat-Level Applications: Translating National Fundamentals to Individual Races

The fundamentals framework that works so well at the national level becomes more approximate and more uncertain when applied to individual congressional races. Understanding exactly how this translation works — and where it breaks down — is essential for Senate and House analysts.

The Translation Problem

National fundamentals say: the political environment is roughly D+2, meaning Democrats are expected to run about 2 points better than their partisan baseline nationwide. Translating this to an individual Senate race requires the following steps (a code sketch pulling them together appears after Step 5):

Step 1: Establish the partisan baseline for the specific state. A state that has voted Democratic for president by an average of 3 points over the last three cycles has a baseline of approximately D+3. In a D+2 national environment, the Democratic Senate candidate's starting position is roughly D+5.

Step 2: Add the incumbency adjustment. A Democratic incumbent with strong approval ratings in the state adds perhaps 2-4 points of personal incumbency advantage. A Democratic challenger starting from scratch subtracts the reverse.

Step 3: Apply candidate quality adjustments. This is where structural models break down and judgment must supplement the formula. A particularly strong or weak candidate can move the race 3-5 points in either direction from the structural baseline.

Step 4: Account for state-specific issues. A Senate race in a border state dominated by immigration policy may respond differently to national conditions than one in a state where economic issues are paramount. State-specific conditions can dampen or amplify the national wave.

Step 5: Acknowledge the residual uncertainty. After all these adjustments, the remaining uncertainty in an individual Senate race is substantially larger than the uncertainty in a national popular vote prediction. Individual races have idiosyncratic features that national models can't capture.
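
The five steps can be pulled together in a short sketch. Every input below — the national environment, state lean, and the adjustment values — is a hypothetical illustration, and the 6-point residual uncertainty is an assumption standing in for the larger seat-level error discussed in Step 5.

```python
# Sketch of the five-step translation from national environment to a
# single Senate race. All inputs are hypothetical illustrations.

def seat_level_estimate(national_env, state_lean, incumbency_adj,
                        candidate_quality_adj, state_issue_adj):
    """Return (expected Democratic margin, rough uncertainty, in points)."""
    margin = (national_env             # Step 1: national environment (D+2)
              + state_lean             # Step 1: state partisan baseline
              + incumbency_adj         # Step 2: personal incumbency
              + candidate_quality_adj  # Step 3: judgment call, +/- 3-5
              + state_issue_adj)       # Step 4: state-specific issues
    # Step 5: seat-level residual uncertainty is much larger than the
    # uncertainty on a national popular-vote forecast.
    uncertainty = 6.0
    return margin, uncertainty

margin, unc = seat_level_estimate(national_env=2.0, state_lean=3.0,
                                  incumbency_adj=2.5,
                                  candidate_quality_adj=-1.0,
                                  state_issue_adj=-0.5)
print(f"Structural baseline: D{margin:+.1f}, roughly +/- {unc:.0f} pts")
```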

The "Regression to the Mean" Principle in Seat-Level Analysis

A useful concept for translating national fundamentals to individual races is regression to the mean: in favorable national environments, safe seats tend to get safer; competitive seats tend to shift toward the favored party; and some previously safe seats for the opposition become competitive. But the degree of shift varies enormously, and races that look competitive on paper can be decided by local candidate factors that dwarf the national signal.

The correct posture for seat-level analysis is to use the national environment as a prior — a baseline expectation — that is then updated substantially based on local polling, candidate quality, and state-specific factors. In a race with strong local polling, the polling should typically dominate the structural baseline; in a race with no local polling, the structural baseline is the best available guide.

18.15 The Polarization Challenge to Fundamentals Models

Modern partisan polarization poses specific challenges to fundamentals models that are worth examining systematically. Several features of contemporary American politics may be reducing the reliability of economic voting and structural forecasting relative to earlier decades.

Compressed Approval Ratings

As partisan identity has become more fixed and more emotionally central to voters, presidential approval ratings have become more compressed. In earlier decades, presidents regularly had approval ratings that varied by 15-20 points over their term — affected substantially by economic conditions and major events. Today's presidents operate in a more constrained range: partisan opponents are near-uniformly opposed regardless of performance, while partisan supporters are near-uniformly approving. The moveable middle — the truly persuadable evaluators — has shrunk.

This compression has two consequences for fundamentals models. First, approval ratings vary less, so they have less statistical power in models that use approval as a predictor. Second, the relationship between economic conditions and approval may be weaker: if a Republican president can't win approval from Democrats even in a booming economy, and Democratic presidents can't lose approval from Democrats even in a downturn, the economic-approval link is attenuated.

Some researchers have argued that models developed on data from 1948-1992 — a less polarized era — should have their coefficients adjusted before applying them to the 2012-present period. The practical implication is that structural models may need to be recalibrated for the polarized environment, though the small sample of recent elections makes precise recalibration difficult.

The Nationalization Effect

The nationalization of politics — congressional races functioning more as referenda on the president and national conditions than as local evaluations of specific incumbents — has strengthened some aspects of structural forecasting while complicating others. Presidential approval ratings are now a stronger predictor of Senate and House outcomes than they were in the 1970s and 80s, because all races move more together with the national environment. But local economic conditions and local incumbency effects have become weaker predictors, because voters pay less attention to them.

The net effect for structural forecasting is a model that works somewhat differently than it did in the classic literature: national variables (approval, generic ballot) have become relatively more important, while local variables have become relatively less important.

Sorting and the Shrinking Swing Voter

As partisan sorting has progressed, the share of genuine swing voters has decreased. Forecasting models built on the assumption that 20-25% of the electorate are persuadable and responsive to economic conditions need updating for an environment where perhaps 8-12% of voters are genuinely moveable. In a more sorted electorate, structural factors still matter, but they operate on a smaller fraction of the population — which means elections are simultaneously more predictable in their direction and more sensitive to turnout than in previous eras.

This shift toward turnout as a determinant of outcomes creates new challenges for structural models, which historically have focused on the preference side of election outcomes rather than the participation side.

18.16 The Role of Issue Environments in Structural Analysis

Traditional fundamentals models focus on economic conditions and approval ratings as structural inputs. But elections are also shaped by the broader issue environment — what issues are most salient to voters, and how those issues cut across partisan lines.

The classic economic model implicitly assumes that economic evaluations dominate voting decisions. But in elections where a non-economic issue is overwhelmingly salient — a foreign war, a pandemic, a major domestic policy shift — the economic model may miss the most important structural input.

Issue Salience as a Structural Variable

Some scholars have argued that issue salience should be incorporated into structural models explicitly. When a non-economic issue dominates the information environment — and when that issue cuts in a direction different from what economic conditions would predict — the standard fundamentals model will be miscalibrated.

The 2022 midterms provide the clearest recent example. The Dobbs Supreme Court decision eliminating the federal right to abortion was a policy shock with no close historical analogue in the fundamentals forecasting literature. It energized Democratic-leaning voters, particularly young women, creating a mobilization advantage that purely economic models had no way to anticipate. The result: a substantially better-than-expected Democratic performance relative to the economic indicators.

To incorporate issue salience, analysts need a measure of which issues dominate the electorate's attention. Generic question wording — "what is the most important problem facing the country?" — can provide a rough proxy, but it's a blunt instrument. More sophisticated approaches might track issue search volume, media attention metrics, or question-specific polling to assess whether economic issues or non-economic issues are driving electoral decision-making in a specific cycle.

The Generic Ballot as an Issue-Incorporating Signal

One practical solution to the issue salience problem is to rely more heavily on the generic ballot — which implicitly incorporates whatever issues are currently driving voter preferences — and less on structural economic variables when issue salience patterns are unusual. The generic ballot is itself a form of structural indicator: it measures the national partisan environment directly, whatever causes it.

This creates a natural hybrid approach: in "normal" election cycles where economic voting dominates, weight the economic structural variables heavily. In unusual cycles where non-economic issues are dominating voter attention, weight the generic ballot (which picks up those issues) more heavily and the economic variables less so.

18.17 Structural Models and Long-Range Forecasting: How Early Is Too Early?

One of the most practically useful applications of fundamentals models — and one of the most commonly misunderstood — is their use for long-range forecasting: making predictions about election outcomes months or even years before Election Day.

What Fundamentals Models Say at Different Horizons

The information available to a fundamentals model varies substantially depending on how far in advance you're making the prediction:

Two years out: Only historical patterns and party composition of the legislature/presidency are known. At this horizon, the "time for change" variable (how many consecutive terms the incumbent party has held the White House) and very long-run economic trends are the primary inputs. Predictions at this horizon are little better than base rates — essentially a coin flip adjusted for the incumbent party's historical performance.

One year out: Structural factors are becoming clearer: the economic trend is establishing itself, presidential approval has settled into a rough range, and the likely candidates are beginning to emerge. Models at this horizon can make meaningful directional predictions — which party is currently favored — but with very wide uncertainty intervals.

Six months out: This is roughly the horizon at which most published fundamentals models operate. Q2 GDP data is available, June approval ratings are in hand, and the models can be formally estimated and published. Predictions at this horizon are meaningfully informative, with confidence intervals of approximately ±4-5 percentage points.

Three months out: Fundamentals models are most accurate at this horizon when combined with early general-election polling data. The hybrid approach — weighting fundamentals heavily when polls are sparse, then reweighting toward polls as they accumulate — outperforms either pure approach.

The Value of Long-Range Structural Forecasts

Even at the one- or two-year horizon, structural forecasts have value for specific purposes:

Campaign planning: A Senate campaign beginning to organize 18 months before Election Day benefits from knowing whether the structural environment favors them or their opponent. If the structural fundamentals suggest a 5-point headwind, the campaign may invest more in candidate development, fundraising infrastructure, and early voter contact. If the fundamentals are favorable, they may focus on maintaining advantages rather than driving change.

Resource allocation by national parties: The DSCC and NRSC, responsible for electing Democratic and Republican senators respectively, are constantly making investment decisions about which races to prioritize. Long-range structural forecasts that identify the likely competitive states allow earlier and more strategic investment than waiting for formal polling to begin.

Journalistic framing: Reporters and editors who understand the structural landscape can frame election stories more accurately from the beginning of the cycle, rather than adopting narratives that ignore fundamental advantages and disadvantages.

The Limits of Long-Range Prediction

The longer the horizon, the wider the uncertainty interval — and the more important it is to communicate that uncertainty honestly. A structural model saying "Democrats favored by approximately 3 points, with a confidence interval of ±6 points" at the two-year horizon is not a prediction of the outcome — it's a characterization of the structural environment as currently understood. Major economic changes, candidate selection, geopolitical events, and dozens of other factors can shift the actual outcome far from the structural prediction.

The common error is treating early structural forecasts as commitments rather than priors. A party organization that concludes in spring of the prior year that they're going to win, based on favorable structural indicators, and therefore under-invests in candidate recruitment and campaign infrastructure, is mistaking a structural prior for a certainty. The prior should inform strategy; it should not substitute for it.

18.18 Synthesis: Reading the Structural Landscape Before the Polls Arrive

One of the most valuable practices a political analyst can develop is the habit of reading the structural landscape before polling data becomes available — establishing a baseline expectation that then gets updated as polls arrive, rather than treating the first polls as authoritative from the moment they're published.

Here is what a disciplined structural reading looks like in practice:

In late spring of an election year, before reliable general-election polling is available, a Senate race analyst should be able to answer the following questions from structural data alone:

What is the national political environment? Assess the president's approval rating trend and the generic ballot. Is the environment favorable to one party? Neutral? Is there a wave pattern forming, or does the current environment look like a typical midterm or presidential year?

What is the historical partisan lean of this state? Over the last four to six election cycles, has this state voted consistently for one party? Has it been drifting? Is the demographic composition of the state changing in ways that would shift the baseline?

Who are the candidates and what are their structural characteristics? Is there an incumbent? How long have they served? What is their approval rating in the state? Is the challenger a high-profile or experienced candidate, or a first-time candidate starting from scratch?

What are the state-level economic conditions? Is the state economy performing better or worse than the national economy? Are there specific industries or issues — agriculture, manufacturing, energy, housing — that might make the state respond differently to national conditions?

With answers to these questions, a thoughtful analyst can construct a structural baseline that identifies (a) the rough partisan balance point for the race, (b) the direction and approximate magnitude of the structural advantage for either candidate, and (c) the range of uncertainty around that baseline given the data available.

This baseline is not a prediction. It is a calibrated prior — an informed starting point for a Bayesian process of updating as more information arrives. The first credible poll of the race is not a revelation; it is new evidence to weigh against the structural baseline. If the poll is consistent with the baseline, confidence in both increases. If the poll diverges from the baseline, the analyst has a mystery to solve — and solving it will typically yield deeper insight into the specific dynamics of the race than simply accepting either the structural model or the poll at face value.

This practice — establishing a structural baseline before polls arrive, and then updating the baseline rather than abandoning it when polls come in — is the hallmark of analytically sophisticated election forecasting. It uses all available information in a principled way, rather than lurching from one data source to the next as each becomes available.
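
One simple formalization of updating rather than abandoning the baseline is a precision-weighted (Gaussian) combination of the structural prior and a new poll. The numbers below are hypothetical; the structure is the point.

```python
# Minimal Gaussian (precision-weighted) update of a structural prior
# with a new poll. All numbers are hypothetical.

def gaussian_update(prior_mean, prior_sd, poll_mean, poll_sd):
    """Combine prior and poll, weighting each by its precision (1/var)."""
    w_prior = 1.0 / prior_sd ** 2
    w_poll = 1.0 / poll_sd ** 2
    mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
    sd = (w_prior + w_poll) ** -0.5
    return mean, sd

# Structural baseline: D+5 with a wide +/- 5 band; first poll: D+3 +/- 4.
mean, sd = gaussian_update(5.0, 5.0, 3.0, 4.0)
print(f"Updated estimate: D+{mean:.1f} (sd {sd:.1f})")
```

Note that the updated estimate sits between the baseline and the poll, and the uncertainty shrinks — the first credible poll sharpens the prior rather than replacing it.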

18.19 Building Institutional Credibility Through Structural Analysis

For political analysts and polling organizations, there's a professional dimension to fundamentals modeling that goes beyond technical accuracy: using structural analysis well builds long-term credibility with clients and audiences.

Campaigns that receive a structural baseline analysis in February or March — months before serious polling begins — have more time to understand the nature of the environment they're operating in. A campaign told in March that structural conditions favor their candidate by approximately 4 points is positioned to make better decisions about resource deployment, candidate positioning, and voter targeting than one that doesn't get this information until the polling season matures in September.

This is part of what makes Nadia Osei's integrated approach — combining structural analysis with polling aggregation — more valuable than either tool alone. The structural analysis provides the anchor that makes individual polls interpretable; the polls provide the real-time updates that keep the structural baseline honest.

For Meridian Research Group, Vivian Park's insistence on sharing structural analysis with clients — not just poll numbers — reflected this philosophy. "A pollster who only gives you the current horse race number," she told Carlos once, "is like a weather service that only tells you today's temperature. The structural context is the climate forecast. Both matter, for different decisions."


Summary

Fundamentals models represent one of political science's most robust and surprising findings: structural conditions — the economy, presidential approval, incumbency, political time — predict election outcomes months before Election Day, with accuracy that should humble anyone who thinks campaigns are the primary determinant of results.

The Time for Change model and its relatives capture the essentials of retrospective voting: when conditions are good, incumbents win; when conditions are bad, they lose. The parsimony of these models is a feature: with small historical samples, complex models overfit; simple ones generalize better.

But fundamentals models have real limits. They cannot capture candidate quality, local factors, or events that haven't happened yet. They predict averages, not individual races. They also illustrate the gap between prediction and explanation — knowing that GDP growth predicts vote share doesn't explain the causal mechanisms. And in a polarized era, with compressed approval ratings and weaker economic voting, their track record may be slightly worse than it was in the mid-20th century.

The right approach — and the one Nadia Osei uses — is to treat fundamentals models as providing a prior, an anchor for expectations, which is then updated as polls accumulate and campaign events unfold. This is neither dismissing the fundamentals ("only polls matter") nor over-relying on them ("the race is predetermined"). It is using the right tool for the right purpose: structural models for baseline and context-setting, polling aggregates for current-state measurement, and the combination of the two for the most honest possible assessment of where the race is headed.

Nadia had explained this to Carlos on the day she assigned him to build the structural model for Garza-Whitfield. "Why do both?" he had asked. "If the fundamentals predict the outcome anyway, what do we need the polls for?"

"Because the fundamentals have a confidence interval of plus or minus five points," Vivian had answered from across the office. She had an uncanny ability to answer questions directed at other people, particularly when the question touched her area of deepest conviction. "A five-point interval spans the difference between a comfortable Democratic win, a nail-biter, and a Republican upset. The polls narrow that interval as we get closer to the election. You need both to get the full picture."

Carlos wrote that down. He had developed a habit, in his first months at Meridian, of writing down things Vivian said that he wanted to think about later. The notebook was getting full.

"What if the polls and the fundamentals completely disagree?" he asked.

"Then you have a mystery," Vivian said. "And your job is to solve it." The integrated approach, combining fundamentals with polling aggregation, is more powerful than either alone. That integration is the subject of Chapter 19's probabilistic forecasting framework.