Chapter 19 Quiz: Probabilistic Forecasting and Uncertainty

Multiple Choice

1. A Monte Carlo simulation in election forecasting works by:

a) Randomly selecting which polls to include in the average
b) Drawing many random election-day outcomes from a probability distribution and counting how often each candidate wins
c) Randomly shuffling the order in which states report results
d) Using random sampling to find the optimal polling average

2. "Correlated errors" in election forecasting refers to:

a) The intentional coordination between pollsters to produce similar results
b) The tendency of polling errors in different states to move in the same direction simultaneously
c) Errors that correlate with a candidate's favorability ratings
d) The statistical correlation between sample size and margin of error

3. In 2016, FiveThirtyEight gave Trump approximately a 29% win probability — higher than most other forecasters. This was later seen as evidence that 538's model was:

a) Biased in favor of Trump
b) Poorly calibrated because Trump actually won
c) More realistic because it modeled correlated errors and gave more meaningful probability to the actual outcome
d) Overconfident in its methods

4. "Calibration" in probabilistic forecasting means:

a) The process of adjusting for house effects
b) The degree to which predicted probabilities match actual event frequencies across many predictions
c) The calibration of survey instruments to reduce bias
d) The adjustment of poll weights to match known population parameters

5. The "Vivian Park method" of uncertainty communication emphasizes which of the following as its first principle?

a) Never disclose uncertainty to clients who can't handle it
b) Leading with a range rather than a point estimate
c) Using technical language to demonstrate expertise
d) Converting probabilities to dollar amounts to make them concrete

6. Which of the following is the correct interpretation of a 30% win probability for Candidate B?

a) Candidate B has no realistic chance of winning
b) Something has gone wrong in the model if Candidate B is at only 30%
c) There is a meaningful, non-trivial probability that Candidate B wins — roughly comparable to rolling a 1 or 2 on a six-sided die
d) The forecaster believes Candidate A is about three times as good a candidate as Candidate B

7. Bayesian updating in election forecasting refers to:

a) Replacing old polls with new ones in the polling average
b) Updating the probability estimate as new information (polls, events) arrives, giving more weight to information consistent with prior evidence
c) Adjusting for partisan bias in Bayesian analysis
d) Using Bayes' theorem to compute the probability that a poll is accurate

8. The 80% confidence interval on an election outcome of A +3.8 (with historical SD of 3.2 points) would approximately be:

a) A wins by exactly 3.8 points
b) A wins by 2-6 points
c) A wins by -0.3 to +7.9 points (roughly 1.3 standard deviations in each direction)
d) The interval cannot be computed without more data

9. When a forecaster says that an election is "genuinely uncertain" with a 50% win probability for each candidate, this most accurately means:

a) The forecaster doesn't know how to forecast this race
b) The race is tied in the polls
c) Given current information, both outcomes are equally consistent with what we know; the forecaster is not withholding a confident prediction
d) The forecaster is refusing to commit to a prediction for political reasons

10. The "map vs. territory" distinction in probabilistic forecasting means:

a) Electoral maps are different from territory maps used by campaigns
b) The forecast is a representation of our knowledge about a future event, not the event itself; uncertainty is in the model, not necessarily in the world
c) Forecasters in urban areas ("the map") are systematically different from rural forecasters
d) State-level maps are more accurate than national-level forecasts

True/False

11. A forecaster who gives a 40% probability to the candidate who eventually wins has made a forecasting error. (True/False — explain)

12. Adding more correlated polling errors to a model generally increases the tails of the probability distribution and assigns more probability to extreme outcomes. (True/False — explain)

13. A well-calibrated forecaster should never be surprised by an election outcome. (True/False — explain)

14. The historical standard deviation of polling averages as predictors of election outcomes is approximately 0.5 percentage points. (True/False — with correction)

15. Scenario analysis is more useful for campaign decision-making than a single win probability number because it connects uncertain outcomes to specific observable conditions and strategic responses. (True/False — explain)

Short Answer

16. Explain in two to three sentences why ignoring correlated state errors in a presidential election model leads to underestimation of the probability of large, systematic national shifts.

17. A client says: "Your model says we have a 68% win probability, but you were also showing 68% two months ago. Why are we paying you for updates if the number doesn't change?" How would you respond?

18. What is the difference between a forecaster being "wrong" and a forecaster being "poorly calibrated"? Why does this distinction matter for evaluating forecasters?

19. Describe the four components of the Vivian Park method for communicating uncertainty to non-technical clients.

20. A journalist reports on a Senate race as follows: "Forecasters show Candidate A likely to win, with Candidate B having only a 25% chance." What is missing from this description that would make it more informative? How would you rewrite it?

Answer Key

  1. b
  2. b
  3. c
  4. b
  5. b
  6. c
  7. b
  8. c (an 80% CI is approximately ±1.28 SD; 1.28 × 3.2 ≈ 4.1; so 3.8 ± 4.1 ≈ -0.3 to +7.9)
  9. c
  10. b
  11. False — giving 40% to the winner is not a forecasting error; it means you gave 60% to the loser, which is appropriate if the outcome was genuinely uncertain. A calibration error would be systematically giving 40% to candidates who actually win 80% of the time.
  12. True — correlated errors mean that all states can be off in the same direction simultaneously, making national-level landslides and blowouts more probable than under independence assumptions.
  13. False — a well-calibrated forecaster should be surprised by events they gave low probability, roughly as often as the probability implies. Being well-calibrated doesn't mean you're never surprised; it means you're surprised at the appropriate frequency.
  14. False — historical standard deviation is approximately 3-4 percentage points in competitive races, not 0.5.
  15. True — a single probability number doesn't tell you what conditions produce a win or a loss; scenario analysis connects the probability to specific observable developments and strategic choices.
  16. See section 19.3: under independence, a systematic national error requires each state independently getting the same wrong answer — extremely unlikely by chance. Under correlated errors, one underlying cause (e.g., a faulty likely voter screen) affects all states simultaneously, making a large uniform miss much more plausible.
  17. The probability hasn't changed, but what has changed is the underlying evidence base — more polls have come in, confirming the earlier estimate rather than revising it. Stability in the estimate is itself information: the race hasn't moved. Also, the 68% estimate from two months ago carried more uncertainty about whether it would hold; today's 68% is built on a larger evidence base with tighter confidence intervals.
  18. Being "wrong" on a single prediction means the outcome differed from the predicted central estimate. Being "poorly calibrated" is a systematic pattern: events assigned 70% probability happen only 40% of the time, for example. A single wrong prediction tells us almost nothing about calibration. The distinction matters because poorly calibrated forecasters are providing systematically misleading information, even when any single prediction appears reasonable.
  19. (1) Lead with a range, not a point estimate; (2) Translate probability into accessible analogies (weather forecasts, dice, playing cards); (3) Make uncertainty actionable by connecting it to observable scenarios and decision-relevant contingencies; (4) Report confidence in the estimate — what drives the uncertainty, not just the estimate itself.
  20. Missing: what the 25% probability means in practice (roughly 1-in-4 — meaningful, not negligible); the specific conditions under which Candidate B would win; and the confidence interval on the margin if Candidate A does win. A better version: "Models show Candidate A as a moderate favorite with roughly a 75% win probability — about the likelihood of drawing a card that isn't a spade from a standard deck. Candidate B has a meaningful 1-in-4 chance of winning, which would likely require [specific conditions]. The race is expected to be decided by 3-6 points in A's favor, though both a closer race and a wider margin are plausible."
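The updating idea behind answers 7 and 17 can be sketched with a toy normal-normal Bayesian update. The priors, poll values, and standard deviations below are made-up assumptions for illustration, not values from the chapter:

```python
# Toy normal-normal Bayesian update of a race margin as new polls arrive.
# All numbers here are illustrative assumptions.

def update(prior_mean, prior_sd, poll_mean, poll_sd):
    """Posterior for the true margin after observing one poll,
    assuming both the prior and the poll error are normal."""
    prior_prec = 1 / prior_sd**2   # precision = 1 / variance
    poll_prec = 1 / poll_sd**2
    post_prec = prior_prec + poll_prec
    post_mean = (prior_mean * prior_prec + poll_mean * poll_prec) / post_prec
    return post_mean, post_prec ** -0.5

mean, sd = 3.0, 2.5           # prior: A +3.0, fairly uncertain
for poll in (3.5, 2.8, 3.2):  # new polls consistent with the prior
    mean, sd = update(mean, sd, poll, poll_sd=3.0)
    print(f"posterior: A +{mean:.1f} (SD {sd:.1f})")
```

Polls consistent with the prior leave the posterior mean nearly unchanged while steadily shrinking its standard deviation, which is why a win probability that "hasn't moved" can still reflect a stronger evidence base, as answer 17 argues.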
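The interval arithmetic in answer 8 can be checked in a few lines, using the normal approximation and the quiz's own numbers:

```python
# Verify the 80% confidence interval from answer 8: A +3.8 with SD 3.2.
from statistics import NormalDist

point_estimate = 3.8  # polling-average margin, A +3.8
historical_sd = 3.2   # historical SD of polling averages vs. outcomes

# An 80% interval leaves 10% in each tail, so use the 90th percentile
# of the standard normal distribution (approximately 1.28).
z = NormalDist().inv_cdf(0.90)

low = point_estimate - z * historical_sd
high = point_estimate + z * historical_sd
print(f"z ≈ {z:.2f}, 80% CI ≈ ({low:.1f}, {high:.1f})")  # roughly (-0.3, 7.9)
```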
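The effect described in answers 12 and 16 can be demonstrated with a toy Monte Carlo. The number of states, the error sizes, and the correlation share are illustrative assumptions, not values from the chapter:

```python
# Toy Monte Carlo: correlated state polling errors fatten the tails of the
# national outcome distribution relative to an independence assumption.
import random

random.seed(0)
N_SIMS = 20_000
N_STATES = 10
STATE_SD = 3.0  # per-state polling error SD, in points

def national_shift(rho):
    """Average error across states when a shared national error component
    accounts for a fraction rho of the per-state variance."""
    shared_sd = (rho * STATE_SD**2) ** 0.5
    indep_sd = ((1 - rho) * STATE_SD**2) ** 0.5
    shifts = []
    for _ in range(N_SIMS):
        shared = random.gauss(0, shared_sd)  # hits every state at once
        states = [shared + random.gauss(0, indep_sd) for _ in range(N_STATES)]
        shifts.append(sum(states) / N_STATES)
    return shifts

for rho in (0.0, 0.7):
    shifts = national_shift(rho)
    big_miss = sum(abs(s) > 3 for s in shifts) / N_SIMS
    print(f"rho={rho}: P(national miss > 3 pts) ≈ {big_miss:.3f}")
```

Under independence the per-state errors mostly cancel, so a 3-point national miss is vanishingly rare; with a shared error component, the same-sized miss becomes quite plausible, exactly the mechanism answer 16 describes.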
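The calibration check described in answer 18 can be sketched as follows; the forecast track record here is invented purely for illustration:

```python
# Bucket predictions by stated probability and compare each bucket's
# stated probability to the observed frequency of the event.
from collections import defaultdict

# (stated probability, did the event happen?) — a hypothetical track record
forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.7, True), (0.7, True), (0.7, True), (0.7, False), (0.7, True),
    (0.3, False), (0.3, False), (0.3, True), (0.3, False), (0.3, False),
]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%}: observed {observed:.0%} "
          f"over {len(outcomes)} forecasts")
```

A well-calibrated forecaster's observed frequencies track the stated probabilities across buckets; a large, persistent gap in one direction is the systematic pattern answer 18 distinguishes from a single "wrong" call. Real calibration checks need many more forecasts per bucket than this sketch.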