Case Study 1: The 1948 Debacle and the Social Science Research Council Report

The Crisis

The morning of November 3, 1948, was one of the most embarrassing in the history of American journalism and social science. Harry Truman, whom every major poll had written off, won the presidency with 49.6 percent of the popular vote to Thomas Dewey's 45.1 percent---a margin of 4.5 percentage points that was entirely outside the range suggested by the pre-election polls.

The consequences were immediate and severe. The public's confidence in polling cratered. Newspaper editorials mocked the pollsters. George Gallup, who had built his career on the premise that scientific sampling could accurately measure public opinion, faced calls to abandon the enterprise entirely. Elmo Roper, whose firm had stopped polling in September, was accused of professional malpractice. And the Chicago Tribune, which had set its infamous "DEWEY DEFEATS TRUMAN" headline on the strength of the polls before the results were in, became a symbol of overconfidence in data.

The Social Science Research Council (SSRC), a prestigious body of academic researchers, convened a committee to investigate the failure. The committee included some of the leading methodologists of the era, among them the distinguished statistician Frederick Mosteller. Their report, published in 1949, remains one of the most important documents in the history of survey research.

The Investigation

The SSRC committee examined the methodologies of the three major polling organizations---Gallup, Roper, and Crossley---in exhaustive detail. Their findings were damning but also constructive, identifying specific mechanisms of error and recommending specific reforms.

Finding 1: Quota Sampling Introduced Systematic Bias.

All three organizations used quota sampling, in which interviewers were instructed to fill specified quotas of respondents by age, sex, geography, and socioeconomic status. The problem was that within each quota cell, interviewers had discretion to choose anyone who fit the criteria. The committee found that interviewers systematically chose respondents who were easier to find and more willing to talk---people who tended to be more educated, more politically engaged, and more likely to live in accessible locations (downtowns, main streets, residential neighborhoods with sidewalks).

This introduced a consistent bias toward higher-status respondents. In 1948, higher-status respondents were more likely to support Dewey. The bias was small within any individual quota cell, but it was consistent across cells, and it accumulated across the entire sample.
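The arithmetic of this accumulation can be sketched directly: if interviewer discretion nudges every cell's Dewey share upward by the same small amount, the full-sample estimate inherits the entire shift rather than averaging it away. The quota cells, weights, and two-point bias below are hypothetical illustrations, not the SSRC's figures.

```python
# Illustrative sketch: a small, consistent within-cell selection bias does not
# cancel across quota cells; it carries through to the full-sample estimate.
# All numbers are hypothetical, not the SSRC's data.

true_dewey_share = {          # true Dewey support within each quota cell
    "young_low_income":  0.35,
    "young_high_income": 0.55,
    "old_low_income":    0.40,
    "old_high_income":   0.60,
}
cell_weight = {cell: 0.25 for cell in true_dewey_share}  # equal-size cells

BIAS = 0.02  # interviewers over-select higher-status (pro-Dewey) respondents

true_total = sum(cell_weight[c] * true_dewey_share[c]
                 for c in true_dewey_share)
observed_total = sum(cell_weight[c] * (true_dewey_share[c] + BIAS)
                     for c in true_dewey_share)

print(f"true Dewey share:     {true_total:.3f}")      # 0.475
print(f"observed Dewey share: {observed_total:.3f}")  # 0.495
print(f"accumulated bias:     {observed_total - true_total:+.3f}")
```

Because the bias has the same sign in every cell, filling the quotas correctly does nothing to remove it; the weighted average of a uniform two-point shift is still two points.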

Finding 2: Pollsters Stopped Too Early.

Gallup's final pre-election survey was completed approximately two weeks before Election Day. Roper stopped polling in September. Crossley's last survey was about ten days before the election. None of the organizations attempted to measure late-breaking opinion movement.

The committee found evidence of significant movement toward Truman in the final weeks of the campaign. This movement came primarily from undecided voters who had been leaning Democratic but had not yet committed. By the time these voters made their decisions, the polls had already been completed and published.

Finding 3: Undecided Voters Were Handled Poorly.

The three organizations reported large numbers of undecided voters in their final polls---in some cases, more than 15 percent of the sample. But they handled these undecided voters differently. Gallup allocated them proportionally based on their other survey responses. Roper did not allocate them at all. Crossley used a partial allocation method.

The committee found that the undecided voters were not evenly split. They were disproportionately Democratic-leaning voters who eventually chose Truman. The failure to recognize and account for this pattern inflated Dewey's apparent advantage.
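The stakes of the allocation choice can be sketched with a toy example. All poll shares below, and the 2:1 Democratic break among undecideds, are hypothetical numbers chosen only to show the mechanics, not the 1948 figures.

```python
# Illustrative sketch of how the treatment of undecided voters changes a
# reported race. Shares and the 2:1 break are hypothetical, not SSRC data.

dewey, truman, undecided = 0.44, 0.39, 0.17

# Roper-style: report raw shares, leaving undecideds unallocated.
raw_margin = dewey - truman

# Gallup-style: allocate undecideds in proportion to decided support.
dewey_prop  = dewey  + undecided * dewey  / (dewey + truman)
truman_prop = truman + undecided * truman / (dewey + truman)
prop_margin = dewey_prop - truman_prop

# What the committee's evidence suggested: undecideds break 2:1 for Truman.
dewey_lean  = dewey  + undecided * (1 / 3)
truman_lean = truman + undecided * (2 / 3)
lean_margin = dewey_lean - truman_lean

print(f"unallocated:    Dewey by {raw_margin:+.1%}")
print(f"proportional:   Dewey by {prop_margin:+.1%}")
print(f"2:1 Democratic: Dewey by {lean_margin:+.1%}")  # the lead disappears
```

Note that proportional allocation can only preserve the apparent leader; if the undecideds actually lean toward the trailing candidate, as they did in 1948, the reported race can be upside down.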

Finding 4: The Conceptual Model Was Flawed.

Perhaps most importantly, the committee identified a conceptual error that underlay the entire enterprise: the pollsters assumed that opinions were stable. They treated the pre-election period as a time when voters had already made up their minds, and the pollster's job was simply to record those settled opinions. In fact, the 1948 campaign was unusually dynamic, with significant numbers of voters remaining genuinely undecided until the final days.

This assumption of opinion stability justified the practice of early cessation. If opinions do not change, there is no reason to keep polling up to Election Day. The 1948 failure demonstrated that this assumption was wrong---and that campaigns can shift voter intentions right up to the moment ballots are cast.

The Recommendations

The SSRC report made several recommendations that reshaped the polling industry:

  1. Adopt probability sampling. The committee recommended that pollsters move from quota sampling to probability-based methods, in which every member of the target population has a known, non-zero probability of selection. This would eliminate interviewer selection bias and enable the calculation of valid margins of error.

  2. Continue polling until Election Day. The committee recommended that pollsters conduct surveys as close to Election Day as possible to capture late-deciding voters.

  3. Develop better methods for handling undecided voters. The committee recommended research into the characteristics and likely behavior of undecided respondents, rather than simply ignoring them or allocating them mechanically.

  4. Report uncertainty honestly. The committee urged pollsters to report margins of error and to communicate clearly that polls are estimates, not predictions. A poll showing Dewey ahead by 5 points with a margin of error of 4 points should be reported as a close race, not a Dewey victory.

  5. Collaborate with academic researchers. The committee recommended closer ties between the commercial polling industry and academic survey methodologists, arguing that the industry needed the rigor that academic training provided.
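Recommendations 1 and 4 are connected: probability sampling, with its known selection probabilities, is what makes a margin of error computable at all. A minimal sketch, using the standard normal-approximation interval for a proportion under simple random sampling (the sample size and candidate shares are hypothetical):

```python
import math

# Sketch: 95% margin of error for one candidate's share under simple random
# sampling. Sample size and shares below are hypothetical illustrations.

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% normal-approximation interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 600
dewey, truman = 0.50, 0.45

moe = margin_of_error(dewey, n)
print(f"MoE on each share: ±{moe:.1%}")  # about ±4 points at n=600

# The *lead* (difference of two shares from the same sample) is noisier:
# roughly twice the single-share MoE is a common rule of thumb, so a 5-point
# lead at this sample size is well inside the noise.
print(f"approx. MoE on the lead: ±{2 * moe:.1%}")
```

This is why the report's hypothetical five-point Dewey lead with a four-point margin of error describes a close race rather than a verdict: the uncertainty on the lead itself is even larger than the uncertainty on either candidate's share.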

Lasting Impact

The SSRC report did not immediately transform the polling industry. The transition from quota to probability sampling took more than a decade, and some of the report's recommendations---particularly around honest communication of uncertainty---are still not fully implemented more than seventy-five years later.

But the report established the intellectual framework for modern survey methodology. It articulated the principles that Vivian Park would later build into Meridian Research Group's philosophy: that methodology matters, that transparency is essential, and that the assumptions behind a poll are as important as the numbers it produces.

The report also established a precedent: after every major polling failure, the profession would commission a rigorous post-mortem to understand what went wrong and how to do better. The AAPOR reports after 2016 and 2020 are direct descendants of the SSRC's 1948 investigation.

Discussion Questions

  1. The SSRC committee found that interviewers in quota samples systematically chose "easy" respondents. How might a similar bias operate in modern polling, where the equivalent of the "easy respondent" is the person who answers their phone or clicks on a survey link?

  2. The committee recommended probability sampling as the solution to quota sampling's biases. Given that response rates have since fallen below 5 percent, has probability sampling fulfilled its promise? What would the SSRC committee make of a probability-based survey with a 3 percent response rate?

  3. The recommendation to "report uncertainty honestly" was made in 1949. Why, more than seventy-five years later, do media organizations still routinely report poll results without adequately conveying uncertainty? What structural incentives work against honest uncertainty communication?

  4. Compare the SSRC's investigation to the AAPOR post-mortem after the 2016 election. What similarities and differences do you see in the types of errors identified and the recommendations made?

  5. Vivian Park's "Worry List" is inspired by the SSRC tradition of systematic self-examination. What would be on your worry list if you were designing a poll of the Garza-Whitfield race today?