Learning Objectives
- Evaluate competing theories of what public opinion is and whether it constitutes a coherent social reality
- Explain Philip Converse's concept of non-attitudes and ideological constraint in mass publics
- Apply Zaller's Receive-Accept-Sample (RAS) model to explain opinion formation and volatility
- Describe the thermostatic model and how public opinion responds to policy change
- Analyze how question wording and measurement choices construct rather than merely discover opinion
- Identify aggregation problems and their implications for democratic theory
- Recognize social desirability bias and the spiral of silence as forces that distort measured opinion
- Trace the intellectual history of the public opinion concept from Lippmann through Habermas
- Explain the two-step flow model and the role of opinion leaders in shaping mass opinion
- Apply a cross-national perspective to opinion structure and its measurement
In This Chapter
- The Founding Paradox
- The Intellectual History: From Lippmann to Habermas
- Philip Converse and the Uncomfortable Truth About What Americans Believe
- John Zaller and the Architecture of Opinion Formation
- The Thermostatic Model: Opinion as a Policy Feedback Loop
- Measurement as Construction: The Central Problem
- Aggregation Problems: Whose Voice Counts?
- Social Desirability Bias and the Spiral of Silence
- Opinion Leaders and the Two-Step Flow
- Cross-National Variations in Opinion Structure
- The Philosophical Stakes: Does Public Opinion Exist?
- Vivian's Nuanced Reply: What Carlos Learned
- Implications for Analysts: A Working Theory of Public Opinion
- Summary
- Key Concepts Review
Chapter 6: What Is Public Opinion?
Carlos Mendez had been at Meridian Research Group for three weeks when he first asked the question out loud. He was staring at a crosstab printout — approval ratings sliced by age, region, and education — and something nagged at him. He walked down the hallway to Vivian Park's office, knocked, and waited.
"Dr. Park, can I ask you something that might sound stupid?"
Vivian looked up from her monitor. After twenty-six years in survey methodology — first at Ohio State, then at a think tank, now running Meridian — she had learned that the best questions usually begin with that apology. "There are no stupid questions in this office," she said. "Only expensive ones."
Carlos laughed nervously. "Okay. So — when we report public opinion, what exactly are we reporting? Like, does public opinion actually exist? Or are we kind of... making it up?"
Vivian set down her coffee. She had been waiting for a junior analyst to ask this for years. Most of them just learned the software and never looked behind the curtain.
"Close the door," she said. "This is going to take a while."
The Founding Paradox
Before we can measure public opinion, we have to decide what it is. That turns out to be harder than it sounds — so hard, in fact, that some of the sharpest minds in political science have spent careers arguing that "public opinion" as we normally use the term is largely a fiction, a convenient shorthand for something far messier and more contingent than any poll result suggests.
This is not a counsel of despair. Understanding the paradox at the heart of public opinion research makes you a better analyst, a more honest communicator, and a sharper critic of the data you encounter every day. It also, as Carlos discovered, makes the work more interesting.
Let's start with the definition problem.
What Do We Mean by "Public Opinion"?
The phrase has been in use since at least the eighteenth century, when Enlightenment thinkers invoked "the tribunal of public opinion" as a check on arbitrary power. For Jean-Jacques Rousseau, something like the general will — a collective rational judgment — was the source of legitimate government. For James Madison, writing in Federalist No. 49, "the reason of the public" was the only legitimate authority over the Constitution.
These are normative claims about what public opinion should be: reasoned, stable, aggregated through deliberation. They are not descriptive claims about what public opinion actually is when you start asking a random sample of Americans what they think about the federal budget.
The modern, empirical study of public opinion begins — roughly — with Walter Lippmann's Public Opinion, published in 1922. Lippmann's argument was, at its core, deeply skeptical. Citizens, he observed, do not directly experience the complex social reality they are asked to have opinions about. They experience a "pseudo-environment" — a simplified, stereotyped, emotionally colored picture of the world constructed largely by media and elite communication. Lippmann was not arguing that public opinion doesn't exist; he was arguing that what citizens hold in their heads is not the rational, well-informed judgment that democratic theory requires.
This is where the tension begins. Lippmann's critique did not stop pollsters from measuring public opinion — if anything, it intensified the drive to measure it, on the theory that systematic measurement would at least give us a clearer picture of what the public actually thought, whatever that was. But it did establish a persistent anxiety at the center of the field: what exactly are we measuring when we ask someone a survey question?
💡 Intuition: The Photograph Problem
Think of public opinion as a photograph rather than a mirror. A mirror reflects reality directly. A photograph involves choices — what to frame, what to exclude, what lighting to use, what moment to capture. Two photographers shooting the same scene produce different photographs. Two pollsters designing questions about the same issue produce different distributions of responses. The photograph is real — it captures something true — but it is not a transparent window onto an objective reality. Public opinion, similarly, is something researchers produce as much as they observe.
The Intellectual History: From Lippmann to Habermas
The concept of "public opinion" did not emerge fully formed in 1922. It has a long and contested philosophical history, and understanding that history clarifies why the concept remains so difficult to pin down empirically.
Lippmann's Skeptical Realism
Walter Lippmann trained as a journalist before becoming one of the twentieth century's most influential political thinkers. His Public Opinion (1922) and its companion volume The Phantom Public (1925) together constitute the most sustained skeptical argument ever made against the idea that the general public can be the competent deliberative agent that democratic theory requires.
Lippmann's central argument rested on what he called "the pictures in our heads." The modern world, he observed, is too vast and too complex for any individual to experience directly. Citizens of a democracy are asked to form judgments about faraway wars, intricate economic policies, and foreign cultures they will never encounter. They cannot do this through direct experience; they must rely on mediated representations — newspapers, radio broadcasts, political speeches, the second-hand accounts of those around them. These representations are inevitably simplified, often distorted, and heavily shaped by the interests and perspectives of those who produce them.
Lippmann did not conclude that democracy was impossible, but he did conclude that it was harder than its advocates acknowledged. He proposed a solution: expert-driven policymaking, guided by social science, that would supplement the limited rational capacity of mass publics with the superior informational processing of trained specialists. This proposal sits uncomfortably with democratic egalitarianism, and it provoked an immediate response.
Dewey's Democratic Rejoinder
John Dewey, America's most influential democratic philosopher, reviewed The Phantom Public and spent much of the following decade constructing a response, crystallized in The Public and Its Problems (1927). Dewey's argument was that Lippmann had correctly identified a problem but had drawn the wrong conclusion.
Dewey agreed that the "great society" of industrial modernity had disrupted the face-to-face community life in which genuine democratic deliberation had once been possible. Citizens no longer participated in politics through direct experience of their community's problems; they participated, if at all, through mass media and formal political institutions that kept them at arm's length from genuine deliberation.
But where Lippmann saw a reason for expert administration, Dewey saw a reason to rebuild the conditions for democratic community. The solution to the failures of public opinion was not to bypass the public but to reform the conditions of communication and community life that would allow a genuine, deliberating public to form. Education, journalism, local community organization, and experimental social science — directed toward illuminating shared problems rather than manipulating mass audiences — were Dewey's tools.
The Lippmann-Dewey debate is not a historical curiosity. It maps almost precisely onto contemporary debates in political communication about whether social media and partisan news have made the informed public Dewey imagined more or less achievable, and whether the expert-driven technocracy Lippmann described is itself a threat to democratic participation.
Habermas and the Public Sphere
German philosopher Jürgen Habermas entered this conversation in 1962 with The Structural Transformation of the Public Sphere, a book that would become, by the late twentieth century, one of the most cited works in political theory and communication studies. Habermas's contribution was to provide a historical and sociological account of how something like a genuine "public opinion" had emerged in European modernity — and then been degraded.
Habermas located the origin of a genuine bourgeois public sphere in the coffeehouses, literary salons, and reading societies of eighteenth-century Western Europe. In these spaces — characterized by open participation, argument on the merits rather than on social authority, and discussion oriented toward the common good — something approximating the rational-critical debate that democratic theory requires had actually existed. Citizens read newspapers and pamphlets, met in semipublic spaces to discuss what they had read, and collectively formed political positions that then exerted pressure on state power.
The degradation of this public sphere, in Habermas's account, came with the transformation of the media from a vehicle of rational-critical debate into a vehicle of commercial entertainment and public relations. Modern mass media did not inform publics; they colonized their attention. Political parties and governments learned to manufacture consent through advertising and propaganda rather than to earn it through deliberation. The result was what Habermas called the "refeudalizing" of the public sphere: political power was once again exercised through display, spectacle, and manipulation rather than through rational argument.
For empirical survey researchers, Habermas's framework poses a sharp challenge. If the public opinion that pollsters measure is itself a product of a degraded public sphere — opinions formed through media manipulation rather than genuine deliberation — then measuring it accurately tells you something real about the distribution of manufactured consent, but says very little about what a genuinely deliberating public would believe. The poll numbers are true; what they are true of is contestable.
📊 Real-World Application: Deliberative Polling
Political scientist James Fishkin, partly inspired by the Habermasian critique, developed an empirical methodology called "deliberative polling" to test what happens when citizens are given the conditions that genuine public sphere deliberation requires: high-quality balanced information, facilitated discussion with citizens holding diverse views, and time to reflect. Fishkin's deliberative polls consistently find that citizens' positions shift substantially when given these conditions — often becoming more nuanced, more internally consistent, and more aligned with policy expertise. This does not validate the elitist conclusion that only experts should govern; it suggests that the gap between actual and ideal public opinion reflects the conditions of information and deliberation available to citizens, not fixed limitations of their cognitive capacity.
🔗 Connection to Chapter 12: The Lippmann-Dewey-Habermas lineage informs contemporary debates about social media's effects on political deliberation. Is the contemporary information environment more like Dewey's ideal (diverse, participatory, decentralized) or more like Habermas's degraded public sphere (commercially driven, attention-capturing, conducive to manipulation)?
Philip Converse and the Uncomfortable Truth About What Americans Believe
In 1964, the University of Michigan's Philip Converse published an article that still makes political scientists uncomfortable to discuss with non-specialists: "The Nature of Belief Systems in Mass Publics." Based on panel data from the 1956-1960 National Election Studies, Converse made two arguments that shook the foundations of democratic theory.
Argument One: Ideological Constraint Is Rare
The first argument concerned ideological constraint — the degree to which an individual's positions on different political issues are logically connected to one another. Converse expected that people who held liberal positions on economic redistribution would also tend to hold liberal positions on civil liberties, foreign policy, and social issues. He expected, in other words, that the political world would be organized in most people's minds the way it is organized by political scientists and editorial writers — along a coherent left-right dimension.
What he found instead was that issue positions among ordinary citizens were largely uncorrelated with one another. A person who favored government health insurance was barely more likely than chance to also favor government housing assistance. A person who opposed foreign aid was not reliably opposed to domestic welfare spending. The elegant ideological consistency that elite commentators attributed to "the public" was almost entirely absent from the actual responses of ordinary survey respondents.
This did not mean everyone was ideologically incoherent. Converse identified what he called "ideologues" — a small fraction of the electorate, perhaps 2-4 percent, who genuinely organized their political views around abstract ideological principles. A somewhat larger group (around 10 percent) he called "near-ideologues," who used ideological language but inconsistently. The vast majority of respondents organized their political views around group loyalties (particularly partisan and class identities), single issues, or simply vague evaluations of times being "good" or "bad."
The implications for democratic theory were uncomfortable. If citizens don't hold internally consistent ideological belief systems, what does it mean to say that the public "believes" something about tax policy or foreign intervention?
Argument Two: Non-Attitudes
Converse's second and even more disturbing finding concerned the stability of political attitudes over time. Using panel data — the same people interviewed in 1956, 1958, and 1960 — he found that many respondents gave completely different answers to the same policy questions across waves. Asked whether the federal government should ensure that all people have a job and a good standard of living, a substantial fraction of respondents flipped from one end of the scale to the other across waves — not because there had been a major intervening event that might rationally change their views, but apparently at random.
Converse's interpretation was radical: many respondents, when asked about issues they had never thought about and had no formed opinion on, simply made up an answer on the spot. They didn't want to seem uninformed or uninterested, so they gave a response — any response — that satisfied the social expectation of having an opinion. He called these fabricated responses non-attitudes: survey answers that had no underlying cognitive or evaluative content, that were not connected to any real preference or belief, and that were therefore randomly distributed across response options.
The "doorbell test" was one way Converse dramatized this: if you rang someone's doorbell and asked them about "the metallic metals act," a wholly fictional piece of legislation, a substantial fraction would tell you whether they supported or opposed it. They had no opinion because there was nothing to have an opinion about — but they answered anyway.
📊 Real-World Application: The Metallic Metals Act, Updated
Converse's fictional legislation test has been replicated many times. A 2019 survey experiment found that between 20 and 30 percent of respondents were willing to express opinions on made-up policies with realistic-sounding names. More troubling, expressed opinions on fake policies showed the same partisan patterning as real policies — respondents were more likely to support a fake policy if it was attributed to their party's president. If people will form opinions about things that don't exist, what does that tell us about the opinions they form about things that do?
Living with the Converse Legacy
Converse's findings have been challenged, refined, and defended over the past sixty years. Some scholars argued that his panel data overstated instability because of measurement error in the survey instruments themselves — when you account for the unreliability of questions, real underlying attitudes may be more stable than they appear. Others noted that political knowledge and engagement have increased since the 1960s, and that the rise of cable news and partisan media may have helped citizens construct more internally consistent belief systems (though whether those systems are formed by reasoning or partisan cue-taking is itself contested).
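The measurement-error rebuttal can be illustrated with a toy simulation (a sketch for intuition, not a replication of the actual reliability models used in this literature): give every simulated respondent a perfectly stable underlying attitude, add independent response noise at each survey wave, and the wave-to-wave correlation of observed answers falls well below 1 even though nothing underneath has changed. Every parameter here is invented for illustration.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation (small helper; no external libraries needed)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 2000
# Each simulated respondent's true attitude is perfectly stable across waves.
true_attitudes = [random.gauss(0, 1) for _ in range(n)]

def observe(true_vals, noise_sd=1.0):
    """One survey wave: each response = stable true attitude + item noise
    (wording ambiguity, coding error, momentary mood)."""
    return [t + random.gauss(0, noise_sd) for t in true_vals]

wave1, wave2 = observe(true_attitudes), observe(true_attitudes)
print(f"wave-to-wave correlation of responses: {corr(wave1, wave2):.2f}")
```

With noise of the same magnitude as the true attitudes, the observed correlation hovers around 0.5 — attitudes can look half-random across panel waves even when they are, by construction, completely fixed. This is the core of the measurement-error objection to reading Converse's instability as pure non-attitudes.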
But the core insight survives all the challenges. A significant fraction of survey responses — probably the majority on issues that are distant from respondents' everyday experience — are not expressions of deeply held beliefs. They are responses to social situations. They are performances of citizenship. Understanding this is not cynical; it is essential for interpreting what polls actually measure.
🔴 Critical Thinking: What Does "The Public Believes X" Actually Mean?
When a news headline says "63% of Americans support stricter gun laws," pause and ask: What fraction of that 63% has thought about this issue more than once? What fraction would give the same answer if asked next week? What fraction's answer would change if the question used the word "gun control" instead of "stricter gun laws"? None of this invalidates the survey result, but it should shape how you interpret and communicate it.
John Zaller and the Architecture of Opinion Formation
The most influential theoretical framework for understanding how public opinion actually forms — and why it is so volatile — came from UCLA political scientist John Zaller. His 1992 book The Nature and Origins of Mass Opinion introduced what is now known as the Receive-Accept-Sample (RAS) model, and it remains the central theoretical framework in the field.
The Four Axioms
Zaller's model rests on four simple axioms about how people process political information:
Axiom 1: Reception. Citizens are more likely to receive political information if they are politically engaged and cognitively capable. High-information citizens receive more elite communication about politics; low-information citizens receive less.
Axiom 2: Resistance. Citizens resist arguments that are inconsistent with their political predispositions. If you are a strong Democrat and you encounter a Republican argument, you are likely to counter-argue against it. If you are politically uninvolved, you may not know enough to resist.
Axiom 3: Accessibility. Citizens' expressed opinions are based on the considerations that are most accessible to them at the moment they are asked. Recent information, emotionally salient information, and information consistent with strongly held predispositions are the most accessible.
Axiom 4: Response. When asked a survey question, citizens sample from the considerations that are currently accessible and report a summary of those considerations.
The crucial implication is that there is no single, fixed "true" opinion stored inside each citizen's head waiting to be discovered by a survey question. Instead, opinion is constructed in the moment of asking, based on whatever considerations happen to be accessible at that moment.
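The sampling logic can be sketched in a few lines of code. The respondent, the considerations, and the accessibility weights below are entirely hypothetical; the point is only that a fixed store of considerations, sampled with accessibility weights at the moment of asking, produces unstable "top of the head" answers without any change in the person.

```python
import random

# A hypothetical respondent holds multiple considerations about one issue,
# each with a valence (pro = +1, con = -1) and an accessibility weight that
# recent media exposure or question context could raise or lower.
considerations = [
    {"text": "border security",            "valence": -1, "accessibility": 0.6},
    {"text": "immigrant business owners",  "valence": +1, "accessibility": 0.3},
    {"text": "family members' experience", "valence": +1, "accessibility": 0.4},
    {"text": "news segment on crossings",  "valence": -1, "accessibility": 0.2},
]

def answer_survey_question(considerations, k=2, rng=random):
    """Sample k considerations weighted by accessibility and report the
    sign of their average valence -- the 'summary' in Zaller's Axiom 4."""
    weights = [c["accessibility"] for c in considerations]
    sampled = rng.choices(considerations, weights=weights, k=k)
    score = sum(c["valence"] for c in sampled) / k
    return "support" if score > 0 else "oppose" if score < 0 else "unsure"

# Asking the same simulated 'person' repeatedly yields different answers:
# response instability with no change in underlying considerations.
random.seed(1)
responses = [answer_survey_question(considerations) for _ in range(10)]
print(responses)
```

Raising the accessibility weight of one consideration (the priming effect described below) shifts the distribution of answers, which is exactly why question context changes measured opinion in this framework.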
Why Axioms 3 and 4 Are Revolutionary
Consider what this means for survey measurement. If you ask someone about immigration policy after they have just watched a news segment about border crossings, different considerations will be accessible than if you ask the same person after watching a segment about immigrant business owners. The same person will give substantively different responses depending on what is mentally foremost — what psychologists call priming.
This explains something that has puzzled polling organizations for decades: why do questions that seem to be asking about the same underlying value produce wildly different responses depending on how they are framed, what questions precede them, and what context surrounds them? The RAS model's answer: there is no single underlying value to discover. There are multiple considerations, variably accessible, and the measurement itself helps determine which ones are sampled.
For Vivian Park, the RAS model is both a methodological guide and a source of professional humility. "The question I hate most," she told Carlos, "is when a reporter calls and asks what the public 'really' thinks about some issue. As if there's a gold standard we're falling short of. The question itself is part of the answer. That's not a failure of measurement. That's just what measurement is."
Elite Cueing and the Partisan-Information Interaction
One of Zaller's most powerful empirical findings concerns how citizens use elite cues. Low-information citizens don't have the ideological resources to evaluate arguments on their merits, but they can recognize party labels and respond to them. High-information citizens can evaluate arguments, but they are also better at filtering out arguments that challenge their predispositions.
The result is a counter-intuitive pattern: on many issues, the citizens who follow politics most closely are also the most polarized — not because information makes you more extreme in some simple sense, but because high-information citizens are better at selectively accepting information consistent with their predispositions and rejecting information inconsistent with them.
This has profound implications for how we interpret changes in public opinion. When measured opinion shifts, is it because citizens received new information and genuinely changed their minds? Or is it because elites changed their cues, and citizens — especially low-information citizens — followed along without really "thinking about" the issue at all?
🔗 Connection to Chapter 12: The relationship between elite cues and mass opinion is central to understanding partisan polarization. As elite parties have sorted more clearly along ideological lines, the cues they send have become cleaner, contributing to the apparent ideological polarization of the mass public — even if, as some scholars argue, the mass public's underlying preferences have not changed as dramatically as their expressed opinions.
The Thermostatic Model: Opinion as a Policy Feedback Loop
A third major framework — developed primarily by Christopher Wlezien and later extended with Stuart Soroka — approaches public opinion not as a set of attitudes citizens carry around, but as a dynamic signal that responds to changes in government policy. This framework, the thermostatic model of public opinion, adds a dimension that Converse and Zaller largely ignore: the relationship between what government does and what the public says it wants.
The Thermostat Metaphor
A household thermostat doesn't just measure temperature — it reacts to it. When the room gets too cold, the thermostat triggers the heater. When the room gets too warm, the thermostat turns it off. Wlezien argues that public opinion works similarly with respect to government policy.
When government spends more on defense, the public gradually becomes more satisfied with defense spending and begins to signal that it wants less additional spending, not more. When government cuts social programs, the public begins to signal that it wants more spending restored. The public isn't demanding ever-more-extreme versions of its preferred policies; it's reacting against the direction of policy change, providing a corrective feedback signal.
The empirical evidence for thermostatic response is substantial. Studies of spending preferences across decades show that public "liberalism" on domestic spending tends to rise during Republican administrations that cut programs and fall during Democratic administrations that expand them — not because citizens are changing their fundamental values, but because the thermostat is reacting to what government is doing.
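The feedback logic can be sketched as a toy difference equation (an illustration of the mechanism, not the estimation models used in this research). Assume the public has a fixed preferred policy level; its signal each period is the gap between preference and actual policy, and government partially closes that gap. All numbers are invented.

```python
# Minimal sketch of the thermostatic feedback loop. p_star is the public's
# (fixed) preferred policy level; the 'relative preference' signal is the
# gap between preference and actual policy; government partially responds.
def simulate_thermostat(p_star=50.0, policy=30.0, response_rate=0.4, periods=12):
    history = []
    for _ in range(periods):
        signal = p_star - policy          # positive: public wants "more"
        policy += response_rate * signal  # government partially adjusts
        history.append((round(policy, 1), round(signal, 1)))
    return history

for policy, signal in simulate_thermostat():
    print(f"policy={policy:6.1f}  public signal for more={signal:6.1f}")
```

The printed signal shrinks toward zero as policy approaches the public's preference: exactly the pattern in which support for "more spending" falls as spending rises, without any change in underlying values. It also previews the self-correcting equilibrium discussed below.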
📊 Real-World Application: The ACA Thermostatic Effect
The Affordable Care Act provides a textbook example. In 2009-2010, as the ACA was being debated and passed, public support was tepid and falling. For the next six years, as Republicans attacked it without successfully repealing it, support gradually recovered. By late 2016, the ACA had majority support for the first time. When Republicans in 2017 mounted serious repeal efforts, support spiked dramatically — up 10-15 percentage points in some polls within weeks. The thermostat was functioning: the threat of removal triggered a demand signal for preservation that the relatively passive status quo had not.
Extended Policy Examples: The Thermostatic Model in Action
The power of the thermostatic model becomes clearer when you trace it across multiple policy domains. Consider three additional examples:
Defense spending (1970s–1990s). After the military buildup of the early Reagan administration, public support for further increases in defense spending dropped sharply — even among Republican identifiers — by the mid-1980s. When defense spending was subsequently cut in the early 1990s, support for defense spending rebounded. The pattern repeated with remarkable regularity, moving in the opposite direction from actual policy each time.
Social welfare generosity. James Stimson's "policy mood" index — a composite measure of public liberalism on domestic policy questions — extends back to the 1950s. The mood consistently moves in the direction opposite to government action: it liberalizes when Republicans are cutting programs and turns conservative when Democrats are expanding them. The amplitude of these swings is modest, typically 2-4 points on Stimson's scale per administration, but their regularity is striking.
Immigration policy. Stuart Soroka and Christopher Wlezien extended the thermostatic model to immigration and found similar patterns. During periods of relatively permissive immigration policy, public opinion shifts toward favoring restrictions. During periods of enforcement-heavy policy, public opinion shifts toward more tolerance. The dynamics are slower than in spending domains — immigration attitudes change over years rather than months — but the corrective pattern is present.
📊 What This Means for Analysts: The thermostatic model has an important practical implication for anyone who works with public opinion data over time. When you see support for a policy rising, ask whether the rise reflects genuine value change or thermostatic reaction to policy change. Rising support for environmental regulation during a period of deregulation may not represent a durable new majority; it may represent a thermostatic correction that will diminish if government policy reverses. Misreading thermostatic adjustment as value change leads to overconfident claims about permanent opinion shifts.
What the Thermostatic Model Tells Us About Policy and Opinion
The thermostatic model has a striking implication: in a democracy, public opinion and policy may be in a constant, self-correcting equilibrium. Policies that go "too far" in any direction generate public resistance that eventually reverses course. This is either reassuring (democratic self-correction works!) or sobering (radical policy change is harder than it looks, because public opinion will push back).
For political analysts, the thermostatic model is a reminder that public opinion is not a fixed backdrop against which campaigns play out — it is itself a dynamic quantity that responds to governing decisions, media coverage, and the behavior of political elites.
Measurement as Construction: The Central Problem
All three frameworks — Converse's non-attitudes, Zaller's RAS model, and the thermostatic model — point toward the same fundamental issue: the measurement of public opinion does not discover a pre-existing reality; it participates in constructing one.
This is what Carlos intuited when he asked Vivian his question. If we designed the question differently, would we get a different answer?
"Yes," Vivian told him flatly. "Usually. Sometimes dramatically. And that's not a bug in the methodology. That's the nature of the thing."
The Construction Problem in Practice
Consider what happens when a polling organization decides to ask about "government assistance to the poor" versus "welfare." These are, by any neutral accounting, roughly the same policy. But across hundreds of studies, support for the former consistently runs 10-20 percentage points higher than support for the latter. The word "welfare" activates a set of considerations — dependency, fraud, racial resentment in some respondents — that "assistance to the poor" does not. The "true" level of support for the policy is not somewhere between the two numbers; there is no true level. The policy support that exists in the public is a function of how the policy is framed, communicated, and embedded in a larger political context.
This is not unique to sensitive topics. Studies show that support for "affirmative action" versus "preferential treatment" for historically disadvantaged groups differs by 20-30 points. Support for "estate tax" versus "death tax" differs substantially. Even seemingly neutral factual questions — "How many immigrants are in the United States?" — produce wildly different estimates depending on whether the questionnaire primes thoughts about crime or about economic contribution.
The implication is not that all survey results are meaningless. It is that survey results are context-dependent in ways that demand careful interpretation. When you report that "54% support Policy X," you are really reporting something more like "54% of a sample expressed support for something that roughly resembles Policy X, when asked in the way we asked, at the time we asked, in the context of the other questions we asked."
⚠️ Common Pitfall: Treating Survey Results as True Score Data
Beginning analysts often treat poll numbers as if they were measurements of fixed quantities — like a thermometer reading, or a vote count. They are not. They are measurements of a distribution of accessible considerations at a specific moment in time, mediated by the wording, order, and context of the questionnaire. This doesn't mean you should refuse to interpret them; it means you should interpret them with appropriate uncertainty about their generalizability.
Aggregation Problems: Whose Voice Counts?
Even if we solve the measurement problem — even if we find question wordings that reliably tap genuine underlying attitudes — we face a second fundamental problem: aggregation. When we average individual responses into "public opinion," we are making implicit choices about how to weight different voices.
The Averaging Problem
In a simple poll, each respondent's answer counts equally. But should it? Consider:
- A survey of "American public opinion" on immigration policy that samples equal numbers from urban and rural areas will produce different results than one that weights by actual population.
- A survey that weights by likely voters will look different from one that weights by all adults, because the voting population skews older, whiter, and more educated than the adult population.
- A survey conducted in English will exclude respondents who are more comfortable in another language.
None of these weighting choices is obviously wrong. They are analytical decisions with political implications. "Public opinion" according to likely voters is the relevant quantity for predicting elections. "Public opinion" according to all adults may be more relevant for assessing the democratic legitimacy of policy. "Public opinion" among registered voters falls somewhere in between.
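To make the stakes of these choices concrete, here is a minimal sketch of how the same raw responses yield different toplines under different weighting schemes. All respondents, weights, and numbers are hypothetical illustrations, not real poll data.

```python
# Minimal sketch: the same five (hypothetical) respondents, summarized two ways.
# Each tuple: (supports_policy, is_likely_voter, weight_for_all_adults)
respondents = [
    (True,  True,  1.0),
    (False, True,  1.0),
    (True,  False, 1.4),   # low-propensity respondents upweighted to match population
    (True,  False, 1.4),
    (False, True,  1.0),
]

def support_all_adults(rows):
    """Weighted share of support among all adults."""
    total = sum(w for _, _, w in rows)
    return sum(w for s, _, w in rows if s) / total

def support_likely_voters(rows):
    """Unweighted share of support among likely voters only."""
    lv = [s for s, likely, _ in rows if likely]
    return sum(lv) / len(lv)

print(f"All adults:    {support_all_adults(respondents):.0%}")
print(f"Likely voters: {support_likely_voters(respondents):.0%}")
```

In this toy data the all-adults estimate is roughly double the likely-voter estimate, not because anyone's answer changed but because the analyst changed whose answers count, and how much.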
When Meridian reports poll results for the Garza-Whitfield Senate race, these choices matter enormously. The Sun Belt state in question has a large Latino population, a substantial immigrant community, and a wide urban-rural divide. Whether Meridian weights toward likely voters, all registered voters, or all adults will produce meaningfully different estimates of where the race stands — and different implicit answers to the question of whose preferences count.
🔵 Debate: Should Polls Weight Toward Likely Voters?
The conventional practice in horse-race polling is to weight toward or filter to "likely voters" — respondents who, based on stated intention and past voting behavior, are judged most likely to actually vote. This makes sense for election prediction: the goal is to forecast an election, not to describe the full adult population.
But it also has a political valence. Lower-propensity voters — who are disproportionately young, non-white, and lower-income — are systematically underrepresented in likely voter polls. When we report "the public's view," we are almost always reporting the view of the relatively advantaged fraction of the public that tends to vote. Is this a technical necessity or a form of voter suppression through data?
Arrow's Impossibility Theorem and the Limits of Aggregation
There is a deeper mathematical problem with aggregation that political scientists often gloss over. Kenneth Arrow's Impossibility Theorem (1951) proved that when voters rank three or more options, no aggregation method can simultaneously satisfy a small set of seemingly reasonable democratic criteria — things like: if everyone prefers A to B, the social preference should prefer A to B; and: the group's preference between A and B shouldn't depend on what everyone thinks about unrelated option C.
Arrow's theorem has direct implications for public opinion measurement. When we construct an index of "liberal/conservative" opinion from a set of questions about different issues, we are implicitly assuming that these answers can be meaningfully aggregated — that there is a consistent underlying dimension we are measuring. But Converse showed that issue positions are often uncorrelated, and Arrow showed that even perfectly consistent individual preferences cannot always be coherently aggregated.
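The intuition behind Arrow's result is easiest to feel through the classic Condorcet cycle: three voters, each with a perfectly consistent ranking, whose majority preference is nonetheless incoherent. A minimal sketch:

```python
# Three voters, each with a fully consistent (transitive) ranking,
# listed best-to-worst.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank option x above option y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Pairwise majorities: A beats B, B beats C... and C beats A.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True — the "group preference" is a cycle
```

Every individual here is perfectly rational, yet "what the group prefers" has no answer: the majority preference runs in a circle. Any polling index that collapses such preferences into a single ranking is making a choice about how to break the cycle.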
The bottom line is not that aggregation is impossible, but that any aggregated measure of public opinion involves choices that are simultaneously technical and political — and honest analysts acknowledge those choices.
Social Desirability Bias and the Spiral of Silence
So far we've focused on problems with what opinions are and how aggregation works. But there's a third set of problems, more immediately practical: respondents don't always tell you what they actually think.
Social Desirability Bias
Social desirability bias refers to the tendency of survey respondents to give answers that make them look good — to conform to what they perceive as the socially acceptable or normatively correct response, rather than to their actual attitude.
The effect is well-documented across a wide range of sensitive topics. Respondents overstate their willingness to vote for female or minority candidates. They understate their support for racially discriminatory policies. They overstate their charitable giving and church attendance. They understate their alcohol and drug use. In political polling, the "Bradley Effect" — the tendency for pre-election polls to overstate support for Black candidates — was long attributed to social desirability bias (though recent research suggests it may be less robust than originally thought).
Social desirability bias does not affect all topics equally. For issues where the socially correct answer is clear and salient, bias will be larger. For issues where there is genuine social disagreement about what the correct answer is, bias may be smaller. The challenge is that the topics where bias is largest are often the ones political analysts most need to measure accurately: racial attitudes, support for controversial candidates, attitudes toward out-groups.
Several techniques exist to reduce social desirability bias. Self-administered modes (web surveys, paper questionnaires) reduce it compared to interviewer-administered modes, because there is no human interviewer whose approval the respondent is managing. List experiments (also called item count technique) allow researchers to estimate the prevalence of sensitive attitudes without asking directly — respondents are randomly assigned to see either a list of non-sensitive items or a list with one sensitive item added, and the prevalence of the sensitive attitude is estimated from the difference in mean count between groups. Randomized response technique introduces a random element (like a coin flip) that gives respondents plausible deniability for sensitive answers.
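The list-experiment logic described above reduces to a simple difference in means. Here is a sketch of the estimator with hypothetical response counts (real analyses would also report a standard error):

```python
# Sketch of the list-experiment (item count) estimator. All counts are
# hypothetical. Control respondents see 3 non-sensitive items; treatment
# respondents see the same 3 plus one sensitive item. Each respondent
# reports only HOW MANY items apply to them, never which ones.

control_counts   = [1, 2, 2, 1, 3, 2, 1, 2]  # counts over 3 items (0-3)
treatment_counts = [2, 3, 2, 2, 3, 3, 1, 2]  # counts over 4 items (0-4)

def mean(xs):
    return sum(xs) / len(xs)

# Under random assignment, both groups hold the sensitive attitude at the
# same rate, so the difference in mean counts estimates its prevalence.
prevalence = mean(treatment_counts) - mean(control_counts)
print(f"Estimated prevalence of the sensitive attitude: {prevalence:.0%}")
```

In this toy data the treatment group averages 2.25 items and the control group 1.75, so the estimated prevalence of the sensitive attitude is 50% — even though no individual respondent ever disclosed holding it.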
Deeper Examples of Social Desirability in Survey Practice
The mechanics of social desirability bias are worth examining in concrete detail, because the abstract description understates how systematically it operates.
In the 1990s, a series of surveys using experimental designs found that white respondents expressed substantially higher support for Black political candidates in telephone interviews than in anonymous mail surveys — the difference averaging around 8-10 percentage points. The effect was larger in communities with stronger public norms favoring racial tolerance, suggesting that respondents were tracking perceived social expectations about what the "right" answer was.
In the post-9/11 period, surveys on anti-Arab and anti-Muslim attitudes showed similar patterns. List experiments conducted in 2005-2008 consistently found that the actual prevalence of negative attitudes toward Muslim Americans was 15-20 percentage points higher than what direct survey questions captured. The gap was largest in regions with more visible Muslim communities, where negative attitudes were simultaneously more common and more socially unspeakable.
More recently, surveys on attitudes toward transgender rights and immigration enforcement have shown significant mode effects — with phone surveys showing more socially tolerant responses than self-administered online panels with equivalent question wording. These differences may partly reflect genuine demographic differences between phone and online survey samples, but experimental designs that hold samples constant find similar patterns.
A practical note for analysts: When you see a striking difference between phone and online survey results on politically sensitive topics, do not immediately assume methodological error. Consider whether social desirability bias is operating differently across modes. The online result may be closer to genuine opinion precisely because it is less socially observed.
Elisabeth Noelle-Neumann and the Spiral of Silence
Social desirability operates at the individual level. Elisabeth Noelle-Neumann, in her 1974 theory of the spiral of silence, proposed that something similar operates at the social level: people's willingness to express opinions publicly depends on their perception of the distribution of opinion around them.
When people perceive that their opinion is in the minority, they become less likely to express it publicly — in conversation, in public forums, and sometimes in survey interviews. This creates a feedback loop: minority opinion becomes less visible, which makes it appear even more marginalized, which causes more suppression of its expression, which makes it appear still more marginal. The opinion spirals into silence.
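The feedback loop described above can be sketched as a toy simulation. Every parameter here is an illustrative assumption, not an empirical estimate; the point is only to show the dynamic.

```python
# Toy simulation of the spiral-of-silence feedback loop. At each step,
# people express their view with a probability that rises with the share
# of *visible* expression agreeing with them. All parameters are
# illustrative assumptions, not empirical estimates.

minority_share = 0.40     # true share holding the minority view (never changes)
visible_minority = 0.40   # minority share of publicly *expressed* opinion

for step in range(6):
    # Willingness to speak rises with perceived support for one's own view.
    minority_speaks = 0.2 + 0.6 * visible_minority
    majority_speaks = 0.2 + 0.6 * (1 - visible_minority)
    expressed_min = minority_share * minority_speaks
    expressed_maj = (1 - minority_share) * majority_speaks
    visible_minority = expressed_min / (expressed_min + expressed_maj)
    print(f"step {step}: visible minority share = {visible_minority:.2f}")
```

Run the loop and the visible minority share falls step after step, even though the true share of minority opinion never moves: the opinion spirals toward silence purely through perception-driven self-censorship.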
The implications for polling are significant. If voters who support a candidate from a stigmatized political position — say, an extreme party in a European context, or a highly controversial figure in an American context — are less willing to express that support in surveys, pre-election polls will systematically underestimate that candidate's true level of support. Some analysts attributed the polling errors of 2016 and 2020 partly to a "shy Trump voter" effect — voters who supported Trump but were unwilling to say so to pollsters.
The evidence for the shy voter hypothesis is mixed. Some studies find evidence for it; others do not. The difficulty is identifying a clean test: if the effect exists, by definition it is hard to measure directly (because the people most affected by it are least likely to tell you so). The methodological response is to look for mode effects — comparing phone polls to online polls — on the theory that stigmatized opinions will be more freely expressed in less social modes.
⚖️ Ethical Analysis: Who Benefits from Spiral of Silence?
The spiral of silence is not a neutral social process. Historically, majorities have used social pressure to silence minority views — not just extreme or fringe views, but legitimately contested political positions held by disadvantaged groups. When we design survey instruments, we should ask: whose voices are most likely to be suppressed by social desirability effects? Often the answer is: the voices of people who are already politically marginalized. Methodological techniques that reduce social desirability bias are not just technical improvements — they are a form of democratic inclusion.
Opinion Leaders and the Two-Step Flow
The frameworks we have examined so far — Converse's non-attitudes, Zaller's RAS model, the thermostatic model — largely treat public opinion formation as a relationship between individual citizens and the elite information environment. But political communication research has long recognized that information does not flow directly from elites to mass publics. It passes through intermediaries.
Katz and Lazarsfeld's Two-Step Flow
In Personal Influence (1955), sociologists Elihu Katz and Paul Lazarsfeld developed the "two-step flow of communication," a hypothesis Lazarsfeld and his colleagues had first advanced in The People's Choice (1944), based on research into how voters made decisions in the 1940 presidential election. Their key finding was counterintuitive: mass media exposure had less direct influence on vote choice than personal conversation with opinion leaders in respondents' social networks.
The model they proposed was two-step: mass media influences a relatively small group of "opinion leaders" — people who are more politically engaged, more attentive to media, and more willing to share their views with others — and these opinion leaders then relay, interpret, and translate political information to their social networks through interpersonal communication. The mass public is thus not a direct audience of elite political communication; it is, at least partly, an audience of its own socially embedded intermediaries.
The two-step flow model has been revised and updated substantially over the past seventy years, but its core insight remains relevant. Consider:
The role of primary social networks. Katz and Lazarsfeld found that family members, coworkers, and community figures often carried more weight than party platforms or campaign advertisements in shaping vote choice. Research on social influence in voting behavior confirms that people who talk about politics regularly with others are more likely to vote, more likely to hold stable opinions, and more likely to align their views with perceived social norms in their network.
Opinion leaders are not necessarily prominent figures. An opinion leader in Katz and Lazarsfeld's sense is not necessarily a nationally recognized commentator or politician. It is the person in your social network whom others recognize as particularly informed, engaged, or trustworthy on political matters — your politically active aunt, your union steward, your pastor, your trusted coworker. Opinion leadership is a relational property, not a title.
The model helps explain opinion homophily. One reason why social networks tend to show high rates of opinion agreement is not simply that people self-select into networks of like-minded others (though they do) but that interpersonal communication within networks actively moves opinions toward consensus. Opinion leaders shape the views of those around them through ongoing conversation, interpretation, and social influence.
Implications for Modern Political Communication
The two-step flow model was developed in a media environment vastly different from today's. Network television, local newspapers, and face-to-face community interaction have given way to algorithmically curated social media feeds, podcast ecosystems, and online communities organized around ideological or identity affinities.
Several scholars have argued that social media has, paradoxically, made interpersonal opinion leadership more rather than less important. When the information environment is characterized by abundance and fragmentation, the challenge for most citizens is not finding information but determining which of the overwhelming volume of available signals to trust and attend to. Opinion leaders — in the updated form of trusted online voices, podcasters, local Facebook community administrators, and engaged group members — play a curation and authentication role that is arguably more valuable in high-information environments than it was in the relatively low-information environment that Katz and Lazarsfeld studied.
💡 Intuition: Why Political Analysts Should Track Opinion Leaders
For campaign analytics, the two-step flow model has practical implications that go beyond understanding opinion formation. If you want to reach persuadable voters efficiently, identifying and reaching their social network opinion leaders may be more effective than broadcasting to the mass audience directly. Peer-to-peer mobilization programs — where campaigns recruit volunteers to contact friends and family members — implicitly leverage the two-step flow model: they treat volunteers as opinion leaders whose personal relationships carry more weight than campaign communications from strangers.
📊 Real-World Application: The Relational Organizing Turn
Progressive campaigns since 2016 have explicitly built relational organizing programs — training volunteers to have structured conversations with their own social networks rather than with strangers at doors. The analytics teams supporting these programs track not just whether targets were contacted but who contacted them, on the theory that the identity and relationship of the messenger shapes the persuasive impact of the message. Early research on relational organizing suggests it may be substantially more effective at persuasion and mobilization than traditional stranger-to-stranger canvassing.
Cross-National Variations in Opinion Structure
The frameworks discussed in this chapter were developed primarily in the American empirical tradition. Before accepting them as universal descriptions of how public opinion works, we need to ask: how well do they travel?
Does Converse Replicate Internationally?
Converse's findings about low ideological constraint and non-attitudes were based on American survey data from the late 1950s. Subsequent research in other political systems has found a more complex picture.
In Western European parliamentary systems — particularly those with strong, programmatic parties that offer clear and contrasting policy platforms — ideological constraint among mass publics tends to be higher than Converse found in the American case. Citizens of countries like Sweden, Germany, or the Netherlands, where party brands communicate clear left-right distinctions consistently over decades, organize their political views more coherently along ideological lines than did the American voters Converse studied, who were navigating a more ambiguous two-party system in which both parties contained significant ideological factions.
This comparison suggests that Converse's findings may be partly a product of the American institutional context rather than a universal feature of mass political cognition. The complexity of American political parties in the 1950s — when conservative Southern Democrats and liberal Northern Republicans coexisted within the same nominal partisan categories — gave citizens fewer clean cues to organize their views around. European systems, with clearer party-policy linkages, produce more ideologically organized publics.
The implication for analysts: Ideological constraint is partly a product of the party system. Where parties provide clearer programmatic cues — and where media systems reinforce those cues consistently — mass publics will exhibit more organized belief systems. This is both an argument for taking party cue-following seriously (it is not just confusion; it is rational response to institutional incentives) and a caution against overgeneralizing from any single political context.
Social Desirability Across Cultures
Social desirability bias takes different forms in different political cultures, and those differences matter for cross-national opinion research.
In authoritarian or semi-authoritarian political contexts, the relevant social norm being performed is not "I am a tolerant, informed citizen" but often "I support the government." Survey researchers in countries with significant political repression consistently find that expressed support for ruling parties and governing regimes substantially exceeds private preferences inferred through indirect measurement techniques. This is not a minor wrinkle; in some contexts, it renders conventional survey measurement essentially uninformative about genuine political preferences.
Even in democracies, the specific content of social desirability bias differs substantially. In countries with stronger collective identity norms — researchers have documented this in East Asian contexts, for example — the bias runs toward suppressing strongly stated individual opinions rather than toward specific political positions. In countries with recent histories of political violence organized along ethnic or religious lines, survey respondents may suppress certain identity-related political positions out of genuine concern for personal safety rather than merely social approval.
Cross-national polling firms — Ipsos, Gallup, YouGov, and the various national academic survey programs that coordinate through the International Social Survey Programme and Comparative Study of Electoral Systems — have developed specialized techniques for navigating these cross-national differences. But the fundamental challenge remains: a standardized question wording does not produce a standardized measurement when the social and political context surrounding the question differs so dramatically.
🌍 Global Perspective: Public Opinion Across Political Cultures
The theoretical frameworks we've discussed were developed primarily in the American context. They travel uneasily. Ideological constraint, for instance, may be higher in parliamentary systems where parties present clearer programmatic platforms and citizens have stronger cues to organize their views around. Social desirability bias takes different forms in different cultural contexts: in some countries, expressing strong support for the ruling party to an interviewer may reflect fear rather than genuine preference. The spiral of silence operates differently in societies with different norms about political expression. As global polling has expanded, cross-national researchers have learned to be cautious about assuming that the concepts and findings from one context generalize to another.
The Thermostatic Model Outside the United States
The thermostatic model has been tested in comparative context, and the results are instructive. Evidence for thermostatic dynamics has been found in the United Kingdom, Canada, and Australia — all Westminster-style parliamentary systems where a single governing party has reasonably clear policy responsibility and where public opinion can plausibly "read" the direction of policy change.
The model travels less well to countries with proportional representation systems and coalition governments. When governing coalitions contain parties with different policy positions, when governing responsibility is dispersed across multiple parties, or when coalition agreements produce policies that no party's voters specifically endorsed, the clear policy-as-signal that the thermostatic model requires is murkier. Publics in these systems may react to governing coalition outcomes, but the signal is noisier and the response less clean.
The Philosophical Stakes: Does Public Opinion Exist?
We are now in a position to answer Carlos's question directly. Does public opinion exist?
The honest answer is: it depends on what you mean by exist.
If you mean: is there some coherent, stable, discoverable entity called "public opinion" that polls measure the way a thermometer measures temperature — then no, probably not. The evidence from Converse, Zaller, the thermostatic modelers, and the measurement researchers suggests that what we call "public opinion" is better understood as:
- A distribution of variably-stable considerations in individual minds
- That are differentially accessible depending on context, priming, and recent events
- That are inconsistently organized along ideological dimensions
- That are partially constructed by the act of measurement itself
- That are subject to social desirability pressures that systematically distort their expression
- And that, even when accurately captured at the individual level, cannot be simply aggregated into a single "public" view without making contested political choices about whose voice counts how much
If you mean: is there something real that polls measure, something that matters for politics and democracy — then yes, clearly. Politicians respond to polls. Policies shift in response to opinion signals. Elections are won and lost on the basis of public preferences. Whatever the philosophical complexities, there is a social reality here that affects people's lives.
The productive middle ground — the one that characterizes serious political analytics — is to treat public opinion as a social construct that is nonetheless real in its consequences. It is real the way money is real: it depends on shared belief and social convention, it is not a fixed natural quantity, and yet it is powerful enough to start wars and end careers.
Vivian's Nuanced Reply: What Carlos Learned
Vivian Park spent nearly an hour walking Carlos through the Converse findings, the RAS model, and the construction problem. Carlos took notes, his initial anxiety giving way to something more like intellectual excitement. Finally, he asked the question again:
"So if we designed the question differently, would we get a different answer?"
Vivian smiled. "Almost always. Sometimes a little. Sometimes a lot. Which is why we don't just design any question and call it a day. We think hard about what we want to measure, we pretest the question to see how respondents actually interpret it, we compare to established benchmarks when we can, and we report results with appropriate caveats about what the question actually asked."
"But if the answer changes with the question," Carlos pressed, "isn't that — I don't know — dishonest? Like we're manufacturing the result we want?"
"It would be dishonest if we didn't know that was happening," Vivian said. "And it would be dishonest if we chose the wording specifically to get the result we wanted, without disclosing it. But if we understand the measurement process, use accepted question wordings where they exist, and are transparent about what we asked and how we asked it — then we're doing science. Careful, humble, imperfect science, but science."
She picked up a copy of the crosstab printout Carlos had brought in. "This number here — 47% support. That's not a fact about the world the way the speed of light is a fact about the world. It's a fact about how a particular population of people responded to a particular question, asked in a particular way, at a particular time. Whether it's meaningful depends on everything I just told you." She tapped the paper. "Now, is it meaningful?"
Carlos looked at the question wording printed at the top. He read it twice. He thought about response scales, about what considerations the wording might activate, about who was in the sample.
"I think so," he said. "With caveats."
Vivian nodded. "Welcome to the field."
Implications for Analysts: A Working Theory of Public Opinion
Based on everything we've covered, here is a working theory of public opinion that should inform how you conduct and interpret research:
1. Opinions are distributions, not points. Any individual's "true" position is better thought of as a probability distribution over response options than as a single fixed point. Measurement samples from that distribution.
2. Context is data. The framing, wording, order, and mode of measurement are not contaminants to be controlled away; they are part of what you are measuring. Report them alongside your results.
3. Stability is relative. Some opinions are highly stable over time (basic partisan identity, broad ideological self-placement). Others are volatile (specific policy positions, especially on unfamiliar issues). Distinguish between them.
4. Elite cues are amplifiers. Opinion change often follows elite communication rather than leading it. When you see rapid opinion change, look for the elite cue structure that drove it.
5. Opinion leaders mediate the information flow. Direct effects of mass media on mass publics are often weaker than the effects of interpersonal influence from embedded social network contacts. Track opinion leader networks, not just mass audiences.
6. Aggregation is political. Every choice about who to include in a sample, how to weight responses, and how to represent the resulting distribution involves implicit claims about whose preferences count. Be transparent about those choices.
7. Silence is data too. The opinions not expressed — suppressed by social desirability, spiral of silence, or survey design choices that exclude certain respondents — are part of the public opinion landscape you need to account for.
8. Context is culturally specific. The frameworks that illuminate American public opinion do not always translate to other political systems. Approach cross-national opinion data with institutional humility.
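Point 1 above — opinions as distributions rather than points — can be sketched in a few lines. The distribution here is a hypothetical respondent's mix of accessible considerations; asking the question repeatedly samples from it.

```python
import random

random.seed(7)

# A single respondent's "opinion" modeled as a probability distribution
# over response options, not a fixed point. The probabilities are
# hypothetical; asking the question draws one sample from the mix.
considerations = {
    "support":    0.55,  # share of accessible considerations favoring the policy
    "oppose":     0.30,
    "no opinion": 0.15,
}

def ask_once():
    """One survey response: a single draw from the consideration mix."""
    r = random.random()
    cumulative = 0.0
    for answer, p in considerations.items():
        cumulative += p
        if r < cumulative:
            return answer
    return "no opinion"

# The same respondent, "asked" 1,000 times, gives varying answers whose
# frequencies approximate the underlying distribution.
answers = [ask_once() for _ in range(1000)]
for option in considerations:
    print(option, answers.count(option) / 1000)
```

No single draw is the respondent's "true" answer; the distribution is. A one-shot poll observes exactly one draw per person, which is why test-retest instability at the individual level (Converse's finding) is compatible with stable, meaningful aggregates.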
✅ Best Practice: Report Your Question Wording
Every poll result should be accompanied by the exact question wording, response options, sample definition, and fielding dates. When reporting from secondary sources, always look for the question wording before interpreting the result. A 10-point difference in "support" between two polls on the same topic may reflect a genuine opinion shift — or it may reflect a difference in how the question was asked.
Summary
Public opinion is not a simple fact waiting to be measured. It is a complex social reality that emerges from the interaction of citizens' pre-existing predispositions, the information environment they inhabit, the social contexts in which they express themselves, and the measurement instruments researchers use to capture their views.
The concept itself carries a long intellectual history. Lippmann's skepticism about the rational competence of mass publics, Dewey's democratic counter-argument that the conditions for genuine deliberation must be rebuilt, and Habermas's analysis of the public sphere's structural degradation under commercial media — all of these frameworks illuminate aspects of the gap between what public opinion could be in democratic theory and what it is in empirical practice.
Philip Converse showed us that many survey responses are non-attitudes — momentarily fabricated responses with no stable underlying content. John Zaller showed us that the considerations people draw on to form expressed opinions are variably accessible and easily primed by framing and context. The thermostatic modelers showed us that public opinion is not a fixed backdrop but a dynamic signal that reacts to government action across multiple policy domains. Opinion leaders and two-step flow dynamics remind us that mass publics do not encounter political information directly but through socially embedded intermediaries whose relationships carry independent persuasive weight. And cross-national comparison cautions us against treating any single country's findings as universal.
None of this means polling is useless. It means polling is a craft — one that requires theoretical sophistication, methodological rigor, and epistemic humility. The best political analysts are those who can hold two ideas in mind simultaneously: that their poll numbers represent something real and important, and that they are also constructions, always provisional, always context-dependent, always incomplete.
As Carlos walked back to his desk that afternoon, he looked at the crosstab differently. The numbers hadn't changed. But his relationship to them had.
Key Concepts Review
- Public opinion is a social construct: real in its consequences, but not a fixed, pre-existing entity that surveys simply discover.
- Lippmann's skepticism about the pseudo-environment and Dewey's democratic rejoinder established the foundational tension between descriptive and normative accounts of public opinion.
- Habermas's public sphere provides a historical account of conditions under which genuine rational-critical opinion formation was possible and of how commercial media degraded those conditions.
- Non-attitudes (Converse) are survey responses with no stable underlying evaluative content — respondents generate them on the spot to satisfy the social expectation of having an opinion.
- Ideological constraint refers to the logical coherence of a citizen's positions across different issues; Converse found it to be rare in mass publics, though international comparisons suggest it is higher in systems with clearer programmatic parties.
- The RAS model (Zaller) proposes that expressed opinions are constructed in the moment of answering, based on considerations that are currently accessible to the respondent.
- The thermostatic model (Wlezien/Erikson) shows that public opinion reacts to policy change in a self-correcting feedback loop across domains including defense, social welfare, and immigration.
- Opinion leaders and the two-step flow (Katz and Lazarsfeld) describe how mass media influence passes through socially embedded intermediaries before reaching mass publics, with implications for both opinion formation and campaign strategy.
- Social desirability bias causes respondents to report socially acceptable rather than genuinely held positions on sensitive topics; the effect varies by mode of administration and cultural context.
- The spiral of silence (Noelle-Neumann) describes how minority opinion becomes self-suppressed as people conform to perceived majority views.
- Aggregation problems arise because there is no neutral way to combine individual opinions into "public opinion" — every aggregation method embeds contested assumptions about whose voice counts.
- Cross-national variation in opinion structure reminds analysts that frameworks developed in one institutional context do not automatically generalize to others.