Appendix D: Key Studies Summary

The twenty-five studies annotated here are the intellectual backbone of this textbook. They are not a comprehensive literature review — each could be the subject of a semester-long seminar. They are, rather, the studies most frequently referenced across chapters, the ones whose findings and arguments have most shaped the field of empirical political science and political analytics.

For each study, this appendix provides: the full citation, a summary of methods and core findings, an explanation of why the study matters for political analytics specifically, and the key limitations that qualified researchers acknowledge.

Read these summaries before the chapters that cite them. Return to them when a chapter references a finding you want to understand more deeply.


Study 1: Converse (1964) — Belief Systems and Non-Attitudes

Full citation: Converse, P. E. (1964). The nature of belief systems in mass publics. In D. E. Apter (Ed.), Ideology and discontent (pp. 206–261). Free Press.

Methods and findings: Converse analyzed American National Election Studies panel data from 1956–1960, examining the stability and coherence of political attitudes over time. He found that most Americans' issue positions were startlingly unstable across interview waves: asked the same policy question two years apart, a large fraction of respondents gave different answers. Their responses appeared to be essentially random across waves — what Converse called "non-attitudes." A small elite held coherent, stable belief systems organized around ideological principles. The mass public did not.

Why it matters for political analytics: Converse's findings are a foundational challenge to democratic theory and to opinion polling. If many survey responses are random noise rather than genuine preferences, what do polls measure? His work motivates the careful attention to question wording, response stability, and measurement validity that characterizes good survey research. It also anchors debates about elite versus mass polarization (Chapter 17): if most people held genuine ideological attitudes, mass polarization would look very different from what Converse described.

Key limitation: The "non-attitudes" interpretation is contested. Achen (1975) argued that apparent instability is largely attributable to measurement error in the survey questions themselves, not to the absence of genuine attitudes. Subsequent research using more reliable multi-item scales finds greater stability than Converse reported. The debate about whether Converse found meaninglessness in mass opinion or merely in early survey instruments has not been definitively resolved.


Study 2: Zaller (1992) — Opinion Formation and the RAS Model

Full citation: Zaller, J. R. (1992). The nature and origins of mass opinion. Cambridge University Press.

Methods and findings: Drawing on ANES data across multiple election cycles, Zaller developed the Receive-Accept-Sample (RAS) model of public opinion. The model proposes that individuals receive messages from the political information environment, accept those messages in proportion to their political awareness and predispositions, and sample from the considerations most recently brought to mind when asked to state an opinion. This simple model explains a wide range of empirical patterns: why the most politically aware people are most resistant to attitude change from opposition communications, why opinion change diffuses from elites downward, and why survey responses are context-dependent.

Why it matters for political analytics: The RAS model is probably the most influential formal theory of public opinion in the past thirty years. It directly informs how political analysts interpret polling data, how campaigns design persuasion strategies, and why media framing effects are theorized to work differently for high- and low-information voters. Chapters 13 and 17 draw on Zaller's framework extensively.

Key limitation: The model's strength — parsimony — is also a limitation. It treats "predispositions" and "awareness" as black boxes and does not model the specific content of considerations. Critics (Sniderman et al.) argue that it understates the role of heuristics and values in opinion formation. The model is better at describing aggregate patterns than individual-level attitude change.


Study 3: Campbell et al. (1960) — The American Voter

Full citation: Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American voter. Wiley.

Methods and findings: Based on the 1952 and 1956 ANES data, Campbell and colleagues proposed the "funnel of causality" model of vote choice, in which social structural factors (class, religion, region) shape partisan identification, which in turn shapes perceptions of candidates and issues, which determine the vote. Party identification — measured as a psychological attachment to a party, not mere registration status — emerged as the dominant predictor of vote choice. Most Americans voted consistently with their party identification even when they were cross-pressured by short-term forces.

Why it matters for political analytics: The Michigan model, as it is called, established party identification as the central concept in the study of American electoral behavior. Every model of vote choice builds on or against this framework. The persistence of partisanship across shifting issue environments explains why election forecasting models have predictive power before campaigns begin. Chapters 2 and 15 engage directly with the Michigan model's legacy.

Key limitation: The book reflected a relatively stable, non-polarized party system. In an era of sorted parties (Democrats more uniformly liberal, Republicans more uniformly conservative), partisan identification and ideological position are less separable than Campbell et al. treated them. The 1950s electorate was also demographically constrained — African Americans in the South were largely disenfranchised during the survey period — limiting the model's generalizability.


Study 4: Gerber and Green (2000) — GOTV Field Experiments

Full citation: Gerber, A. S., & Green, D. P. (2000). The effects of canvassing, telephone calls, and direct mail on voter turnout: A field experiment. American Political Science Review, 94(3), 653–663.

Methods and findings: Gerber and Green randomly assigned over 30,000 registered voters in New Haven, Connecticut to receive door-to-door canvassing, telephone calls, direct mail, or a control condition before the 1998 municipal election. They found that personal canvassing produced statistically significant and substantively meaningful turnout increases (approximately 8–9 percentage points). Telephone contact by volunteers had modest effects; professional phone bank calls had near-zero effects. Direct mail had small but marginally significant effects.
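The core quantity in a design like this is the intent-to-treat effect: the difference in turnout rates between voters randomly assigned to a contact method and those assigned to the control condition. A minimal sketch with invented numbers (not the New Haven data):

```python
def itt_effect(treat_voted, treat_n, control_voted, control_n):
    """Intent-to-treat effect: turnout difference, in percentage points,
    between those assigned to treatment and those assigned to control."""
    return 100 * (treat_voted / treat_n - control_voted / control_n)

# Hypothetical canvassing arm: 540 of 1,000 assigned voters turned out,
# versus 460 of 1,000 in the control group.
effect = itt_effect(540, 1000, 460, 1000)  # ≈ 8 percentage points
```

Because not everyone assigned to canvassing actually answers the door, Gerber and Green additionally adjust for contact rates to estimate the effect of contact itself; that adjustment is omitted from this sketch.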

Why it matters for political analytics: This study launched the modern era of campaign field experiments. Before Gerber and Green, most get-out-the-vote research was observational — comparing areas with and without mobilization efforts — and produced unreliable results. The field experimental approach is now standard in campaign organizations and academic research. It directly informs the content of Chapters 20 and 21 on voter mobilization.

Key limitation: External validity — whether results from New Haven in 1998 generalize to other contexts — is always a concern with single-site experiments. Subsequent meta-analyses find that door-to-door canvassing effects vary considerably across election types, target populations, and message content. Kalla and Broockman (2018) specifically found minimal persuasion effects from canvassing on vote choice (as opposed to turnout).


Study 5: Kalla and Broockman (2018) — Minimal Persuasion Effects

Full citation: Kalla, J. L., & Broockman, D. E. (2018). The minimal persuasive effects of campaign contact: Evidence from 49 field experiments. American Political Science Review, 112(1), 148–166.

Methods and findings: Kalla and Broockman synthesized results from 49 field experiments on campaign persuasion, including door-to-door canvassing, phone calls, mailers, and digital advertising. They found that across this large sample of experiments, the average persuasion effect on vote choice was approximately zero. They conclude that campaigns can effectively mobilize their base and turn out low-propensity supporters but are largely ineffective at persuading voters who support the other party or who are genuinely undecided.

Why it matters for political analytics: This is a landmark finding that revises conventional campaign wisdom and challenges the multi-billion-dollar assumptions of modern political advertising. It suggests that campaign spending matters primarily through GOTV operations and through late advertising aimed at the small pool of genuinely persuadable voters, not by changing fundamental vote choice. Chapters 20–22 engage with this finding and its implications for how to analyze campaign effects.

Key limitation: The study covers field experiments of relatively limited duration and intensity. It does not address the possible effects of sustained, years-long campaigns or of presidential campaigns, which deploy billions of dollars and dominate the information environment for months. Some scholars argue that the experiments studied are too short and underpowered to detect the modest but real persuasion effects that accumulate over a campaign.


Study 6: Sides and Vavreck (2013) — The Gamble

Full citation: Sides, J., & Vavreck, L. (2013). The gamble: Choice and chance in the 2012 presidential election. Princeton University Press.

Methods and findings: Using a combination of daily tracking polls, advertising data, and campaign event records, Sides and Vavreck analyzed the 2012 presidential election. Their central finding is that the campaign had minimal effects on the final outcome — the economic fundamentals (moderate growth, relatively low unemployment) predicted an Obama victory in advance, and the campaign itself did not significantly shift the result from what structural models anticipated. They call this the "equilibrium" view of campaigns: professional campaigns on both sides largely cancel each other out, leaving fundamentals to determine outcomes.

Why it matters for political analytics: The book provides the clearest exposition of the fundamentals-versus-campaigns debate in modern electoral politics. It motivates the attention to economic and structural predictors that forecasting models employ and raises the appropriate skepticism about campaign narratives that attribute elections to particular debate moments, gaffes, or advertising strategies.

Key limitation: The 2012 election was a contest with few shocks — an incumbent running during a modest economic recovery. Whether the equilibrium model holds in elections with genuinely unusual candidates (2016), during systemic shocks (the 2020 pandemic), or in multi-candidate fields is an open question. Sides and Vavreck themselves conducted a similar analysis of 2016 that found somewhat more anomalous campaign effects.


Study 7: Abramowitz — The Time for Change Model

Full citation: Abramowitz, A. I. (2012). The polarized public: Why American government is so dysfunctional. Pearson. See also: Abramowitz, A. I. (2008). Forecasting the 2008 presidential election with the time-for-change model. PS: Political Science and Politics, 41(4), 691–695.

Methods and findings: Abramowitz's Time for Change model predicts presidential election outcomes using three variables: second-quarter GDP growth in the election year, the incumbent president's net approval rating in late June, and a "time for change" indicator that disadvantages parties holding the White House for two or more consecutive terms. The model has performed well across multiple election cycles, often producing accurate forecasts months before election day.
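The model's structure is a simple linear combination of the three predictors. The sketch below mirrors that structure; the coefficient values are illustrative placeholders, not Abramowitz's published estimates:

```python
def time_for_change_forecast(q2_gdp_growth, june_net_approval,
                             first_term_incumbent,
                             intercept=47.3, b_gdp=0.5,
                             b_approval=0.1, b_term=4.3):
    """Predicted incumbent-party share of the two-party popular vote.
    Coefficients here are placeholders, not the published estimates."""
    return (intercept
            + b_gdp * q2_gdp_growth
            + b_approval * june_net_approval
            + b_term * (1 if first_term_incumbent else 0))

# Example: 2% Q2 growth, +5 net approval, a first-term incumbent running.
share = time_for_change_forecast(2.0, 5.0, True)  # ≈ 53.1 under these coefficients
```

In the published model, the estimated coefficients come from regressing the incumbent party's historical vote share on these three variables across past elections.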

Why it matters for political analytics: The Time for Change model is the canonical example of structural forecasting and embodies the argument that most of the action in presidential elections is determined before campaigns begin. It is directly applicable to the economic voting models discussed in Chapter 11 and the forecasting frameworks in Chapter 14.

Key limitation: The model's predictive accuracy is well documented in-sample, but its out-of-sample record is more contested. The 2016 election (for which the model correctly predicted a Republican win) and the 2020 election (for which it correctly leaned Democratic despite Trump's strong approval among Republicans) have both generated debate about whether the model's components remain stable across increasingly polarized electorates.


Study 8: Lazarsfeld et al. (1944) — The People's Choice

Full citation: Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1944). The people's choice: How the voter makes up his mind in a presidential campaign. Duell, Sloan and Pearce.

Methods and findings: Lazarsfeld and colleagues conducted a panel survey of 600 Erie County, Ohio residents during the 1940 presidential campaign, interviewing them monthly from May through November. They found that the campaign had minimal effects on vote choice: most voters decided early, and those who were initially undecided were less interested in politics and paid less attention to campaign communications. Social group membership (class, religion, rural/urban residence) was the primary determinant of vote choice. Media exposure reinforced rather than changed existing political predispositions.

Why it matters for political analytics: The minimal effects tradition in political communication research traces directly to this book. The finding that media reinforces rather than converts political views — because people selectively expose themselves to politically congenial messages — remains a live hypothesis in the digital media era. Chapter 21 revisits selective exposure in the context of social media.

Key limitation: Erie County in 1940 was a single, relatively homogeneous community. The panel study covered a single election. Survey panel attrition and the novelty of panel methodology in 1940 raise questions about sample quality. Most importantly, the communications environment of 1940 — radio, print, limited political advertising — is radically different from the present, limiting direct application of the minimal effects finding.


Study 9: Iyengar and Kinder (1987) — News That Matters

Full citation: Iyengar, S., & Kinder, D. R. (1987). News that matters: Television and American opinion. University of Chicago Press.

Methods and findings: Through a series of laboratory experiments, Iyengar and Kinder exposed participants to edited evening news broadcasts that varied in how much attention they devoted to different policy issues. They found clear agenda-setting effects: issues that received more coverage were rated as more important by viewers, and candidates were evaluated more on the issues that had been emphasized in the news they watched (the priming effect). Framing experiments showed that the way an issue is presented — as an individual story versus a social problem — affected attributions of responsibility.

Why it matters for political analytics: Iyengar and Kinder established the experimental study of media effects as a legitimate and productive research program. Agenda-setting, priming, and framing remain the central theoretical frameworks for studying how media shapes political opinion. Chapters 23 and 24 build directly on these concepts.

Key limitation: The external validity of laboratory experiments on media effects is always contested. Participants in a controlled setting watching edited news clips may respond differently than real audiences choosing what to watch. The fragmented, user-controlled digital media environment differs fundamentally from the era of three broadcast networks, raising questions about whether the original agenda-setting and priming findings hold when audiences self-select their information sources.


Study 10: DellaVigna and Kaplan (2007) — Fox News Effects

Full citation: DellaVigna, S., & Kaplan, E. (2007). The Fox News effect: Media bias and voting. Quarterly Journal of Economics, 122(3), 1187–1234.

Methods and findings: DellaVigna and Kaplan exploited the staggered rollout of Fox News to cable markets between 1996 and 2000 as a natural experiment. Towns that gained access to Fox News earlier showed larger Republican vote share increases between 1996 and 2000 compared to towns that gained access later, even after controlling for other factors. They estimated that Fox News increased Republican presidential vote share by 0.4–0.7 percentage points in towns that received it, with larger effects in towns that previously lacked strong news alternatives.
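The identification logic is a difference-in-differences: compare the 1996-to-2000 change in Republican vote share in towns that gained Fox News access with the change in towns that did not. A toy sketch with invented values:

```python
towns = [
    # (gained_fox_access, rep_share_1996, rep_share_2000) — invented values
    (True,  0.44, 0.47),
    (True,  0.51, 0.53),
    (False, 0.46, 0.47),
    (False, 0.50, 0.50),
]

def did_estimate(rows):
    """Difference-in-differences: mean vote-share change in towns that
    gained access minus mean change in towns that did not."""
    treated = [r00 - r96 for fox, r96, r00 in rows if fox]
    control = [r00 - r96 for fox, r96, r00 in rows if not fox]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)
```

The published estimates also control for town demographics and cable-system characteristics; this sketch shows only the core comparison.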

Why it matters for political analytics: This is one of the most cited studies on partisan media effects, using a quasi-experimental design that provides stronger causal evidence than correlational studies. It motivates the discussion of partisan media effects and political segregation in Chapter 23 and informs debate about whether media shapes opinion or reflects it.

Key limitation: The effects identified are from the 1990s, when Fox News was new and cable systems were still rolling out. The mechanism — comparing those with and without access — no longer applies in an era of universal cable and internet access. Martin and Yurukoglu (2017) extended this research to later years using a different identification strategy and found larger and continuing effects, but the original study's external validity to the current media environment is limited.


Study 11: Vosoughi, Roy, and Aral (2018) — How Misinformation Spreads

Full citation: Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.

Methods and findings: Vosoughi and colleagues analyzed the diffusion of approximately 126,000 rumor cascades on Twitter between 2006 and 2017, involving roughly 3 million people. They found that false news spread faster, further, more broadly, and more deeply than true news. False political news was particularly viral. The difference could not be explained by bots — false news spread faster than true news in cascades initiated by verified human accounts. The authors suggest novelty and emotional content drive the advantage of false information.
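One of the study's diffusion metrics, cascade depth (the number of reshare hops from the original post), can be illustrated with a small sketch over parent links. The function name and data layout are illustrative, not the authors' code:

```python
def cascade_depth(parents, root):
    """Longest chain of reshares below the root post, counted in hops.
    `parents` maps each resharing post to the post it reshared."""
    children = {}
    for node, parent in parents.items():
        children.setdefault(parent, []).append(node)

    def longest(node):
        return 1 + max((longest(c) for c in children.get(node, [])), default=0)

    return longest(root) - 1  # subtract 1 to count hops, not nodes

# a → b → c and a → d: the deepest reshare chain is two hops.
depth = cascade_depth({"b": "a", "c": "b", "d": "a"}, "a")  # → 2
```

The study's other metrics (breadth, size, structural virality) are computed over the same cascade trees.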

Why it matters for political analytics: Published in Science with a large dataset and rigorous methodology, this study anchored debates about misinformation and election integrity (Chapter 35). Its finding that humans, not bots, are the primary drivers of misinformation spread has important implications for platform regulation.

Key limitation: The study is purely descriptive of spreading patterns; it does not establish that exposure to false political news changed political attitudes or behavior. The Twitter-specific finding may not generalize to other platforms with different sharing mechanics. "False" vs. "true" classification relied on fact-checkers, whose own decisions involve judgment and are sometimes contested.


Study 12: Rooduijn and Pauwels (2011) — Measuring Populism

Full citation: Rooduijn, M., & Pauwels, T. (2011). Measuring populism: Comparing two methods of content analysis. West European Politics, 34(6), 1272–1283.

Methods and findings: Rooduijn and Pauwels compared two approaches to measuring populism in party manifestos: holistic grading (experts assign overall populism scores) and a quantitative text analysis method based on counting key terms from populism's core definitional elements (the Manichean people-elite distinction, general will, heartland mythology). Applied to Belgian and Dutch party manifestos, both methods produced similar rankings of parties by populism level, suggesting the quantitative method is a valid, more scalable alternative to expert coding.
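The quantitative method's core operation is simple: count the share of manifesto words that match a populism dictionary. The sketch below uses an illustrative English stand-in word list, not the validated dictionaries used in the study:

```python
import re

# Illustrative stand-in dictionary; the study's validated term lists differ.
POPULISM_TERMS = {"elite", "elites", "establishment", "corrupt",
                  "betray", "betrayed", "people"}

def populism_score(text):
    """Dictionary hits per 1,000 words — a scalable stand-in for
    expert holistic grading."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in POPULISM_TERMS)
    return 1000 * hits / len(tokens)
```

Validation then consists of checking that party rankings produced by scores like this correlate with the rankings produced by expert holistic grading.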

Why it matters for political analytics: The systematic measurement of populism is essential for comparative research on populist parties and leaders (Part VI of this textbook). This study validates text-based measurement approaches applicable to large corpora. Chapter 27 adapts these methods to analyze U.S. political speeches using the ODA text data.

Key limitation: The study is limited to two Western European countries and their party manifestos. Populism may manifest differently in speeches, social media, or other political communication formats, and across ideological traditions (left populism vs. right populism). The specific dictionary terms validated for Dutch and Belgian politics may not translate directly to other political contexts.


Study 13: Gelman and King (1993) — Campaign Effects and Forecasting

Full citation: Gelman, A., & King, G. (1993). Why are American presidential election campaign polls so variable when votes are so predictable? British Journal of Political Science, 23(4), 409–451.

Methods and findings: Gelman and King proposed the "enlightenment" theory of campaigns: early campaign polls appear variable because voters have not yet received enough campaign information to make informed choices. As the campaign progresses and voters learn more, polls converge toward the outcome predicted by fundamental factors. They argued that the apparent variability of early polls reflects genuine uncertainty, not genuine volatility in voter preferences.

Why it matters for political analytics: This paper provides the theoretical foundation for modern probabilistic election forecasting, explaining why structural models grounded in fundamentals outperform early polling averages and why late-campaign polls are more predictive than early ones. It motivates the weighting of economic fundamentals alongside polling in forecasting models (Chapter 14).

Key limitation: The paper predates the internet and social media, and it was written for elections less polarized than today's. Whether campaigns truly "enlighten" voters in an era where most voters are already highly sorted partisan identifiers — leaving a smaller genuinely persuadable middle — is an open question that researchers continue to investigate.


Study 14: Berinsky (2009) — In Time of War

Full citation: Berinsky, A. J. (2009). In time of war: Understanding American public opinion from World War II to Iraq. University of Chicago Press.

Methods and findings: Berinsky analyzed public opinion on military conflict across multiple wars, focusing on the conditions under which citizens accept or reject political elite cues about the wisdom of using military force. His central finding is that bipartisan elite consensus on supporting a war drives public support; partisan division among elites allows citizens to follow their partisan cues, producing polarized public opinion. Aggregate public opinion numbers conceal the mechanisms by which opinion is formed.

Why it matters for political analytics: Berinsky's framework for understanding war opinion generalizes to other policy domains: elite consensus produces apparent public consensus; elite polarization produces public polarization. This is a key argument in Chapter 17's discussion of ideological sorting and in Chapter 13's treatment of how partisan cues shape policy opinion formation.

Key limitation: The book's empirical focus is war and foreign policy. The extent to which the elite cue-following model generalizes to domestic policy domains — particularly issues with stronger predispositional roots like abortion or gun control — is debatable. On highly moralized issues, citizens may update their partisan allegiances to match their values rather than updating their values to match partisan cues.


Study 15: Green, Palmquist, and Schickler (2002) — Partisan Hearts and Minds

Full citation: Green, D., Palmquist, B., & Schickler, E. (2002). Partisan hearts and minds: Political parties and the social identities of voters. Yale University Press.

Methods and findings: Green and colleagues analyzed decades of ANES panel data to argue that party identification is a stable social identity — more like religion or ethnicity than a running tally of policy preferences. They found that party identification changes very little over individual lifetimes and is highly resistant to political events. Apparent changes in party identification in cross-sectional surveys largely reflect measurement error and sampling variation. Their social identity model directly challenges rational-choice accounts that treat partisanship as continuously updated based on policy performance.

Why it matters for political analytics: The stability of party identification is foundational to political forecasting (stable baselines), campaign strategy (base mobilization over persuasion), and the interpretation of polling. Understanding that most apparent partisan change in polls reflects short-term thermostatic responses (approval of the incumbent) rather than genuine dealignment prevents misinterpretation of approval fluctuations.

Key limitation: The book was published before the era of dramatic realignment along educational and demographic lines that characterized 2012–2020. Whether educational polarization represents genuine partisan change or simply a reshuffling of the social groups to which the major parties appeal — which Green et al.'s model would predict could happen at the group level even with individual-level stability — is actively debated.


Study 16: Putnam (2000) — Bowling Alone

Full citation: Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. Simon and Schuster.

Methods and findings: Drawing on dozens of surveys, Putnam documented a sustained decline in civic participation across multiple domains — voting, club membership, volunteer work, church attendance, even informal socializing — in the United States between roughly 1960 and 2000. He attributed the decline primarily to generational replacement (older, more civically engaged cohorts dying and being replaced by less engaged younger cohorts), the privatizing effects of television, and suburbanization. He argued this decline in social capital has negative consequences for democratic governance and community wellbeing.

Why it matters for political analytics: Bowling Alone is a landmark work on civic participation and democratic health. It motivates analysis of voter turnout trends (Chapter 18), the relationship between community social capital and electoral behavior, and the comparative analysis of participation across demographic groups. Leighley and Nagler (2013) update and challenge some of Putnam's empirical claims.

Key limitation: Subsequent research has challenged some of Putnam's measurements (some forms of civic participation declined while others remained stable or increased) and questioned whether the causal mechanisms he identifies (television, suburbanization) are correct. The social capital framework has also been criticized for downplaying the role of race, power, and structural inequality in determining who benefits from community institutions.


Study 17: Mudde (2004) — The Populist Zeitgeist

Full citation: Mudde, C. (2004). The populist zeitgeist. Government and Opposition, 39(4), 541–563.

Methods and findings: Mudde proposed defining populism as a "thin-centered ideology" — a minimal ideology that divides society into two homogeneous and antagonistic groups, the "pure people" and the "corrupt elite," and sees politics as the expression of the general will of the people. This "thin" ideological core can attach to "host" ideologies of either the left (socialist populism) or right (nativist populism), explaining the ideological diversity of parties labeled populist. Mudde argued that mainstream parties in the 1990s and 2000s were beginning to adopt populist discourse, creating a populist zeitgeist.

Why it matters for political analytics: Mudde's thin-ideology definition is the dominant conceptual framework in academic populism research and is the definition adopted in this textbook's Part VI. It enables the construction of measurable indicators of populism based on the presence or absence of the people-elite dichotomy and claims to represent the general will.

Key limitation: "Thin-centered ideology" has been criticized as too broad — nearly any challenger party claims to speak for "the people" against some elite in some sense. The definition risks classifying too many parties as populist. Others argue that focusing on discourse neglects the organizational and leadership features of populist movements. Debates about the correct definition of populism remain unresolved.


Study 18: Leighley and Nagler (2013) — Who Votes Now?

Full citation: Leighley, J. E., & Nagler, J. (2013). Who votes now? Demographics, issues, equality, and the electorate. Princeton University Press.

Methods and findings: Using Current Population Survey Voter Supplement data from 1972 to 2008, Leighley and Nagler documented the persistent class and demographic skew in American voter turnout. Higher-income, higher-education, and older voters participate at substantially higher rates than lower-income, lower-education, and younger voters. Importantly, they find that this turnout bias has policy consequences: nonvoters hold systematically different preferences (more economically liberal, supporting greater redistribution) than voters. The electorate is not simply a scaled-down version of the public.

Why it matters for political analytics: Who votes matters as much as how many vote. The differential turnout patterns documented by Leighley and Nagler explain why politicians respond to different constituents than pure majority-preference models would predict, and why understanding mobilization versus persuasion requires attention to whose voice is amplified or muted by turnout patterns. Chapter 18 builds directly on this research.

Key limitation: The study covers 1972–2008; turnout patterns have shifted since, particularly with the surge of young and minority voter participation in 2008 and 2020 and the increased turnout of non-college white voters beginning in 2016. The policy-preference gap between voters and nonvoters may have narrowed or shifted during this period.


Study 19: Ansolabehere and Iyengar (1995) — Going Negative

Full citation: Ansolabehere, S., & Iyengar, S. (1995). Going negative: How political advertisements shrink and polarize the electorate. Free Press.

Methods and findings: Through a series of experimental studies, Ansolabehere and Iyengar showed that exposure to negative political advertising demobilized voters — reducing turnout intention — particularly among independent voters. They also found that negative ads were more effective at damaging the targeted candidate's support than positive ads were at building the sponsor's support. They argue that negative advertising benefits parties with smaller but more loyal bases (Republicans at the time of writing) by shrinking the electorate.

Why it matters for political analytics: Going Negative is the foundational academic treatment of negative political advertising and its turnout effects. It motivates the discussion of campaign strategy in Chapter 20 and the analysis of advertising data from the Wesleyan Media Project.

Key limitation: The experimental results have been contested by subsequent field experimental research. Lau et al.'s (1999) meta-analysis found smaller and less consistent demobilization effects. The conditions under which negative ads demobilize, versus simply persuade or inform, remain debated. The media environment of the mid-1990s (dominated by broadcast television) is very different from today's multi-channel, digital advertising landscape.


Study 20: Bartels (2000) — Partisanship and Voting Behavior

Full citation: Bartels, L. M. (2000). Partisanship and voting behavior, 1952–1996. American Journal of Political Science, 44(1), 35–50.

Methods and findings: Bartels analyzed ANES data from 1952 through 1996 to assess trends in the impact of party identification on vote choice. Contrary to conventional wisdom that partisanship had weakened since the 1960s, Bartels found that the impact of party identification on presidential vote choice actually increased substantially over this period, particularly from the 1970s onward. Independent voters (a growing share of the public) were also voting increasingly in line with their partisan leanings when they had any.

Why it matters for political analytics: This study directly challenged the narrative of partisan dealignment that was common in the 1970s and 1980s and helped establish the realignment/sorting perspective that now dominates the field. Understanding the strengthening of partisan voting behavior is essential context for Chapter 15 on partisan identification and Chapter 17 on polarization.

Key limitation: The study covers only presidential elections through 1996. The subsequent two decades of further partisan sorting and polarization have made Bartels' findings look like the early stages of a larger trend rather than a stable equilibrium. The mechanisms producing stronger partisan voting — sorting, polarization, or the increasing salience of partisan identity — are distinct and carry different implications.


Study 21: Bonica et al. (2013) — Money in Politics

Full citation: Bonica, A., McCarty, N., Poole, K. T., & Rosenthal, H. (2013). Why hasn't democracy slowed rising inequality? Journal of Economic Perspectives, 27(3), 103–124.

Methods and findings: Bonica and colleagues combined campaign finance records with legislative roll-call vote data (via DW-NOMINATE scores) to demonstrate that as economic inequality has risen, the donor class has simultaneously become more politically active and more ideologically extreme (particularly on the Republican side). They find that the political system's responsiveness to money — which flows disproportionately from the wealthy — helps explain why democratic representation has not produced the redistribution that median-voter models would predict.

Why it matters for political analytics: This paper synthesizes the campaign finance and legislative ideology literatures in a way directly relevant to Chapter 16 on money and politics. It provides both the empirical framework and the normative stakes for understanding campaign finance data.

Key limitation: DW-NOMINATE scores measure revealed legislative preferences (roll-call voting), which are influenced by strategic considerations, constituency pressure, and agenda-setting by leadership — not purely by personal ideology. The causal story (money causes legislative behavior or both reflect underlying preferences) is difficult to establish with observational data alone.


Study 22: Guess, Nagler, and Tucker (2019) — Who Shares Misinformation

Full citation: Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), eaau4586.

Methods and findings: Guess and colleagues matched survey data on political attitudes to actual Facebook sharing behavior through a browser plugin study during the 2016 election. They found that sharing of fake news (as classified by professional fact-checkers) was concentrated in a small share of the population, primarily older adults and strong Republicans: respondents over 65 were seven times more likely to share fake news than those aged 18–29. Overall rates of fake news sharing were lower than the media panic suggested, but the concentration in specific demographic groups is substantively important.

Why it matters for political analytics: This study is a methodological exemplar — it overcomes the self-report bias of standard surveys by measuring actual online behavior rather than asking people what they shared. Its finding about the role of age complements the misinformation literature and informs the targeting strategies relevant to Chapter 35.

Key limitation: The browser plugin methodology recruits volunteers, raising concerns about selection bias — the kinds of people willing to share their browsing data may differ systematically from those who are not. The study covers the 2016 election specifically; whether patterns have changed as platforms have implemented content moderation and media literacy interventions is unknown.


Study 23: Entman (1993) — Framing

Full citation: Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4), 51–58.

Methods and findings: Entman synthesized the fragmented framing literature into a coherent theoretical framework. He defines framing as "to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation." Framing is distinguished from agenda-setting (what to think about) in that framing shapes how to think about an issue. The essay identifies frames as existing in communicators, texts, receivers, and culture.

Why it matters for political analytics: Framing analysis is one of the most widely used methods in political communication research and is central to understanding how media coverage shapes policy opinion. Chapter 24 applies Entman's framework to empirical framing analysis. The concise definition in this paper is the one most commonly cited and operationalized in quantitative research.

Key limitation: Framing has expanded to mean many things in the years since Entman's paper, sometimes encompassing almost all of political communication research. The very generality of the concept makes operationalization for quantitative research challenging. Different researchers measure frames through different methods (automated text analysis, content coding, survey experiments), making findings across studies difficult to aggregate.


Study 24: Benjamin (2019) — Race After Technology

Full citation: Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Methods and findings: Benjamin examines how algorithmic systems and data-driven technologies encode and perpetuate racial bias. Drawing on case studies from criminal justice (predictive policing and recidivism algorithms), healthcare (diagnostic algorithms), and social media (content moderation disparities), she argues that the appearance of objectivity and neutrality that accompanies technological systems can obscure and amplify racial discrimination. She introduces the concept of the "New Jim Code" to describe how technologies that appear neutral reproduce discriminatory social arrangements.

Why it matters for political analytics: As political analytics increasingly relies on algorithmic targeting, machine learning models, and large administrative datasets, Benjamin's analysis is essential critical context. Chapters 33 and 38 engage directly with algorithmic bias in political data systems. Analysts who treat their models as objective are making a political and ethical claim that deserves scrutiny.

Key limitation: Benjamin's analysis is primarily qualitative and case-study based; it does not provide quantitative estimates of the magnitude of algorithmic bias. Critics from some computational perspectives argue the case studies involve complex tradeoffs that the framework does not fully address. The policy prescriptions — "abolitionist tools" — reflect a specific political tradition that not all readers will share, though the empirical observations about bias stand independent of the policy conclusions.


Study 25: Autor, Dorn, and Hanson (2013) — The China Shock

Full citation: Autor, D. H., Dorn, D., & Hanson, G. H. (2013). The China syndrome: Local labor market effects of import competition in the United States. American Economic Review, 103(4), 2121–2168.

Methods and findings: Using variation in exposure to Chinese import competition across U.S. commuting zones — driven by China's accession to the WTO and its industry-by-industry export expansion — Autor and colleagues found that areas more exposed to Chinese import competition experienced larger manufacturing employment losses, lower wages, and increased unemployment. These effects persisted for over a decade and were not offset by gains in other sectors. Subsequent work by the same group connected these economic shocks to political polarization, increased support for Republican and non-mainstream candidates, and reduced partisan stability.

Why it matters for political analytics: The "China shock" literature connects economic dislocation directly to electoral outcomes, providing some of the most credible causal evidence for the economic anxiety hypothesis of populist politics. Chapter 12's analysis of deindustrialization and electoral change draws on this framework. The instrumental variable design — using China's export growth to other countries to isolate trade exposure variation that is exogenous to local U.S. conditions — is a methodological model for causal inference with economic data.
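The two-stage logic of an instrumental variable design can be illustrated with simulated data. The sketch below is purely pedagogical: the variable names, coefficients, and data-generating process are invented for illustration and are not the authors' actual variables, data, or estimates. It shows why ordinary least squares is biased when an unobserved local condition drives both trade exposure and the outcome, and how an instrument correlated with exposure but not the confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data-generating process (illustrative names and values only).
z = rng.normal(size=n)              # instrument: import growth in other countries
u = rng.normal(size=n)              # unobserved local conditions (confounder)
# Local import exposure is endogenous: it depends on both z and u.
exposure = 0.8 * z + 0.5 * u + rng.normal(size=n)
# Outcome (e.g., manufacturing employment change) with true effect -1.5.
outcome = -1.5 * exposure + 2.0 * u + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS is biased because exposure is correlated with the confounder u.
beta_ols = ols_slope(exposure, outcome)

# Stage 1: project exposure onto the instrument.
stage1_slope = ols_slope(z, exposure)
exposure_hat = exposure.mean() + stage1_slope * (z - z.mean())

# Stage 2: regress the outcome on the fitted (exogenous) part of exposure.
beta_iv = ols_slope(exposure_hat, outcome)

print(f"OLS estimate:  {beta_ols:.2f}")   # biased toward zero
print(f"2SLS estimate: {beta_iv:.2f}")    # near the true effect of -1.5
```

The key assumption, as in the original design, is the exclusion restriction: the instrument affects the outcome only through exposure. The simulation builds that in by construction; in real data it must be argued, which is exactly where critics of the China shock design focus.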

Key limitation: The instrumental variable design is clever but relies on assumptions that are debated. Critics argue that Chinese export growth affected different countries for different reasons, complicating the instrument's validity. More fundamentally, the connection between trade-induced economic distress and political outcomes involves many mediating factors — local political culture, the availability of populist candidates, media environment — that the study does not fully model. The effects found are for regional labor markets; individual-level mechanisms (do the displaced workers themselves vote differently, or does community change affect all residents?) remain less clearly established.


A Note on Reading Foundational Research

These twenty-five studies are not final verdicts. Political science is a cumulative, self-correcting enterprise. Converse's non-attitudes finding has been contested for sixty years without reaching definitive resolution. The minimal effects tradition was upended and partially restored multiple times. The China shock findings have been replicated and challenged within ten years of their publication.

Reading foundational research means holding findings provisionally — taking them seriously as the best available evidence while remaining alert to the limitations, subsequent critiques, and updated research that this appendix identifies. The goal is not to memorize conclusions but to understand the logic of the evidence, the design choices that support or limit causal inference, and the ongoing scholarly conversation in which each study participates.

When you encounter a claim in this textbook about what research shows, the studies in this appendix are often where that claim originates. Tracing the chain of evidence — from textbook summary to study finding to data and methods — is the foundational practice of political analysis done well.