Chapter 32: Further Reading — Political Polarization and Algorithmic Amplification

1. Bail, C. A. (2021). Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing. Princeton University Press. The book-length treatment of Bail's research program on social media and polarization, including his Twitter bot experiment and subsequent work. Bail argues against both the filter bubble narrative and the simpler "exposure to outgroup increases understanding" narrative, developing a nuanced account of how social media interacts with identity processes to produce polarization. Accessible, research-grounded, and full of counterintuitive findings.

2. Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. J. (2019). The origins and consequences of affective polarization in the United States. Annual Review of Political Science, 22, 129–146. The definitive academic review of the affective polarization literature, covering measurement, trends, causes, and consequences. Essential reading for understanding the distinction between affective and ideological polarization and the evidence for dramatic increases in the former. Written by the leading political scientists in the field and accessible to non-specialists.

3. Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press. The book that popularized the filter bubble concept. Essential reading for understanding the hypothesis that subsequent empirical research has both supported and significantly qualified. Read alongside the empirical literature that has complicated Pariser's claims to develop an accurate understanding of what algorithmic personalization actually does to political information environments.

4. Haugen, F. (2021). Senate Testimony: Protecting Kids Online. United States Senate Commerce Committee, Subcommittee on Consumer Protection, Product Safety, and Data Security. The primary public document from Frances Haugen's congressional testimony. Available as a public record and worth reading in full for its specificity about internal Facebook decision-making, the gap between internal research findings and corporate actions, and the structural dynamics of the engagement-safety trade-off. Haugen's testimony is complemented by the internal documents she released, many of which are available through the Facebook Papers archive.

5. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. The landmark study demonstrating that false news spreads faster, deeper, and more broadly than true news on Twitter—and that this differential is driven primarily by human behavior, not bots. Essential quantitative foundation for understanding why political misinformation is systematically advantaged in social media information environments.

6. Boxell, L., Gentzkow, M., & Shapiro, J. M. (2017). Greater internet use is not associated with faster growth in political polarization among US demographic groups. Proceedings of the National Academy of Sciences, 114(40), 10612–10617. The paper documenting that affective polarization increased most among the demographic groups with the lowest internet use—a finding that significantly complicates attributing polarization primarily to social media. Should be read alongside responses and subsequent work that attempt to reconcile this finding with evidence of social media's polarizing effects.

7. Brady, W. J., McLoughlin, K., Torres, M., Luo, K. F. X., Gendron, M., & Crockett, M. J. (2023). How social learning amplifies moral outrage expression in online social networks. Science Advances, 9(37). Though not itself a feed-algorithm experiment, Brady's work on how social learning amplifies outrage expression in online networks is directly relevant to understanding the mechanism by which social media amplifies political anger. His research program documents that the social rewards for expressing outrage on social media create feedback loops that intensify partisan anger.

8. United Nations Human Rights Council. (2018). Report of the Independent International Fact-Finding Mission on Myanmar. UN General Assembly Document A/HRC/39/64. The full UN Fact-Finding Mission report on Myanmar, including the detailed findings on Facebook's role in the genocide. Available as a public document through the UN website. Essential primary source for the Myanmar case study. The section on social media (Chapter 14) is particularly relevant to this chapter.

9. BSR. (2018). Human Rights Impact Assessment: Facebook in Myanmar. BSR (Business for Social Responsibility). The independent human rights impact assessment commissioned by Facebook following the UN report. Documents Facebook's failures in Myanmar and makes 23 specific recommendations. Valuable as an example of human rights impact assessment methodology and as a document of corporate accountability. Available on the BSR website.

10. Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). A study finding that only a small fraction of Facebook users shared fake news in 2016, and that sharing was heavily concentrated among older, more conservative users. Complicates simple narratives about misinformation spread being a general population-level phenomenon and suggests more targeted interventions. Important counterevidence for thinking about where misinformation interventions should focus.

11. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A. Y., ... & Tucker, J. A. (2023). Like-minded sources on Facebook are prevalent but not polarizing. Nature, 620, 137–144. A field experiment from the 2020 Facebook and Instagram election research collaboration, finding that substantially reducing users' exposure to content from like-minded sources had limited effects on political polarization during the 2020 election. Companion studies in the same collaboration tested switching users from algorithmic to chronological feeds, with similarly limited attitudinal effects. Essential reading for understanding the limits of algorithm change as a solution to polarization.

12. Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press. A prominent legal scholar's account of how social media and algorithmic personalization threaten the shared public sphere that democratic deliberation requires. Sunstein's "echo chamber" concerns predate the empirical literature that has qualified them, but his normative arguments about what democracy requires in terms of shared information remain relevant.

13. Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ... & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. A multi-author review of what is known about fake news in digital environments, covering definition, measurement, spread, and interventions. A useful overview piece that synthesizes findings from multiple research disciplines on political misinformation. Published alongside the Vosoughi et al. paper in the same issue of Science.

14. Levin, S. (2017). Facebook promised to tackle fake news. But the evidence shows it's not working. The Guardian, May 16, 2017. A piece of investigative journalism documenting the gap between Facebook's stated commitments to address misinformation and the actual effectiveness of its interventions. While dated, it illustrates the pattern of announced commitments outrunning evidence of effectiveness that characterizes platform responses to political harm.

15. Lukito, J. (2020). Coordinating a multi-platform disinformation campaign: Internet Research Agency activity on three U.S. social media platforms, 2015 to 2017. Political Communication, 37(2), 238–255. An analysis of the Russian Internet Research Agency's 2016 US election interference across multiple social media platforms, documenting how coordinated inauthentic behavior was adapted to different platform architectures. Provides background for understanding the type of information operation that preceded the Myanmar military's more extreme version.

16. Freelon, D., Marwick, A., & Kreiss, D. (2020). False equivalencies: Online activism from left to right. Science, 369(6508), 1197–1201. A study finding that left- and right-wing online political actors use social media asymmetrically, with right-wing actors showing greater tendency toward coordinated information operations and norm-violating content. Relevant to evaluating claims of platform bias and to understanding the asymmetric character of political information operations on social media.

17. Sunstein, C. R., & Vermeule, A. (2009). Conspiracy theories: Causes and cures. Journal of Political Philosophy, 17(2), 202–227. An influential paper analyzing the psychology of conspiracy theory belief and spread, and evaluating possible government responses. The framework for understanding why conspiracy theories are psychologically compelling and socially reinforcing is directly applicable to understanding political misinformation in social media environments.

18. Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times, March 10, 2018. The op-ed that most effectively brought the YouTube radicalization pathway to public attention, documenting how YouTube's recommendation algorithm directed users from mainstream political content toward progressively more extreme content. A piece of public-facing analysis that anticipated subsequent academic research and helped frame the public debate about algorithmic amplification of extreme political content.