Chapter 8 Further Reading: Platform Algorithms and the Attention Economy

Annotated Bibliography

Sources are organized thematically and annotated to guide students toward materials most relevant to their specific interests. Items marked with an asterisk (*) are particularly accessible to undergraduate readers.


The Attention Economy: Theory and History

Simon, Herbert A. "Designing Organizations for an Information-Rich World." In Computers, Communication, and the Public Interest*, edited by Martin Greenberger. Baltimore: The Johns Hopkins Press, 1971.

The foundational text on attention as a scarce resource in information-rich environments. Simon's framework, written for a management science audience, turns out to describe the logic of digital media economics with remarkable precision. Essential reading for understanding why the attention economy is not a metaphor but a literal description of how digital platforms generate value. The paper is brief and accessible.


Wu, Tim. The Attention Merchants: The Epic Scramble to Get Inside Our Heads*. Knopf, 2016.

The most comprehensive historical account of the attention economy, tracing its development from 19th-century newspaper advertising through radio, television, and digital platforms. Wu's framework of "harvesting attention" and reselling it to advertisers provides essential historical context for understanding why the misinformation problems of digital platforms are not novel, even if their scale and sophistication are. Wu's chapters on radio and television are particularly valuable, offering direct historical analogies for current digital media debates.


Goldhaber, Michael H. "The Attention Economy and the Net." First Monday* 2.4 (1997).

A prescient early articulation of the attention economy framework applied specifically to the internet, written before social media existed. Goldhaber's prediction that attention-based dynamics would dominate the internet economy proved remarkably accurate. Available free online; valuable for demonstrating that these dynamics were theoretically predictable before they emerged.


Recommendation Algorithms: Technical Foundations

Aggarwal, Charu C. Recommender Systems: The Textbook. Springer, 2016.

The comprehensive technical reference for recommendation system algorithms, covering collaborative filtering, content-based filtering, knowledge-based systems, and hybrid approaches. Not a light read, but chapters 2 and 3 (collaborative and content-based filtering) are accessible to students with basic programming and statistics backgrounds. Essential for students who want to understand the technical mechanisms behind the phenomena described in this chapter.
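For students who want a concrete feel for the mechanism before opening the textbook, here is a minimal user-based collaborative filtering sketch. Everything in it (the toy ratings matrix, the choice of cosine similarity, the prediction rule) is illustrative, not drawn from Aggarwal's text:

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, columns: items); 0 = unrated.
# All values are invented for illustration.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(R, user, item):
    """User-based CF: similarity-weighted average of other users' ratings."""
    others = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]
    weights = np.array([cosine_sim(R[user], R[u]) for u in others])
    ratings = np.array([R[u, item] for u in others])
    return weights @ ratings / weights.sum()

# User 0 never rated item 2; users who rated items 0-1 similarly dominate
# the estimate -- which is why similar users end up seeing similar content.
print(round(predict(R, user=0, item=2), 2))  # prints 1.73
```

The low prediction comes almost entirely from user 1, whose past ratings mirror user 0's; this weighting by taste similarity is the feedback loop in miniature.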


Koren, Yehuda, Robert Bell, and Chris Volinsky. "Matrix Factorization Techniques for Recommender Systems." IEEE Computer 42.8 (2009): 30-37.

The landmark paper from the Netflix Prize competition that established matrix factorization as the standard approach in collaborative filtering. More accessible than its technical nature might suggest; the introduction and discussion sections are readable for general students. Understanding matrix factorization helps explain why collaborative filtering creates feedback loops and why similar users end up seeing similar content.
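The core idea can be sketched in a few lines. The toy example below (matrix sizes, learning rate, and regularization all invented for illustration, not taken from the paper) factors a small ratings matrix into user and item latent vectors by stochastic gradient descent; the filled-in predictions show why users with similar factor vectors are steered toward the same items:

```python
import numpy as np

# Toy matrix factorization by SGD, in the spirit of Koren et al. (2009).
# All numbers here are illustrative.
rng = np.random.default_rng(0)
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)       # 0 = unobserved rating
k, lr, reg = 2, 0.05, 0.02                      # latent dims, step size, L2 penalty
P = 0.1 * rng.standard_normal((R.shape[0], k))  # user factor vectors
Q = 0.1 * rng.standard_normal((R.shape[1], k))  # item factor vectors

observed = [(u, i) for u in range(R.shape[0])
            for i in range(R.shape[1]) if R[u, i] > 0]
for _ in range(2000):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]             # error on one observed rating
        P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step on user factors
        Q[i] += lr * (err * P[u] - reg * Q[i])  # gradient step on item factors

# P @ Q.T fills in the unobserved cells: users whose factor vectors ended up
# close get similar predictions -- the feedback-loop mechanism in miniature.
print(np.round(P @ Q.T, 1))
```

Notice that the model never sees the empty cells; it predicts them from the low-dimensional structure it learned on the observed ones, which is exactly why similar users end up recommended similar content.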


Chaslot, Guillaume. "The YouTube Algorithm and the Alt-Right Pipeline." Medium*, February 2019.

A first-person account from a former YouTube engineer of how watch-time optimization creates predictable incentives toward extreme content. Chaslot's AlgoTransparency project provided the first systematic external documentation of YouTube's recommendation patterns. This accessible piece explains technical mechanisms in plain language and provides essential insider context for evaluating the academic research on YouTube radicalization.


The Spread of Misinformation: Key Studies

Vosoughi, Soroush, Deb Roy, and Sinan Aral. "The Spread of True and False News Online." Science 359.6380 (2018): 1146-1151.

The most important single empirical paper in this chapter. The finding that false news spreads faster, farther, and more broadly than true news — driven by human behavior rather than bots — transformed the research agenda. Read the full paper, including the supplementary materials, which document the methodology in sufficient detail to evaluate the study's limitations. The paper's discussion of novelty and emotional arousal as mechanisms is particularly important.


Brady, William J., Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel. "Emotion Shapes the Diffusion of Moralized Content in Social Networks." Proceedings of the National Academy of Sciences 114.28 (2017): 7313-7318.

Documents that moral-emotional language — words expressing moral condemnation or righteous anger — significantly increases content diffusion within ideologically homogeneous networks. Each moral-emotional word increases retweet probability by approximately 20%. This study complements Vosoughi et al. by providing a mechanism (moral outrage) for why false news is more emotionally arousing and therefore more shareable.


Facebook, Engagement Optimization, and the Haugen Disclosures

Haugen, Frances. U.S. Senate Commerce Committee Testimony, October 5, 2021.

The primary source for Haugen's account of Facebook's internal research and decision-making. The full transcript is available online and is essential reading for understanding what the Haugen disclosures actually claimed, as opposed to how those claims were reported. Approximately 20,000 words; particularly important sections address the disbanding of the civic integrity team and the internal research on youth mental health.


Horwitz, Jeff, et al. "The Facebook Files." The Wall Street Journal*, September-October 2021.

The original series of reporting that published the Haugen documents, including: "Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show" and "Facebook's Internal Chat Boards Show Politics Often at Center of Decision-Making." Available at WSJ.com (paywalled, but university library access typically available). Reading the original reporting alongside Haugen's testimony provides necessary context and specific documentation.


Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. "Exposure to Ideologically Diverse News and Opinion on Facebook." Science 348.6239 (2015): 1130-1132.

The Facebook-internal study finding that algorithmic effects on exposure to ideologically cross-cutting content are real but smaller than the effects of users' own choices. This paper is controversial (it was co-authored by Facebook researchers, raising questions of independence) but methodologically important because it drew on data access that external researchers cannot replicate. Evaluate its methodology critically, with attention to the tension between data access and research independence.


Filter Bubbles: Evidence and Debate

Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You*. Penguin Press, 2011.

The foundational text that introduced "filter bubble" into public vocabulary. Read the original carefully, attending to both the descriptive claims (what Pariser says algorithms actually do) and the normative claims (what he argues they should do from a democratic perspective). The normative argument is stronger than the descriptive one, and the two should be evaluated separately.


Guess, Andrew, Brendan Nyhan, and Jason Reifler. "Selective Exposure to Misinformation: Evidence from the Consumption of Fake News During the 2016 U.S. Presidential Campaign." European Research Council, 2018.

The large-scale survey study that found most Americans encountered very little fake news during the 2016 election, and that consumption was concentrated among a small group of older, strongly conservative users. This paper is frequently cited to challenge "fake news epidemic" narratives; its methodological approach (measuring direct exposure rather than social media sharing) is worth understanding. Available free online.


Flaxman, Seth, Sharad Goel, and Justin M. Rao. "Filter Bubbles, Echo Chambers, and Online News Consumption." Public Opinion Quarterly 80.S1 (2016): 298-320.

An analysis of web browsing data from 1.2 million users that found social media and search engines led users to slightly more ideologically diverse sources than direct navigation, complicating simple filter bubble narratives. The research design (actual browsing data rather than self-reported consumption) is a methodological strength.


Algorithmic Interventions and Design Alternatives

Pennycook, Gordon, and David G. Rand. "Fighting Misinformation on Social Media Using Crowdsourced Judgments of News Quality." Proceedings of the National Academy of Sciences* 116.7 (2019): 2521-2526.

Demonstrates that crowdsourced accuracy judgments from laypeople can identify misinformation with reliability approaching that of professional fact-checkers. This finding provides the theoretical basis for Community Notes-style approaches. Accessible and important.


Pennycook, Gordon, et al. "Shifting Attention to Accuracy Can Reduce Misinformation Online." Nature 592.7855 (2021): 590-595.

The experimental study demonstrating that accuracy nudges — brief prompts that focus users' attention on accuracy — significantly improve the accuracy of subsequently shared content. The mechanism (attentional rather than restrictive) is important for policy design; this paper provides strong evidence that liberty-preserving interventions can improve information quality.


YouTube Radicalization Research

Ribeiro, Manoel Horta, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, and Wagner Meira Jr. "Auditing Radicalization Pathways on YouTube." In Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAT* '20), 131-141.

A large-scale empirical audit of the pathways by which YouTube's recommendations connect mainstream and extremist content. Read alongside Munger and Phillips (2022), which offers a significant methodological critique, to understand the state of the empirical debate.


Munger, Kevin, and Joseph Phillips. "Right-Wing YouTube: A Supply and Demand Perspective." International Journal of Press/Politics 27.1 (2022): 186-219.

The most substantial methodological critique of the radicalization pipeline narrative. Argues that demand — what users seek out — explains more of extremist channel growth than algorithmic supply. Essential counterpoint to the Ribeiro et al. findings; understanding both sides of this debate is important for calibrated assessment of algorithm-driven radicalization claims.


Further Reading list for Chapter 8 of "Misinformation, Media Literacy, and Critical Thinking in the Digital Age."