Chapter 39 Further Reading: Design Ethics and Humane Technology
The following annotated bibliography provides guidance for readers who wish to pursue the chapter's themes in greater depth. Entries are organized thematically rather than alphabetically. Each annotation explains both what the work contains and why it is relevant to this chapter's specific arguments.
Foundational Works on Persuasive Technology and Design Ethics
Fogg, B.J. Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, 2003.
This is the foundational academic text on what Fogg called "captology" — the study of computers as persuasive technologies. Fogg and his colleagues at Stanford's Persuasive Technology Lab spent years documenting, with rigorous empirical detail, how digital interfaces could be designed to change user beliefs and behaviors. Tristan Harris trained in this framework, and much of what he later criticized in engagement-maximizing design is a direct application of Fogg's principles. Reading this book alongside Harris's public writing creates a clear picture of how a research framework intended for beneficial persuasion was turned to attention extraction. Fogg himself has expressed concern about how his research has been used. Essential background for anyone who wants to understand what "persuasive technology" actually means technically.
Harris, Tristan. "How Technology Hijacks People's Minds — from a Magician and Google's Design Ethicist." Thrive Global, 2016. (Also widely available via Medium.)
The public essay that brought Harris's ideas from his 2013 internal Google presentation to a broad audience. Harris lays out the specific psychological mechanisms platforms exploit — variable reward schedules, social comparison, hijacked social reciprocity — with unusual clarity and specificity. The "slot machine" metaphor for social media notifications, now ubiquitous in public discourse, is developed here in detail. This essay is essential reading both for its substance and as a historical artifact: it represents a significant moment in which insider critique of the attention economy entered mainstream public discourse. The essay is freely available online and should be read before engaging with Harris's later, more developed arguments.
Harris, Tristan. Testimony before the Senate Commerce Committee on "Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms." June 25, 2019.
Harris's Senate testimony is one of the most effective examples of translating technical concepts about platform design for a legislative audience. He explains engagement optimization, variable reward design, and the structural conflict between advertiser-funded platforms and user wellbeing in language accessible to legislators. The testimony also demonstrates the limitations of congressional hearings as mechanisms for producing regulatory change — the concepts were heard, there was apparent comprehension and concern, and relatively little legislative action followed directly. The full transcript is available through the Senate Commerce Committee's public records.
Humane Technology Principles and Frameworks
Center for Humane Technology. The Ledger of Harms. humanetech.com. (Updated regularly.)
The CHT's regularly updated research compilation documenting evidence of harms associated with current platform design — mental health impacts, erosion of attention, political polarization, children's safety. The Ledger is notable for its attempt to aggregate peer-reviewed research in an accessible format and for its transparency about the strength of evidence. It is not an academic document, but it provides a useful entry point to the research literature and demonstrates how the CHT translates research findings into policy-relevant claims. Readers should engage with the underlying research the Ledger cites rather than treating the Ledger itself as primary evidence.
Meyer, Eric A., and Sara Wachter-Boettcher. Design for Real Life. A Book Apart, 2016.
This book addresses design ethics from a practical design perspective, focusing on how products designed for the "average user" systematically harm users in vulnerable circumstances. Meyer and Wachter-Boettcher developed the concept of "stress cases" — the situations in which a design, fine for most users, fails or actively harms users who are experiencing crisis, grief, illness, or other heightened vulnerability. The book is directly relevant to the chapter's discussion of the Hartley incident and the Velocity Media scenario: Velocity's recommendation algorithm failed a vulnerable user precisely because the design had been optimized for the median case without considering what happened to outliers in crisis. Practical, accessible, and essential for anyone designing products that will be used by real people in real circumstances.
Monteiro, Mike. Ruined by Design: How Designers Destroyed the World, and What We Can Do to Fix It. Mule Design, 2019.
A blunt, polemical, and useful argument that designers bear moral responsibility for the products they create and cannot discharge that responsibility by claiming they were just following instructions or couldn't have known the outcomes. Monteiro draws on a range of examples from tech design and makes a case for a designer's code of ethics analogous to those in medicine and law. The argument is more persuasive as diagnosis than as prescription — Monteiro is better at explaining what went wrong than at specifying exactly what individual designers should do within institutions where following his prescriptions would end their careers. But the diagnosis is valuable and the polemic is useful for its clarity about what "individual responsibility" actually means in practice.
Alternative Platform Models and Case Studies
Reagle, Joseph M., Jr. Good Faith Collaboration: The Culture of Wikipedia. MIT Press, 2010.
The most thorough academic treatment of Wikipedia's community governance culture. Reagle examines how the Wikipedia community developed its norms of neutral point of view, verifiability, and good faith collaboration, and analyzes the tensions and failures within those norms. Essential reading for anyone who wants to understand Wikipedia as something more than a reference tool — as a governance experiment in knowledge production at scale. The book is somewhat dated (published before several significant Wikipedia governance controversies) but remains the best entry point to the academic literature on Wikipedia's community.
Marwick, Alice, and Rebecca Lewis. "Media Manipulation and Disinformation Online." Data & Society Research Institute, 2017.
This report provides essential context for understanding why Wikipedia's community governance model matters as a structural alternative to algorithmically curated information environments. Marwick and Lewis document the specific techniques by which bad actors manipulate algorithmically curated platforms — including search engines and social media feeds — to propagate disinformation. The analysis makes clear that algorithmic curation's vulnerability to manipulation is not a bug but a structural feature: algorithms optimize for engagement, misinformation is often more engaging than accurate information, and the result is systematic amplification of misleading content. The contrast with Wikipedia's community-governed fact-checking processes is instructive.
Doctorow, Cory. The Internet Con: How to Seize the Means of Computation. Verso, 2023.
Doctorow develops his influential concept of "enshittification" — the process by which platforms that initially serve users well degrade over time as they shift value from users to advertisers to shareholders — and proposes interoperability and federation as structural solutions. The book is directly relevant to the chapter's discussion of federated platforms (Mastodon, ActivityPub) and to the question of why ethical platforms that begin with good intentions tend to shift toward exploitation over time. Doctorow's argument that interoperability requirements would change the structural logic of platforms — by allowing users to leave without losing their network — provides an important complement to the design ethics arguments in this chapter.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.
The comprehensive intellectual framework for understanding how behavioral data collection functions as a form of capital accumulation. Zuboff's concept of "surveillance capitalism" — in which human behavior is the raw material extracted, processed, and sold as "behavioral futures" to advertisers — provides the theoretical foundation for understanding why advertising-based platforms are structurally incompatible with user privacy. This is a dense and demanding book (over 700 pages), but the first three chapters, which lay out the core framework, are essential reading for anyone who wants to understand why platform design is inseparable from business model. Chapter 39's arguments about the business model problem build directly on Zuboff's analysis.
Policy, Regulation, and Systemic Change
Napoli, Philip M. Social Media and the Public Interest: Media Regulation in the Disinformation Age. Columbia University Press, 2019.
Napoli applies frameworks from broadcast regulation and public interest theory to social media, arguing that platforms that have achieved the scale and social influence of broadcasters should face analogous public interest obligations. The book is particularly relevant to the chapter's discussion of what regulatory frameworks are needed to change the structural incentives of large platforms. Napoli's analysis of the "public trustee" concept — the idea that powerful communications infrastructure carries public obligations — provides a useful bridge between the design ethics arguments in this chapter and the regulatory arguments in Chapter 38.
Haugen, Frances. Testimony before the Senate Commerce Committee Subcommittee on Consumer Protection, Product Safety, and Data Security. October 5, 2021.
Haugen's testimony, accompanied by the Facebook documents she provided to the SEC and to journalists, remains the most comprehensive public account of the internal culture at a major social media platform — the research that was known, the harms that were identified, and the decisions that were made anyway. Her testimony is directly relevant to the chapter's discussion of what individual designers can and cannot do inside extractive institutions. Haugen's specific description of how internal research on platform harms was consistently subordinated to growth metrics is essential evidence for the structural arguments in this chapter. The full transcript and supporting documents are publicly available.
Design Practice and Ethics
Norman, Don. The Design of Everyday Things. Revised ed. Basic Books, 2013.
The foundational text on human-centered design. Norman's concepts — affordances, signifiers, feedback, mappings, constraints — provide the vocabulary for understanding how designed objects shape behavior. The book was written primarily about physical objects but is directly applicable to digital interface design, and Norman himself has written extensively about designers' responsibility for the behaviors their designs produce. Understanding Norman's framework is a precondition for understanding what "humane design" means at the level of specific interface choices. The chapter's discussions of friction, stopping cues, and consent architecture all draw implicitly on Norman's conceptual vocabulary.
Costanza-Chock, Sasha. Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press, 2020.
Costanza-Chock extends the design ethics conversation to questions of who participates in design processes and who is excluded. The book argues that design choices systematically reflect the values and assumptions of the relatively homogeneous communities that make them — and that inclusive design practices, in which affected communities participate in design, produce better and more just outcomes. This is directly relevant to the chapter's discussion of Wikipedia's editor demographic problem and to the broader question of who decides what counts as humane design, and for whom. Essential reading for anyone who wants to think about design ethics beyond the individual designer-user relationship.
Costanza-Chock's framework pairs productively with the work of the Algorithmic Justice League (ajl.org), a research and advocacy organization focused on the social implications of algorithmic systems, particularly their differential impacts on marginalized communities.
Further resources on humane technology, platform design ethics, and alternative platform models are available through the Center for Humane Technology (humanetech.com), the Electronic Frontier Foundation's Surveillance Self-Defense project (ssd.eff.org), and the Data & Society Research Institute's publication archive (datasociety.net).