Part I: The Hook

Foundations of the Attention Economy


What This Part Is About

The simplest version of the attention economy argument goes like this: your time and focus are scarce resources; platforms sell advertisers access to that focus; therefore platforms are economically motivated to capture as much of your attention as possible, by whatever means work. This is true. But it is also, by itself, incomplete — because it sounds like an accusation, and accusations produce defensiveness rather than understanding.

Part I is about understanding. Not in the sense of excusing — the harms documented in this book are real, and the design choices that produce them are, in many cases, choices that better-informed people could have made differently. But understanding how the incentive structure works is a prerequisite for analyzing why specific design decisions get made, why they persist despite their effects, and what would actually have to change to produce different outcomes. Without that foundation, the rest of the book is just a catalog of complaints.

The six chapters of Part I build that foundation layer by layer.


The Arc of These Chapters

Chapter 1 begins at the highest level of abstraction — the attention economy as an economic system. It takes up Herbert Simon's insight that information abundance creates attention scarcity, and asks what that scarcity implies when attention itself becomes the commodity being sold. This chapter establishes the fundamental logic that everything else in the book rests on: not that platforms are malicious, but that they are rational agents operating within an incentive structure that rewards attention capture above almost everything else.

Chapter 2 provides the historical context that prevents the attention economy from feeling like a sudden aberration. The persuasion industry did not start with Silicon Valley. From patent medicine advertising in the nineteenth century through the rise of broadcast television to the early internet, each communication technology has generated its own ecosystem of attention capture. Understanding this arc matters because it shows both how old the underlying dynamic is and how social media represents a qualitative escalation in scale, personalization, and feedback speed.

Chapter 3 sharpens the analysis by defining what "algorithmic addiction" means — and, as importantly, what it does not mean. This chapter takes the addiction framing seriously as a useful metaphor while being precise about its limits. The goal is a definition rigorous enough to be analytically useful, not merely rhetorically powerful.

Chapter 4 descends into the mechanics of the business model: how advertising-based platforms generate revenue, how that revenue model connects to specific design decisions, and why the engagement-maximization imperative is not a conspiracy but a structural outcome. The chapter introduces the concept of the engagement-revenue feedback loop that will run through the rest of the book.

Chapter 5 provides a preview of the neuroscience that Part II will develop in depth. Without diving into full mechanistic detail, Chapter 5 establishes that platform design is not merely exploiting human weakness in a vague psychological sense — it is engaging specific, documented brain systems in ways that have measurable effects. This chapter bridges the economic analysis of Chapters 1–4 with the biological analysis that follows in Part II.

Chapter 6 closes Part I with the human cost — not as an emotional appeal, but as an empirical grounding. Who builds these systems, what do they know about what they are building, and what does the evidence suggest about the aggregate effects of design choices made at scale?


The Characters You Will Follow

Part I introduces the two recurring threads that run through the entire book.

Maya is seventeen, lives in Austin, Texas, and has been on social platforms since she was twelve. She is not a cautionary tale — she is a specific, intelligent person navigating an information environment she did not design and was never fully informed about. Her experiences in Part I anchor the abstract economic and psychological arguments in something concrete. When Chapter 3 defines what algorithmic design does to attention, Maya's experience of trying to study while her phone sits face-up on her desk is the illustration that makes the definition legible.

Velocity Media is a fictional social startup — a small team, a promising product, genuine ambitions to do things better. Their story in Part I is the story of how good intentions encounter structural incentives. The founders of Velocity do not want to manipulate their users. What happens anyway, and why, is one of the more important arguments in Part I.

The Facebook News Feed Arc is the real-world case study that threads through these chapters. Facebook's transition from a chronological feed to an algorithmically curated one — the internal debates, the A/B test results, the decision-making — provides the clearest documented example of the engagement-maximization logic operating at scale.


The Analytical Frame

Part I establishes the frame through which the rest of the book should be read: not moral panic, not techno-apologia, but structural analysis. The question is not "are platform designers bad people?" The question is "what incentive structure are they operating in, what does that structure reward, and what evidence do we have about the effects?"

That question is more useful — and ultimately more damning — than the question of individual intent. A system that produces harmful outcomes through structural incentives is harder to fix than a system run by bad actors, because it cannot be repaired by firing the bad actors. Fixing it requires changing the incentives. That is the argument that makes Parts V and VI necessary.

Start here. The rest follows.

Chapters in This Part