In This Chapter
- The Moment Everything Changes
- How Programmatic Advertising Actually Works
- Why Engagement Became the Optimization Target
- Velocity Media: Series A, 2019
- The Incentive Trap: Why Good People Build Extractive Systems
- The Facebook News Feed Arc: EdgeRank to Emotional Contagion
- Business Model Alternatives: What Else Is Possible?
- The Constraint That Shapes Everything
- Chapter Summary
Chapter 4: The Business Model of Engagement: How Attention Becomes Revenue
The Moment Everything Changes
In the spring of 2005, YouTube's founders — Chad Hurley, Steve Chen, and Jawed Karim — faced a decision that seemed, at the time, mostly administrative. The site was growing. People were uploading videos. Bandwidth costs were rising. They needed money. The question was: from whom?
They could have charged users a subscription fee — say, five dollars a month to upload and share videos. They could have sold the platform outright to a media company. They could have pursued a hybrid model, free for casual viewers, paid for heavy uploaders. Instead, they chose advertising. In October 2006, Google agreed to acquire YouTube for $1.65 billion, bringing with it the most sophisticated advertising infrastructure ever built.
That choice — advertising over subscription, attention over access fees — was not made cynically. The founders genuinely believed free access was more democratic. The engineers at Google genuinely believed they were building tools to organize the world's information. The decision made strategic sense, competitive sense, and even, in a narrow sense, moral sense: more people could use a free service than a paid one.
But the choice was not neutral. It embedded a logic into YouTube's DNA that would shape every subsequent product decision, every algorithmic optimization, every recommendation the system ever made. Because the moment you choose advertising as your revenue model, you have chosen a master. And that master has a very specific appetite.
This chapter is about that appetite — what it demands, how it shapes platform behavior, and why it pushes even the most well-intentioned engineers toward outcomes they would never have chosen directly. We will trace the specific financial mechanics of the attention economy: how programmatic advertising auctions actually work, why engagement metrics became the optimization target, and what your attention is literally worth in the open market. We will introduce Velocity Media, a fictional startup whose Series A funding moment in 2019 illustrates the structural pressures every platform faces. We will trace the Facebook News Feed Arc from its EdgeRank origins to the emotional contagion study that revealed what the system was capable of. And we will examine the business model alternatives that don't require exploiting users — what they achieve, what they sacrifice, and what their trajectories reveal about the commercial viability of ethical monetization.
The central argument is this: the business model is not a detail. It is the constraint that shapes everything else. If we want platforms to behave differently, we need to understand exactly what their current model demands of them — and what a different model would demand instead.
How Programmatic Advertising Actually Works
Most people know, in a vague sense, that "free" platforms make money by showing ads. What most people don't know is the specific mechanism — the millisecond-scale auction that determines which ad you see, at what price, based on what data. Understanding this mechanism is essential to understanding why platforms behave the way they do.
The Real-Time Bidding Auction
When you load a webpage or open an app, a remarkable sequence of events unfolds in under 100 milliseconds. Here is what happens, step by step:
Step 1: The bid request (0–5ms). The moment your browser begins loading the page, the publisher's ad server sends a "bid request" to an ad exchange — a marketplace that connects publishers (the platform showing the ad) with advertisers (the companies buying the placement). The bid request contains a packet of information: your approximate location, your device type, the content category of the page you're loading, and — crucially — a pseudonymous identifier tied to your browsing history.
Step 2: The data enrichment (5–20ms). The ad exchange passes the bid request to a Data Management Platform (DMP) that holds your behavioral profile. This profile has been assembled from cookies, device fingerprints, and data purchased from third-party brokers. It knows — or infers — your age range, income bracket, purchasing history, health conditions you've searched for, political leanings derived from your content consumption, and hundreds of other attributes. This enriched profile is attached to the bid request.
Step 3: The auction (20–80ms). The enriched bid request is simultaneously sent to dozens or hundreds of demand-side platforms (DSPs), each representing advertisers. Each DSP runs its own internal algorithm, comparing the user profile to the advertiser's target audience specifications and calculating a maximum bid price. An auto insurance company might bid $8 per thousand impressions (CPM) for a user profile matching "male, 25–34, recent car purchase, driving record clean." The same impression might be worth $0.40 to a mass-market cereal brand with no targeting criteria.
Step 4: The winner (80–95ms). The ad exchange collects bids, identifies the winner, and charges a clearing price. Historically this was the second-highest bid (a Vickrey-style second-price format that incentivizes honest bidding), though most major exchanges, including Google Ad Manager in 2019, have since shifted to first-price auctions, in which the winner pays its own bid. The winning ad creative — the image, video, or text — is retrieved from the advertiser's servers.
Step 5: The ad appears (95–100ms). By the time you consciously register that the page has loaded, the entire auction has concluded. The ad you see is the product of a real-time market in your attention, conducted at machine speed, involving potentially hundreds of participants, none of whom you are aware of.
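The auction logic in Step 4 can be sketched as a toy second-price auction. This is a simplified model with invented DSP names and bid values, not any exchange's actual implementation:

```python
# Toy model of the second-price RTB auction described above.
# All DSP names and bid values are hypothetical illustrations.

def run_auction(bids):
    """bids: dict of DSP name -> max CPM bid (dollars per 1000 impressions).
    Returns (winner, clearing_price) under second-price rules:
    the highest bidder wins but pays the second-highest bid."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, clearing_price

# A profile matching "recent car purchase" draws a high insurance bid,
# while an untargeted brand bids near the floor.
bids = {"auto_insurer_dsp": 8.00, "cereal_brand_dsp": 0.40, "retail_dsp": 2.10}
winner, price = run_auction(bids)
print(winner, price)  # auto_insurer_dsp 2.1
```

Note the incentive property: because the winner pays the runner-up's price, each DSP's best strategy is simply to bid what the impression is truly worth to it.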
This system — called Real-Time Bidding (RTB), or more broadly Programmatic Advertising — now accounts for the majority of digital advertising spend globally. In 2023, programmatic ad spend in the United States alone exceeded $150 billion. The market is dominated by two players: Google's advertising ecosystem (Google Ads, Google Ad Manager, Google Display Network) and Meta's (Facebook Ads, Instagram Ads, Audience Network). Between them, these two companies capture approximately 47 cents of every dollar spent on digital advertising in the United States.
What You Are Worth: The CPM Economy
CPM stands for "cost per mille" — the price an advertiser pays for one thousand ad impressions. It is the fundamental unit of the attention economy, the price tag on your eyes.
CPM rates vary enormously, and understanding the variation reveals the underlying logic of the system.
Intent matters most. A search ad on Google for "best personal injury lawyer in Chicago" might carry a CPM of $400 or more, because the user has explicitly signaled purchase intent: they are actively looking for a service, and the conversion rate will be high. A display ad shown to the same user while they're reading a news article about Chicago sports carries a CPM of perhaps $3–8, because the intent signal is absent.
Demographics follow. High-income demographics command premium CPMs because their purchasing power makes each conversion more valuable to advertisers. A 35–54 year-old household with income over $150,000 might generate CPMs 3–5x higher than an 18–24 year-old with no purchasing history. This creates a disturbing dynamic: platforms have stronger financial incentives to engage wealthy users than poor ones, which shapes everything from content recommendation to interface design.
Context and brand safety. Advertisers pay premiums to appear alongside "brand-safe" content — news, lifestyle, and entertainment that won't embarrass the brand by association. They pay deeply discounted rates, or refuse to appear at all, alongside content involving violence, controversy, or politically sensitive topics. This creates a system where certain types of content are financially penalized — not by any deliberate editorial decision, but by the aggregated risk-aversion of advertisers running automated brand-safety algorithms.
The engagement multiplier. Here is the critical connection: platforms that can demonstrate higher engagement rates — longer session duration, more clicks, more return visits — can charge higher CPMs. Advertisers are willing to pay more to reach users who are deeply immersed in a platform, because their attention is more valuable. This creates the financial incentive that drives every subsequent optimization decision we'll discuss. Engagement doesn't just help platforms sell more ads; it helps them sell the same ads at higher prices.
A useful benchmark: Facebook's average CPM in 2023 was approximately $7–12 for general audiences, rising to $20–40 for highly targeted segments. YouTube's average CPM ranges from $3–10, though premium content like financial or technology videos can reach $15–30. These numbers fluctuate with the advertising market — they spike during Q4 holiday seasons and drop during economic downturns — but the relative structure is stable.
To make this concrete: if a platform has 100 million daily active users, each spending an average of 30 minutes per day on the platform, and each exposed to an average of 15 ads per session at an average CPM of $8, the daily advertising revenue is approximately $12 million — nearly $4.4 billion annually. Increase average session time to 40 minutes, with ad load scaling proportionally to 20 ads per session, and the same math yields roughly $5.8 billion. That incremental 10 minutes of attention per user per day is worth nearly $1.5 billion annually. This is why the optimization of engagement is not an abstract engineering problem. It is a billion-dollar imperative.
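The arithmetic above is worth checking by hand. A minimal sketch, assuming one session per user per day and ad load proportional to session length:

```python
# Back-of-the-envelope ad revenue model using the figures above.
# Assumes one session per user per day; all inputs are illustrative.

def daily_ad_revenue(dau, ads_per_session, cpm):
    """CPM is dollars per 1000 impressions, so divide impressions by 1000."""
    impressions = dau * ads_per_session
    return impressions / 1000 * cpm

DAU, CPM = 100_000_000, 8.0

rev_30min = daily_ad_revenue(DAU, 15, CPM)   # 15 ads in a 30-minute session
rev_40min = daily_ad_revenue(DAU, 20, CPM)   # ad load scales with session time

print(rev_30min)                             # 12000000.0  -> $12M per day
print(rev_30min * 365 / 1e9)                 # 4.38        -> ~$4.4B per year
print((rev_40min - rev_30min) * 365 / 1e9)   # 1.46        -> extra 10 min/day
```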
The Data Flywheel
Programmatic advertising's power derives not just from targeting precision but from the self-reinforcing data flywheel it creates. Each interaction — each click, view, scroll, and pause — generates behavioral data that enriches the user profile. The richer the profile, the more precisely the platform can target ads. The more precisely ads are targeted, the higher the CPMs advertisers will pay. The higher the CPMs, the more revenue the platform earns. The more revenue, the more the platform can invest in data infrastructure, algorithmic sophistication, and user experience improvements that generate more interactions. And around it goes.
This flywheel creates enormous barriers to entry for competitors. A new platform cannot match the targeting precision of Google or Meta because it lacks the accumulated behavioral data — years of signals from billions of users across billions of interactions. The data moat is more defensible than any patent portfolio or brand advantage. It is built from time, scale, and the specific design decisions that extract maximum behavioral signal from every user interaction.
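The compounding advantage can be modeled as a simple feedback loop. The growth rates below are invented purely to illustrate the structure, not to estimate any real platform's economics:

```python
# Toy model of the data flywheel: each cycle, accumulated behavioral data
# lifts targeting precision (and thus CPM), and the resulting revenue funds
# more data capture. All parameters are illustrative assumptions.

def run_flywheel(cycles, data=1.0, cpm=1.0, data_gain=0.10, cpm_per_data=0.05):
    history = []
    for _ in range(cycles):
        cpm *= 1 + cpm_per_data * data   # richer profiles -> higher CPM
        data *= 1 + data_gain * cpm      # more revenue -> more data capture
        history.append((data, cpm))
    return history

incumbent = run_flywheel(5, data=10.0)   # years of accumulated data
entrant   = run_flywheel(5, data=1.0)    # starting from scratch
print(incumbent[-1][1] > entrant[-1][1])  # True: the head start compounds
```

Even with identical dynamics, the player that starts with more data ends every cycle further ahead; this is the moat the paragraph above describes.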
Understanding this flywheel is essential to understanding why the platforms' behavior is so hard to change from within. The entire revenue architecture depends on extracting maximum behavioral data from users. Any feature that reduces behavioral signal extraction — privacy-protecting defaults, limited data retention, opt-in tracking — directly reduces the platform's ability to command premium CPMs. The economics punish user privacy. They reward surveillance.
Why Engagement Became the Optimization Target
The causal chain is cleaner than most people realize. Understanding it step by step helps explain how a system designed by well-meaning engineers ends up optimizing for psychological manipulation.
Step 1: Platforms need revenue. They choose advertising because it allows free access, which drives user growth, which gives them market leverage. Subscription models require convincing users to pay upfront, which is harder and slower.
Step 2: Ad revenue is proportional to inventory. If you show 100 ads per user per day instead of 50, you double your inventory. The most direct way to show more ads is to keep users on the platform longer.
Step 3: Time on platform requires engagement. Users don't stay if they're bored. They stay when they feel the pull of the next post, the next video, the next notification. Engagement — defined as clicks, likes, shares, comments, time spent — is the behavioral signal that predicts continued platform use.
Step 4: Engagement must be measurable to be optimized. Engineers need metrics they can track and algorithms they can tune. Abstract concepts like "user satisfaction" or "user wellbeing" are hard to measure in real time. Clicks are immediate. Shares are countable. Watch time is trackable to the second. So these proxies become the targets.
Step 5: Optimizing for engagement proxies diverges from optimizing for user wellbeing. This is the gap at the center of everything. Content that makes people angry generates more comments and shares than content that makes people thoughtful. Sensational news gets more clicks than nuanced analysis. Emotionally charged posts spread further than informative ones. The algorithm that maximizes engagement metrics will systematically favor content that provokes strong emotional reactions — regardless of whether those reactions are good for users or for society.
Step 6: The algorithm reshapes the content ecosystem. Once creators understand what the algorithm rewards, they optimize for it. Headline writers learn to write outrage bait. Video makers learn to start with a hook that prevents skipping. Journalists learn that controversy performs better than complexity. The content ecosystem is not random; it is shaped by the financial incentives flowing through the advertising model.
Step 7: The system becomes self-reinforcing. Higher engagement generates more data, which improves targeting, which raises CPMs, which increases revenue, which funds more algorithmic optimization, which drives higher engagement. The flywheel accelerates.
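The divergence in Step 5 can be made concrete with a toy ranker. The items and scores below are invented; the only point is structural — a ranker optimizing an engagement proxy need not surface what users themselves value:

```python
# Toy illustration of Step 5: ranking by an engagement proxy diverges
# from ranking by user-reported value. All items and scores are invented.

items = [
    # (title, predicted_engagement, user_reported_value)
    ("Nuanced policy analysis",      0.21, 0.90),
    ("Outrage-bait headline",        0.87, 0.15),
    ("Heartwarming community story", 0.55, 0.70),
    ("Sensational rumor",            0.78, 0.05),
]

by_engagement = sorted(items, key=lambda it: it[1], reverse=True)
by_value      = sorted(items, key=lambda it: it[2], reverse=True)

print([t for t, _, _ in by_engagement][:2])
# ['Outrage-bait headline', 'Sensational rumor']
print([t for t, _, _ in by_value][:2])
# ['Nuanced policy analysis', 'Heartwarming community story']
```

The ranker is not malfunctioning; it is doing exactly what it was told. The gap is in the objective, which is the point of the causal chain above.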
This is not a story about malicious intent. The engineers who built these systems were trying to build products people would love to use. The advertisers buying impressions were trying to reach their target customers efficiently. The executives approving the algorithms were trying to grow their businesses. The incentive trap is structural, not personal — and that distinction matters enormously, because structural problems require structural solutions.
Velocity Media: Series A, 2019
[Note: Velocity Media is a fictional company used as a running example throughout this book to illustrate the structural pressures facing real platforms.]
In the autumn of 2019, Sarah Chen stood at the front of a conference room in a Sand Hill Road venture capital firm, presenting the Series A pitch for Velocity Media. She was 31, had shipped product at two previous startups, and had spent the past 18 months building what she believed was something genuinely different: a short-form video platform oriented around skill-based content — cooking, woodworking, coding, language learning. "YouTube for doing things," she called it.
The traction numbers were real. 2.3 million registered users. 340,000 monthly actives. Average session duration of 11 minutes. A creator community of 12,000 who had uploaded over 400,000 videos. The engagement metrics were strong by any standard.
The VC partner across the table from her — a former Google executive named David Park — leaned forward. "What's your monetization strategy?"
"Advertising," Sarah said. "We're building a first-party data model. Our users are highly engaged with skill acquisition, which means high-intent context for relevant ads. A user watching a woodworking video is a better target for tool brands than someone scrolling a generic feed. We can charge premium CPMs."
Park nodded slowly. "Okay. And your engagement trajectory?"
Sarah pulled up the slide. Month-over-month engagement growth had been 18% for the first year, then slowed to 11% in the most recent quarter.
"Eleven percent is good," Park said, "but you're entering a market where TikTok is compounding at 40% monthly and Instagram Reels is subsidized by Meta's cash reserves. To compete for advertising dollars, you need engagement numbers that justify premium CPMs. At 340K MAUs, advertisers are going to look elsewhere." He paused. "What's Marcus working on?"
Marcus Webb, Velocity's Head of Product, was not in the room, but his roadmap was in Sarah's pitch deck. Autoplay for recommended videos. A personalized feed replacing the chronological creator-follow model. Push notifications for new uploads from creators users had engaged with. Social features: comments, reactions, shares, a leaderboard of trending videos.
These features were not cynical additions. Marcus had designed each one because they addressed genuine user needs: nobody wanted to miss uploads from their favorite creators; discovering new skill-based content was genuinely hard in a chronological feed; the social features created community around shared learning. Every feature had a legitimate user-benefit rationale.
But each feature also, mechanically, increased engagement metrics. And that was what the Series A was really asking for.
Park's firm offered $12 million at a $47 million post-money valuation — below Sarah's ask, but enough to run the company for 18 months if they hit their growth targets. The term sheet included one unusual clause: a board seat, and a requirement to present monthly engagement metrics — specifically, Daily Active Users as a percentage of Monthly Active Users (DAU/MAU ratio), average session duration, and Day-7 user retention. These were the metrics the VC firm would use to evaluate whether a Series B was warranted.
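Two of the three board metrics in that term sheet are straightforward to compute from an activity log. A minimal sketch, assuming a hypothetical event schema of (user_id, day_index) records:

```python
# Minimal computation of two of Velocity's term-sheet metrics from a
# hypothetical activity log of (user_id, day_index) records.

def dau_mau(events, day, window=30):
    """Stickiness: unique users active on `day` divided by unique users
    active in the trailing `window` days."""
    dau = {u for u, d in events if d == day}
    mau = {u for u, d in events if day - window < d <= day}
    return len(dau) / len(mau) if mau else 0.0

def day7_retention(events, signup_day, cohort):
    """Fraction of a signup cohort active exactly 7 days after signup."""
    active_d7 = {u for u, d in events if d == signup_day + 7 and u in cohort}
    return len(active_d7) / len(cohort) if cohort else 0.0

events = [("a", 1), ("b", 1), ("d", 1), ("a", 8), ("c", 8)]
print(dau_mau(events, day=8))                             # 0.5
print(round(day7_retention(events, 1, {"a", "b", "d"}), 2))  # 0.33
```

Note what is absent: nothing in either function asks whether the returning user learned anything. That absence is exactly Sarah's unease in the scene that follows.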
Sarah and Marcus spent the flight home from San Francisco debating whether to take the deal. Marcus was excited; the funding would let him hire three more engineers. Sarah was uneasy in a way she couldn't quite articulate. "It's not the money," she said. "It's that we just agreed to be measured by three numbers, and none of those three numbers are 'did the user learn something useful.'"
She took the deal.
Dr. Aisha Johnson joined Velocity six months later as Head of Trust and Safety — a role Sarah had created partly out of genuine concern and partly because several major advertisers had asked whether Velocity had a content moderation policy. Aisha had a background in behavioral psychology and immediately noticed something in the recommendation algorithm that Marcus had shipped: the system was surfacing emotionally provocative skill content — the most dramatic failures, the most impressive transformations, the most surprising results — at a rate disproportionate to its actual educational value.
"The algorithm is learning that people click on the dramatic stuff," she told Sarah and Marcus in their first product review meeting. "A woodworking video where someone makes a beautiful chair in four steps gets fewer clicks than a woodworking video where someone makes a dramatic mistake and then recovers. The algorithm doesn't know the difference between a great learning experience and a great drama. It just sees engagement signals."
Marcus considered this carefully. "The data is the data," he said, not dismissively, but genuinely uncertain. "If users are engaging more with certain content, is that bad?"
This is the question at the heart of the incentive trap. And it does not have an easy answer.
The discomfort in that exchange — Aisha's concern, Marcus's genuine uncertainty, Sarah's awareness that the metrics she'd promised to the board would answer the question for them regardless of what they decided — is the texture of how the incentive trap actually operates. Nobody is lying. Nobody is indifferent to their users. But the metric structure they've committed to will make one answer financially safe and the other financially dangerous. The system selects for engagement. The people inside it make local decisions. And the aggregate output is extractive.
The Incentive Trap: Why Good People Build Extractive Systems
The pattern we see at Velocity Media is not unique to fictional startups. It is the dominant pattern across the entire platform economy, and it has a name: Goodhart's Law. When a measure becomes a target, it ceases to be a good measure.
Engagement metrics were originally designed as proxies for user satisfaction. A user who spends more time on a platform, who clicks more, who shares more — surely this user is enjoying themselves, right? This inference seemed reasonable in 2008. By 2015, the evidence was mounting that it was wrong. By 2020, internal research at multiple platforms confirmed it explicitly: high engagement scores were not reliably correlated with user-reported wellbeing, and in some cases were negatively correlated with it.
But the advertising model doesn't care about user wellbeing. It cares about user attention — specifically, the quantity and quality of attention that can be sold to advertisers. And these two things — wellbeing and sellable attention — are not the same. They overlap significantly, but the divergence at the margins is where the damage accumulates.
The incentive trap has several compounding layers:
The measurement layer. Wellbeing is hard to measure at scale in real time. Engagement is easy. When you can only measure one thing, you optimize for the thing you can measure — even if you know it's an imperfect proxy.
The competition layer. Even if one platform decided to deprioritize engagement metrics in favor of user wellbeing, its competitors would not. Users would migrate to the more engaging platform (people's evolved psychology consistently prefers high-stimulation environments in the short term, even when they report preferring calm in the long term). Advertisers would follow the users. The well-intentioned platform would lose revenue, lose users, and eventually either revert to engagement maximization or exit the market.
The capital layer. VC funding, public market valuations, and analyst expectations are all calibrated to engagement metrics. A platform that reports declining DAU/MAU ratios will see its valuation cut, regardless of whether it's reporting improved user satisfaction scores. Capital flows toward engagement, starving alternatives of the resources they would need to compete.
The talent layer. The engineers who can build the best recommendation algorithms are recruited by the highest-paying companies, which are the ones with the most advertising revenue, which are the ones with the highest engagement metrics. Ethical design expertise commands lower salaries and generates fewer high-profile career credentials. The talent market reinforces the structural incentive.
The regulatory vacuum layer. Absent regulatory requirements to measure or report on user wellbeing, there is no external accountability mechanism for the gap between engagement metrics and user welfare. Platforms are required to disclose their financial performance to investors. They are not required to disclose their internal research findings about user harm. This asymmetry allows platforms to know about harm while managing it as a reputational risk rather than a compliance obligation.
The result is a system that doesn't require any individual actor to intend harm. Sarah Chen is not trying to manipulate her users. Marcus Webb is not trying to exploit anyone's psychology. The VC partners funding them are not consciously choosing to incentivize addiction. Each actor is making locally rational decisions within a system whose aggregate output is exploitative.
This is the structural analysis that must precede any individual blame. Not to excuse individual responsibility — executives who knowingly suppress internal research documenting harm bear real moral responsibility — but to identify the level at which intervention is necessary. Blaming engineers or product managers for the attention economy is like blaming auto mechanics for traffic deaths. The system's design is the problem. The system's incentives are the problem. The business model is the constraint.
There is an important asymmetry of power embedded in this structure that we must name explicitly. The individual user comes to the platform with genuine needs — to connect, to learn, to be entertained. The platform comes to the interaction with a systematically optimized behavioral science apparatus, billions of dollars of computing infrastructure, and years of aggregated data about exactly which stimuli produce which behavioral responses in which user segments. This is not a negotiation between equals. The user does not know the auction is running. The user does not know their emotional profile has been assembled. The user does not know that the content they're about to see was selected because a model predicted it would provoke a reaction strong enough to delay disengagement. The asymmetry of knowledge is total. The asymmetry of power follows from it.
The Facebook News Feed Arc: EdgeRank to Emotional Contagion
No story better illustrates the gap between intent and effect than the evolution of the Facebook News Feed. We will follow this arc across multiple chapters; here we introduce its foundational mechanics.
The Chronological Feed and Its Discontents
When Facebook launched the News Feed in September 2006, it was chronological. You saw posts in reverse time order from your friends and the pages you followed. This was intuitive and egalitarian: every post competed on equal footing, and the only optimization variable was when you posted.
The chronological feed had a problem that became apparent as Facebook scaled. By 2009, a typical Facebook user followed hundreds of friends and dozens of pages. Their feed was an undifferentiated torrent of content — birthday announcements, political opinions, baby photos, spam, news articles, and brand promotions, all equally weighted by timestamp. Users were increasingly overwhelmed and increasingly likely to miss content they actually cared about.
EdgeRank: The First Algorithm (2011)
Facebook's solution, launched and refined between 2009 and 2011, was called EdgeRank. It ranked content in the News Feed using three variables:
- Affinity: How close are you to the person who posted? Measured by how often you view, like, and comment on their content.
- Weight: What type of content is it? Photos and videos were weighted higher than status updates; links were weighted lower.
- Recency: How recent is the post? Older posts decayed in rank over time.
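As publicly described, EdgeRank multiplied these three terms per "edge" (each interaction connecting a user to a piece of content) and summed the results. A sketch of that structure — the numeric weights and decay rate here are illustrative assumptions, not Facebook's actual values:

```python
import math

# Sketch of EdgeRank-style scoring as publicly described: for each story,
# sum affinity * type-weight * time-decay over its edges. The specific
# weights and the exponential decay rate are illustrative assumptions.

TYPE_WEIGHT = {"photo": 1.0, "video": 1.0, "status": 0.6, "link": 0.4}

def edge_rank(edges, now_hours):
    """edges: list of (affinity, content_type, created_hours) tuples."""
    score = 0.0
    for affinity, content_type, created in edges:
        decay = math.exp(-0.05 * (now_hours - created))  # recency decay
        score += affinity * TYPE_WEIGHT[content_type] * decay
    return score

# A fresh photo from a close friend outranks an older link
# from a distant acquaintance.
close_friend_photo = [(0.9, "photo", 47.0)]
acquaintance_link  = [(0.2, "link", 10.0)]
print(edge_rank(close_friend_photo, 48.0) > edge_rank(acquaintance_link, 48.0))  # True
```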
EdgeRank was not designed to maximize engagement in the pathological sense. It was designed to show users the content most likely to be relevant to them. A post from your best friend was worth more than a post from someone you met once at a conference. A photo album from your sister's wedding mattered more than a brand's promotional image.
But EdgeRank immediately established the fundamental architecture that all subsequent algorithms would build on: the News Feed would be algorithmic, not chronological. The platform, not the user, would decide what you saw. And the platform's decisions would be based on behavioral signals — specifically, your past engagement with content as a predictor of your future interest in similar content.
This architecture was the original decision that set everything else in motion. It was not malicious. It was, in many ways, reasonable. But it meant that the platform was now in the business of modeling human psychology in order to predict and shape attention allocation. The News Feed was no longer a transparent display of your friends' activity. It was an editorial product, curated by an algorithm whose optimization target was something other than your stated preferences.
The Like Button and the Data Transformation (2009–2012)
The Like button, introduced in February 2009, seems trivial in retrospect. It was not. It was the most important data collection instrument Facebook ever built.
Before the Like button, Facebook could track page views, time on site, and clicks — the standard behavioral data of the web. After the Like button, Facebook had explicit emotional signal at scale: billions of data points per day telling them exactly which content produced positive affective responses in which users. Combined with the comment data (which gave them the semantic content of user reactions) and the share data (which told them which content users found valuable enough to redistribute), Facebook suddenly had an unprecedented real-time emotional map of its user base.
This data transformed EdgeRank's successor algorithms. Rather than relying on relatively crude proxies like content type and poster affinity, the algorithms could now model emotional resonance directly. They learned to treat content that generated high like and comment rates as "good" content — where "good" meant "engagement-generating," which was treated as equivalent to "user-satisfying."
The problem, as we noted in the previous section, is that these two things are not equivalent. And the Like button data, while rich, had systematic biases that the algorithm would exploit in ways nobody anticipated. Specifically: negative emotions generate engagement too. An outrage-inducing post generates as many (often more) comments and shares as a heartwarming one. Anger is a powerful engagement driver. The algorithm, optimizing for engagement signals without any ability to distinguish positive from negative engagement, learned to favor content that provoked strong reactions — including content that provoked fear, anger, and disgust.
The Emotional Contagion Study (2012/2014)
In January 2012, Facebook's internal data science team — in collaboration with researchers from Cornell University and the University of California, San Francisco — ran an experiment. For one week, they manipulated the News Feeds of 689,003 users without their knowledge or explicit consent. Half the subjects had the emotional content of their feeds skewed positive — more posts expressing happiness, fewer expressing sadness. The other half had their feeds skewed negative — more sadness, less happiness.
The experiment's results, published in the Proceedings of the National Academy of Sciences in June 2014 under the title "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks," were unambiguous: users whose feeds were made more positive posted more positive content; users whose feeds were made more negative posted more negative content. Emotional states were contagious through the News Feed. Facebook could, deliberately and measurably, make its users happier or sadder by adjusting what they showed them.
The paper caused an immediate public controversy when its publication was noticed by journalists in late June 2014. The objections were multiple:
The informed consent problem. Participants had not consented to an emotional manipulation experiment. Facebook's defense was that users had agreed to its Data Use Policy, which mentioned use of data for research — but this was, at best, a thin reading of informed consent norms. No IRB at a university would have approved this experiment under those conditions.
The capability revelation. More disturbing than the consent issue was what the study revealed about Facebook's capabilities. The platform could demonstrably alter the emotional states of hundreds of thousands of people, in measurable directions, through feed manipulation. This capability had presumably existed for years before the study. The study just made it visible.
The intent question. If Facebook could make its users happier by adjusting the feed, why wasn't it doing so systematically? The answer, implied by the advertising model, is that user happiness was not the optimization target. Engagement was. And the relationship between happiness and engagement is complicated — not the simple positive correlation one might assume. A user who is mildly happy and mildly bored scrolls indefinitely. A user who is deeply satisfied closes the app and goes about their life. From the advertising model's perspective, satisfaction is a conversion failure.
What the emotional contagion study revealed, ultimately, was the gap between what Facebook's algorithms were optimizing for and what users believed they were being served. Users believed the News Feed was showing them what their friends had posted, in a relevance order determined by their stated preferences. What the News Feed was actually doing was constructing an emotional environment designed to maximize a behavioral metric that the advertising model rewarded.
We will follow this arc — from the emotional contagion study through Cambridge Analytica and the 2018 "meaningful interactions" pivot to the Facebook Papers of 2021 — across subsequent chapters. Here, it serves as our primary illustration of how the advertising business model, through the mechanism of engagement optimization, produces systems capable of deliberate emotional manipulation, regardless of whether that was anyone's original intent.
Business Model Alternatives: What Else Is Possible?
The central problem of this chapter is not that advertising exists, or even that programmatic advertising is inherently evil. The problem is the specific incentive structure that advertising creates when combined with algorithmic content ranking at scale. Understanding alternatives requires understanding which parts of that structure we need to change.
Subscription Models: Paying for Access
The most straightforward alternative to advertising is charging users directly for access. Netflix, Spotify, and — in the content creator ecosystem — Substack represent three variants of this model.
Netflix ($220 billion market cap as of 2024) demonstrates that subscription can scale massively. Netflix has no advertising-based engagement optimization problem: it makes money when users pay monthly fees and renew those subscriptions. Its incentive is to produce content users value enough to maintain subscriptions — which is meaningfully closer to user wellbeing than advertising's incentive to produce content users will consume in high quantities. Netflix's recommendation algorithm is still optimizing for engagement (specifically, retention — the probability that a user will not cancel their subscription), but the alignment between engagement and user satisfaction is tighter, because a user who feels regret about how much time they spent on Netflix is more likely to cancel than a user who feels their time was well-spent.
Netflix's limitation is also instructive: subscription models are most viable when the content itself is the product. Netflix makes excellent television because excellent television is what justifies the subscription fee. Social media platforms face a different challenge: their primary content is user-generated, and users are unlikely to pay for the privilege of sharing their own lives. The platforms that have succeeded with subscription models for user-generated content — Substack, Patreon, OnlyFans — have done so by becoming tools for individual creators rather than social platforms for general users.
Spotify ($60 billion market cap as of 2024) runs a hybrid model: free tier with advertising, premium tier without. This creates an interesting internal dynamic. Spotify's free tier users are customers of advertisers; its premium tier users are customers of Spotify itself. The result is that Spotify has weaker incentives to exploit its premium users' psychology than pure advertising-supported platforms do, because premium users' willingness to pay is directly tied to their satisfaction with the service.
The hybrid model does create its own tensions: Spotify has an incentive to make the free tier just unpleasant enough that users upgrade, but not so unpleasant that they abandon the platform entirely. This is a different kind of manipulation than attention maximization — call it "friction manipulation" — but it is meaningfully less harmful.
Substack (private, founded 2017) has built a publishing ecosystem where writers charge readers directly, with Substack taking 10% of subscription revenue. As of 2024, Substack hosts tens of thousands of paid newsletters, with approximately 35 million active subscribers. The model aligns writer incentives with reader satisfaction in a relatively clean way: a writer who fails to provide value loses subscribers and income; one who provides genuine value retains and grows their subscriber base.
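The arithmetic of a direct revenue share is simple enough to sketch. Only the 10% platform cut comes from the text; the subscriber count and price below are hypothetical, and real payouts would also subtract payment-processing fees, which this sketch ignores.

```python
def writer_monthly_payout(subscribers: int, price: float,
                          platform_cut: float = 0.10) -> float:
    """Gross monthly payout to a writer under a simple revenue share.

    `platform_cut` is the 10% share the text attributes to Substack;
    payment-processing fees are ignored for simplicity.
    """
    gross = subscribers * price
    return gross * (1 - platform_cut)

# A hypothetical newsletter: 1,000 paid subscribers at $5/month.
payout = writer_monthly_payout(1_000, 5.00)
print(f"${payout:,.2f}")  # → $4,500.00
```

The alignment the paragraph describes is visible in the function itself: the writer's only lever on `payout` is `subscribers`, and subscribers are retained by delivering value, not by maximizing time spent.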
Substack's limitation is its scale. The platform works well for individual writers; it is not obvious how it extends to the social media use cases — sharing with friends, discovering new connections, building community around events — that currently define the dominant platforms. Substack solves the journalism problem; it does not solve the Facebook problem.
Cooperative Ownership: Wikipedia's Model
Wikipedia is the most visited website in the world that does not run advertising. It operates as a nonprofit, funded by user donations ($150–160 million annually) and run on a cooperative model where volunteer editors govern content through consensus-based processes. It has no venture capital, no advertising revenue, no engagement optimization algorithm, and no incentive to maximize the time users spend on the site.
The result is a platform that is, by most measures, extraordinarily valuable: 60 million articles in 333 languages, freely available to anyone with an internet connection, governed by a community of editors who enforce quality standards without payment. It is also a platform that has deep structural problems — the editor community skews heavily male and toward certain geographies and demographics, article quality varies enormously, and the governance model is slow and often hostile to new contributors.
Wikipedia's relevance as a model for social media is limited by the nature of its content. An encyclopedia, by definition, is not a social platform; it does not need to curate a personalized feed of information or facilitate real-time social interaction. But it demonstrates something important: at massive scale, a platform can be organized around values (knowledge access, intellectual freedom, collaborative truth-seeking) rather than around an advertiser-serving engagement metric — and it can survive financially without exploiting its users.
Privacy-Respecting Advertising: DuckDuckGo's Model
The problem with advertising is not advertising per se; it is surveillance advertising — the model that requires building detailed behavioral profiles of individual users to enable precise targeting. There is an alternative: contextual advertising, which targets ads based on the content being viewed rather than the person viewing it.
DuckDuckGo, the privacy-respecting search engine, runs contextual ads: if you search for "best hiking boots," you see ads for hiking boots. DuckDuckGo does not know who you are, does not track you across the web, and does not build a behavioral profile. It simply matches ads to search terms. As of 2024, DuckDuckGo processes approximately 100 million searches per day — a fraction of Google's 8.5 billion, but enough to support a profitable business.
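The mechanical difference between the two targeting models is easy to illustrate. The sketch below is a toy contextual matcher, not DuckDuckGo's actual system; the keywords and ad names are invented. The point is the function signature: it takes a query, not a user, so there is no identifier to build a behavioral profile around.

```python
# A toy contextual ad matcher. Inventory is invented for illustration.
AD_INVENTORY = {
    "hiking boots": ["TrailCo Boot Sale", "Summit Outfitters"],
    "mortgage": ["HomeRate Refinance"],
    "hiking": ["Alpine Gear Outlet"],
}

def contextual_ads(query: str) -> list[str]:
    """Return ads whose keywords appear in the query text.

    Note what is absent: no user ID, no browsing history, no
    behavioral profile. The only targeting signal is the content
    currently being viewed.
    """
    q = query.lower()
    return [ad for keyword, ads in AD_INVENTORY.items()
            if keyword in q
            for ad in ads]

print(contextual_ads("best hiking boots"))
# → ['TrailCo Boot Sale', 'Summit Outfitters', 'Alpine Gear Outlet']
```

Surveillance advertising, by contrast, would require a `user_profile` argument threaded through every call, and with it the data-collection apparatus the chapter has been describing.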
The contextual advertising model solves the surveillance problem but does not solve the engagement optimization problem entirely. A platform that earns more when users spend more time on it still has incentive to maximize time-on-platform, even without surveillance targeting. However, without the behavioral profile data that makes hyper-targeting possible, the CPM rates for contextual ads are significantly lower — which means the financial incentive to maximize engagement at any cost is proportionally weaker. The economics of the system are less extreme.
The limit of contextual advertising for social media platforms is the CPM gap. Contextual ads on a social platform — where the "context" is a social post about your friend's birthday, not a high-intent search query — have much lower CPMs than behaviorally targeted ads. A social platform running contextual-only advertising would earn significantly less revenue per user than one running surveillance advertising, making it harder to compete for talent, infrastructure, and market position.
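The scale of that gap can be made concrete with back-of-the-envelope arithmetic. The CPM figures below are hypothetical placeholders, chosen only to show how the revenue-per-user calculation works, not measured rates for any platform.

```python
def monthly_ad_revenue_per_user(impressions_per_day: int, cpm: float,
                                days: int = 30) -> float:
    """Monthly revenue per user, given daily ad impressions and the CPM
    (the price advertisers pay per 1,000 impressions)."""
    return impressions_per_day * days * cpm / 1000

# Hypothetical rates: behaviorally targeted ads at a $10 CPM versus
# contextual ads at a $2 CPM, for a user who sees 50 ads per day.
behavioral = monthly_ad_revenue_per_user(50, cpm=10.0)
contextual = monthly_ad_revenue_per_user(50, cpm=2.0)
print(behavioral, contextual)  # → 15.0 3.0
```

At these invented rates the contextual platform earns a fifth of the revenue per user, which is exactly the competitive disadvantage, and the weakened engagement-maximization incentive, that the paragraph above describes.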
What the Alternatives Reveal
The business model alternatives are not silver bullets, but they clarify the problem. The specific pathologies of the current platform ecosystem — emotional manipulation, outrage amplification, filter bubbles, attention extraction — are not inherent features of digital platforms. They are features of a particular business model applied to a particular type of platform at a particular scale.
A subscription model changes the incentive: platforms profit from user satisfaction rather than from user attention time. The incentive is better aligned, though not perfectly so.
A cooperative model eliminates the profit motive entirely, replacing it with community governance. This solves some problems and creates others; cooperative governance is slow, often dominated by early participants, and struggles to scale.
A contextual advertising model retains advertising but eliminates the worst aspects of surveillance and behavioral profiling. It reduces CPMs, which weakens engagement-maximization incentives, but doesn't eliminate them.
Regulation is a fourth alternative — not a business model, but a constraint on business models. We will examine regulatory approaches in Part III. For now, the key insight is that regulation works best when it targets the specific financial mechanism that creates the bad incentive — in this case, the combination of surveillance data collection and engagement-based optimization — rather than trying to prohibit advertising broadly.
The Constraint That Shapes Everything
We close this chapter where we began: with the founding decision. Every platform that chooses advertising as its revenue model is choosing a set of incentives that will shape every subsequent decision. Those incentives are not random; they are specific and predictable, and their downstream effects — on content ecosystems, on user psychology, on political discourse, on public health — are increasingly well-documented.
Sarah Chen, sitting in a Sand Hill Road conference room in 2019, was not making a simple financial choice. She was choosing a master. The master's demands were not immediately visible, but they were already present in the term sheet's engagement metrics, in the product roadmap Marcus would build, in the advertising sales team she would eventually hire, and in the recommendation algorithm that would learn, over time, that dramatic failure videos generated more clicks than careful instructional content.
The gap between what Sarah intended to build — a platform for genuine skill acquisition — and what the business model would incentivize her to build is the gap at the center of this book. It is not a gap that can be closed by hiring Dr. Aisha Johnson, or by adding a wellbeing feature, or by publishing a thoughtful values statement. It can only be closed by changing the model.
This is the insight that makes the attention economy critique actionable rather than merely descriptive. We are not saying that digital platforms are bad, or that technology is corrupting, or that the engineers are villains. We are saying that a specific financial structure, applied to a specific type of product, produces a specific set of harmful outcomes with remarkable consistency. Identify the financial structure. Change the financial structure. The outcomes change.
The Facebook News Feed did not become an emotional manipulation machine because Mark Zuckerberg decided one day to make his users angry. It became one because an advertising business model, combined with an engagement optimization algorithm, combined with Like button data that captured emotional signals, combined with a competitive market that punished any deviation from engagement maximization, produced a system that learned — through pure optimization — that emotional provocation was more valuable than emotional satisfaction.
The system worked as designed. The design was the problem.
In Chapter 5, we will examine the specific psychological mechanisms that these designs exploit — the behavioral and cognitive vulnerabilities that make attention extraction so effective. But the foundation for that analysis is what we have built here: the business model is the constraint. The incentive is the cause. The psychology is the mechanism. And if we want different outcomes, we need to start at the beginning.
Chapter Summary
The attention economy operates through a specific financial mechanism: programmatic advertising auctions that sell access to user attention in real-time markets operating at machine speed. CPM rates — the price of one thousand impressions — vary by user intent, demographic characteristics, and context, creating financial incentives that favor engaging wealthy, high-intent users and brand-safe content.
Engagement became the optimization target through a causal chain: advertising revenue requires inventory; inventory requires time-on-platform; time-on-platform requires engagement; engagement is measurable where wellbeing is not. The resulting system optimizes for behavioral proxies — clicks, shares, time spent — that diverge from actual user wellbeing at the margins where the most harm accumulates.
Velocity Media's Series A in 2019 illustrates the structural pressures that push even well-intentioned platforms toward engagement maximization: VC term sheets measure engagement metrics, competition for advertising dollars rewards engagement at scale, and the talent market concentrates algorithmic expertise at high-engagement platforms.
The Facebook News Feed Arc illustrates the consequences: EdgeRank's original engagement-maximization architecture, combined with Like button data that captured emotional signals, combined with competitive pressure to maximize engagement metrics, produced a system that learned emotional provocation was more valuable than emotional satisfaction — culminating in the emotional contagion study's revelation that Facebook could deliberately alter the emotional states of hundreds of thousands of users.
Business model alternatives — subscription (Netflix, Spotify, Substack), cooperative (Wikipedia), contextual advertising (DuckDuckGo) — each reduce some specific pathology of the advertising model while introducing their own constraints. No alternative is a perfect solution, but each demonstrates that the specific harms of the current model are not inherent to digital platforms.
The business model is the constraint that shapes everything. Change the model, change the incentive. Without structural change, ethical hiring, wellbeing features, and values statements are surface-level interventions on a systemic problem.
Chapter 5: The Psychological Exploitation Stack — What the Algorithms Know About Your Brain