In This Chapter
- Learning Objectives
- Introduction
- Section 10.1: The Political Economy of Attention — Advertising, CPM Economics, and Engagement Metrics
- Section 10.2: Outrage as a Business Strategy — The Outrage Cycle and Moral Contagion
- Section 10.3: The Misinformation Revenue Stream — Clickbait Farms and Fake News as Advertising Arbitrage
- Section 10.4: Native Advertising and Sponsored Content — Advertorial Blending and Trust Transfer
- Section 10.5: Subscription Models and Paywalls — Alternative Economics and Their Limitations
- Section 10.6: The Dark Side of Monetization — Conspiracy Theories as Revenue Streams
- Section 10.7: Platform Incentives vs. Platform Responsibility — The Structural Tension
- Section 10.8: Alternative Economic Models for Information Ecosystems
- Callout Box: The Brady et al. Moral Contagion Study — A Closer Look
- Callout Box: Why Advertisers Fund Misinformation (Without Intending To)
- Key Terms
- Discussion Questions
- Chapter Summary
- References
Chapter 10: The Business Model of Outrage — Engagement Over Truth
Learning Objectives
By the end of this chapter, students will be able to:
- Explain the advertising-based economic model of media and social media platforms, including how CPM economics and engagement metrics drive content incentives.
- Analyze the "outrage cycle" as a structural feature of engagement-optimized media, drawing on Brady et al.'s moral contagion research and Ovadya's Outrage Machine framework.
- Describe the economics of clickbait farming and fake news as forms of advertising arbitrage, including the specific case of Macedonian fake news operations.
- Distinguish between native advertising and editorial content, evaluate disclosure standards, and analyze the "trust transfer" problem.
- Evaluate subscription models and paywalls as alternative economics for news organizations, including their partial solutions and structural limitations.
- Assess how alternative monetization platforms (Patreon, Substack, YouTube Super Chats, merchandise) create incentive structures that can fund misinformation and conspiracy content.
- Analyze the structural tension between advertising-based platform economics and the prioritization of information quality.
- Evaluate alternative economic models for information ecosystems, including public media, cooperative journalism, and reader-funded models.
Introduction
In 2019, a single Facebook post by a small-town Alabama church generated more engagement — likes, shares, comments — than any post published that week by the New York Times. The post claimed, falsely, that an Antifa group was planning to attack the town. It was shared thousands of times before it was identified as false. No Antifa group existed. No attack was planned. But the post had done something the newspaper's careful, accurate reporting could not: it generated an enormous emotional reaction in a short period of time.
This episode, trivial in itself, illustrates a structural feature of the contemporary information economy that has profound consequences for the quality of public discourse. The economic infrastructure of digital media — advertising-funded platforms, engagement-based ranking algorithms, revenue-sharing models that reward content creators based on views and clicks — systematically incentivizes content that provokes strong emotional reactions over content that is accurate, nuanced, or important. The result is not a conspiracy or an accident but an emergent property of rational economic behavior within a particular incentive structure.
This chapter examines that incentive structure in detail. We analyze how the advertising economy of digital media works, why it rewards outrage, how misinformation has been monetized as a form of advertising arbitrage, and what alternative economic models might better align media incentives with public interest. The goal is not to vilify media companies or social media platforms but to understand the structural forces that shape the information environment we all inhabit.
Section 10.1: The Political Economy of Attention — Advertising, CPM Economics, and Engagement Metrics
Attention as Commodity
The foundational economic insight for understanding digital media is that in the advertising-based model, the product being sold is not content — it is attention. Platforms and media outlets do not sell news, entertainment, or social connection to users. They sell access to users' attention to advertisers. Users are not customers; they are the inventory.
This insight, sometimes captured in the aphorism "if you're not paying for the product, you are the product," traces back to the media theorist Dallas Smythe, who in 1977 argued that the primary product of commercial television was the audience "commodity" — packaged blocks of viewer attention sold to advertisers. In the digital era, this logic has been extended to an unprecedented degree: platforms can track individual users' attention in real time, at granular levels of detail, and sell advertising access calibrated to extremely specific audience segments.
CPM Economics
The advertising model of digital media revolves around the concept of CPM — cost per mille, or cost per thousand impressions. Advertisers pay a CPM rate for every 1,000 times their advertisement is shown to users. CPM rates vary enormously depending on the audience (demographic characteristics, purchasing intent, engagement level), the platform, the content context, and the format.
For a media outlet or content creator, the revenue equation is simple:
Revenue = (Total Impressions / 1,000) × CPM Rate
This equation creates a powerful and direct incentive structure. Revenue is maximized by maximizing total impressions (getting as many eyeballs as possible on content for as long as possible) while maintaining a CPM rate that advertisers will pay. These two objectives together drive the economic logic of the attention economy.
Maximizing impressions requires content that:
- Attracts large audiences through distribution (sharing, viral spread, algorithmic recommendation)
- Keeps audiences engaged for extended periods (high dwell time, return visits, multiple page views)
- Generates repeat exposure (email subscription, social media follow, app notifications)
Maintaining CPM rates requires content that:
- Attracts audiences that advertisers want to reach (affluent, high-intent consumers)
- Creates brand-safe environments (not too controversial, not too offensive)
- Delivers measurable engagement metrics that justify the advertising investment
The tension between these objectives is where much of the dysfunction of the attention economy originates. Content that maximizes raw impressions through emotional arousal may attract audiences or create contexts that advertisers find brand-unsafe, depressing CPM rates. Content that maintains high CPM rates (premium editorial environments, niche affluent audiences) may not achieve the scale needed to generate substantial total revenue.
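The revenue equation above, and the tension it creates, can be made concrete with a toy calculation. All numbers here are invented for illustration; only the trade-off between volume and rate matters:

```python
def ad_revenue(impressions: int, cpm: float) -> float:
    """Revenue = (total impressions / 1,000) x CPM rate."""
    return impressions / 1_000 * cpm

# Two hypothetical strategies:
# high-volume outrage content at a brand-unsafe (depressed) CPM,
# versus premium editorial content at a high CPM but smaller reach.
outrage = ad_revenue(impressions=10_000_000, cpm=1.50)
premium = ad_revenue(impressions=1_000_000, cpm=12.00)

print(f"outrage strategy: ${outrage:,.0f}")   # $15,000
print(f"premium strategy: ${premium:,.0f}")   # $12,000
```

With these made-up rates, the outrage strategy wins despite an 8x lower CPM, which is the arithmetic behind the pull toward raw scale that the section describes.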
Why Engagement Metrics Dominate
In the early era of digital advertising, the primary metric was the page view — each time a user loaded a web page, it registered as an impression. This rewarded content that attracted a single click, regardless of whether the user engaged meaningfully with it. The result was the era of "clickbait" — headlines engineered to generate clicks through curiosity gaps, sensationalism, and false promises of revelatory content.
As programmatic advertising became more sophisticated, engagement metrics (time on page, scroll depth, video completion rates, comments, shares) became more important signals of content quality from the advertiser's perspective. Advertisers who found that high-engagement audiences were more receptive to their messages were willing to pay higher CPM rates for certified engagement.
Social media platforms developed their own engagement metric systems: Facebook's "reactions" (like, love, haha, wow, sad, angry), Twitter's likes and retweets, YouTube's watch time and like-to-view ratios, and Reddit's upvote/downvote system all served as signals that the platform used both to rank content in feeds and to characterize audiences for advertising purposes.
The critical feature of engagement metrics that shapes content incentives is that high-arousal emotional content generates disproportionately strong engagement signals. Content that provokes anger, fear, disgust, or intense enthusiasm generates more clicks, more comments, more shares, and more return visits than content that provokes moderate or rational responses. This is not accidental — it reflects deep features of human psychology that evolved in contexts very different from the contemporary media environment.
Section 10.2: Outrage as a Business Strategy — The Outrage Cycle and Moral Contagion
Brady et al.: Moral Contagion in Social Media
One of the most important empirical contributions to understanding the relationship between emotional content and social media spread is the research by William Brady and colleagues on moral contagion — the tendency for morally loaded emotional language to spread disproportionately rapidly in social networks.
In a 2017 study published in Proceedings of the National Academy of Sciences, Brady et al. analyzed 563,000 tweets about three politically contested topics (same-sex marriage, climate change, and gun control). They found that the presence of moral-emotional language — words combining emotional valence with moral judgment — was associated with a 20% increase in retweet rates for each additional moral-emotional word in a tweet. The effect was robust across topics and political orientations: both liberal and conservative users showed the same tendency to preferentially share morally charged content.
The Brady et al. findings have several important implications:
- Moral language is viral: Content that frames political issues in terms of moral violation, injustice, threat, or indignation spreads more rapidly than content that frames the same issues analytically.
- Platform-level amplification: When engagement-optimizing algorithms preferentially promote high-engagement content, morally charged content gets amplified by the algorithm in addition to human sharing behavior.
- Political content is particularly affected: Because political debates involve values and identity, they are especially susceptible to moral framing, making political misinformation particularly prone to viral spread.
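A back-of-the-envelope illustration of the reported effect size, treating the ~20% per-word increase as a simple compounding multiplier (a deliberate simplification of the study's regression model):

```python
def expected_retweet_multiplier(moral_emotional_words: int,
                                per_word_increase: float = 0.20) -> float:
    """Compound the reported ~20% retweet-rate increase per additional
    moral-emotional word. Simplified: the study reports the effect within
    a regression framework; here it is treated as a plain multiplier."""
    return (1 + per_word_increase) ** moral_emotional_words

# A tweet with three moral-emotional words vs. a neutral baseline:
print(round(expected_retweet_multiplier(3), 3))  # 1.728
```

Even a handful of moral-emotional words compounds into a substantial diffusion advantage, which is why the effect matters at platform scale.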
The Outrage Cycle
Aviv Ovadya, a technology policy researcher and former chief technologist of the Center for Social Media Responsibility, has described what he calls the "Outrage Machine" — the self-reinforcing cycle by which outrage-generating content dominates attention-based platforms. The cycle works as follows:
- Content creation: A creator (individual, organization, media outlet, or bot) produces content designed to provoke outrage — a real or exaggerated story of injustice, threat, or moral violation.
- Engagement spike: The outrage content receives disproportionate engagement: angry comments, shares to express alarm, and counter-shares by people defending the target of the outrage.
- Algorithmic amplification: Engagement signals cause algorithms to promote the content further, exposing it to larger audiences.
- Revenue generation: Higher traffic generates advertising revenue for the original content creator and for the platform. The financial reward reinforces the content strategy.
- Escalation: Competition for outrage-generating attention creates pressure for increasingly intense content — each cycle must be more inflammatory than the last to stand out in an already outrage-saturated environment.
- Normalization: Prolonged exposure to high-outrage content shifts the audience's baseline expectation: moderate, analytical content now seems boring by comparison, creating an ever-escalating demand for more extreme material.
This cycle is not unique to any individual media outlet or platform; it is a structural feature of engagement-optimized, advertising-funded information distribution. Understanding it helps explain why outrage content dominates media without any conspiracy or malicious intent being necessary: it is simply the rational economic response to the incentive structure.
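The escalation step of the cycle can be sketched as a minimal feedback-loop simulation. This is a toy model with invented parameters, meant only to show why amplification that outpaces habituation produces steadily intensifying content:

```python
def simulate_outrage_cycle(rounds: int,
                           base_engagement: float = 100.0,
                           amplification: float = 1.3,
                           habituation: float = 0.9) -> list[float]:
    """Toy model of escalation in the outrage cycle.

    Each round, algorithmic amplification multiplies engagement, while
    audience habituation (normalization) erodes the impact of content at
    the previous round's intensity. When amplification * habituation > 1,
    the required intensity ratchets upward every cycle.
    """
    engagement = base_engagement
    history = []
    for _ in range(rounds):
        engagement = engagement * amplification * habituation
        history.append(round(engagement, 1))
    return history

print(simulate_outrage_cycle(5))
```

With these illustrative parameters the net growth factor is 1.17 per round, so engagement (and the intensity needed to sustain it) rises monotonically, mirroring the escalation and normalization steps above.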
The Psychology of Outrage
The prevalence of outrage content in media is not purely a matter of economics; it also reflects real psychological features of human information processing. Moral outrage — the experience of witnessing a perceived injustice or violation of moral norms — is a powerful motivator of social action. Research in moral psychology (Jonathan Haidt, Jesse Graham, Craig Joseph) has documented that outrage serves important social functions: it signals group membership, enforces social norms, and motivates collective action.
But outrage also has psychological costs: chronic exposure to outrage-inducing content is associated with anxiety, hostility, and political cynicism. Research by Kevin Smith and colleagues has found that individuals with stronger physiological responses to threat-related stimuli are more likely to consume outrage media and to develop more extreme political positions. The media environment that rewards outrage may thus be producing both addicted consumers and radicalized political actors simultaneously.
Section 10.3: The Misinformation Revenue Stream — Clickbait Farms and Fake News as Advertising Arbitrage
The Economics of Fake News
The emergence of fake news as a major feature of the information ecosystem in the mid-2010s was not primarily a political phenomenon — it was an economic one. The combination of easily accessible website creation tools, large social media audiences willing to share emotionally engaging content, and automated advertising networks willing to place ads on almost any website created a low-cost, high-return business opportunity for content entrepreneurs willing to produce false or misleading content.
The economic model is simple: create a website that mimics the appearance of a news outlet. Produce content — false, exaggerated, or real stories with misleading headlines — that will generate shares and clicks on social media. Drive traffic to the website. Collect advertising revenue from the ads that appear alongside the content. Scale up by producing more content, more websites, or both.
The key economic feature of this model is advertising arbitrage: the cost of producing and distributing false content is substantially lower than the cost of producing accurate journalism, but the advertising revenue generated per page view is similar. Journalistic investigation requires reporters, editors, fact-checkers, and sometimes legal departments. Fake news requires a writer willing to fabricate or plagiarize, a cheap web hosting account, and a social media account to distribute links. The arbitrage profit — the gap between content production costs and advertising revenue — is captured by the fake news producer at the expense of legitimate journalism.
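The arbitrage logic reduces to a margin comparison. The figures below are invented; only the cost asymmetry between fabrication and reporting is the point:

```python
def arbitrage_profit(page_views: int, cpm: float,
                     cost_per_article: float, articles: int) -> float:
    """Profit = advertising revenue - content production cost."""
    return page_views / 1_000 * cpm - cost_per_article * articles

# Same hypothetical audience and CPM; wildly different production costs.
fake_news = arbitrage_profit(page_views=500_000, cpm=4.0,
                             cost_per_article=5.0, articles=50)
journalism = arbitrage_profit(page_views=500_000, cpm=4.0,
                              cost_per_article=800.0, articles=50)

print(f"fabricated content: ${fake_news:,.0f}")    # positive margin
print(f"reported journalism: ${journalism:,.0f}")  # negative margin
```

Because automated ad networks pay on page views regardless of how the content was produced, the producer with near-zero production costs captures the spread.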
The Macedonian Fake News Farms
The most widely analyzed case of fake news as advertising arbitrage is the network of websites operated by teenagers and young adults in the small city of Veles, North Macedonia (at the time officially the Republic of Macedonia), during the 2016 US presidential election. Investigated by BuzzFeed News journalists Craig Silverman and Lawrence Alexander, and subsequently examined by dozens of researchers, the Macedonian fake news ecosystem represents a textbook case of misinformation as pure profit-seeking behavior.
The operators were typically young, apolitical Macedonians who had discovered that American political content generated disproportionately large advertising revenue — higher than virtually any other content type — because American advertisers paid premium CPM rates for American political audiences. With minimal political knowledge or convictions, they produced large quantities of pro-Trump misinformation (not because they preferred Trump but because pro-Trump content generated more shares and engagement among their target audience), hosted it on websites with names like "USADailyPolitics.com," and collected revenue through Google AdSense and similar automated advertising networks.
Key documented features of the Macedonian operations:
- Some individual operators earned thousands of dollars per month — substantial income in a country with average monthly wages below $500.
- Content was almost entirely plagiarized or fabricated; there was no journalistic process whatsoever.
- Google AdSense, Facebook advertising integration, and similar automated networks placed brand-name advertisers' ads alongside demonstrably false content without any editorial review.
- The most profitable content was extreme pro-Trump misinformation that generated enormous Facebook sharing activity.
- Similar operations existed in other countries, particularly those with low labor costs and English-language capacity.
The Macedonian case is important not just as a story of bad actors but as a demonstration of system-level failure: Google's automated advertising infrastructure and Facebook's sharing-based distribution created the economic conditions that made fake news profitable, without any explicit design intention to do so.
The Clickbait Farm Model
Distinct from outright fake news is the broader ecosystem of clickbait farms — content operations that produce large quantities of low-quality, often misleading, but not necessarily false content optimized for social media sharing. Clickbait farms exist across the political spectrum and across topical niches (health, celebrity gossip, personal finance, politics) and represent a significant portion of the online content ecosystem.
Common clickbait farm techniques include:
- Curiosity gap headlines: "She opened the door and couldn't believe what she saw." Headlines that promise revelatory content while withholding enough information to compel the click.
- Listicle formats: "15 things your doctor won't tell you about X." Easy to produce, easy to share, relatively low research burden.
- Fear-based health misinformation: Content about cancer cures, vaccine dangers, and dietary risks generates enormous engagement because it taps deep evolutionary concerns about physical threat.
- Political caricature: Exaggerated or false stories about the political opposition that generate outrage among partisans.
- Native traffic arbitrage: Buying cheap traffic from content recommendation networks (Taboola, Outbrain) and collecting higher CPM advertising revenue from high-value content categories.
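The traffic arbitrage technique in particular can be expressed as a spread calculation: buy attention cheaply by the click, resell it at a higher effective CPM. All rates below are hypothetical:

```python
def traffic_arbitrage_margin(visits: int, buy_cpc: float,
                             pages_per_visit: float, sell_cpm: float) -> float:
    """Buy traffic at a cost-per-click, resell the resulting attention
    to advertisers at a CPM. Margin = ad revenue on the generated page
    views minus the cost of acquiring the clicks."""
    revenue = visits * pages_per_visit / 1_000 * sell_cpm
    cost = visits * buy_cpc
    return revenue - cost

# Hypothetical: buy 100,000 clicks at $0.02 each, serve 3 pages per
# visit in a high-value category monetized at a $10 CPM.
print(traffic_arbitrage_margin(100_000, 0.02, 3.0, 10.0))
```

The operation is profitable whenever the per-visit ad yield exceeds the per-click acquisition cost, which is why farms push multi-page slideshows and listicles that inflate pages per visit.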
Section 10.4: Native Advertising and Sponsored Content — Advertorial Blending and Trust Transfer
The Advertorial Problem
Native advertising — paid content designed to resemble the editorial content of the publication in which it appears — has existed in print form since at least the 19th century. The "advertorial" was a standard feature of 20th-century magazines: a paid advertisement formatted to look like an editorial article, typically with small-print disclosure at the top or bottom.
In the digital era, native advertising has become both more sophisticated and more prevalent. Major publishers including the New York Times, BuzzFeed, the Atlantic, and hundreds of others operate in-house brand studios or native content teams (the New York Times' T Brand Studio is a prominent example) that produce advertising content integrated with the publication's visual identity and content style.
Digital native advertising generates higher CPM rates than banner advertising because it achieves higher engagement: readers who are not certain whether they are reading editorial or paid content spend more time with it, click on it more readily, and retain its messages more deeply than they would with obviously identified advertising.
Disclosure Failures and Regulatory Standards
In the United States, the Federal Trade Commission (FTC) requires that paid content be clearly identified as advertising. In practice, disclosure standards are inconsistently implemented and inconsistently enforced. Common failure modes include:
- Minimal visual disclosure: A small "Sponsored" label in a light-gray font at the top of a page that many readers miss or ignore.
- Confusing terminology: Labels like "Partner content," "Presented by," "Brand voice," or "Paid post" that do not clearly communicate the advertising nature of the content.
- Platform inconsistency: Social media shares of native content often strip the disclosure labels, so the content circulates without any advertising identification.
- Influencer disclosure: Social media influencers who receive payment or free products in exchange for posts are required to disclose this relationship, but enforcement is limited and disclosure practices are highly inconsistent.
Research consistently finds that a significant proportion of readers — in some studies, majorities — cannot reliably distinguish native advertising from editorial content, even when disclosures are present.
The Trust Transfer Problem
The trust transfer problem is the core ethical concern with native advertising: when brands produce content in the editorial style of a trusted publication, some of the trust that readers have built toward the publication transfers to the brand's message, even when readers know the content is paid.
Research by Kim and Gupta (2012) and subsequent scholars has documented this effect: readers who see a brand message in a trusted editorial context evaluate the brand more favorably than when they see the same message in an obviously advertising context. The transfer is partially mediated by disclosure — when readers are clearly reminded that content is paid, trust transfer is reduced. But the reduction is not complete, and many readers in real-world environments do not notice or process disclosure labels.
The trust transfer dynamic has particular relevance for health and financial misinformation: when misleading health claims appear in the context of well-designed, authoritative-looking content on a trusted platform, readers may be more likely to act on those claims than if they appeared in clearly identified advertising.
Section 10.5: Subscription Models and Paywalls — Alternative Economics and Their Limitations
The Economics of Subscription Journalism
The subscription model — in which readers pay directly for access to news content — represents an alternative to advertising-based economics that, in principle, aligns revenue with reader value rather than with advertiser preferences. A subscription model is not dependent on maximizing impressions; it depends on maintaining subscriber loyalty, which in principle rewards content quality, reliability, and reader satisfaction.
The subscription model is not new: newspapers have always charged subscription fees alongside advertising revenue. But the dominance of free, advertising-supported content in the early internet era nearly eliminated subscription economics for news, as readers became accustomed to free access and publishers feared that paywalls would eliminate traffic (and thus advertising revenue).
The major subscription revival in news began in earnest around 2016-2020, driven by several factors:
- The Trump effect: Politically engaged readers, motivated by concern about political developments, proved willing to pay for political news. The New York Times, Washington Post, and other major outlets saw unprecedented subscription growth following the 2016 election.
- The brand-safety exodus: High-profile brand safety incidents led major advertisers to avoid certain news topics (terrorism, gun violence), making advertising revenue less reliable for news publishers.
- The Facebook traffic collapse: Changes to Facebook's News Feed algorithm in 2018 reduced referral traffic to news publishers, making advertising-dependent publishers more vulnerable and subscription models more attractive.
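The economic contrast between the two models can be sketched with illustrative per-reader numbers (all figures invented): the advertising value of a typical casual reader is tiny, so only a small fraction of readers need to convert for subscriptions to match ad revenue.

```python
def annual_ad_revenue_per_reader(visits_per_year: int,
                                 pages_per_visit: float,
                                 cpm: float) -> float:
    """Advertising value of one reader over a year."""
    return visits_per_year * pages_per_visit / 1_000 * cpm

def breakeven_conversion_rate(monthly_price: float, visits_per_year: int,
                              pages_per_visit: float, cpm: float) -> float:
    """Fraction of the audience that must subscribe for subscription
    revenue to match what the whole audience would earn in ads."""
    ad_value = annual_ad_revenue_per_reader(visits_per_year,
                                            pages_per_visit, cpm)
    return ad_value / (monthly_price * 12)

# Hypothetical reader: 100 visits/year, 2 pages/visit, $8 CPM,
# compared against a $10/month subscription.
rate = breakeven_conversion_rate(10.0, 100, 2.0, 8.0)
print(f"break-even conversion: {rate:.1%}")
```

Under these assumptions a reader is worth only about $1.60 per year in advertising, so a conversion rate under 2% already beats the ad model, which helps explain why publishers with loyal audiences found paywalls attractive once ad revenue became unreliable.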
The Partial Solutions of Subscription Models
Subscription models offer genuine advantages for information quality:
- Aligned incentives: Publishers who depend on subscriber retention have incentives to build reader trust, which is damaged both by inaccuracy and by sensationalism that readers eventually perceive as manipulation.
- Independence from advertiser pressure: Subscription-funded publishers can cover topics (investigative journalism about powerful corporations, coverage of politically sensitive topics) that advertisers might prefer to avoid.
- Audience quality over quantity: Subscription publishers optimize for the satisfaction of their actual subscribers, not for the maximum number of impressions.
However, subscription models also have significant limitations:
- Equity and access: Paywalls restrict information access to those who can afford to pay. If high-quality journalism is primarily available to affluent subscribers while lower-income populations rely on free, advertising-supported (and outrage-optimized) content, subscription models may worsen information inequality.
- Niche audiences: Subscription journalism naturally segments audiences by their willingness to pay, which tends to correlate with education, income, and existing civic engagement. The people most likely to subscribe to quality journalism are those who already consume it; the most information-vulnerable audiences are least likely to pay.
- Scale limitations: Large-scale investigative journalism that informs broad public debates requires wide distribution, not just satisfied subscribers. Important stories paywalled behind subscription barriers may not reach the audiences that need them most.
- Quality is not guaranteed: The subscription model removes some advertising incentives for sensationalism but does not eliminate them. Subscriber retention can be built on partisan loyalty and outrage as easily as on accurate journalism — if subscribers are motivated primarily by political identity, outrage content may perform better than balanced reporting.
Section 10.6: The Dark Side of Monetization — Conspiracy Theories as Revenue Streams
Patreon, Substack, and Direct Audience Monetization
The emergence of creator-direct monetization platforms — Patreon (launched 2013), Substack (launched 2017), and similar services — has enabled content creators to build subscription or tip-based revenue directly from audiences without dependence on either advertising or traditional publishers. These platforms have been transformative for independent journalism, enabling quality reporters to build sustainable businesses without institutional backing.
But the same infrastructure that funds serious independent journalism also funds conspiracy theorists, misinformation operators, and extremist content creators. Patreon's and Substack's value propositions — that creators should be able to build direct relationships with their audiences without platform interference — explicitly resist the content moderation that has caused some creators to be removed from advertising-based platforms.
Several high-profile misinformation operators have built substantial revenue streams through these platforms:
- Conspiracy theorists removed from YouTube for policy violations who rebuilt their audiences and revenue on Substack or independent websites with Patreon funding.
- Anti-vaccine advocates whose advertising-funded content was demonetized on mainstream platforms but who found willing subscribers on direct-funding platforms.
- Extremist political commentators who explicitly marketed "mainstream censorship" as a reason for frustrated audiences to pay directly for their content.
The content moderation philosophy of Substack has been particularly contested. The platform's founders explicitly positioned Substack as a free-speech alternative to platforms with active content moderation policies, arguing that readers should judge content rather than platforms. Critics pointed out that this philosophy effectively subsidizes misinformation by providing creators with financial infrastructure and implicit credibility.
Super Chats and Live Stream Monetization
YouTube's "Super Chat" system — which allows viewers to pay to have their comments highlighted during live streams — has created a distinctive monetization mechanism with significant implications for misinformation. Popular live streamers who discuss political topics or conspiracy theories can generate substantial revenue from Super Chat donations from engaged audiences.
The Super Chat mechanism creates strong incentives for creators to maintain high audience engagement during streams — rewarding sensationalism, outrage, and community solidarity over analytical or balanced content. Streams that generate controversy and emotional intensity tend to attract more Super Chat donations than calm, reflective content.
Research on political live streaming has documented cases in which creators are incentivized by Super Chat donations to make more extreme claims, to validate conspiracy beliefs expressed by donating audience members, or to engage in hostile confrontations with opponents. The real-time financial feedback creates a direct link between audience outrage and creator revenue.
Merchandise as Conspiracy Revenue
A distinctive feature of several major misinformation operations — most prominently Alex Jones's InfoWars — is the use of merchandise and supplement sales as a revenue stream that is highly resistant to platform-based deplatforming. Unlike advertising revenue, which is controlled by third-party ad networks that can withdraw service, merchandise revenue is controlled directly by the content creator through their own e-commerce infrastructure.
InfoWars' business model explicitly integrates its media operation with its supplement sales: the media content creates an audience that is then sold dietary supplements, survivalist products, and branded merchandise at significant markups. The political content — conspiracy theories, fear-based reporting about societal collapse, distrust of mainstream institutions — is not purely a media product but a marketing vehicle for the merchandise operation. This tight integration between content and commerce makes the business model highly resilient: even when InfoWars was deplatformed from major social media in 2018, its merchandise revenue continued, sustaining the operation.
Section 10.7: Platform Incentives vs. Platform Responsibility — The Structural Tension
Can Advertising Platforms Prioritize Truth?
The most fundamental tension in the economics of digital information is the potential incompatibility between engagement optimization — which rewards outrage, emotion, and novelty — and accuracy — which often requires nuance, complexity, and delayed reward. This tension is structural, not merely a matter of corporate ethics or regulatory compliance.
An advertising-funded platform faces the following situation: content that generates more engagement produces more advertising revenue. The public-interest benefit of removing or downranking outrage content is diffuse and hard to measure; the revenue cost of doing so is direct and immediate. Therefore, even a platform leadership team that sincerely values information quality faces a powerful and ongoing structural pressure to prioritize engagement over quality.
This is not unique to social media. Commercial television networks have faced the same tension since the beginning of television. Local news operations that discovered that crime coverage generated higher ratings than governance coverage systematically shifted toward crime, regardless of its relevance to viewers' actual civic decision-making. The structural logic is identical.
What Platforms Have Tried
Major platforms have implemented several types of interventions intended to reduce outrage and misinformation without fundamentally changing their economic model:
Algorithmic tweaks: Facebook's 2018 News Feed changes, which prioritized "meaningful social interactions" over passive content consumption, were intended to reduce the dominance of outrage content. Research suggests the changes had mixed effects: they reduced overall news exposure but did not improve the ratio of accurate to inaccurate news.
Fact-checking partnerships: Facebook, Twitter, and YouTube partnered with independent fact-checking organizations to label disputed content. Research on the effectiveness of fact-check labels has found modest but real effects on sharing behavior. However, the scale of fact-checking resources is far smaller than the scale of misinformation production.
Reduced viral amplification: Twitter and YouTube have both implemented measures to reduce the algorithmic amplification of inflammatory content (Twitter's labeling of disputed election claims; YouTube's de-amplification of borderline content). The effectiveness of these measures is contested and difficult to evaluate independently.
Transparency reports: Major platforms began publishing transparency reports disclosing content removal statistics, advertising data, and government information requests. These reports have improved accountability but do not address the fundamental incentive structure.
The Limits of Platform Self-Regulation
Each of the interventions above shares a common limitation: they attempt to modify the effects of the engagement-based economic model without modifying the model itself. As long as the fundamental revenue equation depends on maximizing emotional engagement, interventions at the content level will face structural headwinds. Platform incentives will continually push back toward content that generates strong emotional reactions, because such content is more profitable.
This structural analysis suggests that addressing the business model of outrage requires more fundamental economic changes — to the advertising model itself, to the regulatory environment for online advertising, or to the revenue model of media — rather than content-level interventions alone.
Section 10.8: Alternative Economic Models for Information Ecosystems
Public Media
The strongest counterexample to advertising-driven outrage media is the public media model — news organizations funded primarily by public (government) grants rather than commercial advertising. The BBC (United Kingdom), NPR and PBS (United States), CBC (Canada), ABC (Australia), and similar organizations operate under mandates that explicitly prioritize public interest over commercial performance.
Public media organizations are not immune to the pressures of engagement and attention: they compete for audiences with commercial media and face audience expectations shaped by commercial media norms. But the absence of direct advertising dependence removes the most powerful structural incentive for outrage content.
Research on public media consistently finds that public media outlets produce more internationally focused, more policy-relevant, and less sensationalized news than their commercial counterparts. Comparative research across countries has found that countries with stronger public broadcasting traditions have more informed electorates, as measured by cross-national comparative surveys.
The public media model faces its own structural challenges: political influence on funding creates pressures on editorial independence; public media organizations in some countries have been successfully defunded or captured by political actors; and the growing portion of media consumption that occurs on commercial platforms limits the reach of public media content even when that content is high quality.
Cooperative and Reader-Funded Journalism
A growing ecosystem of nonprofit, cooperative, and reader-funded news organizations represents an alternative to both advertising-funded and subscription-funded models. Organizations like The Guardian (owned by the Scott Trust, which exists to safeguard its editorial independence), ProPublica (nonprofit, foundation-funded), The Correspondent (member-funded), and hundreds of local news nonprofits operate with explicit public interest mandates and without dependence on advertising revenue.
These organizations face scale constraints: without advertising revenue or major foundation funding, they typically cannot compete in scale with commercial media. But they have demonstrated that high-quality, public-interest journalism is economically viable at smaller scales, and they provide important models for what information quality-focused economics can look like.
Advertising Reform: The Ethical Advertising Approach
Some researchers and advocates have proposed reforming the digital advertising ecosystem rather than replacing it. Specific proposals include:
- Contextual advertising: Replacing behavioral targeting (advertising based on user profiles and tracking) with contextual advertising (advertising based on content context). Contextual advertising creates less incentive for platforms to maximize emotional engagement because the value of an impression is tied to content context rather than user emotional state.
- Ad-free subscriptions: Major platforms offering ad-free subscription tiers (YouTube Premium, Twitter Blue) create a revenue stream not dependent on maximizing advertising impressions.
- Revenue sharing with quality news: Some proposals would require platforms to share a portion of advertising revenue with news publishers, recognizing that news content drives platform usage.
- Algorithmic transparency requirements: Regulatory requirements to disclose algorithmic ranking criteria would enable independent researchers and regulators to assess whether platform algorithms are systematically amplifying misinformation.
Callout Box: The Brady et al. Moral Contagion Study — A Closer Look
William Brady and colleagues' finding that moral-emotional language increases retweet rates by approximately 20% per moral-emotional word has been influential in discussions of social media misinformation. But several nuances in this finding deserve attention:
The finding is about language, not content: The study found that moral-emotional language — the specific words used in tweets — predicts retweet rates. It does not directly show that false content spreads more than true content because of its moral-emotional language. The relationship between moral-emotional language and accuracy is a separate empirical question.
The effect is within partisan networks: Brady et al. found that the moral contagion effect was strongest within politically homogeneous networks. Moral-emotional language that resonates within one partisan community may fall flat (or produce backfire) in a cross-partisan context.
Context modifies the effect: Subsequent research by Brady and colleagues has found that the moral contagion effect is moderated by context. In competitive political environments, moral-emotional language increases engagement; in deliberative contexts with stronger norms of civil discourse, the effect is weaker.
Platform design matters: The Brady et al. research was conducted on Twitter, where retweet is the primary sharing mechanism. Different platforms with different engagement architectures (Facebook reactions, YouTube likes, Reddit upvotes) may show different magnitudes of the moral contagion effect.
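The compounding nature of the moral contagion effect can be illustrated with a simple sketch. This is not Brady et al.'s actual statistical model (they used a regression on observed tweets); it is a hypothetical multiplicative model assuming each moral-emotional word multiplies the expected retweet count by roughly 1.20, consistent with the approximately 20%-per-word figure cited above.

```python
def expected_retweets(base_rate: float, moral_emotional_words: int,
                      effect_per_word: float = 1.20) -> float:
    """Expected retweets under a simple multiplicative contagion model.

    base_rate: expected retweets for an otherwise-equivalent neutral tweet.
    effect_per_word: hypothetical per-word multiplier (~1.20 per Brady et al.).
    """
    return base_rate * (effect_per_word ** moral_emotional_words)

# Hypothetical comparison: a neutral tweet vs. one with 3 moral-emotional words
neutral = expected_retweets(100, 0)   # 100.0
charged = expected_retweets(100, 3)   # 100 * 1.2**3, roughly 172.8
```

The multiplicative assumption is the key simplification: under it, three moral-emotional words do not add 60% but compound to roughly 73% more expected retweets, which is why small per-word effects matter at scale.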
Callout Box: Why Advertisers Fund Misinformation (Without Intending To)
The majority of advertising on misinformation websites is placed not by advertisers who want to fund misinformation but by automated advertising systems (programmatic advertising) that place ads based on audience characteristics without reviewing the content context. This creates a structural problem: brand-name advertisers whose products appear alongside false or extremist content typically had no knowledge that this was occurring and would prefer it did not.
Periodic brand safety crises — when major brands discover their advertisements appearing on extremist or conspiracy websites — generate waves of advertiser concern and platform promises of improvement. But the programmatic advertising ecosystem is so complex, so rapidly changing, and so distributed that effective brand safety management is genuinely difficult. Estimated CPM rates for most misinformation websites are low precisely because advertisers prefer not to advertise there, and programmatic systems partially reflect this preference. But the volume of misinformation content is large enough that even low CPM rates generate significant total revenue.
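The low-CPM, high-volume arithmetic described above is easy to make concrete. The sketch below uses entirely hypothetical figures (the $0.50 and $15 CPMs and the impression counts are illustrative assumptions, not measured industry data) to show why low rates alone do not starve misinformation sites of revenue.

```python
def ad_revenue(impressions: int, cpm: float) -> float:
    """Revenue in dollars: CPM is the price paid per 1,000 impressions."""
    return impressions / 1000 * cpm

# Hypothetical low-quality site: $0.50 CPM but 20 million monthly impressions
low_quality = ad_revenue(20_000_000, 0.50)   # $10,000 per month
# Hypothetical premium publisher: $15 CPM but far fewer impressions
premium = ad_revenue(500_000, 15.0)          # $7,500 per month
```

Under these assumed numbers, the misinformation site out-earns the premium publisher despite a CPM thirty times lower, because programmatic systems price the impression, not the cost of producing the content behind it.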
Key Terms
Attention Economy: An economic framework in which human attention is the scarce resource that media organizations and digital platforms compete for and sell to advertisers.
CPM (Cost Per Mille): The price an advertiser pays for 1,000 impressions of their advertisement; the primary pricing metric in digital advertising.
Engagement Metrics: Quantitative measures of audience interaction with content, including likes, shares, comments, time on page, and click-through rates, used by platforms to rank content and by advertisers to assess audience quality.
Outrage Machine: Aviv Ovadya's term for the self-reinforcing cycle by which outrage-generating content receives algorithmic amplification, generating advertising revenue that incentivizes further outrage content.
Moral Contagion: The empirically documented tendency for social media content containing moral-emotional language to spread more rapidly than equivalent content without such language (Brady et al., 2017).
Advertising Arbitrage: In the misinformation context, the practice of producing low-cost false or misleading content that generates advertising revenue comparable to high-cost legitimate journalism, capturing the profit difference.
Native Advertising: Paid advertising content designed to resemble the editorial content of the publication in which it appears.
Trust Transfer: The phenomenon by which reader trust in a publication or platform is partially transferred to advertising content appearing in that publication's context.
Programmatic Advertising: Automated, algorithmic buying and selling of digital advertising inventory, in which ads are placed based on audience data rather than editorial review.
Subscription Model: A media economic model in which readers pay directly for content access, theoretically aligning revenue with reader satisfaction rather than with advertiser preferences.
Discussion Questions
- The chapter argues that the outrage-optimized information environment is primarily an emergent property of rational economic behavior within a particular incentive structure — not a product of malice or conspiracy. Do you find this framing convincing? What does it imply for how we should assign moral responsibility for the harms of the outrage economy?
- The Macedonian fake news operators were motivated entirely by profit, with no ideological commitment to the content they produced. Does the absence of ideological motivation change how you evaluate their actions? Does it change how you think about regulating similar behavior?
- Subscription models are often presented as an alternative to advertising-based media. But if high-quality subscription journalism is primarily accessible to affluent, highly educated consumers, does this improve or worsen the overall quality of the public information environment?
- Should advertising networks like Google AdSense and Meta's Audience Network be held responsible for the content on which they place advertisements? What are the practical and legal challenges of implementing advertiser responsibility for content quality?
- Brady et al.'s moral contagion research shows that outrage language spreads more effectively on social media. Does this mean social media users are being manipulated, or are they making rational choices to engage with content that resonates emotionally? What are the implications of each interpretation?
- Public media organizations are cited as an alternative to advertising-funded outrage media. But public media is funded by governments, which creates potential for political influence over content. How should the independence of public media be protected? Is government-funded journalism compatible with a free press?
- InfoWars' integration of media content with merchandise and supplement sales means that its business model is partially immune to platform deplatforming. What does this suggest about the limits of platform deplatforming as a strategy for reducing misinformation? What complementary strategies would address the economic infrastructure more directly?
- The chapter mentions "ethical advertising" proposals including contextual advertising (replacing behavioral targeting). If contextual advertising reduced platform revenues by 20-30% (a realistic estimate), would this trade-off be justified by its effects on misinformation? Who should make this decision?
Chapter Summary
The information quality crisis of the contemporary media environment cannot be understood without understanding its economics. The advertising-based model of digital media, in which revenue depends on maximizing emotional engagement, creates powerful structural incentives for outrage content and systematic disadvantages for accurate, nuanced reporting. This is not primarily a product of malice or conspiracy; it is an emergent consequence of rational economic behavior within an incentive structure that systematically rewards emotional arousal over informational value.
The outrage cycle, described by Ovadya and empirically grounded in Brady et al.'s moral contagion research, operates at the level of the system: even actors who want to produce high-quality information face structural pressure to produce high-engagement content if they depend on advertising revenue. Fake news operations like the Macedonian websites represent the logical extreme of this dynamic: low-cost, high-engagement content that captures advertising revenue through an arbitrage of information quality.
Alternative economic models — subscription journalism, public media, cooperative news organizations, reader-funded journalism — offer partial solutions, each with their own limitations. Subscription models align incentives better than advertising but restrict access and can sustain partisan echo chambers. Public media requires political independence that is difficult to guarantee. Alternative monetization platforms like Patreon and Substack can fund both high-quality independent journalism and high-quality conspiracy theory operations, depending on what audiences want to pay for.
The most fundamental implication of this analysis is that addressing misinformation and outrage media requires economic interventions — changes to the advertising ecosystem, regulatory frameworks, funding models — not just content moderation. Fact-checking, media literacy education, and algorithmic tweaks can reduce the harms of the outrage economy at the margins, but they cannot change the structural incentives that produce it. Structural problems require structural solutions.
References
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313–7318.
Federal Trade Commission. (2019). Disclosures 101 for Social Media Influencers. FTC.gov.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books.
Lazer, D. M. J., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.
Ovadya, A. (2019). What's Credible on the Internet? We're Not Sure Anymore. (Policy paper). Center for Social Media Responsibility.
Reuters Institute for the Study of Journalism. (Annual). Digital News Report. University of Oxford.
Silverman, C., & Alexander, L. (2016, November 3). How teens in the Balkans are duping Trump supporters with fake news. BuzzFeed News.
Smith, K. B., Alford, J. R., Hibbing, J. R., Martin, N. G., & Hatemi, P. K. (2017). Intuitive ethics and political orientations: Testing moral foundations as a theory of political ideology. American Journal of Political Science, 61(2), 424–437.
Smythe, D. (1977). Communications: Blindspot of Western Marxism. Canadian Journal of Political and Social Theory, 1(3), 1–27.
Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 44(2), 157–173.
Vox Media/New York Media/BuzzFeed. (2019). The State of Digital Media Advertising (Industry reports, various).