Appendix C: Historical Timeline — From Penny Press to Algorithmic Feeds (1833–2025)

Preface: Nothing Is New Under the Algorithm

One of the most persistent errors in discussions of social media is the assumption that what platforms do to human attention is historically unprecedented. It is not. The mechanisms differ — the speed, scale, and personalization are genuinely new — but the underlying project of capturing and monetizing human attention through psychological manipulation has a long and thoroughly documented history.

This timeline traces that history. It begins not with Mark Zuckerberg in a Harvard dorm room but with Benjamin Day selling newspapers for a penny on the streets of New York in 1833. The story it tells is one of continuous iteration: each new medium discovering the same psychological levers (novelty, outrage, social belonging, variable reward), developing more precise tools to pull them, and then being gradually joined or displaced by the next medium that pulls them better.

Reading this history does not excuse the harms of contemporary social media platforms. But it does clarify what we are actually fighting: not a set of specific bad actors but a structural logic that has been with us for nearly two centuries — the logic of attention as commodity and psychology as tool.


1833–1900: The Penny Press Era

1833 — The First Attention Economy

September 3, 1833: Benjamin Day launches the New York Sun, the first "penny press" newspaper, priced at one cent — far below the prevailing six-cent price of subscription papers aimed at merchants and professionals. The innovation was financial: Day would give the paper away cheaply, build mass readership, and charge advertisers for access to that audience.

This is the founding moment of the attention economy. The logic is identical to the logic of contemporary social media: the user is not the customer; the user is the product. The Sun reached 8,000 daily readers within six months, surpassing every existing New York paper. It did so by writing about crime, scandal, and local human interest — content optimized for emotional engagement rather than civic instruction.

Day did not call it engagement optimization. But he understood it instinctively: give people what arouses them, distribute it cheaply, sell the resulting attention to advertisers.

1844 — Real-Time Everything

May 24, 1844: Samuel Morse sends the first telegraph message between Washington and Baltimore: "What hath God wrought." Within a decade, telegraph networks would connect major American cities, and within two decades, a transatlantic cable would link the US and Europe.

The telegraph created the first real-time information network — and immediately created its first pathologies. News organizations scrambled to acquire and publish telegraphed dispatches before competitors. The premium on speed over depth was established. Content was compressed to fit telegraph economics (transmission cost per word), producing a terseness that prized simple declarative claims over complex analysis.

This is the ancestral environment of the tweet and the push notification: information valued for velocity rather than accuracy, distributed to maximize reach, consumed before reflection.

1880s–1890s — The Circulation Wars

The first great media engagement arms race was the competition between Joseph Pulitzer's New York World and William Randolph Hearst's New York Journal for newspaper circulation in the 1880s and 1890s. The tools they used are recognizable today:

  • Sensationalized crime coverage designed to provoke moral outrage and visceral fascination
  • Crusading campaigns (anti-corruption, anti-poverty) that made readers feel part of a righteous movement
  • Illustrations and eventually photographs — visual emotional intensification
  • Stunts and self-generated news events (Nellie Bly's round-the-world trip, instigated by The World)
  • Coverage of the Spanish-American War framed as a heroic crusade, with both papers accused of manufacturing the conflict to drive sales

Hearst is said to have told an illustrator: "You furnish the pictures; I'll furnish the war." The quote is probably apocryphal — but the logic it captures is real. This is the ur-text of engagement optimization: if the world does not produce content outrageous enough to maximize attention, manufacture some.

"Yellow journalism" became the contemporary term of criticism. The pattern — outrage, novelty, and tribalism deployed in service of circulation — would recur in every subsequent media revolution.

1890s — Patent Medicine and the First Regulatory Vacuum

Before modern pharmaceutical regulation, the patent medicine industry spent lavishly on newspaper advertising with claims that were freely fabricated. Preparations "guaranteed" to cure cancer, restore hair, treat addiction, and increase sexual vitality proliferated in the same newspapers reporting on Hearst and Pulitzer's circulation wars.

The advertising itself was psychologically sophisticated: testimonials (fake), before-and-after narratives (fabricated), authority claims ("doctors recommend"), and urgency language ("limited supply"). The newspapers, dependent on advertising revenue, had every financial incentive to print the claims uncritically.

This first systematic regulatory vacuum in attention commerce is directly ancestral to the absence of oversight in social media advertising. The remedy — the Pure Food and Drug Act of 1906, the Federal Trade Commission in 1914 — took roughly two decades of documented harm to produce. We are currently at the equivalent point in the social media regulatory timeline.


1900–1950: Radio and Mass Advertising

1920 — Broadcasting Arrives

November 2, 1920: KDKA Pittsburgh broadcasts live coverage of the Harding-Cox presidential election results, marking the first commercial radio broadcast in the United States. Within five years, over 600 radio stations are operating nationally.

Radio represented a qualitative leap in attention capture: it was passive, entered the home, and required no literacy. For the first time, it became possible to reach millions of people simultaneously with an identical experience — a single voice, emotion, and framing shared in real time across a continental audience. The implications for persuasion at scale were immediately understood by politicians, advertisers, and propagandists alike.

1922 — The First Broadcast Advertisement

August 28, 1922: WEAF in New York airs the first radio advertisement — a 10-minute paid spot for the Hawthorne Court apartment complex in Jackson Heights, Queens. The price was $100. The advertiser-supported broadcast model, established 89 years earlier by the penny press, migrated seamlessly to the new medium.

The Federal Radio Commission (later the FCC) would eventually regulate advertising content on broadcast radio, but not before the commercial model was firmly established as the dominant structure. Every subsequent medium — broadcast television, cable, the open web, social media — would inherit this architecture by default.

1930s — Serial Narrative Hooks: The Invention of the Soap Opera

The soap opera is the first engineered behavioral hook in broadcast media. Procter & Gamble developed the format in the early 1930s: a daytime serial drama, broadcast on radio, written specifically to create narrative dependency (the cliffhanger), to reach a target demographic (housewives), and funded by soap manufacturers.

The structural logic of the soap opera — unresolved narrative, emotional investment in characters, daily scheduling to build habitual consumption — is identical to the logic of the algorithmic feed. Both are designed to be impossible to "finish." Both exploit the psychological principle (known in cognitive science as the Zeigarnik Effect) that unresolved situations create persistent cognitive tension that drives continued engagement.

The word "cliffhanger" entered the popular lexicon in this era. The algorithm's equivalent — the "you might also like" infinite recommendation queue — is a cliffhanger that never ends.

1938 — War of the Worlds: A Demonstration of Media's Power

October 30, 1938: Orson Welles broadcasts his radio adaptation of H.G. Wells's War of the Worlds in a simulated news broadcast format. Though the program was clearly labeled as fiction, a subset of listeners who tuned in late believed it described an actual Martian invasion. Reported panic — possibly exaggerated by subsequent newspapers eager to diminish radio's competition — illuminated something real: mass media could manufacture mass belief.

The episode spawned two decades of research in communication and propaganda, including Hadley Cantril's 1940 study of the broadcast's effects. This research established that psychological suggestibility in media consumption was not a marginal phenomenon but a structural feature of how people processed broadcast information — a finding that would eventually underpin the entire field of persuasion science.

1940s — The Scientific Foundation: Skinner and Variable Reinforcement

B.F. Skinner's operant conditioning research in the 1940s and 1950s provided the theoretical substrate for what would later become the behavioral design of digital platforms. His key finding for this history: variable ratio reinforcement schedules — reward delivered unpredictably, after an unpredictable number of responses — produce the most persistent and extinction-resistant behavior in animals and humans.

The slot machine was already an empirical demonstration of this principle before Skinner formalized it. Skinner's contribution was to give it a scientific vocabulary and experimental foundation. The variable ratio schedule would prove applicable to every pull-to-refresh mechanism, every social notification, every "likes received" counter — any system where the reward for a simple repeated action is delivered unpredictably.
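The schedule Skinner described can be sketched in a few lines of Python. This is a toy illustration under stated assumptions (each response rewarded with probability 1/mean ratio), not a model of any real platform:

```python
import random

def variable_ratio_rewards(n_responses, mean_ratio, seed=0):
    """Variable-ratio schedule: each response is rewarded with
    probability 1/mean_ratio, so rewards arrive after an unpredictable
    number of responses (averaging mean_ratio between rewards)."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(n_responses)]

def fixed_ratio_rewards(n_responses, ratio):
    """Fixed-ratio schedule: reward arrives after every ratio-th response."""
    return [(i + 1) % ratio == 0 for i in range(n_responses)]

# Both schedules deliver roughly the same number of rewards over time.
# What differs is predictability: on the fixed schedule the gap between
# rewards is constant; on the variable schedule the responder can never
# know whether the next response will pay off -- the property Skinner
# found produces the most extinction-resistant behavior.
fr = fixed_ratio_rewards(10_000, ratio=10)
vr = variable_ratio_rewards(10_000, mean_ratio=10)
print(sum(fr))  # exactly 1000 rewards, evenly spaced
print(sum(vr))  # close to 1000, but unpredictably spaced
```

The equivalence in total reward is the point: the addictive property lies entirely in the unpredictability of the spacing, not in the amount of reward delivered.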


1950–1980: Television and Consumer Psychology

1950 — The Living Room Invasion

By 1950, television sets were in roughly 9% of US homes; by 1955, that figure was 65%; by 1960, it reached 90%. No prior technology — not radio, not print — had spread this fast.

Television's combination of motion, sound, and visual imagery made it the most compelling attentional environment yet created. Research on television's attentional effects began almost immediately: children's attention and behavior, news consumption and political formation, the construction of consumer desire. The television set reorganized the American home — furniture was reoriented to face it, meals were eaten in front of it, bedtimes shifted around it.

The basic behavioral replacement effect documented decades later in adolescent social media use — digital media displacing sleep, social interaction, and physical activity — was first documented in research on television.

1960 — The First Televised Presidential Debate

September 26, 1960: John F. Kennedy and Richard Nixon face each other in the first televised US presidential debate. Viewers who watched on television overwhelmingly perceived Kennedy as having won; radio listeners gave Nixon the advantage. Political appearance, not only argument, became determinative in a televised age.

This moment marks the formalization of "optics" as political reality — the point at which media-managed perception became central to political success. The debate's lessons were internalized by every political campaign thereafter: the visual and emotional presentation of information could decide outcomes that the substance of argument alone could not.

1960s — Persuasion Science Formalizes

The advertising industry of the 1960s systematically imported insights from social psychology into commercial persuasion. The AIDA model (Attention, Interest, Desire, Action), a framework dating back to the 1890s, was pressed into systematic service to map the psychological stages of a purchase decision. Rosser Reeves developed the Unique Selling Proposition, emphasizing simple, repeated emotional claims over complex product information. David Ogilvy built a practice around testing which specific words and images most effectively moved consumer behavior.

This was the birth of what would later be called behavioral design — the application of psychological research to the engineering of specific human behaviors at scale. The mad men of Madison Avenue were the intellectual predecessors of the behavioral economists and product designers who would later build the algorithms.

1971 — The Scarcity of Attention

Herbert Simon, economist and Nobel laureate, coins what would become the foundational concept of the attention economy in a 1971 essay: "A wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it."

Simon's formulation identified the coming structural reality decades before it arrived: when information is abundant, attention is scarce. When attention is scarce, whoever commands it has economic and political power. The implication — that attention would become the primary commodity in a post-scarcity information economy — was largely ignored at the time. By 2010, it was the operating principle of the largest companies in the world.

1976 — The Electronic Slot Machine

The first slot machines using electronic random number generators (replacing mechanical reels) appeared in the mid-1970s, enabling fully programmable variable ratio reinforcement schedules without the mechanical constraints of physical reels. Casino operators could tune the reward frequency and magnitude of individual machines to optimize hold time (the length of time a gambler remains at the machine).

This is the direct technological ancestor of the social media notification system. The feedback loop — insert token, pull lever, wait for variable reward — is structurally identical to: open app, scroll, wait for like or comment. The electronic slot machine was the first consumer technology to programmably implement Skinner's variable ratio schedule at scale, for profit.

1979 — Personal Media, Personal Attention

The Sony Walkman introduced in July 1979 represents the first personal media device: audio content carried in the pocket, consumed through headphones, separating an individual's attentional environment from the shared environment of everyone else around them. The concept of immersive personal media — attention captured by a portable device rather than a shared public one — was invented here.

The Walkman's cultural significance was immediate and controversial. Critics described young people "tuning out" of their social environments in familiar terms. The isolation critique preceded the smartphone critique by nearly three decades.


1980–2000: The Internet Era Begins

1984 — The GUI and the Mass Market Computer

The Apple Macintosh (January 1984) demonstrated that consumer computing did not require technical expertise — a graphical user interface could make the machine accessible to anyone. The decision to prioritize interface design (visual intuitiveness, mouse navigation, desktop metaphor) over technical capability was a fundamentally psychological one: the machine that would win would be the machine whose interface best matched human cognitive and motor habits.

Interface design as psychological engineering was established here as a commercial imperative. The question of what makes an interface "easy to use" turned out to be inseparable from the question of what makes it compelling to continue using.

1994 — The First Banner Advertisement

October 27, 1994: HotWired magazine runs the first banner advertisement, for AT&T, alongside content on its website. The click-through rate is approximately 44%. Within two years, click-through rates had fallen to under 1%, where they have largely remained since.

This collapse in click-through rates created the fundamental pressure that drove the transformation of web advertising from simple display to sophisticated behavioral targeting: since people would not click on general ads, advertisers needed to show specific people specific ads at specific moments. The infrastructure of user surveillance, behavioral tracking, and predictive targeting was built to solve the banner ad's failure.

1995–1998 — E-Commerce and the Tracking Infrastructure

The mid-to-late 1990s saw the construction of the behavioral tracking infrastructure that would underpin the attention economy:

  • Amazon (1994–1995): pioneered collaborative filtering ("customers who bought this also bought") — the first algorithmic recommendation system deployed at consumer scale.
  • DoubleClick (1996): built the first cross-site advertising tracking network using cookies, enabling behavioral profiles across multiple websites.
  • Google (1998): PageRank applied algorithmic curation to information retrieval — the first algorithm that determined what information billions of people would see based on computed relevance scores.
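The collaborative-filtering idea Amazon pioneered can be sketched as simple co-occurrence counting: items frequently bought together are recommended together. This is an illustrative toy under that assumption, not Amazon's production algorithm:

```python
from collections import defaultdict
from itertools import combinations

def co_occurrence(baskets):
    """Count how often each pair of items appears in the same basket."""
    counts = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(set(basket), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def also_bought(counts, item, k=3):
    """'Customers who bought this also bought': the k items most
    frequently purchased alongside the target item."""
    ranked = sorted(counts[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:k]]

# Hypothetical purchase histories for illustration.
baskets = [
    ["book", "lamp"], ["book", "lamp", "pen"],
    ["book", "pen"], ["lamp", "pen"], ["book", "lamp"],
]
print(also_bought(co_occurrence(baskets), "book"))  # → ['lamp', 'pen']
```

The key property — recommendations derived from aggregate behavior rather than editorial judgment — is what made the technique the seed of all later algorithmic curation.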

The data collection architecture of the modern surveillance economy was assembled in these years, largely without public awareness or regulatory oversight, in the regulatory vacuum created by a political consensus that "the internet" required freedom from interference to grow.

1997–1999 — Social Networks Appear

SixDegrees.com (1997) is widely credited as the first recognizable social networking site, allowing users to create profiles and list connections to other users. It attracted over a million users before closing in 2000. Blogger (1999) enabled user-generated content at scale, establishing the blog as a publishing format and beginning the transformation of internet users from consumers to producers of content.

These early experiments established the key behavioral patterns that would define social media: identity performance (the profile), social graph construction (the friends list), and public content creation (the post). They also established a crucial business-model problem: neither SixDegrees nor Blogger achieved sustainable revenue before social media converged on the advertising model.


2000–2010: The Social Media Explosion

2003–2004 — The First Wave: Friendster, MySpace, Facebook

Friendster (2003) was the first social network to achieve significant scale — three million users within months — before being overtaken by its own success as servers failed under demand. MySpace (2003) added music, customizable profiles, and eventually achieved 100 million users. Facebook (2004) launched at Harvard, then expanded to other colleges, then opened to all users in 2006, offering a cleaner, less customizable design that MySpace users initially found unappealing.

Facebook's initial design insight was social specificity: real names, real institutions, real people you actually knew. This distinction — Facebook as an extension of actual social networks rather than an anonymous public forum — proved decisive. The platform that most faithfully mapped onto real social belonging would outcompete those that merely simulated it.

2005–2006 — Video, Openness, and Mobile Beginnings

YouTube launched in 2005, acquired by Google in 2006 for $1.65 billion, establishing user-generated video as a mass medium. Twitter launched in 2006, introducing the public short-form broadcast — the status update as conversation — and the asymmetric follow model (you could follow anyone without their permission) that distinguished it from the symmetric friendship model of Facebook.

Facebook's opening to the general public (2006) and the launch of the News Feed (also 2006) — a real-time stream of friends' activity — were together Facebook's most consequential product decisions and the source of its first major controversy. Users, accustomed to static profiles, were alarmed to find their every action now broadcast to their friends. A student-organized Facebook group protesting the News Feed attracted 700,000 members in two days.

Facebook did not remove the News Feed. It added privacy controls. The precedent was set: user discomfort with platform surveillance and broadcast would be managed with optics, not design change.

2007 — The Device That Changed Everything

January 9, 2007: Steve Jobs announces the iPhone. Within three years, the combination of smartphone and mobile internet access would transform social media from a desktop activity into an always-present, pocket-carried experience.

The significance of the iPhone for social media's psychological impact cannot be overstated. The constraints of desktop computing — sitting at a specific physical location, during specific time blocks — were the last structural brake on social media's attentional demands. Mobile eliminated them. The phone in the pocket meant the platform was always present, always beckoning, accessible in every idle moment, at every meal, in every social setting.

2009 — The Like Button and the Retweet

Two product decisions made in 2009 have had effects on human psychology and political discourse that their creators could not have fully anticipated.

Facebook's Like button (February 2009): The quantification of social approval — a single click that delivers a visible, countable signal of others' positive regard — created a direct operant conditioning loop for content creation. Posts that receive more likes teach posters to create more like-attracting content. The Like made social validation legible, gamified, and addictive in ways that verbal approval never was.

Twitter's native Retweet (November 2009): The ability to broadcast someone else's content to your entire following with a single click created the viral propagation mechanism that would define Twitter's subsequent influence on political discourse. The Retweet made emotional and outrage content spread faster than analytical content, for structural reasons that have been extensively documented.

Twitter co-founder Evan Williams would later say publicly: "We observed this behavior [harassment and hot-take culture emerging] and we said, 'Hm, this is interesting.' And we added a 'Retweet' button... If I had a time machine, I would go back and not offer the Retweet as a one-click button."


2010–2020: Algorithmic Optimization

2011 — The Algorithm Takes Control

September 2011: Facebook redesigns the News Feed around algorithmic "top stories" chosen by its EdgeRank algorithm (publicly described in 2010, later replaced by more sophisticated machine-learned successors), filtering the feed to show users content predicted to maximize their engagement rather than simply displaying all updates chronologically.

The algorithmic feed is the pivotal architectural decision of the social media era. Before it, the News Feed was essentially a transparent record of friends' activity. After it, what users saw was determined by a machine-learned optimization process targeting engagement — without users' knowledge, without their consent, and with incentives (advertising revenue) that had nothing to do with their well-being.

The shift from chronological to algorithmic feed transformed social media platforms from communication tools into attention optimization engines.
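The publicly described EdgeRank formula — each story scored by summing, over its interactions, user affinity × edge weight × time decay — can be sketched as follows. All the specific weights and numbers here are invented for illustration; Facebook never published the real values:

```python
import math

# Edge-type weights are hypothetical: comments were reportedly weighted
# more heavily than likes, but exact values were never disclosed.
EDGE_WEIGHTS = {"comment": 4.0, "like": 1.0, "click": 0.5}

def edgerank(edges, decay_rate=0.1):
    """Score a story as the sum over its interactions ('edges') of
    affinity x edge weight x exponential time decay."""
    score = 0.0
    for edge in edges:
        decay = math.exp(-decay_rate * edge["age_hours"])
        score += edge["affinity"] * EDGE_WEIGHTS[edge["type"]] * decay
    return score

stories = {
    "vacation photo": [{"type": "comment", "affinity": 0.9, "age_hours": 2},
                       {"type": "like", "affinity": 0.4, "age_hours": 1}],
    "news article":   [{"type": "click", "affinity": 0.2, "age_hours": 30}],
}
# Rank the feed by predicted engagement rather than chronology.
feed = sorted(stories, key=lambda s: edgerank(stories[s]), reverse=True)
print(feed)  # the recently-commented, high-affinity story ranks first
```

The structural point survives the toy numbers: once ranking replaces chronology, what a user sees is whatever the scoring function rewards.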

2012 — The Emotional Contagion Experiment

In January 2012, Facebook researchers manipulated the News Feed content of approximately 689,000 users to be either more emotionally positive or more emotionally negative, without informing them. The resulting study — published in PNAS in 2014 — demonstrated that algorithmic manipulation of content could alter users' emotional states at scale without direct human-to-human interaction.

The publication of the study in 2014 caused an immediate public backlash. The response revealed an uncomfortable truth: Facebook had been running behavioral experiments on its users without consent for years, as a normal part of product development. The ethical controversy was substantial. The regulatory consequences were minimal.

2012–2013 — The Acquisition Era

April 2012: Facebook acquires Instagram for $1 billion. The acquisition prevents a competitor from growing and incorporates Instagram's photo-sharing and visual social comparison mechanics into Facebook's portfolio.

November 2013: Snapchat, launched in 2011, rejects a $3 billion acquisition offer from Facebook. Its disappearing-messages format had introduced a new psychological mechanic — impermanence — that created different engagement patterns from permanent-record platforms.

2014–2016 — Surveillance Profiling Revealed

Reporting in the mid-2010s revealed that Facebook maintained approximately 98 distinct data categories per user for advertising targeting, including politically sensitive categories (political affiliation, ethnicity, religious beliefs), behavioral inferences (recent travel, life events, purchase behavior), and psychographic inferences drawn from behavior patterns. Users had no meaningful visibility into these profiles and limited recourse.

The scale of behavioral profiling — combining on-platform behavior with off-platform data purchased from data brokers — was qualitatively different from what had existed in any prior advertising medium. This was not demographic targeting (showing ads to people in a particular zip code). It was individual psychological profiling deployed to predict and manipulate specific behaviors.

2016 — The Algorithm Meets Politics

2016 marks the convergence of several forces: TikTok (as Douyin) launches in China; Cambridge Analytica builds its voter micro-targeting operation using Facebook data; the US presidential election becomes the focal point for concerns about social media's political manipulation effects; and YouTube's recommendation algorithm comes under intensifying scrutiny for steering viewers toward progressively more extreme content.

The combination of algorithmic amplification of outrage, micro-targeted political advertising, coordinated inauthentic behavior by state actors, and the structural design of social platforms for engagement over accuracy produced an information environment that many observers concluded had materially affected electoral outcomes. The debate over the size of that effect is still active; the fact of the interference is documented.

2018 — The Cambridge Analytica Reckoning

March 2018: The Guardian and New York Times simultaneously publish revelations that Cambridge Analytica, a political data firm, had harvested the personal data of approximately 87 million Facebook users through a seemingly academic personality quiz app, using it to build psychological profiles for political micro-targeting — and that Facebook had known about the breach for two years.

The scandal represented the first moment of genuine mass public awareness of the scale of behavioral surveillance underlying social media. Facebook's stock fell sharply, shedding tens of billions of dollars in market value within days. Mark Zuckerberg testified before Congress. The EU's General Data Protection Regulation came into force two months later.

The scandal also revealed the gap between regulatory capacity and platform complexity: most Senators' questions revealed a fundamental misunderstanding of how the platforms worked. The regulators were decades behind the technology they were nominally overseeing.

2018 — "Meaningful Interactions" as Cover for Outrage Amplification

In January 2018, Facebook announced an algorithmic change described as prioritizing "meaningful social interactions" — content from friends and family over publishers and news. Internal documents later revealed by whistleblower Frances Haugen showed that this change, intended to respond to criticism of the platform's role in the 2016 election, had actually increased the algorithmic weight given to content that generated "angry" reactions.

The result was documented internally: content provoking anger and outrage received a five-times multiplier in engagement ranking — meaning Facebook's own algorithm systematically elevated emotionally negative, divisive content because it generated more reactions. The company's own researchers documented the effect and, according to internal communications, proposals to address it were overridden for fear of reducing overall engagement metrics.
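The reported five-to-one weighting is easy to make concrete. In this sketch, only the like-versus-reaction ratio reflects the internal documents; the post contents and counts are hypothetical, and the real ranking system involved many more signals:

```python
# Reported weighting from the Haugen documents: emoji reactions
# (including "angry") counted five times a like in the ranking score.
WEIGHTS = {"like": 1, "reaction": 5}

def ranking_score(post):
    """Toy engagement score: weighted sum of interaction counts."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

calm_post    = {"like": 100, "reaction": 5}   # well-liked, low-arousal
outrage_post = {"like": 40, "reaction": 60}   # fewer likes, many "angry"

print(ranking_score(calm_post))     # 125
print(ranking_score(outrage_post))  # 340
```

The calm post has more total interactions, yet the outrage post scores nearly three times higher — a small illustration of how a weighting choice, not audience preference, can systematically elevate divisive content.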

2019 — The Orben/Przybylski Study Changes the Debate

Orben and Przybylski's Nature Human Behaviour paper (January 2019) applied Specification Curve Analysis to show that the average association between digital technology use and adolescent well-being was approximately r = 0.05 — smaller than the effect of wearing glasses. This finding became the primary academic weapon in arguments against strong regulation of social media, widely cited by platforms and some academics as evidence that the harms were overstated.

The debate it launched — still unresolved — forced greater methodological rigor on all sides and established that honest engagement with the evidence required grappling with effect sizes, not just statistical significance.
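The arithmetic behind the effect-size argument is worth making explicit: a correlation's square gives the share of variance it explains, so the debate turns on whether a quarter of one percent is negligible or meaningful at population scale.

```python
# Orben and Przybylski's reported average association.
r = 0.05

# Variance in adolescent well-being statistically accounted for by
# digital technology use under that estimate: r squared.
variance_explained = r ** 2
print(f"r = {r}  ->  r^2 = {variance_explained:.2%} of variance explained")
# prints: r = 0.05  ->  r^2 = 0.25% of variance explained
```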


2020–2025: Reckoning and Reform

2020 — Advertiser Boycott and Platform Power

July 2020: A coalition of major advertisers including Coca-Cola, Unilever, Verizon, and hundreds of others paused advertising on Facebook as part of the #StopHateForProfit campaign, demanding changes to Facebook's content moderation policies on hate speech and misinformation. The boycott demonstrated that advertiser pressure, rather than regulatory pressure, was the mechanism most likely to produce platform behavior change — and also demonstrated its limitations: Facebook's revenue dipped briefly and then recovered fully.

2021 — The Facebook Papers

October 2021: Frances Haugen, a former Facebook product manager, releases thousands of pages of internal Facebook documents to Congress and journalists. The documents — quickly dubbed the "Facebook Papers" — showed that:

  • Facebook's internal researchers had documented harms from Instagram use on adolescent girls and had proposed product changes to reduce them
  • These proposals were rejected by leadership concerned about engagement and growth metrics
  • Facebook knew its platforms were used for human trafficking, drug sales, and political violence in developing countries and took minimal action
  • The "meaningful interactions" algorithm change had amplified outrage content as documented by internal research

The Facebook Papers were the most significant disclosure of technology company internal documents since the tobacco industry's internal memos, and the comparison was drawn explicitly by multiple commentators. They shifted the debate from "did platforms know?" to "what did they do with what they knew?"

2021–2022 — Legislative Responses

The EU Digital Services Act was proposed by the European Commission in December 2020 and passed by the European Parliament in 2022, entering into force incrementally from 2023. The DSA imposed obligations on very large online platforms including algorithmic transparency requirements, prohibitions on targeting based on sensitive personal data, and independent algorithmic auditing.

The UK Online Safety Act passed in 2023 after years of legislative debate, creating a new duty-of-care obligation for platforms toward users — particularly children — and establishing Ofcom as the regulator.

These were the most significant pieces of social media regulation enacted since the Communications Decency Act of 1996, and they represented a genuine shift from a "hands off" to a "duty of care" regulatory philosophy.

2023 — TikTok Before Congress

March 23, 2023: TikTok CEO Shou Zi Chew testifies before the US House Energy and Commerce Committee for five hours in a session that became a bipartisan spectacle. Questions ranged from Chinese government access to TikTok data to broader platform-safety concerns. The hearings established that Congressional concern about social media had reached bipartisan consensus — but also, as in the Zuckerberg hearings five years earlier, that the questioning often reflected more confusion than expertise.

2024 — The DSA in Full Force

February 2024: The EU Digital Services Act comes into full force for all platforms in scope; its obligations for Very Large Online Platforms (VLOPs) — those with over 45 million EU users — had applied since August 2023, requiring them to conduct independent risk assessments of their algorithmic systems, offer users algorithm-choice mechanisms (including a non-personalized feed option), and grant independent researchers access to data for auditing purposes.

The DSA represents the first regulatory attempt to address the algorithmic optimization problem at the architectural level — not just regulating content but regulating the systems that determine what content reaches whom. Its long-term effectiveness remains to be demonstrated, but its ambition marks a threshold in the history of platform governance.

2025 — The AI Content Flood

By 2025, AI-generated content — text, images, audio, and video — has reached a scale and quality that makes reliable human-AI content distinction practically impossible for most users. Social media platforms face a novel version of the manipulation problem: not just algorithmic amplification of emotionally engaging human content, but an effectively unlimited supply of psychologically optimized synthetic content that can be produced and distributed at near-zero marginal cost.

The attention economy has acquired a new tool: not only can algorithms determine what people see, but AI can generate what they see — producing content precisely calibrated to maximize engagement for specific users. The history traced in this timeline arrives at a moment where the distance between persuasion technology and its targets has narrowed to essentially zero, and the regulatory frameworks designed for earlier media are scrambling to catch up.


Conclusion: The Through-Line

Reading this timeline, several through-lines are visible:

The advertising model is the original sin. Every time a new medium has adopted an advertiser-funded model — penny press, commercial radio, broadcast television, the open web, social media — the structural incentive to maximize attention at the cost of accuracy, emotional health, and civic quality has followed. The model precedes the harms.

Technology amplifies but does not create psychology. The variable reward schedules, social comparison drives, status anxieties, and tribal loyalties that social media exploits are features of evolved human psychology. Every medium from Hearst's newspapers to Skinner's boxes has exploited them. Social media exploits them faster, more precisely, more incessantly, and at previously impossible scale — but the psychology is ancient.

Regulation always lags. The gap between demonstrated harm and regulatory response has been consistent across every medium: roughly one to three decades. We are now roughly fifteen to twenty years into demonstrable social media harms, which places us — historically — in the middle of the regulatory lag phase.

The harms are structural, not accidental. At no point in this history have attention harms been unintended side effects that companies were racing to fix. They have consistently been features: engagement metrics measured, optimized, and defended against reform by companies whose revenue depends on maximizing them.

This history is not a counsel of despair. Regulatory responses have eventually arrived. Individual behaviors have shifted. Platforms have been shamed, reformed, and occasionally displaced. The penny press era ended; the yellow journalism era ended; the tobacco-like regulatory vacuum in digital advertising will end. The question is how much harm accumulates before it does.


Timeline sources include: Starr, P. (2004). The Creation of the Media; Wu, T. (2016). The Attention Merchants; Zuboff, S. (2019). The Age of Surveillance Capitalism; and primary historical records referenced throughout.