Case Study 24-1: Facebook and the Surveillance Capitalism Business Model

Overview

Meta Platforms, Inc. — the company that owns Facebook, Instagram, and WhatsApp — is the world's most comprehensive example of the surveillance capitalism business model. With approximately 3.3 billion daily active users across its platforms as of 2024, Meta has built an advertising revenue engine that depends on collecting behavioral data from more people than live in any country on Earth, building psychological profiles of those people, and selling advertisers the ability to reach specific individuals at specific moments with messages predicted to be persuasive.

This case study examines how Facebook's behavioral surveillance model was built, how it operates, what documented harms it has produced, how regulators have responded, and how the business has performed as privacy concerns and regulatory pressure have grown. It is not primarily a story of scandal — though there have been many scandals. It is a story about the systematic architecture of a business model and its consequences.


Building the Surveillance Machine: Facebook's Advertising Evolution

From Social Network to Behavioral Data Platform

Mark Zuckerberg launched Facebook in 2004 as a social networking service for Harvard undergraduates. The service was free because the social networking use case — maintaining connections with friends and classmates — required widespread adoption that a paid model would have prevented. But a free service requires revenue, and the obvious revenue source for a consumer internet service with a large user base was advertising.

Facebook's early advertising model was relatively unsophisticated: demographic targeting based on the profile information users provided — age, location, education, stated interests. This was more targeted than a newspaper advertisement but less targeted than what would follow. The critical development came as Facebook recognized that behavioral data — what users did, not just what they said about themselves — was far more valuable than profile data.

The behavioral data available from a social network is unusually rich. Facebook could observe who you communicated with, how often, and in what contexts. It could observe what content you lingered on, what you scrolled past. It could observe what you liked, shared, and commented on. It could observe your emotional reactions to content — anger, surprise, love, sadness. It could observe the pattern of your daily activity — when you logged in, how long you stayed, what you did during different times of day. And it could observe the behavior of your social network, which added a social-contextual dimension to individual behavioral data.

By 2010, Facebook had shifted from primarily demographic targeting to behavioral targeting, using engagement patterns, stated interests, and social graph data to build profiles that were far more predictive than demographics alone. By 2015, it had developed Custom Audiences (allowing advertisers to target their own customer lists matched to Facebook accounts) and Lookalike Audiences (identifying Facebook users similar to an advertiser's known customer base). These tools made Facebook's targeting precision extraordinary.
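Lookalike expansion is, at its core, a similarity search over behavioral feature vectors. The sketch below illustrates the general technique with toy vectors and a simple centroid comparison — an assumption-laden illustration, not Meta's actual implementation, which uses far richer features and learned models:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two behavioral feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def lookalike(seed: dict[str, list[float]], pool: dict[str, list[float]], k: int) -> list[str]:
    # Collapse the advertiser's known customers into one centroid vector,
    # then rank every other user by similarity to that centroid.
    dims = len(next(iter(seed.values())))
    centroid = [sum(vec[i] for vec in seed.values()) / len(seed) for i in range(dims)]
    ranked = sorted(pool, key=lambda uid: cosine(pool[uid], centroid), reverse=True)
    return ranked[:k]

# Toy affinity vectors, e.g. (sports, cooking, travel) engagement scores.
customers = {"u1": [1.0, 0.0, 1.0], "u2": [1.0, 0.0, 0.8]}
everyone_else = {"u3": [0.9, 0.1, 0.9], "u4": [0.0, 1.0, 0.0]}
audience = lookalike(customers, everyone_else, k=1)
```

The commercial point survives the simplification: the advertiser supplies only its customer list, and the platform's behavioral data on everyone else does the rest.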

The Advertising Auction

Facebook's advertising system operates through a real-time auction. When a user's news feed loads, an automated auction runs in milliseconds. Advertisers have pre-specified the audience characteristics they want to reach — behavioral categories, demographic characteristics, geographic location, interests, custom audience matches — and bid amounts they are willing to pay. Facebook's system selects the highest bidder whose ad matches the user's profile and inserts that ad into the feed.

The auction does not simply award the slot to the highest cash bidder. Facebook's system optimizes for what it calls "total value" — a combination of the advertiser's bid, the predicted click-through rate (how likely the specific user is to click on the specific ad), and an ad-quality assessment. The auction therefore rewards accurate behavioral prediction: advertisers benefit when they can identify the users most likely to respond to their ads, and Facebook benefits when its prediction systems correctly identify those users.
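The ranking logic can be sketched in a few lines. The scoring function below is illustrative (it simply multiplies the three factors; Meta's published formula combines them differently), but it captures why the highest cash bid does not automatically win:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float          # maximum price the advertiser will pay
    predicted_ctr: float   # model's estimate that this user clicks this ad
    quality: float         # ad-quality score in [0, 1]

def total_value(bid: Bid) -> float:
    # Illustrative scoring: the cash bid weighted by predicted response
    # and quality, so behavioral fit matters as much as money.
    return bid.amount * bid.predicted_ctr * bid.quality

def run_auction(bids: list[Bid]) -> Bid:
    # Winner is the bid with the highest combined value for this user.
    return max(bids, key=total_value)

bids = [
    Bid("A", amount=2.00, predicted_ctr=0.01, quality=0.9),  # high bid, poor fit
    Bid("B", amount=0.80, predicted_ctr=0.05, quality=0.8),  # lower bid, good fit
]
winner = run_auction(bids)
```

Here advertiser B wins despite bidding less than half of A's price, because the prediction model rates its ad as five times more likely to be clicked by this user — which is exactly why accurate behavioral prediction is commercially valuable.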

The result is that Facebook's commercial incentive — maximizing advertising revenue — is directly aligned with maximizing the accuracy of behavioral prediction. The better Facebook's model predicts whether any given user will respond to any given ad, the more valuable the auction becomes for advertisers and the more revenue Facebook generates. This is the core commercial logic that drives relentless investment in behavioral data collection and behavioral modeling.

The Off-Facebook Activity Problem

Facebook's tracking does not stop when users leave Facebook.com. The Facebook Pixel — a small piece of tracking code placed on millions of third-party websites — sends data back to Facebook whenever a user who has Facebook cookies on their device visits any website using the Pixel. This allows Facebook to track what websites you visit, what products you view, what purchases you make on third-party sites, what news articles you read — all without you being on Facebook or knowing you are being tracked.
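The mechanism is simple to illustrate on the receiving side. In the sketch below (a conceptual stand-in, not Meta's actual pixel endpoint or event schema), each pixel request carries the third-party site where it fired, an event name, and the tracker's own browser cookie — and because the cookie is the same across unrelated sites, the activity collapses into a single behavioral profile:

```python
from collections import defaultdict

# In-memory stand-in for the profile store that pixel requests feed.
profiles: dict[str, list[tuple[str, str]]] = defaultdict(list)

def handle_pixel_request(cookie_id: str, site: str, event: str) -> None:
    # The user never deliberately contacted the tracker for this event:
    # the third-party page embedded the pixel, and the browser attached
    # the tracker's cookie to the request automatically.
    profiles[cookie_id].append((site, event))

# The same cookie fires from unrelated sites the user visits.
handle_pixel_request("cookie-123", "shoes.example", "ViewContent")
handle_pixel_request("cookie-123", "news.example", "PageView")
handle_pixel_request("cookie-123", "clinic.example", "Search")
```

Nothing in this flow requires the user to be logged in to — or even aware of — the tracking service; the cookie identifier alone stitches the visits together.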

The social login feature — "Login with Facebook" — serves a similar function. Sites that allow Facebook login receive user data from Facebook and provide behavioral data back, extending the surveillance network beyond Facebook's own properties.

The Off-Facebook Activity tool, which Facebook introduced in 2019 to comply with European regulatory pressure, allows users to see what third-party apps and websites have shared their activity with Facebook. The disclosures are typically extensive: hundreds of websites and apps, covering activity from e-commerce to news reading to healthcare research, all feeding behavioral data into Facebook's profile of each user. Most users have no idea this tracking is occurring.


Documented Harms

Cambridge Analytica (Cross-Reference: Chapter 23 Case Study 01)

The Cambridge Analytica scandal — described in detail in Chapter 23's first case study — was perhaps the most famous of Facebook's privacy harms, but far from the only one. The scandal illustrated how Facebook's permissive data access model enabled the large-scale harvesting of personal data for political manipulation, and how the company's response to early warnings about the misuse was inadequate.

The Emotional Contagion Experiment

In 2014, Facebook published a research paper in the Proceedings of the National Academy of Sciences describing an experiment in which it had manipulated the news feeds of approximately 689,000 users to test whether emotional content exposure could cause emotional contagion — whether seeing more sad content would make users post sadder content, and vice versa. The study found that it could.

The public and academic response to the paper was intense. Critics noted that the research had been conducted without the knowledge or consent of users. Facebook's defense was that consent had been obtained through users' acceptance of the Data Use Policy — a multi-thousand-word document that mentioned "research" among its list of data uses. The claim that this constituted informed consent for psychological experimentation was rejected by most observers.

The emotional contagion study is significant not primarily because of the experiment itself — which caused minimal direct harm — but for what it revealed about Facebook's capabilities and mindset. Facebook could systematically manipulate what 689,000 people saw in their news feeds and measure the psychological effects on their subsequent behavior. The company treated this capability as a legitimate research tool, subject only to internal ethics review. The public discovery that Facebook was conducting psychological experiments on its users without their knowledge was a significant reputational event.

The Rohingya Genocide and Hate Speech Amplification

In Myanmar, Facebook was the dominant social media platform and, for many users, the primary source of news and information. Facebook's recommendation algorithms, optimized for engagement, amplified content that provoked strong emotional responses — including content promoting ethnic hatred and calls for violence against the Rohingya Muslim minority. Myanmar's military campaign against the Rohingya, which the UN characterized as a genocide, was accompanied by and facilitated by a coordinated online disinformation and hate speech campaign conducted primarily through Facebook.

Facebook was warned repeatedly about the use of its platform for anti-Rohingya hate speech. The company's responses were slow and inadequate, constrained by limited investment in Burmese-language content moderation and by algorithmic systems that continued to amplify the most engaging content regardless of its nature. A UN fact-finding mission concluded in 2018 that Facebook had played a "determining role" in the violence.

Facebook subsequently acknowledged that it had not done enough. The acknowledgment did not address the fundamental architecture of a recommendation algorithm optimized for engagement, whose commercial logic is indifferent to whether the content that maximizes engagement is beneficial or harmful.

Advertising to Categories Linked to Discrimination

Facebook's advertising system, with its elaborate audience targeting capabilities, has been used for discriminatory purposes. In 2019, Facebook settled civil rights lawsuits alleging that its advertising system allowed housing advertisers to exclude users from seeing ads based on race, national origin, religion, sex, familial status, and disability — characteristics protected under the Fair Housing Act; the Department of Housing and Urban Development separately charged Facebook with Fair Housing Act violations that same year. The settlement required Facebook to create a separate portal for housing, employment, and credit advertising with limited targeting options.

The settlement illustrated a fundamental tension in Facebook's targeting model: the same capabilities that allow advertisers to target their most likely customers allow them to exclude protected groups. Algorithmic targeting that operates through behavioral proxies for protected characteristics — neighborhood, purchasing behavior, content preferences — can produce discriminatory effects even when no explicitly protected characteristic is listed as a targeting criterion.

Mental Health and Teen Girls

The most significant recent controversy about Facebook's harms came from the whistleblower disclosures of Frances Haugen in 2021. Haugen, a former Facebook product manager, provided internal research documents to journalists and regulators showing that Facebook's own research had found that Instagram — its image-sharing platform particularly popular among teenage girls — had measurable negative effects on the mental health of teenage female users.

The research, described in internal presentations, found that Instagram made body image issues worse for roughly one in three teen girls who already struggled with them, and that a smaller but significant share of teen users traced worsened eating issues and suicidal thoughts to the platform. Facebook's response to this research — burying it, discounting it, and continuing to optimize Instagram for engagement — became the subject of congressional testimony and ongoing litigation.


Regulatory Responses

The FTC Settlement

The Federal Trade Commission's $5 billion settlement with Facebook in 2019 was the largest fine the FTC had ever imposed. It resolved allegations that Facebook had violated a 2012 FTC consent decree that required the company to protect users' privacy. The violations included: providing users' phone numbers collected for security verification purposes to advertisers; failing to tell users their facial recognition data was being used; and misrepresenting the privacy implications of its "Friends' Apps" setting.

The $5 billion fine represented less than one month of Facebook's revenue at the time. The settlement imposed a 20-year privacy program requirement, including an independent privacy oversight committee. Critics noted that the fine was insufficient to deter a company of Facebook's size, and that the settlement did not require fundamental changes to the advertising surveillance model.

EU Privacy Enforcement

European data protection authorities have been more aggressive than their US counterparts in challenging Facebook's privacy practices. The Irish Data Protection Commission (DPC), which is the lead supervisory authority for Meta under GDPR's one-stop-shop mechanism (because Meta's European headquarters are in Ireland), has issued several major enforcement decisions:

  • 1.2 billion euro fine (2023) for transferring European user data to the US without adequate safeguards
  • 390 million euro fine (2023) for relying on contract necessity rather than consent for behavioral advertising
  • 265 million euro fine (2022) for a data scraping breach involving 533 million users' data

These fines total several billion euros and represent meaningful financial consequences — though still modest relative to Meta's annual revenue of approximately 130 billion dollars.

Following the 2023 ruling requiring Facebook to obtain explicit consent for behavioral advertising, Meta introduced a subscription model for EU users: users who do not consent to tracking may pay approximately 10 euros per month for an ad-free, no-tracking version of Facebook and Instagram. The European Data Protection Board challenged this model in 2024, arguing that a genuine "free" alternative without tracking should be available, because charging for privacy makes consent coerced by economic pressure.

The pay-or-consent dispute illustrates the fundamental challenge of applying consent-based privacy rules to advertising-dependent platforms. The platform's business model depends on behavioral surveillance. Genuine consent requires a real alternative to consenting. A real alternative eliminates the surveillance on which the business depends. The resolution of this tension will determine whether GDPR can meaningfully constrain the surveillance capitalism model in Europe.


Post-2021 Performance: The Privacy Reckoning

Apple's ATT Framework

In April 2021, Apple introduced App Tracking Transparency (ATT) — a requirement that apps ask users for explicit permission to track them across apps and websites. The prompt is simple: a dialog box asking whether the user wants to allow tracking or not. The majority of users — approximately 75% globally — chose not to allow tracking.

ATT's impact on Facebook was significant. The ability to track users across apps and websites is foundational to Facebook's advertising measurement — its ability to attribute purchases and conversions to specific Facebook advertisements. Without cross-app tracking, Facebook could not accurately measure whether its advertising was working, which made it harder to justify prices to advertisers and harder to optimize targeting.

Facebook disclosed that ATT was expected to reduce its 2022 revenue by approximately $10 billion. The actual impact was somewhat smaller, as Facebook developed alternative measurement approaches, but the revenue impact was real and contributed to Facebook's first-ever quarterly revenue decline in mid-2022.

The Market Response

Facebook's stock price fell approximately 70% in 2022 — a decline driven by a combination of the ATT headwind, competition from TikTok for younger users, and the massive investment in the "metaverse" pivot that had not yet generated revenue. The decline was partly recovered in 2023 as Facebook's "year of efficiency" — a large-scale workforce reduction — restored profitability, and as its advertising targeting capabilities adapted to the post-ATT environment.

The ATT episode suggests that privacy-protective technical constraints on surveillance — imposed not by regulation but by a platform (Apple) with competitive interests in limiting Facebook's surveillance capability — can have significant commercial effects on the surveillance business model. This raises questions about whether competitive privacy markets, rather than regulation, might be an effective mechanism for limiting surveillance capitalism.


Lessons for Business Professionals

Business models built on surveillance create regulatory and reputational liability. The cumulative regulatory fines, litigation costs, and reputational damage from Facebook's privacy practices have been substantial. A business model that depends on extensive behavioral surveillance is a business model that creates ongoing legal and reputational risk.

Algorithm optimization for engagement creates systemic harms. Facebook's engagement-optimized recommendation systems created measurable harms — from amplifying anti-Rohingya hate speech to damaging teenage mental health — that were not unforeseeable consequences but predictable outcomes of the optimization objective. Engagement is not a proxy for user wellbeing, and systems optimized for engagement will systematically sacrifice wellbeing for engagement.

Internal research awareness of harm creates heightened accountability. The Frances Haugen disclosures revealed that Facebook knew from its own research about Instagram's harmful effects on teenage girls and did not change course. The gap between internal knowledge of harm and external action creates both ethical and legal liability. Organizations cannot claim ignorance of harms they have internally documented.

Platform design choices are ethical choices. The attention economy features of Facebook — the infinite scroll, the like button, the algorithmic feed — are design choices that shape user behavior and psychological states. Framing them as neutral engineering decisions shields them from the ethical scrutiny they deserve.