Chapter 24: Surveillance Capitalism and AI

The Invisible Economy of Human Experience

Shoshana Zuboff coined the term "surveillance capitalism" in 2014 to describe a new economic logic: one in which human experience is the raw material for behavioral prediction products sold to advertisers. Google, Facebook, and Amazon did not invent surveillance — but they scaled it into a business model with global reach and extraordinary power. The phrase captured something that had been difficult to name: the sense that digital services are not simply commercial exchanges, but something more extractive, more asymmetric, more troubling in its implications for human autonomy and democratic society.

The architecture of surveillance capitalism preceded AI as we now know it. The early data economy of banner advertising, click tracking, and demographic targeting was already extractive, already asymmetric, already largely invisible to the people whose data was being harvested. But AI changed the scale, sophistication, and consequence of surveillance capitalism. Machine learning made it possible to identify behavioral patterns in data that no human analyst could discern. Deep learning made image and voice data as legible as text. Recommendation algorithms made it possible to shape what billions of people read, watch, and believe with unprecedented precision. The surveillance economy of the early internet became, with AI, a behavioral modification system of extraordinary scope.

This chapter examines surveillance capitalism as a business model, analyzes AI's role in amplifying it, traces the harms it produces, and considers what alternatives might look like. For business professionals, the questions this chapter raises are not merely philosophical. They concern the ethical limits of business models, the legal consequences of surveillance practices, and the growing consumer and regulatory backlash against surveillance that is reshaping the digital economy.


Learning Objectives

By the end of this chapter, you should be able to:

  1. Define surveillance capitalism using Zuboff's framework and explain the core economic logic of behavioral data markets.
  2. Describe how major platforms collect, process, and monetize behavioral data, and explain the role of AI in making this process more effective.
  3. Identify the specific ways AI amplifies surveillance capitalism, including through behavioral prediction, micro-targeting, and sentiment analysis.
  4. Analyze the legal and ethical issues raised by AI-powered workplace surveillance.
  5. Distinguish between public surveillance in democratic and authoritarian contexts and assess the implications of each.
  6. Enumerate the specific harms of surveillance capitalism and apply them to business scenarios.
  7. Describe the regulatory responses to surveillance capitalism, including GDPR restrictions on behavioral advertising, the EU Digital Services Act, and proposed US legislation.
  8. Evaluate alternatives to surveillance capitalism and articulate what ethical data practices look like for businesses.

Section 1: What Is Surveillance Capitalism?

Zuboff's Framework

Shoshana Zuboff, emerita professor at Harvard Business School, published the foundational analysis of surveillance capitalism in a 2015 journal article and expanded it into a 700-page treatise in 2019. Her framework begins with a historical claim: something genuinely new emerged in the early twenty-first century that is not adequately captured by previous analyses of capitalism, surveillance, or the information economy.

The new thing is what Zuboff calls "behavioral surplus." Every commercial transaction generates data — data that is, in principle, relevant to the transaction. When you search for flights, the search engine needs to know your query to return results. But surveillance capitalism discovered that the data generated by human behavior could be used for something beyond its immediate commercial purpose. Your search terms, your location, the time of day, your browsing history, your emotional state as inferred from your word choices — all of this is relevant not just to returning accurate search results, but to predicting your future behavior. And predictions of future behavior are extraordinarily valuable to anyone who wants to influence that behavior: advertisers, political campaigns, insurance companies, employers.

Behavioral surplus is what remains after the data required for the service has been extracted. If you search for shoes, your search query is necessary for returning results. Your location, your browsing history, the time you spend looking at different results, the emotion expressed in your previous searches — all of this is behavioral surplus. It was generated as the incidental byproduct of your interaction with the service. It was not necessary for the service. But it is valuable for building models of your behavior and selling predictions of your future choices to advertisers.

Prediction Markets

Surveillance capitalism creates what Zuboff calls "behavioral futures markets" — markets in which the product being bought and sold is predictions of future human behavior. Advertisers pay Google and Facebook not for access to audiences, but for the ability to place their messages before specific individuals at moments when those individuals are most likely to act in the desired way. The more accurately the platform can predict behavior, the more valuable its advertising product.

This is a fundamentally different business from traditional advertising. A billboard placed on a highway reaches everyone who drives past. A newspaper advertisement reaches everyone who reads that page. These are broad bets on demographic groups. Surveillance capitalism sells something more specific: the ability to reach this specific individual, at this specific moment, with this specific message — because the platform knows (or can predict with considerable accuracy) that this individual is in a state of mind receptive to this message.

The prediction product is what is sold to advertisers. The people whose behavior is being predicted are not customers in this transaction. They are the raw material from which the prediction product is manufactured. This is the fundamental asymmetry of surveillance capitalism: the users who generate the behavioral data are not paid for it, do not own it, and have no meaningful control over how it is used.

The Instrumentarian Power

Zuboff introduces the concept of "instrumentarian power" to describe the power that surveillance capitalism produces. Unlike traditional forms of power — which modify behavior through reward, punishment, or persuasion — instrumentarian power modifies behavior by shaping the environment in which choices are made. It does not compel you to do anything. It arranges conditions so that you are more likely to make the choices the system wants you to make.

The recommendation algorithm does not tell you what to watch. It arranges what appears before you so that you are more likely to watch content that keeps you engaged, because engagement generates behavioral data and advertising revenue. The newsfeed algorithm does not tell you what to believe. It shows you content that its predictions suggest will provoke strong emotional responses, because strong emotional responses increase engagement.

This environmental shaping is difficult to perceive, difficult to resist, and difficult to hold accountable — because the mechanism is statistical and distributed across millions of individual decisions. No specific choice is coerced. But the overall pattern of choices is nudged, systematically and at scale, in directions that serve the surveillance capitalists' commercial interests.

Historical Context

Surveillance has a long history predating the internet. Governments have surveilled their citizens, employers have monitored their workers, and marketers have profiled their customers throughout the industrial era. What distinguishes surveillance capitalism is not surveillance per se but three features: scale (billions of individuals, not thousands), comprehensiveness (behavior across multiple domains of life, not just specific contexts), and the algorithmic automation of both data collection and behavioral modification.

The surveillance capitalism business model emerged from Google's discovery, around 2001, that search data could be used for targeted advertising. Google's founders initially opposed advertising — their 1998 paper argued that advertising-funded search engines were "inherently biased towards the advertisers and away from the needs of the consumers." But the commercial pressure of building a profitable business produced a different outcome. The same discovery — that behavioral data had commercial value beyond its original purpose — was made by Facebook, Twitter, Amazon, and dozens of smaller companies, producing the surveillance economy as we now know it.


Section 2: The Business Model in Detail

How Data Collection, Behavioral Modeling, and Advertising Targeting Work

The surveillance capitalism business model operates through a pipeline with three stages: data collection, behavioral modeling, and advertising targeting. Understanding each stage illuminates how the system creates value and at whose expense.

Data Collection. Surveillance capitalism platforms collect data at multiple layers simultaneously. First-party behavioral data — search queries, clicks, time spent, purchases, messages, location — is collected directly from users interacting with the platform's services. This data is immensely rich because it is behavioral: it reflects what people actually do, not what they say they would do, which is far more predictive.

But the collection extends beyond the platform itself. "Off-Facebook activity" — Facebook's tracking of user behavior across millions of third-party websites that use Facebook's tracking pixels and login buttons — means that Facebook collects behavioral data from users even when they are not using Facebook. Google's advertising network (originally DoubleClick, now part of Google Ad Manager) tracks browsing behavior across any website using Google advertising, regardless of whether the user has a Google account.

Data from third-party sources augments this digital tracking. Data brokers sell commercially compiled profiles — purchase history, subscription records, financial data, offline behavioral data — that can be matched to platform user records and layered on top of behavioral data. The result is a profile of each user that integrates digital and physical behavior across multiple contexts.

Behavioral Modeling. The collected data is fed into machine learning models that learn to predict user behavior. These models identify patterns in historical behavior to predict future behavior. Which users are most likely to click on an advertisement? Which users are in an emotional state that makes them receptive to impulse purchases? Which users are about to make a major purchase decision? Which users are politically persuadable on a specific issue?

The models are trained on historical data and validated against observed outcomes. The iterative refinement of these models, using the billions of behavioral data points generated daily, produces prediction systems of considerable accuracy — not because they understand human psychology in any deep sense, but because they have identified statistical patterns across enormous datasets that correlate with subsequent behavior.
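The modeling stage can be illustrated with a deliberately simplified sketch: a logistic regression trained on synthetic behavioral features to predict clicks. The feature names, data, and weights below are invented for demonstration; production systems use thousands of features and far larger architectures, but the logic is the same, namely learning a mapping from past behavior to the probability of a future action.

```python
# Illustrative sketch: predicting click probability from behavioral features.
# All feature names and data are invented for demonstration purposes.
import math
import random

random.seed(0)

FEATURES = ["sessions_last_7d", "avg_dwell_seconds", "past_click_rate"]

def make_user():
    """Generate a synthetic user whose click propensity rises with activity."""
    x = [random.uniform(0, 1) for _ in FEATURES]
    logit = 3.0 * x[2] + 1.5 * x[1] + 0.5 * x[0] - 2.0
    y = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return x, y

def train(data, epochs=50, lr=0.5):
    """Fit logistic regression by stochastic gradient descent."""
    w = [0.0] * len(FEATURES)
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(x, w, b):
    """Predicted probability that this user clicks."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

data = [make_user() for _ in range(2000)]
w, b = train(data)

# Score a hypothetical highly active user against an inactive one.
active, inactive = [0.9, 0.9, 0.9], [0.1, 0.1, 0.1]
print(predict(active, w, b) > predict(inactive, w, b))  # active user scores higher
```

Note that the model never "understands" the user; it only learns that certain feature patterns correlate with clicking, which is exactly the statistical character of behavioral prediction described above.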

Advertising Targeting. The output of behavioral modeling is a targeting system — the ability to identify which users to show a specific advertisement to, and when. Advertisers specify the audience characteristics they want to reach: demographic, behavioral, psychographic, or situational. The platform's targeting system identifies users who match those characteristics and prices the resulting ad placements in real-time auctions.

The auction system — real-time bidding — operates in milliseconds. When a page loads with an ad slot, an automated auction runs: the platform offers the opportunity to show an ad to this specific user at this specific moment, advertisers bid based on the predicted value of that impression, the highest bidder wins, and the ad is shown. The entire process completes before the page finishes loading. Each user generates thousands of such auctions per day.
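The auction mechanics can be sketched in a few lines. Below is a simplified second-price auction, the mechanism most exchanges historically used, with invented bidder names and valuation functions; real exchanges add price floors, fees, fraud checks, and strict latency budgets.

```python
# Minimal sketch of a second-price real-time bidding auction.
# Bidder names, valuation functions, and values are invented for illustration.

def run_auction(impression, bidders):
    """Each bidder values the impression via its own prediction model;
    the highest bidder wins but pays the second-highest bid."""
    bids = sorted(
        ((bidder["value_model"](impression), bidder["name"]) for bidder in bidders),
        reverse=True,
    )
    if len(bids) < 2:
        return None
    (top_bid, winner), (second_bid, _) = bids[0], bids[1]
    return {"winner": winner, "price": second_bid, "top_bid": top_bid}

# Hypothetical advertisers whose valuation depends on the platform's
# predicted click-through rate for this specific user at this moment.
impression = {"predicted_ctr": 0.04, "user_segment": "in-market-shoes"}
bidders = [
    {"name": "shoe_brand", "value_model": lambda imp: 50.0 * imp["predicted_ctr"]},
    {"name": "travel_site", "value_model": lambda imp: 10.0 * imp["predicted_ctr"]},
]
result = run_auction(impression, bidders)
print(result)  # shoe_brand wins, paying the travel_site's bid
```

The key point the sketch makes visible is that the behavioral prediction (`predicted_ctr`) is the input that sets the price: better predictions directly translate into more valuable impressions.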

The Attention Economy

A second lens for understanding surveillance capitalism is the "attention economy" — the competition for finite human attention as the resource that generates advertising value. Behavioral data collection is not just about understanding users; it is about capturing and holding their attention, because the more time users spend on the platform, the more behavioral data is generated and the more advertising impressions are served.

This logic produces recommendation systems optimized for engagement rather than user wellbeing. A recommendation algorithm that suggests content that provokes outrage or anxiety may increase engagement even while causing harm. A social media newsfeed that shows content likely to provoke social comparison may increase time-on-platform even while reducing user wellbeing. The optimization target is engagement, not user benefit — and these are not the same thing.

The attention economy analysis explains features of digital platforms that otherwise seem puzzling: infinite scroll (eliminating natural stopping points), push notifications (interrupting users to re-engage them), "like" counters and social validation metrics (exploiting social psychology to increase frequency of return). These features are not incidental design choices; they are deliberate engineering of environments to maximize time-on-platform.

Google's Evolution

Google's transformation from a search engine into a surveillance capitalism platform illustrates the development of the business model in detail. In 1998, Google was a search company whose product was relevance — the ability to return accurate results for any query. Advertising was not part of the original model.

The shift came gradually. Google introduced AdWords in 2000 — keyword-targeted advertising that appeared alongside search results. This was still relatively traditional advertising, matching advertisements to search context. The transformation into surveillance capitalism came when Google began using behavioral data beyond the immediate search query — browsing history, location data, Gmail content, YouTube viewing history — to target advertising. The product being sold to advertisers shifted from contextual relevance to individual behavioral prediction.

The acquisition of DoubleClick in 2008 extended Google's tracking from its own properties to the broader web, giving it visibility into user behavior across millions of websites. The result was a system in which Google could track an individual's digital life across dozens of services and millions of third-party sites, building a comprehensive behavioral profile used to sell predictions to advertisers.

Meta's Model

Meta (Facebook) built its surveillance capitalism model on the social graph — the network of connections between individuals — as an additional dimension of behavioral data. Social connections provide information that individual behavior does not: who you know, how you know them, what social groups you belong to, how your social network influences your behavior. Combined with behavioral data from Facebook, Instagram, WhatsApp, and the Facebook audience network, Meta's profile of each user integrates social, behavioral, emotional, and contextual dimensions.

Meta's advertising system allows targeting not just by demographic and behavioral characteristics, but by social context: reaching users whose friends have recently made a specific purchase, or who belong to social networks associated with specific characteristics. This social-contextual targeting exploits the documented influence of peer behavior on individual decisions in ways that conventional advertising cannot.
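In its simplest form, the social-contextual targeting described above reduces to a set operation over a friendship graph. The graph and purchase data below are invented for illustration; real systems operate on graphs with billions of edges.

```python
# Illustrative sketch of social-contextual targeting: select users at least
# one of whose friends recently made a given purchase.
# The friendship graph and purchase records are invented for demonstration.

friends = {
    "ana": {"ben", "cara"},
    "ben": {"ana"},
    "cara": {"ana", "dev"},
    "dev": {"cara"},
}
recent_buyers = {"ben"}  # users who recently bought the advertised product

def friend_of_buyer_audience(friends, recent_buyers):
    """Return non-buyers with at least one friend among recent buyers."""
    return {
        user
        for user, circle in friends.items()
        if user not in recent_buyers and circle & recent_buyers
    }

print(sorted(friend_of_buyer_audience(friends, recent_buyers)))  # ['ana']
```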


Section 3: AI's Role in Surveillance Capitalism

Pattern Recognition at Scale

AI has made surveillance capitalism dramatically more powerful by enabling behavioral pattern recognition at scales and with levels of nuance that were previously impossible. Before machine learning, behavioral data could be analyzed using relatively simple models — click-through rates, demographic correlations, keyword matching. These models had real predictive power but missed the complex, non-linear patterns in behavioral data that turn out to be the most predictive.

Deep learning, in particular, changed what was possible. Convolutional neural networks can analyze image data — identifying what objects appear in photographs posted to Instagram, inferring lifestyle and economic status from visual cues, tracking facial expressions in video to infer emotional states. Recurrent neural networks and transformer models can analyze text with nuance that captures sentiment, irony, persuasive intent, and emotional valence. Reinforcement learning algorithms can discover engagement-maximizing content recommendation strategies that no human engineer would have designed.

Behavioral Prediction

The application of AI to behavioral prediction has produced systems capable of identifying who you are, what you want, and how you will behave with accuracy that individuals often find uncanny. The "Target pregnancy prediction" case — in which Target's analytics team discovered that purchasing patterns could predict pregnancy with sufficient accuracy to target expectant mothers with baby-product promotions before they had disclosed the pregnancy — became famous partly because a father discovered his teenage daughter's pregnancy through Target's marketing rather than from her.

More recent AI prediction systems are far more sophisticated than Target's 2012 model. Models trained on vast datasets of behavioral, social, and contextual data can predict personality traits, political leanings, sexual orientation, mental health status, and susceptibility to specific messages with accuracy that raises serious questions about autonomy and informed consent. The individuals whose behavior is being predicted do not know what the model knows about them.

Micro-Targeting

Micro-targeting — delivering different messages to different individuals based on their predicted preferences and vulnerabilities — was the application of behavioral prediction that produced the Cambridge Analytica scandal and the most public controversy about surveillance capitalism's political implications. But micro-targeting is far more pervasive than political advertising.

Commercial micro-targeting shows different prices to different customers based on predicted willingness to pay. It shows different product recommendations to different users based on predicted tastes. It shows different credit and insurance offers based on predicted risk profiles. It adjusts the persuasive framing of the same product — emphasizing security to anxious people, status to aspirational people, value to frugal people — based on predicted psychological characteristics.

AI makes micro-targeting more effective by enabling more accurate predictions of individual psychological characteristics and more systematic testing of which messages are most effective for which segments. A/B testing at scale — testing dozens of different message framings simultaneously and routing each user to the framing most likely to produce the desired behavior based on their predicted psychological profile — is a capability that AI has made routine.
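The routing logic can be sketched as an epsilon-greedy bandit that learns, for each predicted psychological segment, which message framing converts best. The segments, framings, and "true" conversion rates below are invented for illustration; production systems use more sophisticated bandit and uplift-modeling methods, but the feedback loop is the same.

```python
# Sketch of per-segment message routing: an epsilon-greedy bandit.
# Segments, framings, and conversion rates are invented for illustration.
import random

random.seed(1)

SEGMENTS = ["anxious", "aspirational", "frugal"]
FRAMINGS = ["security", "status", "value"]

# Hidden "true" conversion rates the system is trying to discover.
TRUE_RATE = {
    ("anxious", "security"): 0.12, ("anxious", "status"): 0.03, ("anxious", "value"): 0.05,
    ("aspirational", "security"): 0.04, ("aspirational", "status"): 0.11, ("aspirational", "value"): 0.05,
    ("frugal", "security"): 0.04, ("frugal", "status"): 0.03, ("frugal", "value"): 0.10,
}

counts = {k: 1 for k in TRUE_RATE}  # impressions served (start at 1 to avoid /0)
wins = {k: 0 for k in TRUE_RATE}    # conversions observed

def choose_framing(segment, epsilon=0.1):
    """Usually exploit the best-known framing; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(FRAMINGS)
    return max(FRAMINGS, key=lambda f: wins[(segment, f)] / counts[(segment, f)])

for _ in range(30000):  # simulate impressions
    seg = random.choice(SEGMENTS)
    framing = choose_framing(seg)
    counts[(seg, framing)] += 1
    if random.random() < TRUE_RATE[(seg, framing)]:
        wins[(seg, framing)] += 1

learned = {s: max(FRAMINGS, key=lambda f: wins[(s, f)] / counts[(s, f)]) for s in SEGMENTS}
print(learned)  # each segment converges to its best-converting framing
```

What the sketch makes concrete is that no engineer needs to decide that anxious people should see security framing: the system discovers it automatically from conversion feedback, which is why this capability has become routine.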

Sentiment Analysis

AI sentiment analysis — the automated analysis of text, voice, and visual content to infer emotional states — represents one of the most invasive dimensions of AI-enabled surveillance capitalism. Platforms analyze the content of user posts, comments, and messages to infer emotional states and use those inferences to improve targeting. Research published by Facebook in 2014 — and subsequently heavily criticized — described how Facebook had manipulated users' news feeds to induce emotional states and observed the effects on the emotional tone of subsequent posts. Facebook conducted this emotional contagion experiment on nearly 700,000 users without their knowledge or meaningful consent.

Sentiment analysis of voice data — from voice assistants, customer service calls, and video content — adds another dimension. AI models can infer emotional states from prosodic features (tone, pace, stress) in ways that go beyond the content of speech. Companies in the emotion recognition industry claim their AI can identify emotions from facial expressions, body language, and voice — claims that academic psychologists have largely disputed but that have not prevented commercial deployment.
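At its simplest, text sentiment analysis is lexicon counting, as in the sketch below. The word lists here are tiny and invented, and platform systems use trained neural models rather than anything this crude, but the sketch shows how easily posted text becomes an emotional signal.

```python
# Minimal sketch of lexicon-based sentiment scoring.
# The word lists are tiny and invented for demonstration; real systems
# use trained models that capture context, irony, and intensity.
import re

POSITIVE = {"great", "love", "happy", "excited", "wonderful"}
NEGATIVE = {"sad", "angry", "hate", "anxious", "terrible"}

def sentiment_score(text):
    """Return (positive - negative) word count, normalized by total words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("I love this, so happy!") > 0)   # positive
print(sentiment_score("I hate feeling anxious") < 0)   # negative
```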


Section 4: The Workplace Surveillance Dimension

AI-Powered Employee Monitoring

Surveillance capitalism's logic — collect behavioral data, build behavioral models, optimize behavior toward desired outcomes — has been applied to the employment context with consequences that raise distinct ethical and legal questions. AI-powered employee monitoring has grown dramatically, particularly following the pandemic-driven shift to remote work, when employers sought new ways to verify that remote workers were productive.

"Bossware" — a term coined by privacy advocates to describe intrusive workplace monitoring software — encompasses a range of monitoring capabilities. Keystroke logging records every key pressed on a work computer. Screenshots capture the employee's screen at regular intervals or continuously. Email and communication analysis scans the content of internal communications for keywords indicating disengagement, negativity toward management, or other behavioral signals. Productivity tracking software measures "active" computer time versus total logged hours. Location tracking via GPS or building access systems monitors physical presence and movement.

AI adds inferential capabilities beyond simple activity logging. An AI model can analyze communication patterns — frequency of email, sentiment of messages, participation in meetings, time to respond to requests — to infer employee engagement, productivity, and loyalty. Models trained on historical data can predict which employees are likely to leave (flight risk prediction), which are likely to underperform, and which are likely to raise compliance concerns.
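A hypothetical sketch of that inference step: combining communication-pattern metrics into a single "flight risk" number. Every feature, weight, and field name here is invented to show the mechanism; real vendor models are trained rather than hand-set. The point is how easily behavioral exhaust becomes a score attached to a person.

```python
# Hypothetical illustration of flight-risk scoring from communication metrics.
# All features, weights, and thresholds are invented for demonstration;
# this is not any real vendor's model.

def flight_risk_score(metrics):
    """Combine weekly communication metrics into a 0-1 risk score.
    Lower engagement (fewer messages, slower replies, fewer meetings)
    pushes the score up."""
    score = 0.0
    score += 0.4 * max(0.0, 1 - metrics["messages_sent"] / 50)      # low output
    score += 0.3 * min(1.0, metrics["avg_reply_hours"] / 24)        # slow replies
    score += 0.3 * max(0.0, 1 - metrics["meetings_attended"] / 10)  # absent from meetings
    return round(score, 2)

engaged = {"messages_sent": 60, "avg_reply_hours": 2, "meetings_attended": 12}
disengaged = {"messages_sent": 5, "avg_reply_hours": 20, "meetings_attended": 2}

print(flight_risk_score(engaged))     # low score
print(flight_risk_score(disengaged))  # high score
```

The sketch also makes the ethical problem legible: a quiet week, a medical leave, or deep focused work would raise this score just as surely as a plan to resign, because the model sees only the metrics.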

Amazon's Model: Productivity Metrics and Automated Management

Amazon's warehouse operations represent the most extensively documented example of AI-powered workplace surveillance. Amazon tracks warehouse workers' performance against algorithmically determined productivity targets with a granularity that workers and labor researchers describe as unprecedented. "Time off task" — any period of non-scanning activity — is automatically logged. Workers who exceed time off task thresholds receive automated warnings and face disciplinary action, up to and including automated termination recommendations. The system does not distinguish between a bathroom break, a brief conversation with a coworker, a momentary technical difficulty, and deliberate shirking.

This case is examined in detail in Case Study 24-2. The broader pattern it illustrates — management by algorithm, with minimal human review of algorithmically generated performance assessments — has spread beyond Amazon's warehouses to call centers, delivery companies, and, increasingly, white-collar environments.

Keystroke Logging and Email Analysis

Financial services firms, law firms, and other professional organizations have deployed email and communication monitoring systems that go far beyond basic productivity measurement. These systems analyze the content of employee communications to identify potential compliance violations, competitive intelligence leaks, and signs of misconduct. The surveillance of professional communications raises questions about attorney-client privilege, whistleblower protection, and the ability of employees to communicate confidentially about workplace conditions.

The extension of monitoring to employee personal devices — through mobile device management software required for access to company systems — blurs the boundary between workplace monitoring and surveillance of personal life. If a company's MDM software runs on an employee's personal phone, its monitoring capabilities extend into personal use of that device.

The legal framework for workplace surveillance in the United States is remarkably permissive. The Electronic Communications Privacy Act of 1986 generally permits employers to monitor communications on employer-provided systems and networks. Some states require notice to employees before monitoring, but the notice requirement is typically satisfied by a brief acknowledgment in an employment agreement or employee handbook that the employee signs at hire. European law is significantly more protective: GDPR applies to employee data, and the data protection principles of purpose limitation, data minimization, and proportionality constrain what employers may legitimately monitor.

The absence of meaningful legal constraints on workplace surveillance in the US has produced a monitoring ecosystem that would be illegal in most European countries. The power asymmetry between employers and employees — particularly in low-wage work where employees have limited bargaining power and limited alternatives — makes consent to monitoring essentially coerced.

Worker Rights and Organizing

A particularly concerning dimension of AI-powered workplace surveillance is its potential use to monitor and suppress worker organizing. Labor law in the US prohibits surveillance of union organizing activity and retaliation against workers for exercising organizing rights. But AI communication monitoring systems can potentially detect organizing signals — increased communication with specific colleagues, mention of union-related terms, participation in off-site meetings — without the explicit intention of surveilling organizing, and without the employer needing to consciously direct such surveillance.

Amazon has faced specific allegations that it used warehouse worker data to monitor and predict union organizing activity. The National Labor Relations Board has investigated complaints that Amazon's monitoring systems were used in ways that chilled protected organizing activity. The combination of comprehensive behavioral data and predictive AI creates capabilities for labor relations management that existing labor law was not designed to address.


Section 5: Public Surveillance — Smart Cities, Cameras, and the State

Smart Cities and Surveillance Infrastructure

The "smart city" vision — cities equipped with sensor networks, connected infrastructure, and AI analysis systems — has been pursued by governments worldwide as a mechanism for improving urban services, reducing crime, and optimizing traffic and energy use. The surveillance infrastructure required for these applications — cameras, microphones, location sensors, environmental sensors — creates data collection capabilities that go far beyond what the stated service purposes require.

Smart city surveillance raises questions that are distinct from corporate surveillance because the state has coercive powers that corporations lack. Data collected by government surveillance systems can be used for law enforcement, immigration enforcement, tax enforcement, and political surveillance in ways that corporate surveillance data generally is not. The boundaries between service delivery and surveillance are blurry in smart city deployments, and citizens typically have no meaningful alternative to living in a surveilled urban environment.

London's surveillance camera network — approximately 500,000 cameras covering major public spaces — is the most extensive in the democratic world. Combined with automatic number plate recognition, facial recognition capabilities being piloted by the Metropolitan Police, and the data collected by Transport for London's smart ticketing systems, London's surveillance infrastructure creates a comprehensive record of residents' public movements that is available to law enforcement without the procedural protections that would apply to a targeted surveillance warrant.

Government Use of Commercial Surveillance Data

Governments have discovered that they can access commercial surveillance data — data collected by private companies for advertising purposes — without the legal restrictions that would apply to government-conducted surveillance. If the Fourth Amendment prohibits the government from tracking your location without a warrant, can the government simply purchase location data from a data broker who collected it from apps on your phone?

In 2020, it was reported that the Internal Revenue Service had purchased location data from a commercial data broker to track the movements of people suspected of tax evasion. The Department of Homeland Security had purchased location data for immigration enforcement. The military had purchased location data to track movements near military installations. In each case, the government was using commercially available surveillance infrastructure to conduct tracking that would require judicial authorization if the government conducted it directly.

The Supreme Court's 2018 decision in Carpenter v. United States — holding that the government needs a warrant to obtain historical cell phone location data from telecommunications companies — established some limits on government access to commercial location data. But it left many questions unresolved about data collected by commercial apps and data brokers rather than by telecommunications carriers.

China's Social Credit System

China's social credit system represents the furthest development of state surveillance using commercial-grade data collection, AI analysis, and behavioral modification mechanisms. The social credit system — which is not a single unified system but a collection of local and national programs — uses behavioral data from financial records, traffic cameras, purchase history, and social media to generate scores that affect individuals' access to transportation, credit, education, and government services.

The social credit system illustrates the potential endpoint of combining comprehensive surveillance with behavioral modification incentives. In the most expansive implementations, behaviors flagged as antisocial — traffic violations, failure to repay debts, spreading "false information" online — can result in travel bans, restrictions on business licenses, and public shaming. The system's ability to enforce social conformity at scale makes it the most powerful behavioral modification system in human history.

Western observers frequently cite China's social credit system as a warning about the trajectory of AI-enabled surveillance. What is sometimes missed is that the surveillance infrastructure of the democratic world — the location tracking, behavioral profiling, and social scoring by private companies and government agencies — has many of the same architectural features. The distinguishing characteristics are the explicit government control, the formal connection between scores and access to services, and the open use of surveillance for political conformity enforcement. But the underlying data collection, AI analysis, and behavioral modification capabilities exist in both systems.

Democratic vs. Authoritarian Surveillance

The distinction between surveillance in democratic and authoritarian contexts is significant but should not be idealized. Democratic societies have constitutional protections, independent courts, and political accountability that constrain government surveillance in ways that authoritarian states do not. Citizens of democratic states retain the right to challenge surveillance, advocate for privacy, and vote for politicians who support privacy protections.

But democratic governments use surveillance in ways that are not always subjected to adequate democratic oversight. Intelligence agencies in democratic states have conducted extensive domestic surveillance programs revealed by whistleblowers. Law enforcement has deployed surveillance technologies — including facial recognition — without legislative authorization. Commercial surveillance data has been accessed for government purposes without the procedural protections that direct government surveillance would require.

The meaningful distinction is not simply that democratic governments do not surveil — it is that democratic governance provides mechanisms for accountability, correction, and limitation that authoritarian governance does not. Those mechanisms are imperfect and require active maintenance.


Section 6: The Harm Taxonomy

Mapping the Harms of Surveillance Capitalism

Surveillance capitalism produces a range of specific, identifiable harms. Cataloging these harms is important for developing proportionate regulatory responses and for evaluating business practices that contribute to them.

Privacy Harm. The fundamental harm of surveillance capitalism is privacy harm — the collection and use of personal information in ways that violate contextual integrity, undermine autonomy, and expose individuals to risks they did not consent to. Privacy harm is sometimes dismissed as insubstantial because it causes no direct physical injury or financial loss. But privacy violations enable every other harm on this list.

Manipulation. Surveillance capitalism enables behavioral manipulation at scale. Micro-targeted advertising exploits psychological vulnerabilities — fear, anxiety, loneliness, aspirational identity — to influence purchasing decisions. Political micro-targeting exploits the same vulnerabilities to influence political beliefs and behaviors. Manipulation harms are particularly insidious because they are invisible: the person being manipulated typically does not know it is happening.

Chilling Effects. The knowledge or suspicion of surveillance modifies behavior. People search for different things, express different opinions, and associate with different people when they believe they are being watched. This chilling effect is especially harmful in contexts where it suppresses legitimate political activity, reduces whistleblowing, and discourages the expression of minority viewpoints.

Discrimination. Behavioral data collection enables discrimination — not only explicit discrimination based on protected characteristics, but disparate impact discrimination through seemingly neutral criteria that correlate with race, gender, religion, or national origin. Credit scoring, insurance pricing, and employment screening based on behavioral data can systematically disadvantage protected groups even when no individual decision is explicitly discriminatory.

Identity Theft and Fraud. Data collected for surveillance purposes is valuable to criminal actors. Data breaches — unauthorized access to the vast stores of personal behavioral data collected by surveillance capitalism — expose individuals to identity theft, financial fraud, and other crimes. The more comprehensive the behavioral data that is collected, the more damaging it is when inevitably breached.

Power Concentration. Surveillance capitalism concentrates economic and political power in a small number of companies with access to comprehensive behavioral data. These companies can use this data advantage to entrench their market positions (by understanding competitors' behavior through advertising data), to influence political processes (through control of information environments), and to shape regulatory outcomes (through the power that comes with surveillance capabilities).


Section 7: Children and Surveillance

The Children's Data Economy

Children represent a particularly vulnerable population in the surveillance economy. They use digital devices from an early age, often without understanding that their behavior generates commercial data. They cannot meaningfully consent to data collection. They are targeted by behavioral modification systems that exploit developmental vulnerabilities — the need for social approval, susceptibility to novelty, reward-seeking behavior — in ways that adult users can at least partially recognize and resist.

The scale of children's data collection is significant. Children as young as five routinely use tablets and smartphones, generating behavioral data that companies may retain for decades. Educational technology — classroom apps, homework platforms, learning management systems — collects behavioral data about children's academic performance, social interactions, and learning patterns that has potential implications far beyond its immediate educational purpose.

COPPA and Its Limitations

The Children's Online Privacy Protection Act, enacted in 1998 and updated in 2013, prohibits the collection of personal information from children under 13 without verifiable parental consent. COPPA applies to websites and online services directed to children, and to any site that has actual knowledge that it is collecting from children under 13.

COPPA's age verification mechanism — requiring children to enter their birth date, with service access denied if they indicate they are under 13 — is essentially non-functional. Children who want access to services simply enter a false birth date. The operator can claim it did not have "actual knowledge" of the child's age. The result is that children's data is routinely collected without parental consent on platforms nominally compliant with COPPA.

TikTok and Children's Privacy

TikTok — the short-video platform owned by Chinese company ByteDance — paid $5.7 million in 2019 to settle FTC charges that its predecessor app Musical.ly had violated COPPA by collecting personal information from children without parental consent, at the time the largest civil penalty ever imposed in a COPPA case. In 2021, TikTok paid a further $92 million to settle consolidated consumer class actions alleging broader privacy violations, including claims that it collected data from users known to be minors.

TikTok's children's privacy issues extend beyond COPPA compliance. The platform's recommendation algorithm — which delivers an extraordinarily personalized content stream based on viewing behavior — has been criticized for exposing children and adolescents to content promoting eating disorders, self-harm, and extremist viewpoints. Research suggests that the algorithm's optimization for engagement can produce sequences of increasingly extreme content that target adolescents' psychological vulnerabilities. The connection between algorithmic social media recommendation and adolescent mental health is the subject of active research and litigation.

YouTube's COPPA Settlement

In 2019, YouTube paid $170 million to settle FTC and New York Attorney General allegations that it had violated COPPA by collecting personal information from children who watched children's channels, and using that information to target advertising at children. The settlement required YouTube to create a "made for kids" designation system for child-directed content and to restrict data collection on content in that category.

The YouTube COPPA case illustrated the structural tension between advertising-based business models and children's data protection. YouTube's ability to monetize children's content depended on advertising targeted to children, which depended on data collection from children, which COPPA prohibited. The settlement addressed the legal violation without resolving the underlying tension.


Section 8: Regulatory Responses

GDPR's Restrictions on Behavioral Advertising

GDPR created significant restrictions on the behavioral advertising that is the commercial product of surveillance capitalism. The consent required for behavioral tracking must be freely given, specific, informed, and unambiguous — a standard that the cookie consent banners that proliferated after GDPR's implementation largely failed to meet. Enforcement actions against Google and Meta by national data protection authorities found that their consent interfaces were designed to make acceptance easy and rejection difficult, violating GDPR's requirement for freely given consent.

In January 2023, the Irish Data Protection Commissioner fined Meta 390 million euros for relying on contract necessity — rather than consent — as the lawful basis for behavioral advertising. The decision required Meta to obtain explicit consent for behavioral advertising on Facebook and Instagram, or to stop conducting it. Meta's response was to introduce a "pay or consent" model: users who do not consent to tracking can pay a monthly subscription fee instead. This model has itself been challenged before European regulators; in a 2024 opinion, the European Data Protection Board concluded that "consent or pay" models operated by large platforms will, in most cases, fail to satisfy the requirement for freely given consent.

The EU Digital Services Act

The Digital Services Act (DSA), which took effect in 2024, establishes requirements for very large online platforms — those with more than 45 million monthly active users in the EU — that go beyond GDPR. DSA requirements include: prohibiting targeted advertising based on special categories of data (health, religion, political beliefs, sexual orientation); prohibiting targeted advertising directed at minors; requiring transparency about recommendation algorithm parameters; giving users the right to opt out of profiling-based recommendations; and conducting annual assessments of systemic risks, including threats to democratic processes and user wellbeing.

The DSA represents a significant regulatory intervention in the surveillance capitalism business model, going beyond data protection to address the systemic risks that attention economy optimization creates. Its full implementation and enforcement will determine whether it changes platform behavior or is absorbed as a compliance cost.

The California Privacy Rights Act and Behavioral Advertising

California's CPRA created a new category of "sensitive personal information" (including precise geolocation, racial or ethnic origin, religious beliefs, and data concerning sex life) and gave California residents the right to limit its use and disclosure. CPRA also gave consumers the right to opt out of "sharing" of personal information (defined broadly to include making personal information available to third parties for behavioral advertising), not just its "sale." This right to opt out of sharing is specifically designed to cover the ad-tech ecosystem's practice of sharing data through real-time bidding auctions that might not technically constitute "sales."

The Proposed American Data Privacy and Protection Act

The American Data Privacy and Protection Act (ADPPA), which advanced out of the House Energy and Commerce Committee in 2022 with bipartisan support, would have established a comprehensive federal framework that specifically addressed surveillance capitalism. Key provisions included: data minimization requirements (collecting only what is necessary for specified purposes); limitations on targeted advertising using sensitive data; prohibitions on targeting advertising to minors; algorithmic impact assessment requirements; and a private right of action. The bill did not pass the full Congress, but its provisions remain the closest approximation to date of a bipartisan consensus on what federal privacy regulation should accomplish.


Section 9: Alternatives to Surveillance Capitalism

Contextual Advertising

The most technically straightforward alternative to behavioral advertising is contextual advertising — advertising matched to the content of a page or app rather than to behavioral profiles of users. Someone reading a recipe website sees food and cooking advertisements. Someone reading a financial news site sees investment product advertisements. The targeting is imprecise compared to behavioral profiling, but research suggests that well-executed contextual advertising can approach behavioral advertising in effectiveness for many commercial purposes.
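The core distinction can be made concrete: contextual matching consults only the page, never the user. The sketch below is an illustrative toy, assuming a hypothetical ad inventory with keyword sets; it is not how any real ad server works.

```python
# Minimal sketch of contextual ad matching: ads are ranked by keyword
# overlap with the page text alone -- no user profile is consulted.
# The ad inventory and keyword sets are illustrative assumptions.

def tokenize(text):
    return {word.strip(".,!?:").lower() for word in text.split()}

def match_ads(page_text, ad_inventory):
    """Rank ads by keyword overlap with the page content."""
    page_words = tokenize(page_text)
    scored = []
    for ad in ad_inventory:
        score = len(page_words & ad["keywords"])
        if score > 0:
            scored.append((score, ad["name"]))
    return [name for score, name in sorted(scored, reverse=True)]

ads = [
    {"name": "stand-mixer", "keywords": {"recipe", "baking", "flour"}},
    {"name": "index-fund", "keywords": {"investing", "retirement", "stocks"}},
]

page = "A simple baking recipe: mix the flour, sugar, and butter."
print(match_ads(page, ads))  # -> ['stand-mixer']
```

Note what is absent: no user identifier, no browsing history, no cross-site tracking. The targeting signal lives entirely in the content being viewed.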

Contextual advertising's commercial viability has been demonstrated by privacy-respecting search engines like DuckDuckGo, which charges advertisers based on search query context and maintains profitability without building individual behavioral profiles. The New York Times, which moved away from behavioral advertising following GDPR implementation in Europe, reported that its contextual advertising revenue in Europe matched pre-GDPR behavioral advertising revenue.

Subscription Models

Direct payment from users to platforms — subscriptions — eliminates the commercial need to monetize behavioral data. The user pays for the service; the service is designed to serve the user rather than to sell predictions about the user to advertisers. Subscription models align platform incentives with user interests in ways that advertising models structurally cannot.

Subscription models have limitations: they exclude users who cannot afford to pay, potentially stratifying access to information and communication infrastructure by income. Public subsidy of subscription media — the public broadcasting model — can address this stratification in some domains.

Data Cooperatives

A data cooperative is an organization in which individuals collectively pool their data, retain collective ownership of it, and make collective decisions about how it is used and by whom. Data cooperatives could negotiate on behalf of their members for compensation when commercial use of members' data is permitted, and could prohibit uses that members collectively reject.

Data cooperatives have been proposed as a mechanism for shifting the balance of power in the data economy from platforms to data subjects. Pilot projects in health data and mobility data have explored this model. The challenge is creating governance mechanisms that enable genuine collective decision-making at scale without creating new concentrated power.

What Ethical Data Practice Looks Like

Organizations that want to move beyond surveillance capitalism while remaining commercially viable face genuine challenges. The surveillance model is commercially powerful precisely because it extracts value from behavioral data without compensating data subjects or accounting for the full costs — privacy harm, manipulation, democratic damage — that it imposes. An ethical data model must be commercially viable while internalizing these costs.

Ethical data practice for organizations includes: collecting data with explicit informed consent for specific stated purposes; not combining data across contexts in ways that violate contextual integrity; not using data for manipulation or discrimination; giving individuals genuine control over their data including meaningful deletion; and compensating individuals for commercial uses of their data where compensation is appropriate.


Section 10: What Businesses Should Do

Privacy Audits

The starting point for any business that wants to move toward ethical data practice is understanding its current data flows. A privacy audit maps what data is collected, from whom, for what purpose, with what legal basis, how it is stored and secured, how it is shared with third parties, and how long it is retained. This inventory often reveals data collection and sharing practices that were implemented without adequate consideration of privacy implications and do not survive scrutiny.
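One way to operationalize such an audit is to record each data flow as a structured entry answering those questions, then flag entries with gaps. The schema and sample entries below are assumptions for illustration, not a standard format.

```python
# Illustrative sketch of a privacy-audit data inventory. Each record
# captures the questions a data-flow audit asks; field names and the
# example entries are assumptions, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    data_category: str            # what is collected
    source: str                   # from whom / where
    purpose: str                  # why it is collected
    legal_basis: str              # e.g. consent, contract, legitimate interest
    shared_with: list = field(default_factory=list)
    retention_days: int = 0       # 0 = no retention limit defined

def audit_gaps(flows):
    """Return flows lacking a stated legal basis or a retention limit."""
    return [f for f in flows if not f.legal_basis or f.retention_days <= 0]

inventory = [
    DataFlow("email address", "account signup", "login", "contract",
             retention_days=365),
    DataFlow("page-view events", "web analytics", "marketing", "",
             shared_with=["analytics vendor"]),
]

for gap in audit_gaps(inventory):
    print("needs review:", gap.data_category)
```

Flows that cannot answer the audit's basic questions, as the second entry here cannot, are exactly the practices the text describes as failing scrutiny once examined.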

Vendor Screening

Most organizations' data practices extend to their vendors — the analytics providers, advertising platforms, marketing technology systems, and cloud services that process personal data on their behalf. Screening vendors for privacy practices, restricting what data vendors may access, and contractually limiting vendors' use of data for their own purposes are essential elements of ethical data governance.

The use of third-party tracking pixels, advertising network integrations, and analytics platforms deserves particular scrutiny. These technologies often collect more behavioral data than the organization that deploys them recognizes, and share it with parties the organization has no direct relationship with. Auditing and rationalizing the third-party scripts and pixels deployed on organizational websites frequently reveals significant privacy risks.
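A first pass at such an audit can be automated: parse a page's HTML, collect the hosts that scripts, pixels, and iframes load from, and flag any host outside the first-party domain. The sketch below uses only the Python standard library; the sample HTML and hostnames are fabricated for illustration.

```python
# Minimal sketch of auditing a page for third-party scripts and pixels:
# collect script/img/iframe sources and flag hosts that differ from the
# first-party domain. The sample HTML and hostnames are illustrative.
from html.parser import HTMLParser
from urllib.parse import urlparse

class TrackerScanner(HTMLParser):
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_party = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src") or ""
        host = urlparse(src).netloc
        if host and not host.endswith(self.first_party):
            self.third_party.add(host)

html = """
<script src="https://example.com/app.js"></script>
<script src="https://cdn.adnetwork.test/track.js"></script>
<img src="https://pixels.analytics.test/p.gif" width="1" height="1">
"""

scanner = TrackerScanner("example.com")
scanner.feed(html)
print(sorted(scanner.third_party))
# -> ['cdn.adnetwork.test', 'pixels.analytics.test']
```

Each flagged host is a party receiving visitor data; the audit question for each is whether the organization knows who it is, what it collects, and under what contractual limits.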

Limiting Data Retention

Data retained beyond its useful life creates privacy risk without providing value. Every dataset of personal behavioral data is a potential target for a data breach. Every retained dataset creates potential liability for misuse. Organizations that routinely delete behavioral data when it is no longer necessary for its original purpose both reduce privacy risk and reduce the compliance burden associated with data subject rights requests.

Retention limitation is one of the most straightforward privacy controls available. It requires setting explicit retention periods for each data category, implementing automated deletion processes, and auditing retention practices to ensure compliance. It has no commercial cost when data is genuinely no longer useful, and it provides concrete privacy benefits.
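The mechanism described above, explicit per-category retention periods plus automated selection of expired records, can be sketched in a few lines. The categories and periods below are assumptions chosen for illustration.

```python
# Sketch of retention-limit enforcement: each data category gets an
# explicit retention period, and records past their limit are selected
# for deletion. Category names and periods are illustrative assumptions.
from datetime import date, timedelta

RETENTION_DAYS = {
    "web_analytics": 90,
    "support_tickets": 730,
}

def expired_records(records, today):
    """Return ids of records whose category retention period has elapsed."""
    out = []
    for rec in records:
        limit = RETENTION_DAYS.get(rec["category"])
        if limit is not None and today - rec["collected"] > timedelta(days=limit):
            out.append(rec["id"])
    return out

records = [
    {"id": 1, "category": "web_analytics", "collected": date(2024, 1, 1)},
    {"id": 2, "category": "web_analytics", "collected": date(2024, 6, 1)},
    {"id": 3, "category": "support_tickets", "collected": date(2024, 1, 1)},
]

print(expired_records(records, today=date(2024, 7, 1)))  # -> [1]
```

In production this selection would feed a scheduled deletion job, with the retention table itself documented and audited; the logic, however, is no more complicated than shown.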

Resisting the Surveillance Model

The deepest challenge for business professionals is recognizing when commercial pressures are pushing toward surveillance practices that are ethically unjustifiable. The surveillance capitalism business model is commercially powerful — it generates significant revenue precisely because it extracts value from behavioral data without compensating data subjects. Organizations that move away from it face real commercial tradeoffs.

But the commercial case for ethical data practices is also real. Consumer trust in data practices is eroding. Regulatory pressure on surveillance capitalism is increasing globally. The legal exposure created by surveillance practices — class action litigation, regulatory fines, state attorney general enforcement — is growing. And the reputational damage caused by surveillance scandals is significant and lasting.

Organizations that build commercial models on ethical data foundations — genuine consent, data minimization, purpose limitation, transparency — are building on a more durable foundation than those that depend on surveillance capitalism's behavioral extraction. The regulatory and market environment is shifting, and organizations that have internalized ethical data practices will be better positioned to adapt than those that have built business models on practices that are becoming legally and socially unacceptable.




Section 11: Platform Power and the Democratic Challenge

The Concentration of Informational Power

Surveillance capitalism has produced a concentration of informational power unlike anything in prior economic history. The five largest technology companies — Apple, Microsoft, Alphabet (Google), Amazon, and Meta — control infrastructure through which the vast majority of digital communication, commerce, and information flows. Among these, Google and Meta most purely embody the surveillance capitalism model: the product they sell is behavioral prediction derived from pervasive data collection.

This concentration matters because informational power is political power. Whoever controls what information people see, what advertisements they are exposed to, what search results they find, and what recommendations they receive has power over belief formation and decision-making at massive scale. This power operates largely without the accountability structures that democratic societies have developed for other forms of concentrated power: without the competition regulation that limits market monopoly, without the editorial accountability that governs broadcast media, without the political accountability that constrains government power.

The asymmetry is stark. Google processes approximately 8.5 billion searches per day. Each search is an opportunity to shape information access — to determine which sources appear first, which businesses are visible, which perspectives are amplified. The editorial choices embedded in search ranking are made by algorithms trained to maximize commercial metrics, without the journalistic principles or public accountability that editorial decisions in traditional media carry.

Information Ecosystems and Democratic Health

The attention economy's optimization for engagement has produced information ecosystems that prioritize emotionally provocative content over accurate, nuanced, or constructive content. Research consistently finds that misinformation spreads faster and farther than accurate information on social media platforms — a finding that reflects the engagement-maximizing properties of emotionally arousing content, not a deliberate choice by platform designers.

The connection between attention-economy information environments and democratic health is a subject of active research and significant debate. Some researchers argue that social media's effects on political polarization, trust in institutions, and democratic participation are modest and mediated by offline factors. Others argue that the manipulation of information environments at scale — through algorithmic curation, micro-targeting of political content, and amplification of outrage — has materially damaged democratic discourse.

The most direct evidence concerns documented cases where surveillance capitalism's tools were used specifically for political manipulation: Cambridge Analytica's use of psychographic profiling for electoral targeting, Russian Internet Research Agency's use of Facebook's advertising tools for disinformation campaigns in the 2016 US election, and the use of coordinated inauthentic behavior on multiple platforms to amplify specific political narratives. These cases establish that the infrastructure of surveillance capitalism is available for antidemocratic purposes — a fact that has implications for regulatory design regardless of the contested questions about more diffuse effects.

Platform Governance and the First Amendment

The regulatory challenge of surveillance capitalism intersects with fundamental questions about free speech and platform governance in democracies. The First Amendment to the US Constitution has historically been interpreted to limit government regulation of speech. Social media platforms have been treated by courts as private parties whose content moderation decisions are not government action subject to First Amendment constraints. Platforms can deplatform speakers, remove content, and curate their feeds without First Amendment constraint.

This creates an odd situation: the government is constitutionally limited in regulating speech, while private platforms with far greater reach over public discourse are legally unconstrained in how they manage it. Whether this asymmetry in the accountability framework for speech is a feature of the constitutional order or a gap that requires new regulatory thinking is one of the central contested questions of platform governance.

The EU's approach — treating very large platforms as having special obligations proportionate to their power, requiring transparency in content curation and advertising targeting, and prohibiting certain high-risk practices — represents one approach to this challenge that does not require resolving the underlying constitutional questions in the same way they are contested in US law.


Section 12: Surveillance Capitalism's Global Dimensions

Export of Surveillance Infrastructure

The surveillance capitalism business model has been exported globally, but its applications outside wealthy democracies are particularly concerning. Facebook's "Free Basics" program — offering free access to a curated version of the internet in developing countries — extended surveillance capitalism's data collection to populations who often had no other internet access and no meaningful awareness of what they were consenting to. The program has been criticized as digital colonialism: extracting behavioral data from populations in the Global South while offering them a limited, Facebook-curated version of the internet that serves Facebook's commercial interests.

More directly concerning is the commercial export of AI surveillance technology to governments with authoritarian tendencies. Clearview AI has sold facial recognition access to government clients in dozens of countries. Other companies have sold predictive policing AI, social media monitoring tools, and behavioral analytics capabilities to governments that use them for political repression rather than public safety. The ethical obligations of companies that sell surveillance capabilities cannot be limited to their impact in markets with strong rule of law; they extend to the uses those capabilities are put to by all purchasers.

Differential Privacy Exposure by Income and Geography

Surveillance capitalism's harms are not uniformly distributed. Individuals with greater digital literacy and higher incomes have more options for privacy protection: they can afford VPN services, privacy-focused devices, and subscription services that substitute for advertising-funded ones. They have the time and knowledge to adjust privacy settings, use privacy-focused browsers, and limit data collection.

Low-income individuals and communities have less ability to opt out of surveillance. They may be more dependent on free, advertising-supported services. They may have less time or capacity to manage privacy settings. And they face specific vulnerabilities: predatory financial advertising targeting communities with limited financial literacy, health product marketing that exploits anxieties about healthcare access, and political advertising that targets specific demographic vulnerabilities identified through behavioral profiling.

The geographic dimension of surveillance capitalism's harm is also significant. Populations in jurisdictions with weak privacy law — most of the Global South, significant parts of the United States — have less legal protection than EU residents against the same platform practices. The global reach of surveillance capitalism companies means they apply different standards depending on regulatory context: more privacy protective in Europe where regulation requires it, less protective elsewhere where it does not.


Section 13: Measuring and Contesting Harm

Quantifying Surveillance Capitalism's Costs

One of the challenges in regulating surveillance capitalism is quantifying its costs. The harms it produces — privacy violations, manipulation, chilling effects, discrimination — are diffuse, difficult to attribute, and often experienced as subjective rather than material. The benefits — free services, convenient advertising, personalized recommendations — are tangible and immediate.

Several approaches to quantifying surveillance capitalism's costs have been proposed. The "willingness to pay" approach estimates the value people place on privacy by examining how much they would need to be paid to accept tracking, or how much they would pay to avoid it. Studies using this approach consistently find that people value their privacy significantly — estimated values range from tens to hundreds of dollars per year for different types of tracking — suggesting that surveillance capitalism extracts value from data subjects that is not reflected in the "free services" exchange.
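The willingness-to-pay framing can be made concrete with back-of-envelope arithmetic. Every figure below is an assumption chosen for clarity of illustration, not an empirical estimate; the point is the comparison, not the numbers.

```python
# Back-of-envelope illustration of the willingness-to-pay framing.
# All figures are hypothetical assumptions, not empirical estimates.
users = 200_000_000               # hypothetical platform user base
wtp_per_user_year = 60.0          # assumed value users place on not being tracked ($/yr)
ad_revenue_per_user_year = 45.0   # assumed behavioral-ad revenue per user ($/yr)

privacy_value = users * wtp_per_user_year
ad_revenue = users * ad_revenue_per_user_year

print(f"value users place on privacy: ${privacy_value / 1e9:.1f}B / yr")
print(f"behavioral ad revenue:        ${ad_revenue / 1e9:.1f}B / yr")

# On this framing, if the aggregate value users place on privacy exceeds
# the revenue their data generates, the exchange is net-negative for
# users collectively even before counting secondary harms.
print("net for users:", "negative" if privacy_value > ad_revenue else "positive")
```

The framing has known limitations, stated willingness to pay often diverges from revealed behavior, but it makes visible a value transfer that the "free services" framing conceals.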

A second approach calculates the costs of surveillance capitalism's secondary harms: the economic costs of data breaches enabled by surveillance data accumulation, the mental health costs of attention economy optimization, the democratic costs of AI-enabled political manipulation, and the discrimination costs of behavioral profiling. These calculations involve significant estimation uncertainty but suggest aggregate costs that substantially exceed the value delivered in free services.

Legal Challenges and Their Limits

Surveillance capitalism has been challenged through multiple legal theories in multiple jurisdictions. GDPR enforcement has produced significant fines but has not fundamentally altered the business model. Consumer protection enforcement in the US has produced settlements but not structural changes. Competition law challenges — focused on whether surveillance data creates barriers to competition — are ongoing in multiple jurisdictions, with uncertain outcomes.

The limits of legal challenges reflect the limits of existing legal frameworks in addressing genuinely novel forms of economic harm. Privacy law was designed for a world in which personal data collection was limited and purposeful; it struggles with a world in which collection is pervasive and incidental. Competition law was designed for markets with identifiable products and prices; it struggles with markets in which the product is behavioral prediction and the price is attention. Democratic accountability mechanisms were designed for a world in which public information infrastructure was scarce and regulated; they struggle with a world in which information infrastructure is abundant, private, and algorithmically curated.

Consumer Backlash and Market Signals

One mechanism for constraining surveillance capitalism that does not depend on regulatory action is consumer backlash — users choosing privacy-protective alternatives over surveillance-dependent services. The growth of privacy-focused search engines (DuckDuckGo), messaging applications (Signal), and browsers (Firefox, Brave) reflects genuine consumer demand for privacy alternatives, even at some cost in functionality or convenience.

Apple's strategic positioning around privacy — featuring privacy prominently in product marketing, implementing App Tracking Transparency (ATT), and adding privacy labels to App Store listings — reflects a commercial judgment that privacy is a competitive differentiator for its premium consumer hardware products. Apple's privacy positioning creates competitive pressure on Google and Meta to improve privacy practices, or at least to improve privacy communications. Whether this competitive pressure produces genuine privacy improvements or primarily privacy marketing is a question that requires empirical evaluation.


Conclusion

Surveillance capitalism is not simply a legal problem to be solved by regulatory compliance. It is an economic system with specific architectural features — behavioral data collection, algorithmic modeling, prediction markets, and behavioral modification — that produces specific harms to privacy, autonomy, democratic participation, and equality. AI has amplified every dimension of this system, making behavioral prediction more accurate, behavioral modification more effective, and the power asymmetry between platforms and users more extreme.

Addressing surveillance capitalism requires both regulatory action — to create legal constraints that the market will not impose on itself — and business culture change — an embrace of ethical data practices as organizational values rather than compliance obligations. Neither is sufficient alone. Regulation without ethical culture produces compliance without genuine respect for the values at stake. Ethical culture without regulation allows bad actors to externalize costs that ethical actors internalize.

For business professionals, the surveillance capitalism question is ultimately a question about what kind of commercial entity you want to build and what kind of digital economy you want to contribute to. The surveillance model is available. Its commercial logic is clear. Its harms are documented. The alternative requires more work, more creativity, and acceptance of some commercial constraints. But it produces something that the surveillance model cannot: a business built on trust rather than extraction.

The global dimensions of surveillance capitalism — its export to jurisdictions with weak privacy protection, its differential impact on vulnerable populations, and its concentration of power in a small number of companies — amplify the ethical stakes. Building AI systems that respect the persons from whom data is derived, in every jurisdiction and every demographic context, is both a legal obligation and a moral responsibility that business professionals cannot responsibly outsource to compliance departments.


Next: Chapter 24 Case Study 01 — Facebook and the Surveillance Capitalism Business Model