Case Study 5.2: Cambridge Analytica and the Behavioral Modification of Democracy
Overview
This case study examines the Cambridge Analytica scandal (2016–2018) through the lens of Zuboff's surveillance capitalism and "instrumentarian power": the use of behavioral data extracted from social media platforms to influence political behavior at scale. It applies the full theoretical synthesis of Chapter 5 to a case that exposed the political implications of commercial surveillance.
Estimated Reading and Analysis Time: 60–75 minutes
Background: What Happened
In March 2018, reporting by The Guardian, The Observer, and The New York Times revealed that Cambridge Analytica, a British political consulting firm with ties to Steve Bannon and Robert Mercer, had improperly obtained Facebook data on approximately 87 million users (the majority of them in the United States) and used it to build psychological profiles for targeted political advertising.
The data had been collected by Aleksandr Kogan, a Cambridge University academic, who built a Facebook quiz app that collected not only the personality data of the approximately 270,000 users who took the quiz but also — through Facebook's then-available "friend API" — the data of all of those users' Facebook friends. This data was then sold to Cambridge Analytica in violation of Facebook's terms of service.
Cambridge Analytica claimed to have developed a psychographic targeting model — based on the "OCEAN" personality model (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) — that enabled them to identify psychologically vulnerable voters and serve them specifically tailored political messages designed to exploit those vulnerabilities.
Cambridge Analytica was linked in reporting to the Brexit Leave campaign and worked on Ted Cruz's 2016 presidential primary campaign and Donald Trump's 2016 general election campaign, among others. Its role in electoral outcomes is contested: independent researchers have found little evidence that psychographic targeting produced the claimed outsized effects. Its role in the political history of data ethics is beyond dispute.
The Data Architecture
From Facebook to Profile
The Cambridge Analytica data collection illustrates the aggregation dynamic discussed in Chapter 1 at political scale.
Facebook users took a personality quiz. The quiz generated personality data about them, collected under Kogan's ostensibly academic research purpose. The friend API provided data about their connections' likes, interests, and demographic characteristics. Facebook's own inferred categories (interests, political affiliation, consumer preferences) were layered on top.
The resulting profiles were not crude demographic categories (white voters in Pennsylvania) but psychological maps: estimated OCEAN personality scores, predicted responses to emotional appeals, estimated cultural touchstones and anxieties. The combination of direct personality assessment and inferred behavioral data produced profiles of intimate psychological detail from data that, in isolation, seemed mundane.
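To make the aggregation concrete, here is a minimal Python sketch of how three data layers might be merged into one profile. Everything in it (the user IDs, field names, and merge logic) is hypothetical and illustrative; it is not a reconstruction of Kogan's app or Facebook's systems.

```python
# Illustrative aggregation of three hypothetical data layers into one profile.
quiz_scores = {            # direct OCEAN assessment (quiz takers only)
    "user_17": {"O": 0.8, "C": 0.3, "E": 0.6, "A": 0.4, "N": 0.7},
}
friend_graph_likes = {     # friend-API layer: data reached via connections
    "user_17": ["page_a", "page_b"],
    "user_42": ["page_c"],           # user_42 never took the quiz
}
inferred_categories = {    # platform-inferred attributes layered on top
    "user_17": {"politics": "conservative-leaning"},
    "user_42": {"politics": "swing"},
}

def build_profile(uid: str) -> dict:
    """Layer every available signal about one user into a single record."""
    return {
        "ocean": quiz_scores.get(uid),            # None for non-quiz-takers
        "likes": friend_graph_likes.get(uid, []),
        "inferred": inferred_categories.get(uid, {}),
    }

all_users = set(quiz_scores) | set(friend_graph_likes)
profiles = {uid: build_profile(uid) for uid in all_users}
print(profiles["user_42"])  # a profile exists with no quiz and no direct consent
```

The structural point the sketch captures is the asymmetry at the heart of the case: roughly 270,000 users supplied quiz data, but the friend layer meant profiles could be assembled for tens of millions who supplied nothing directly.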
Behavioral Surplus as Political Commodity
Zuboff's framework describes behavioral surplus — behavioral data extracted beyond what is needed for the stated service — as the raw material of surveillance capitalism. The Cambridge Analytica case illustrates behavioral surplus deployed as a political commodity.
Facebook users had generated behavioral data (likes, shares, friend connections, engagement patterns) in the course of using a social network. This data was extracted — first by Cambridge Analytica through the API loophole, and more broadly by Facebook itself through its standard data practices — and converted into political targeting products.
The political targeting is behavioral modification at the democratic scale: not selling a product to a consumer, but attempting to change a voter's behavior (whether they vote, how they vote, whether they are mobilized or demobilized) through messages designed to exploit psychological vulnerabilities identified through surveillance.
Instrumentarian Power in the Electoral Context
Zuboff identifies instrumentarian power as the exercise of behavioral modification through the design of informational environments. The Cambridge Analytica model is a particularly clear illustration.
The model works as follows (sketched in code below):
- Surveillance: Collect behavioral data about voters: their Facebook activity, their political associations, their consumption patterns, their personality traits (inferred or directly assessed).
- Prediction: Use this data to predict each voter's psychological type, emotional vulnerabilities, and likely response to different message framings.
- Targeting: Deliver messages designed not to inform or persuade through evidence but to trigger emotional responses (fear, disgust, resentment, hope) specifically calibrated to the recipient's predicted psychological vulnerabilities.
- Behavioral modification: The targeted voter is not given different facts from other voters; they are given messages engineered to produce specific behavioral outcomes (turning out to vote, staying home, changing candidate preference) through psychological triggers rather than rational deliberation.
This is instrumentarian power operating on democracy: not coercing voters, not legislating who can vote, but engineering the informational environment in which democratic decisions are made.
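The four steps above can be rendered as a small pipeline. The following Python sketch is purely illustrative: the trait thresholds, message bank, and function names are invented for exposition and make no claim about Cambridge Analytica's actual models.

```python
from dataclasses import dataclass

TRAITS = ["O", "C", "E", "A", "N"]  # the OCEAN personality dimensions

@dataclass
class VoterProfile:
    voter_id: str
    likes: list[str]          # step 1 (surveillance): observed behavior
    ocean: dict[str, float]   # step 2 (prediction): trait scores in [0, 1]

def predict_ocean(likes: list[str]) -> dict[str, float]:
    """Toy stand-in for a trained likes-to-personality model.
    A real system would fit a model on labeled quiz data; this
    just returns a flat prior so the sketch runs end to end."""
    return {t: 0.5 for t in TRAITS}

# Hypothetical message bank keyed by the emotional response it targets.
MESSAGES = {
    "fear": "They are coming for your way of life.",
    "hope": "Together we can take the country back.",
    "default": "Compare the candidates' policy platforms.",
}

def select_message(profile: VoterProfile) -> str:
    """Step 3 (targeting): route the message calibrated to the
    voter's predicted vulnerability. Thresholds are invented."""
    if profile.ocean["N"] > 0.7:   # high neuroticism -> fear appeal
        return MESSAGES["fear"]
    if profile.ocean["E"] > 0.7:   # high extraversion -> hope appeal
        return MESSAGES["hope"]
    return MESSAGES["default"]

# Step 4 (behavioral modification) is the downstream intent: the selected
# message is delivered and the behavioral outcome (turnout, preference)
# is measured against the prediction, closing the loop.
likes = ["page_a", "page_b"]
voter = VoterProfile("v-001", likes, predict_ocean(likes))
print(select_message(voter))
```

What the sketch makes visible is the architecture of the objection: at no point does the pipeline evaluate whether a message is true or informative; every branch optimizes for a predicted emotional response.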
Applying the Full Chapter 5 Synthesis
Foucault: Power/Knowledge
The Cambridge Analytica case illustrates the power/knowledge spiral at the political scale. Facebook's power to collect and retain behavioral data from hundreds of millions of users generated a knowledge base of extraordinary intimacy — personality traits, emotional vulnerabilities, political associations — that was then used to influence political power. The exercise of commercial power (collecting data to sell advertising) produced political power (the capacity to influence elections). The two forms of power are not separate; they are part of the same spiral.
Giddens: Ambivalence
Giddens would note that the same data infrastructure that Cambridge Analytica weaponized is used for relatively benign purposes (personalized advertising, recommendation engines, fraud detection) and for purposes with genuine public benefit (public health surveillance, academic research on political behavior, counterterrorism). The ambivalence is real: the same capabilities serve both ends. But the Cambridge Analytica case demonstrates that ambivalence is not stability — the same infrastructure can be weaponized when the political incentives to do so are strong enough.
Lyon: Social Sorting
Cambridge Analytica's targeting was social sorting: voters were classified by psychological type and political reliability, and different messages were directed at different categories. The sorting was not based on demographic categories (age, race, income) but on behavioral profiles — a more granular and, in some ways, more intimate form of classification.
The social sorting harm here is not differential access to resources (loans, insurance) but differential quality of democratic information. Voters in identified categories received emotionally manipulative messages rather than substantive political communication, while voters in other categories received different messages. The sorting produced not differential material treatment but differential epistemic environments — and therefore, potentially, differential bases for democratic decision-making.
Zuboff: Surveillance Capitalism
This is Zuboff's case study par excellence. Cambridge Analytica demonstrates precisely what Zuboff argues is the logical endpoint of surveillance capitalism's expansion: behavioral data that began as commercial commodity (advertising targeting) became political commodity (electoral manipulation). The economic logic drove behavioral data extraction; the political logic weaponized it.
Zuboff's warning — that surveillance capitalism's production of behavioral modification capacity poses a systemic threat to democratic governance — is illustrated concretely by Cambridge Analytica.
Browne/Feminist: The Intersection
The Cambridge Analytica targeting model, as documented by investigative reporting, specifically targeted white voters with messages designed to mobilize racial resentment. The racializing dimension of the targeting is not incidental: the personality profiles and predicted vulnerabilities were deployed in racially specific ways.
Feminist surveillance scholars would additionally note that the targeting of women voters through specifically gendered emotional appeals — on reproductive rights, on economic security, on safety concerns — represents the gendered gaze applied to political mobilization.
Facebook's Role and Regulatory Response
Facebook's response to the Cambridge Analytica revelations was significant. The company:
- Paid a $5 billion FTC fine in 2019 — the largest privacy fine in FTC history at the time — for misrepresenting its privacy practices
- Closed the friend API that had enabled Kogan's data collection
- Modified its advertising targeting options to limit the use of certain sensitive categories
- Created an oversight board with limited authority over content decisions
The FTC fine, while large in absolute terms, represented approximately three weeks of Facebook's revenue at the time. Critics argued that the fine was insufficient to create genuine deterrence and that structural changes to Facebook's business model — rather than procedural modifications — were required to address the underlying problem.
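As a back-of-envelope check on the revenue comparison (assuming Facebook's reported 2019 full-year revenue of roughly $70.7 billion; the exact ratio depends on which reporting period one uses):

```python
# Express the FTC fine in weeks of revenue. The revenue figure is
# Facebook's reported 2019 full-year total; using 2018 revenue
# (~$55.8B) instead would yield a somewhat larger number of weeks.
fine = 5.0e9
annual_revenue = 70.7e9
weeks_of_revenue = fine / (annual_revenue / 52)
print(f"{weeks_of_revenue:.1f} weeks")  # -> 3.7 weeks
```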
In Europe, Cambridge Analytica's activities prompted renewed interest in enforcement of the General Data Protection Regulation (GDPR), which had come into force in 2018. The UK's Information Commissioner's Office issued a £500,000 fine to Facebook (the maximum allowed under then-current law) and found that Cambridge Analytica had acted unlawfully.
Discussion Questions
- Democratic Harm: Zuboff argues that surveillance capitalism poses a systemic threat to democratic governance. The Cambridge Analytica case seems to support this claim. But critics note that political advertising, including emotionally manipulative advertising, has existed for centuries. What makes behavioral-data-enabled targeting different in kind from previous forms of political persuasion?
- The Consent Question: Facebook users "consented" to their data being used in accordance with Facebook's terms of service. The terms of service did not explicitly authorize Cambridge Analytica's collection through the friend API. But users also could not reasonably have anticipated how their data would be used. Apply the "consent as fiction" framework from Chapter 1 to the specific question of users' responsibility for what happened.
- The Kogan Question: Aleksandr Kogan built the quiz app that collected the data. He sold the data to Cambridge Analytica in violation of Facebook's terms. His data collection was conducted under the banner of academic research. What is his moral responsibility in this case? How does it compare to Cambridge Analytica's? To Facebook's?
- Foucault's Spiral: The chapter argues that the Cambridge Analytica case illustrates the power/knowledge spiral producing political power from commercial power. Does this analysis mean that surveillance capitalism inevitably threatens democracy? Or can the commercial data infrastructure exist without the political application?
- The Effectiveness Question: Independent research found little evidence that Cambridge Analytica's psychographic targeting produced its claimed effects. Does this affect your assessment of the ethical harm? Is an attempt to manipulate voters through behavioral data less harmful if the attempt was largely ineffective?
- Jordan's Data: Jordan uses Facebook and has taken online quizzes, so Jordan's data may well have been swept up in the Cambridge Analytica collection. Does knowing this change how Jordan should think about their social media use? What, practically, could Jordan do differently, and would it matter?
- Regulatory Response: The FTC's $5 billion fine and the GDPR enforcement represent two different regulatory approaches. Which is more likely to change companies' behavior? What would a regulatory framework need to look like to actually address the structural problem Zuboff identifies?