Case Study 34-1: Cambridge Analytica and the Behavioral Futures Market in Politics
When Prediction Products Target Democracy
Background
In March 2018, reporting by The Guardian and The New York Times revealed that Cambridge Analytica, a UK-based political consulting firm, had obtained personal data on approximately 87 million Facebook users without their knowledge or consent. The firm had used that data to build psychological profiles deployed in political advertising for the 2016 Trump campaign and the Brexit "Leave" campaign.
The Cambridge Analytica scandal was a watershed moment in public understanding of surveillance capitalism. It demonstrated concretely — with specific numbers, named companies, and documented political outcomes — what Zuboff's analysis described abstractly: that behavioral data collected under the guise of social networking was being used to produce prediction products sold in a behavioral futures market, with political behavior as the targeted outcome.
The Data Collection: How It Happened
Cambridge Analytica did not hack Facebook. It obtained the data through a mechanism Facebook had explicitly permitted.
In 2013, a Cambridge University researcher named Aleksandr Kogan developed a personality quiz application for Facebook called "thisisyourdigitallife." Facebook's terms at the time allowed third-party apps to collect not just data about the app user but also data about the user's friends — without those friends' knowledge or consent. Approximately 270,000 users took the quiz. Their data, plus the data of their Facebook friends, produced a dataset covering approximately 87 million people.
Kogan sold this dataset to Cambridge Analytica, which used it to build psychological profiles of American voters.
The Psychological Modeling: OCEAN and Psychographic Targeting
Cambridge Analytica's pitch rested on the OCEAN (or "Big Five") personality model — a well-established framework in personality psychology that characterizes individuals on five dimensions:
- Openness (to experience)
- Conscientiousness
- Extraversion
- Agreeableness
- Neuroticism
The key research underlying Cambridge Analytica's approach came from work by Michal Kosinski and David Stillwell of Cambridge University and Thore Graepel of Microsoft Research, published in PNAS in 2013. The paper, "Private Traits and Attributes Are Predictable from Digital Records of Human Behavior," showed that Facebook "likes" — the simple act of liking a page or post — were remarkably predictive of OCEAN personality scores, as well as of more sensitive attributes: sexual orientation, race, political views, religion, and intelligence.
The predictions required surprisingly few likes. In a 2015 follow-up study by Youyou, Kosinski, and Stillwell, a model given just 10 likes judged a person's personality more accurately than a work colleague could; with 150 likes it outperformed a family member; with 300 likes, a spouse.
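The pipeline reported in the 2013 paper was to reduce the sparse user-by-like matrix with singular value decomposition (SVD) and then fit linear models from the reduced components to each trait. The sketch below reproduces that pipeline on purely synthetic data; the matrix sizes, the invented "openness" scores, and the component count are illustrative assumptions, not the paper's actual data or code.

```python
# Sketch of the SVD-plus-regression pipeline described in Kosinski et al.
# (2013), run on synthetic data. All sizes, scores, and weights here are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_likes = 200, 50
# Binary user-by-like matrix: entry (u, i) = 1 if user u liked page i.
likes = (rng.random((n_users, n_likes)) < 0.15).astype(float)

# Synthetic ground truth: an "openness" score that is (noisily) linear
# in the likes, so the pipeline has a real signal to recover.
true_weights = rng.normal(size=n_likes)
openness = likes @ true_weights + rng.normal(scale=0.1, size=n_users)

# Reduce the like matrix with SVD (the paper reduced its like matrix to
# 100 components; here, 50 likes to 20 components).
U, S, Vt = np.linalg.svd(likes, full_matrices=False)
k = 20
components = U[:, :k] * S[:k]  # each user's scores on the top-k components

# Linear regression from components to the trait.
coef, *_ = np.linalg.lstsq(components, openness, rcond=None)
predicted = components @ coef

corr = np.corrcoef(predicted, openness)[0, 1]
print(f"correlation between predicted and actual trait: {corr:.2f}")
```

The correlation printed here is high only because the synthetic trait is constructed to be linear in the likes; real personality signals are far noisier.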
Cambridge Analytica's claimed application: match individual voters to personality profiles, then target them with political messaging specifically designed to resonate with their psychological profile. Neurotic voters might be targeted with fear-based messaging; open voters with hopeful, change-oriented messaging; conscientious voters with rule-of-law messaging.
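The claimed application amounts to a decision rule from trait scores to message framings, which can be made concrete as a toy sketch. The thresholds, categories, and the `pick_message_variant` function below are entirely hypothetical, invented for illustration; they do not describe Cambridge Analytica's actual (and largely undisclosed) targeting rules.

```python
# Purely hypothetical illustration of the *claimed* psychographic targeting
# logic. Thresholds and message categories are invented for this sketch.
def pick_message_variant(profile: dict[str, float]) -> str:
    """Map OCEAN scores (each in 0.0-1.0) to a message framing."""
    if profile["neuroticism"] >= 0.7:
        return "fear-based: emphasize threats and security"
    if profile["openness"] >= 0.7:
        return "change-oriented: emphasize hope and novelty"
    if profile["conscientiousness"] >= 0.7:
        return "rule-of-law: emphasize order and stability"
    return "generic: default campaign message"

voter = {"openness": 0.8, "conscientiousness": 0.4,
         "extraversion": 0.5, "agreeableness": 0.6, "neuroticism": 0.3}
print(pick_message_variant(voter))
```

Even granting accurate trait scores, a rule like this only changes which message a voter sees; whether it changes behavior is the contested question discussed below.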
The Political Application and Its Disputed Effectiveness
Cambridge Analytica CEO Alexander Nix made sweeping claims about the company's capabilities: "Today in the United States we have somewhere close to four or five thousand data points on every individual... We have profiled the personality of every adult in the United States of America — 220 million people."
The company claimed that psychographic targeting was far more effective than conventional demographic targeting and that it had contributed to Trump's victory in 2016 and to Brexit.
These claims have been extensively contested. Academic researchers who have studied psychographic targeting generally find:
- Some evidence of modest effectiveness — personality-matched messaging does slightly outperform non-matched messaging in laboratory conditions
- Significant skepticism about the scale of claimed effects — real-world political outcomes involve many factors; attributing specific vote margins to psychographic targeting is methodologically difficult
- Evidence of exaggeration — internal Cambridge Analytica documents revealed in later proceedings suggested the company oversold its capabilities to clients
- The "prediction product" problem — even if the personality modeling was accurate, the behavioral modification (persuading specific voters to vote differently) faces limits that prediction products always face: people are not simple functions of their personality profiles
The effectiveness question matters for policy: if psychographic political targeting genuinely works at the scale claimed, it represents a serious threat to democratic political process. If it is largely hype — expensive data processing producing modest incremental effects — the threat is real but more limited.
Facebook's Knowledge and the Regulatory Response
Facebook knew that Kogan's app was collecting friends' data. It permitted this under its platform policies and continued to permit similar data collection by third-party developers even as concerns were raised internally. The company was aware by 2015 that the Kogan data had been transferred to Cambridge Analytica — in violation of Facebook's terms of service — but took no public action and made no effort to notify affected users.
The regulatory consequences:
- FTC settlement (2019): Facebook paid a $5 billion penalty — the largest FTC fine in history — for violating a 2012 consent decree requiring better privacy protection. Critics noted that $5 billion was approximately one month of Facebook's revenue at the time; some FTC commissioners dissented, arguing the penalty was insufficient and that Zuckerberg personally should have faced accountability.
- SEC settlement: Facebook paid $100 million to settle SEC charges that it failed to adequately disclose risks from misuse of user data.
- GDPR investigations: The Irish Data Protection Commission and other EU regulators investigated Facebook's data practices in connection with Cambridge Analytica.
- Congressional hearings: Zuckerberg testified before the Senate and House, generating extensive publicity and revealing the depth of congressional unfamiliarity with how Facebook actually worked.
What Cambridge Analytica Demonstrates About Surveillance Capitalism's Political Risks
The Cambridge Analytica case demonstrates concretely several of the political risks Zuboff identifies:
Behavioral futures markets have no sector boundaries. The same prediction products sold to consumer advertisers can be sold to political campaigns. The behavioral futures market does not distinguish between selling someone shoes and selling someone a political candidate. The commercial infrastructure of surveillance capitalism is available to political actors.
The consent gap creates structural political vulnerability. The 87 million people whose data was used by Cambridge Analytica did not consent. They did not know it was happening. They had no opportunity to object or opt out. This was not an abuse of the system; it was the system operating as designed, with data collected in one context (social networking) applied in another (political targeting).
Regulatory response can be too slow and too weak. Facebook knew about the data misuse in 2015; the public learned in 2018; the FTC settlement came in 2019. During that interval, the data was used in major political campaigns. The timeline illustrates the surveillance-regulation gap that this textbook has documented throughout.
Corporate accountability requires more than fines. A $5 billion fine that costs approximately four weeks of revenue does not change the calculus for a company deciding whether to permit data practices that generate behavioral surplus. Meaningful accountability requires either penalties large enough to genuinely threaten the business model or structural changes to the model itself.
Analysis Questions
1. Facebook permitted the data collection under its terms of service. When that collection was misused, was Facebook responsible? Who bears moral and legal responsibility — Kogan, Cambridge Analytica, Facebook, or the 270,000 users who took the quiz and whose friends' data was thereby exposed?
2. Cambridge Analytica's effectiveness claims have been contested. Does the effectiveness question matter for the ethical evaluation? Is non-consensual behavioral profiling for political purposes wrong regardless of whether it works?
3. Zuboff argues that the behavioral futures market "does not ask permission." The Cambridge Analytica case illustrates this. Does this mean that the solution must be structural (banning behavioral futures markets in politics) rather than behavioral (consumers being more careful about what they "like")?
4. The $5 billion FTC fine was the largest in history and represented approximately one month of Facebook's revenue. Some commissioners proposed that Zuckerberg personally face legal accountability. What level and type of accountability would be sufficient to change corporate behavior? How should we think about deterrence for companies of this scale?
5. The Cambridge Analytica scandal focused public attention on political advertising. Less attention was paid to the much larger behavioral targeting apparatus applied to commercial advertising. Why might the political application be more alarming than the commercial? Or is the distinction between political and commercial targeting less clear than it appears?
This case study connects to Chapter 34 Section 34.3 (behavior modification capability), Section 34.8 (regulation debates), and backward to Chapter 13 (social media) and Chapter 14 (behavioral targeting). It connects forward to Chapter 35 (facial recognition and political databases) and Chapter 38 (AI and predictive systems).