Case Study 16.2: "Algorithmic Manipulation"

Cambridge Analytica, Psychographic Targeting, and Democratic Ethics


Overview

In March 2018, The Guardian, The Observer, and The New York Times simultaneously published investigative reports on what would become one of the defining technology scandals of the decade. Cambridge Analytica — a British political data firm — had harvested the personal Facebook data of approximately 87 million users without their consent and used it to build psychographic profiles for political micro-targeting. The firm had deployed these profiles in the 2016 US presidential election, the Brexit referendum, and political campaigns in at least a dozen other countries.

The Cambridge Analytica scandal forced a global reckoning with AI-powered political advertising and its implications for democratic integrity. It raised questions that remain unresolved: When AI uses behavioral data to predict and target individuals' psychological vulnerabilities for political persuasion, is that legitimate political campaigning or manipulation? Does the context of democracy make AI-powered persuasion more or less ethically troubling than commercial advertising? And what, if anything, can regulation do to constrain it?


How Cambridge Analytica Worked

Cambridge Analytica's core capability was psychographic profiling — creating profiles of individuals based on inferred personality characteristics rather than (or in addition to) demographic characteristics. The theoretical foundation was a model of personality called the Big Five or OCEAN model, which describes personality along five dimensions: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Cambridge Analytica's approach was to use Facebook behavioral data to infer individuals' scores on these dimensions, and then to tailor political messaging specifically designed to appeal to or exploit particular personality profiles.

The data that enabled this profiling was obtained through an academic researcher named Aleksandr Kogan, who created a Facebook application that collected personality survey data from users who consented to share their information with his research. Under Facebook's data policies at the time (subsequently changed), applications could also collect behavioral data from the Facebook friends of surveyed users — without those friends' knowledge or consent. Kogan's application ultimately collected data on approximately 87 million Facebook users, the vast majority of whom had never used the app or consented to data collection.

Kogan shared this data with Cambridge Analytica in violation of his agreement with Facebook, which required that data be used only for academic research. Cambridge Analytica used the data to train predictive models that could infer personality characteristics for individuals in the general population from their Facebook activity, even without having their direct survey responses. The result was a model that could estimate OCEAN personality scores for any Facebook user with sufficient behavioral history — and Facebook's platform provided behavioral histories for essentially the entire US adult population.
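The two-stage logic described above — learn trait associations from survey respondents, then infer traits for people who never took the survey — can be sketched in miniature. This is an illustrative toy, not the firm's actual method or data: all names, scores, and the simple averaging technique are invented for exposition.

```python
# Toy sketch of survey-to-population trait inference (invented data and
# method, for illustration only). Idea: a page's "trait signature" is the
# mean OCEAN score of survey respondents who liked it; a non-respondent's
# predicted score is the mean signature of the pages they liked.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Survey respondents: (pages liked, self-reported OCEAN scores, 0-100)
respondents = [
    ({"page_a", "page_b"}, {"openness": 80, "conscientiousness": 40,
                            "extraversion": 60, "agreeableness": 50,
                            "neuroticism": 30}),
    ({"page_b", "page_c"}, {"openness": 30, "conscientiousness": 70,
                            "extraversion": 40, "agreeableness": 60,
                            "neuroticism": 75}),
]

def page_signatures(respondents):
    """Mean trait scores of the respondents who liked each page."""
    sums, counts = {}, {}
    for likes, scores in respondents:
        for page in likes:
            counts[page] = counts.get(page, 0) + 1
            sig = sums.setdefault(page, {t: 0.0 for t in TRAITS})
            for t in TRAITS:
                sig[t] += scores[t]
    return {page: {t: sums[page][t] / counts[page] for t in TRAITS}
            for page in sums}

def predict(user_likes, signatures):
    """Predict OCEAN scores for someone who never took the survey."""
    known = [signatures[p] for p in user_likes if p in signatures]
    if not known:
        return None  # no behavioral overlap with the training set
    return {t: sum(sig[t] for sig in known) / len(known) for t in TRAITS}

signatures = page_signatures(respondents)
# A "friend" who never used the app, profiled from page-likes alone:
profile = predict({"page_a", "page_c"}, signatures)
print(profile)
```

The point of the sketch is structural: once the signatures exist, the model needs only behavioral traces, not consent or survey participation, to assign a psychological profile to anyone with sufficient activity on the platform.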

With these personality models in hand, Cambridge Analytica offered political campaigns the capability to deliver different messages to individuals based on their personality profiles. A voter who scored high in Neuroticism (anxiety-prone) might receive ads emphasizing threats to be feared; a voter who scored high in Conscientiousness might receive ads emphasizing duty, rules, and responsibility; a voter who scored high in Openness might receive ads emphasizing change and novelty. The messaging was not just demographically segmented; it was psychographically optimized to exploit each individual's particular psychological makeup.
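The delivery step — matching message framing to the dominant trait in a profile — is equally simple in outline. Again a hedged sketch: the ad copy, thresholds, and selection rule below are invented for illustration.

```python
# Illustrative trait-keyed message selection (invented ad copy and rule).
# One message variant per OCEAN trait, chosen by the highest-scoring trait.

AD_VARIANTS = {
    "neuroticism": "Threat-framed message: what you stand to lose.",
    "conscientiousness": "Duty-framed message: rules and responsibility.",
    "openness": "Change-framed message: novelty and reform.",
    "extraversion": "Social-framed message: join the movement.",
    "agreeableness": "Harmony-framed message: unity and community.",
}

def select_ad(profile):
    """Pick the variant keyed to the profile's highest-scoring trait."""
    dominant = max(profile, key=profile.get)
    return dominant, AD_VARIANTS[dominant]

# An anxiety-prone profile receives the threat-framed variant:
profile = {"openness": 55.0, "conscientiousness": 55.0,
           "extraversion": 50.0, "agreeableness": 55.0,
           "neuroticism": 72.5}
chosen = select_ad(profile)
print(chosen)
```

The design choice worth noticing is that nothing in this pipeline inspects the message's truthfulness or the voter's interests; the only optimization target is susceptibility.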


The 2016 Campaign: What Cambridge Analytica Did

Cambridge Analytica claimed significant influence on the outcome of the 2016 US presidential election. The Trump campaign paid Cambridge Analytica approximately $6 million for services that included voter targeting, opposition research, and get-out-the-vote data analysis. CEO Alexander Nix and other company executives made public claims that Cambridge Analytica's psychographic targeting was responsible for Trump's unexpected victory in key swing states.

Independent analysts and political scientists have been skeptical of these claims. Psychographic targeting's effectiveness is empirically uncertain — the academic literature on the OCEAN model's predictive value for political behavior is limited and contested, and several subsequent studies have cast doubt on the claim that Cambridge Analytica's methods were as precise or effective as the company claimed. Cambridge Analytica may have been as much a marketing firm overselling unproven capabilities as a genuinely powerful AI-based manipulation engine.

But the debate about Cambridge Analytica's effectiveness is in some ways secondary to the ethical questions its methods raise — questions that are not resolved by uncertainty about the methods' efficacy. If Cambridge Analytica's psychographic targeting worked as claimed, it would represent the use of AI to exploit individuals' psychological vulnerabilities to shift their political behavior, without their knowledge or consent. If it did not work as claimed, it would represent the deliberate misrepresentation of AI capabilities to political clients who paid millions of dollars for illusory influence. Both possibilities raise serious ethical concerns.

The Brexit Connection

Cambridge Analytica was also engaged by political entities involved in the 2016 Brexit referendum in the United Kingdom. The Leave.EU campaign — associated with Nigel Farage's UKIP party — acknowledged working with Cambridge Analytica and using its data services for the Leave campaign, though the nature and extent of the relationship was disputed.

The Brexit connection illustrated the global reach of AI-powered political manipulation: the same data, the same models, and the same micro-targeting techniques were being deployed simultaneously in multiple democratic contests in different countries, coordinated by a single firm with access to data on hundreds of millions of individuals. The cross-border nature of the operation challenged the jurisdiction of any single national regulator and revealed the inadequacy of national electoral law frameworks for the AI era.

The UK's Information Commissioner's Office (ICO) investigated Cambridge Analytica extensively, finding that it had breached UK data protection law in multiple respects. The investigation resulted in a £500,000 fine against Facebook (the maximum possible under law at the time) and a substantial enforcement report detailing the data protection failures that had enabled Cambridge Analytica's data harvesting. Cambridge Analytica itself filed for bankruptcy in May 2018 before enforcement action against it was completed.


The Facebook Reckoning

The Cambridge Analytica scandal had consequences for Facebook that extended far beyond the ICO's fine. Facebook CEO Mark Zuckerberg testified before the US Congress and the European Parliament in April 2018, in hearings that revealed the depth of lawmakers' ignorance about how social media platforms worked and the depth of public anger about data privacy and manipulation. The hearings were a public spectacle that generated relatively limited concrete legislative action in the United States, though GDPR had already been finalized in the EU and would come into force in May 2018.

Facebook settled with the FTC in 2019 for $5 billion — the largest privacy-related fine in US history at the time — for violations of a 2012 consent decree governing its privacy practices. The Cambridge Analytica data harvesting was central to the FTC's finding that Facebook had failed to enforce its own data policies and had exposed users to data collection they had not consented to.

The settlement imposed significant requirements on Facebook's data governance practices and created a new independent privacy committee within Facebook's board structure. Critics argued that $5 billion, while record-setting in absolute terms, was trivially small for a company with Facebook's revenues and did not impose any meaningful financial discipline on its behavior.

The Cambridge Analytica scandal also accelerated regulatory attention to the intersection of AI, behavioral data, and political advertising. The EU's Digital Services Act (DSA) and several national electoral laws have adopted restrictions on political micro-targeting. The UK's Electoral Commission and ICO have issued guidance on political advertising and data protection. And several US states have enacted disclosure requirements for political digital advertising, though comprehensive federal regulation has not passed.


The Ethics of Psychographic Political Targeting

The Cambridge Analytica case raises ethical questions that go beyond data privacy violations and into fundamental questions about the ethics of AI-powered persuasion in democratic contexts.

The manipulation question. Conventional political advertising already relies on persuasion techniques of its own: emotional appeals, selective emphasis on favorable facts, messaging tailored to different audience segments. The question is whether psychographic targeting that specifically exploits identified psychological vulnerabilities crosses a line from legitimate persuasion into manipulation. The philosophical literature distinguishes between persuasion (which engages the recipient's rational agency) and manipulation (which bypasses or exploits it). Targeting individuals based on their specific psychological vulnerabilities — their anxiety, their conscientiousness, their tribal identity — seems to be oriented toward exploiting rational agency rather than engaging it.

The consent question. Political persuasion is generally not subject to consent requirements. Campaigns can run TV ads that viewers have not chosen to see, mail flyers to homes that have not requested them, and knock on doors of voters who have not expressed any interest in being visited. But psychographic micro-targeting based on data collected without users' consent is a different matter: it uses personal information to construct individualized psychological profiles specifically for the purpose of manipulation, and the individuals affected have consented neither to the data collection nor to its use for political influence.

The information asymmetry question. Conventional political advertising is visible to all: a TV ad seen by a voter is seen by other voters and journalists and political opponents, who can scrutinize its accuracy and context. Political micro-targeting creates information asymmetry: different voters receive different messages, tailored to what each is most susceptible to, and no voter sees what other voters are being told. This asymmetry makes the content of political micro-targeting difficult to scrutinize, undermining the public accountability that democratic discourse requires.

The democratic respect question. From a democratic theory perspective, the Cambridge Analytica approach treats voters not as citizens with views to be respected and engaged but as psychological profiles to be exploited. The goal is not to persuade through superior argument but to identify and press the psychological buttons most likely to produce the desired behavior. This approach treats the outcome of an election as an optimization problem to be solved by a data firm, rather than as the collective judgment of an informed citizenry. That is a fundamentally antidemocratic orientation, regardless of which side it serves.


Regulatory Responses and Their Limitations

The regulatory response to Cambridge Analytica-style psychographic political targeting has been fragmented and largely inadequate.

In the United States, the Honest Ads Act — proposed legislation that would extend broadcast political advertising disclosure requirements to digital advertising — has been introduced in Congress repeatedly without passing. Federal Election Commission guidance on digital political advertising disclosure remains less prescriptive than the broadcast analog. Several states have enacted disclosure requirements for online political advertising, but enforcement has been limited.

In the European Union, the Regulation on the Transparency and Targeting of Political Advertising (Regulation (EU) 2024/900, adopted in March 2024) prohibits targeting based on sensitive personal data (including inferred personality characteristics) for political advertising purposes and requires disclosure of the targeting parameters used for any political ad. It represents the most comprehensive regulatory response to AI political targeting to date.

The UK's Electoral Commission, the House of Commons Digital, Culture, Media and Sport (DCMS) Committee, and the ICO have each taken positions on political data and advertising practices, contributing to a regulatory environment that is significantly more attentive to these issues than the US federal government but still developing.

The fundamental challenge for regulation is that many of the techniques Cambridge Analytica used are not qualitatively different from mainstream commercial advertising practices. Behavioral data collection, personality prediction, and psychographic segmentation are standard offerings from major ad tech firms. The Cambridge Analytica scandal was distinctive in the scale of its data violations and the political context of its targeting, but its underlying techniques — data aggregation, predictive modeling, psychological segmentation — are routine commercial advertising capabilities. Regulating political micro-targeting without regulating commercial micro-targeting is a narrow and potentially unstable approach.


Lessons for Business Ethics

The Cambridge Analytica case has several important lessons for business professionals working at the intersection of AI and marketing.

"Legal" is not "ethical." Cambridge Analytica operated for years in spaces where its practices, while ethically problematic, were arguably legal under applicable law. The firm's demise came from data violations — Kogan's breach of his Facebook agreement, and the use of that data for non-academic purposes — but the underlying psychographic targeting methodology itself was not illegal. The law is an insufficient guide to ethical practice in AI marketing.

AI capabilities create ethical obligations. The fact that AI makes psychographic profiling and individualized manipulation possible does not mean those capabilities should be used. Organizations that develop or deploy capabilities they cannot defend on ethical grounds are taking risks — reputational, legal, and institutional — that careful organizations should not take.

Power asymmetries demand constraint. When AI marketing capabilities are deployed in contexts of fundamental power asymmetry — where one party (the marketer) has detailed psychological intelligence about the other party (the consumer or voter) who has no comparable intelligence about the marketer — the case for ethical constraints is strongest. The greater the power asymmetry, the more important are consent, transparency, and limits on exploitation.

The democratic context is ethically distinctive. Political advertising using AI raises ethical concerns that go beyond commercial advertising, because political persuasion is connected to the exercise of fundamental democratic rights. Organizations that deploy AI-powered political targeting — political parties, campaign consultants, platform vendors — bear particular responsibilities to the democratic systems in which they operate.


Reflection Questions

  1. Is psychographic political targeting fundamentally different from conventional demographic political targeting (targeting ads to voters over 65, or to suburban women, or to rural men), and if so, what makes it different?

  2. Facebook's platform enabled Cambridge Analytica's data harvesting through policies that allowed third-party apps to collect friends' data without those friends' consent. What organizational responsibility does Facebook bear for the political manipulation that resulted?

  3. Cambridge Analytica's effectiveness is empirically uncertain. Does it matter whether the manipulation actually worked? Is attempted manipulation of psychological vulnerabilities ethically objectionable regardless of success?

  4. The EU's proposed political advertising regulation would prohibit targeting based on inferred personality characteristics. Would this regulation, if enacted, effectively prevent Cambridge Analytica-style political targeting? What would remain permissible under such a regulation?

  5. Many of the techniques Cambridge Analytica used — behavioral data collection, predictive modeling, psychological segmentation — are standard commercial advertising practices. Should commercial psychographic targeting be regulated differently from political psychographic targeting? On what grounds?