Chapter 29: AI and Democratic Processes

Opening: The Information Environment That Changed Everything

In 2016, two of the most consequential political events in recent Western history unfolded within months of each other: the Brexit referendum in June and the US presidential election in November. Both outcomes surprised pollsters and experts. Both were subsequently analyzed through the lens of AI-enabled information manipulation: algorithmic content amplification on Facebook, Twitter, and YouTube; micro-targeted political advertising using psychographic profiles; coordinated networks of automated accounts (bots) producing and distributing political content; and the early deployment of disinformation operations facilitated by AI tools.

Whether these AI-enabled interventions "decided" either outcome remains genuinely contested and is probably, in the technical sense, unanswerable: political outcomes have many causes, margins of victory are sensitive to countless variables, and isolating the contribution of any single factor — let alone a diffuse factor like "algorithmic content environment" — is methodologically forbidding. But the claim that these interventions shaped the information environment in which citizens made decisions is not seriously contested. Facebook's own internal research, disclosed by whistleblower Frances Haugen in 2021, showed that its algorithms systematically amplified outrage-inducing content because that content drove higher engagement metrics, regardless of its accuracy or contribution to civic discourse. The platform's own researchers had identified the problem and proposed interventions; those interventions were largely rejected because they would have reduced engagement — and engagement was the business model.

This chapter examines AI's relationship to democracy — not just the threat narrative, but the full picture. AI enables disinformation at scale; AI also enables detection of disinformation. AI algorithms amplify political division; AI tools also enable more sophisticated civic participation. AI is used to manipulate voters; AI is also used to translate policy documents into 50 languages so more citizens can participate in democratic deliberation. Understanding both dimensions — the threats and the opportunities — is necessary for anyone who will make decisions about AI deployment in contexts that touch democratic institutions.

The stakes are not abstract. Democracy depends on citizens making choices based on something reasonably approximating reality. When the information environment becomes sufficiently corrupted by AI-generated falsehood, coordinated manipulation, or algorithmic amplification of the most inflammatory content irrespective of its truth, democracy does not function as a mechanism for collective self-governance — it becomes a mechanism for whoever most effectively controls the information environment to determine outcomes. That is a prospect serious enough to demand serious analysis.


Section 1: AI and Information Ecosystems

How Algorithmic Curation Shapes What Citizens See

The most pervasive and poorly understood form of AI's influence on democratic processes is not the dramatic deepfake or the coordinated disinformation campaign — it is the mundane, continuous operation of recommendation and ranking algorithms that determine what billions of people see when they open their social media feeds, their news aggregators, their video platforms.

Facebook's News Feed, YouTube's recommendation engine, Twitter/X's timeline algorithm, TikTok's For You page, and Google News all operate on the same fundamental principle: they rank and surface content based on predictions about what will maximize engagement — the time users spend on the platform and the interactions (likes, shares, comments, clicks) they generate. These predictions are produced by machine learning models trained on historical engagement data. Content that generates high engagement gets surfaced to more users, generating more engagement, in a self-reinforcing cycle.
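The self-reinforcing cycle is easy to sketch. The toy model below, with purely illustrative numbers rather than any platform's actual ranker, scores three posts by predicted engagement, gives the top slot the most impressions, and feeds the resulting engagement back into the next round's predictions:

```python
# Toy engagement-optimized feed ranker; numbers are invented for illustration.
posts = {
    "sober-analysis": {"predicted_engagement": 0.02, "impressions": 0},
    "outrage-bait": {"predicted_engagement": 0.09, "impressions": 0},
    "partisan-meme": {"predicted_engagement": 0.07, "impressions": 0},
}

def rank(feed):
    # Surface content purely by predicted engagement; accuracy plays no role.
    return sorted(feed, key=lambda p: feed[p]["predicted_engagement"], reverse=True)

for _ in range(3):                  # each ranking round compounds the gap
    for slot, p in enumerate(rank(posts)):
        reach = 1000 // (slot + 1)  # top slots reach far more users
        posts[p]["impressions"] += reach
        # observed engagement feeds back into the next round's predictions
        posts[p]["predicted_engagement"] *= 1 + 0.1 * (reach / 1000)

print(rank(posts)[0])   # "outrage-bait"
```

Nothing in the objective rewards accuracy or civic value; the highest-engagement post compounds its advantage every round, which is the feedback loop described above.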

The problem is not that this optimization is technically flawed — it is quite good at what it optimizes for. The problem is that what it optimizes for correlates poorly and sometimes negatively with what is conducive to informed democratic participation. Content that generates high engagement is often emotionally provocative, outrage-inducing, identity-affirming, and conflict-rich. Accurate, nuanced political reporting that conveys genuine complexity does not typically generate as much engagement as content that confirms existing beliefs, provokes strong negative emotions about outgroups, or presents politics as a battle between good and evil.

Facebook's internal researchers identified that the ranking algorithm weighted the "angry" reaction (the red emoji) five times as heavily as a "like," systematically boosting the distribution of anger-provoking posts. The team proposed reducing the amplification weight on "angry" reactions; the recommendation was initially rejected. A 2019 internal Facebook document, later disclosed, described concern about the company's role in driving political polarization. Another internal presentation found that 64% of people who joined extremist groups on Facebook did so because the recommendation algorithm suggested the group.

Filter Bubbles and Echo Chambers: The Evidence

The "filter bubble" hypothesis — associated with Eli Pariser's 2011 book — holds that algorithmic personalization creates information environments that show users only content confirming their existing beliefs, insulating them from disconfirming information and opposing views. The concept has become conventional wisdom. The empirical evidence for it is more mixed than the conventional wisdom suggests.

Research by Levi and Jackman (2017) and others found that social media users are exposed to more cross-cutting political content than offline media consumption would suggest — precisely because social networks connect people across ideological lines in ways that local newspapers and cable news channels do not. A 2019 study by Guess, Nyhan, and Reifler found that older Americans were the primary consumers of political misinformation online, and that they shared such content regardless of algorithmic recommendation, suggesting individual behavior patterns are as important as algorithmic curation.

At the same time, research consistently finds that people selectively engage with cross-cutting content in ways that reinforce rather than challenge prior beliefs. Encountering opposing views on social media may actually increase partisan hostility rather than reduce it: a 2018 study by Christopher Bail and colleagues found that Twitter users who followed bots retweeting content from the opposing political party became more politically polarized, not less. The mechanism appears to be motivated reasoning: opposing views encountered in an adversarial context harden prior beliefs rather than opening them to revision.

The current state of the evidence suggests that filter bubbles are real but less hermetically sealed than Pariser's framing implied, that echo chambers (self-selected information environments) are a more accurate description than filter bubbles (algorithmically imposed isolation), and that the political effects of algorithmic curation operate through polarization and outrage amplification more than through exposure filtering. This distinction matters for policy: restricting algorithmic personalization may not be the right intervention if the problem is primarily content amplification rather than content filtering.

The Attention Economy and Its Political Effects

The political effects of the attention economy — the ecosystem of platforms that monetize user attention — extend beyond specific algorithmic choices to the structural incentives those choices reflect. Platforms that depend on advertising revenue optimized for attention will systematically favor content that captures attention; political content that is emotionally charged, tribally inflaming, and identity-threatening captures attention effectively. This is not a bug but an emergent feature of a business model.

Political content that is sober, complex, and accurate — the kind of content that healthy democratic deliberation requires — does not perform as well in the attention economy. The practical consequence is a structural disadvantage for quality journalism and a structural advantage for inflammatory partisan content, conspiracy theories, and outrage-maximizing political messaging. This is not simply a content problem that better content moderation can solve; it is a structural problem that requires either structural regulatory intervention or business model change.


Section 2: Disinformation and AI

The Scale Problem

Disinformation is not new. Propaganda, fabricated news, and deliberate manipulation of political information have existed as long as politics itself. What AI changes is the scale, speed, and cost at which disinformation can be produced and distributed. A single person with access to a capable large language model can produce, in hours, a volume of plausible political content that would previously have required an organization with dozens of writers working for weeks.

This scale change is not merely quantitative; it is qualitative. When disinformation was expensive to produce, it required organizational resources that created some accountability — a disinformation campaign required a budget, staff, and coordination that left footprints. AI-generated disinformation can be produced cheaply, rapidly, and without the organizational infrastructure that created those footprints. Detection based on identifying coordination patterns is less effective when disinformation can be produced by a single actor at massive scale.

The content quality has also improved substantially. Early AI-generated text was recognizable by characteristic grammatical patterns and semantic awkwardness. Large language models produce text that is, for most purposes, indistinguishable from human-generated text. The "uncanny valley" problem that limited the persuasive effectiveness of earlier AI disinformation has substantially diminished.

Deepfakes and Synthetic Media

Deepfakes — AI-generated synthetic video and audio that realistically depicts real people saying or doing things they did not say or do — represent the most viscerally alarming form of AI disinformation. The technology has advanced from obviously artificial (the original face-swap deepfakes of 2017-2018) to production quality that defeats casual visual inspection. The combination of voice cloning (which can replicate a person's voice from seconds of audio) and face synthesis (which can generate video of a person's face synchronized to speech) enables convincing fabrications of political figures saying or doing virtually anything.

Documented cases of political deepfakes include: a 2020 video depicting Belgian Prime Minister Sophie Wilmès giving a climate speech she never gave; a 2020 video of an Indian news anchor manipulated to make her appear to endorse a political candidate; multiple fabricated audio clips of political figures in Eastern European countries circulated during elections; and a 2023 deepfake audio of UK Labour leader Keir Starmer that circulated widely on social media.

The 2024 election cycle in the United States and elsewhere saw the first large-scale deployment of AI-generated political content in actual campaigns. A January 2024 deepfake audio recording using a realistic imitation of President Biden's voice was robocalled to New Hampshire voters, telling them not to vote in the primary. The incident prompted federal regulatory attention and illustrates the operational capability of AI-enabled political manipulation.

The Attribution Problem

A critical challenge for disinformation response is attribution — identifying the source of disinformation campaigns. Attribution matters because it determines legal exposure for producers, enables platform enforcement against coordinated inauthentic behavior, and provides evidence for regulatory and diplomatic responses. AI-generated disinformation is substantially harder to attribute than human-generated disinformation because it lacks the distinctive stylistic fingerprints that enable attribution of human writing, can be produced at scale that makes volume anomalies less detectable, and can be distributed through architectures designed to obscure origin.

Meta and Twitter/X have released periodic "coordinated inauthentic behavior" reports detailing networks they have identified and removed. These reports provide important documentation of disinformation operations but also implicitly acknowledge the limitations of current detection: operations are identified after the fact, following harm, and the networks identified represent those that were detected, not the universe of what existed.


Section 3: Micro-Targeted Political Advertising

Cambridge Analytica and Psychographic Targeting

The Cambridge Analytica affair — the British data analytics firm's use of Facebook user data to build psychographic profiles of American voters and target political advertising with unprecedented granularity — became the defining scandal of AI and political manipulation. Its complexity requires careful treatment, because the story combines genuine manipulation with significant exaggeration.

The core facts: Cambridge Analytica's parent company, SCL Group, harvested Facebook data from approximately 87 million users through a personality quiz app developed by Cambridge academic Aleksandr Kogan. The quiz, "thisisyourdigitallife," was installed by roughly 300,000 users — but Facebook's platform API at the time allowed the app to collect data from those users' Facebook friends as well, without those friends' knowledge or consent. Cambridge Analytica combined this data with consumer data and voter files to build psychographic models — OCEAN personality scoring (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) — that it claimed could predict political behavior and enable precisely targeted persuasion messaging.

Cambridge Analytica worked for the Ted Cruz campaign in the 2016 Republican primary and later for the Trump campaign. It also worked with Leave.EU, a pro-Brexit campaign in the UK referendum. The firm's internal documents, disclosed through academic research, press investigation, and whistleblower testimony, showed aggressive claims about the effectiveness of its targeting — claims that were amplified in the company's own marketing materials.

What the evidence actually shows about the effectiveness of psychographic micro-targeting is more equivocal. Academic research examining Cambridge Analytica's actual methodology has found significant methodological weaknesses in its psychographic models — the ability to translate OCEAN scores into reliably effective political persuasion is more limited than the company claimed. A careful analysis by the Columbia Journalism Review found that Cambridge Analytica's boasts about its capabilities were significantly overstated and that it delivered relatively conventional targeted digital advertising rather than psychographically precise mind-control.

The ethical problems with Cambridge Analytica's operation are serious and real, but they are primarily about: unauthorized data collection without meaningful user consent; the conversion of personal data into political influence tools; and the creation of an accountability vacuum in which platforms facilitated data extraction they should have prevented. The specific claim that psychographic micro-targeting fundamentally altered voter behavior in ways that determined electoral outcomes is not as well-supported as the narrative suggests.

What the Evidence Shows About Microtargeting Effectiveness

The academic research on political micro-targeting effectiveness is mixed and context-dependent. Some studies find significant persuasive effects for well-targeted political messaging; others find effects too small to move electoral outcomes at scale. The most important honest conclusion is uncertainty: we do not know with precision how effective AI-enabled micro-targeting is at changing voter behavior, because controlled experimental evidence on real elections is nearly impossible to obtain and the mechanisms involved are complex.

What is clear is that micro-targeting raises serious democratic process concerns independent of its effectiveness. Political advertising that is shown to a carefully selected audience and invisible to the general public cannot be fact-checked, responded to, or held to public accountability standards by journalists or political opponents. The disclosure problem — the inability of the public to see what messages candidates and campaigns are directing at specific voter segments — is a genuine threat to democratic accountability regardless of whether the targeting "works" in any measured behavioral sense.


Section 4: AI and Electoral Integrity

Voter Roll Manipulation and Election Administration

AI's role in electoral integrity extends beyond political messaging to the administration of elections themselves. Voter registration databases, which determine who is eligible to vote and where, are increasingly managed with algorithmic tools — for maintenance, deduplication, and cross-state matching. These tools have significant implications for voter access.

The Interstate Crosscheck program, operated by Kansas Secretary of State Kris Kobach until 2019, used a simple matching algorithm to identify potential duplicate voter registrations across states — with the aim of removing ineligible voters. The program's methodology was severely criticized by statisticians: it matched on first name, last name, and date of birth, producing an enormous false positive rate that disproportionately flagged voters with common names prevalent in minority communities. Studies estimated that Crosscheck produced hundreds of false positives for every true duplicate, potentially disenfranchising eligible voters who happened to share names with voters in other states.
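The statistical problem is simple to demonstrate. The simulation below uses an invented, deliberately small name pool (real name distributions are far more skewed, which concentrates the effect on common names) to draw two state voter files containing no genuine duplicates, then counts how many first-name, last-name, and birthdate keys nonetheless collide:

```python
import random

random.seed(0)

# Invented name pools for illustration; real distributions are more skewed,
# so false matches pile up on the most common names in each community.
FIRST = ["James", "Maria", "John", "Ana", "Robert", "Linda", "Jose", "Mary"]
LAST = ["Garcia", "Smith", "Johnson", "Rodriguez", "Brown", "Lee"]

def random_voter():
    # roughly 36,500 possible birth dates over a 100-year span
    return (random.choice(FIRST), random.choice(LAST), random.randrange(36_500))

# Two state voter files; any overlap is pure coincidence, not a real duplicate.
state_a = {random_voter() for _ in range(200_000)}
state_b = [random_voter() for _ in range(200_000)]

# Crosscheck-style rule: flag when first name + last name + birth date agree.
false_matches = sum(1 for v in state_b if v in state_a)
print(false_matches)
```

Even with this toy key space, tens of thousands of distinct people are flagged as "duplicates," which is exactly the false positive mechanism the statisticians criticized.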

Algorithmic voter roll purges represent a real and documented risk to electoral integrity. Several states have used algorithms to flag voters for removal from rolls based on inactivity, address changes, or cross-reference matching, with documented cases of eligible voters being incorrectly removed shortly before elections with insufficient time to reinstate them.

AI-Assisted Gerrymandering

Legislative redistricting — the drawing of electoral district boundaries — has always been susceptible to political manipulation. AI has dramatically amplified the scale and sophistication of gerrymandering by enabling the processing of enormous amounts of demographic and voting history data to design district maps optimized for partisan advantage. A human gerrymanderer drawing district boundaries manually in the 1980s could achieve some degree of partisan advantage; an AI system analyzing precinct-level voting data, demographic composition, and hundreds of thousands of possible boundary configurations can optimize for partisan advantage with a precision that human mapmakers could not approach.

The 2010 redistricting cycle saw early applications of computing power to gerrymandering; the 2020 cycle saw AI-assisted redistricting at scale. The Republican State Leadership Committee's REDMAP project, which targeted state legislative control in the 2010 elections specifically to control the redistricting process, has been documented using increasingly sophisticated algorithmic tools. The practical consequence has been legislative district maps that produce, for example, Pennsylvania's congressional delegation favoring one party 13 to 5 in a state that is roughly evenly split in statewide elections.
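One standard way to quantify the partisan advantage such maps encode is the "efficiency gap" from the redistricting literature: the difference between the two parties' wasted votes as a share of all votes cast. A minimal sketch, using a hypothetical 18-district map of an evenly split state (the vote totals are invented to mirror the crack-and-pack pattern described above):

```python
def efficiency_gap(districts):
    """districts: list of (party_a_votes, party_b_votes) per district.
    Wasted votes: every losing vote, plus winning votes beyond 50% + 1.
    A positive result means party A wasted more, i.e. the map favors party B."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        threshold = (a + b) // 2 + 1
        if a > b:
            wasted_a += a - threshold
            wasted_b += b
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Party B "cracks" A's voters across 13 narrow wins and "packs" the rest
# into 5 lopsided districts; statewide, each party gets exactly 900 votes.
cracked = [(45, 55)] * 13   # B wins each district 55-45
packed = [(63, 37)] * 5     # A wins each district 63-37
gap = efficiency_gap(cracked + packed)
print(round(gap, 3))        # about 0.227: a large structural tilt toward B
```

A 50/50 state yielding a 13-to-5 delegation shows up here as an efficiency gap of roughly 23 points; symmetric maps score near zero.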

Court challenges to algorithmic gerrymandering have had limited success. In Rucho v. Common Cause (2019), the US Supreme Court held that partisan gerrymandering claims present political questions beyond the reach of federal courts — leaving the challenge to state courts and state constitutional provisions, which have produced more mixed results.


Section 5: AI in Government

Algorithmic Decision-Making in Public Administration

Democratic governments make millions of decisions annually affecting citizens' lives: who receives benefits, who is flagged for tax audit, who receives priority in public housing, who is identified for child welfare intervention, who is granted parole. AI is increasingly deployed to assist or automate these decisions, with implications for both efficiency and accountability.

The efficiency case for AI in government administration is real. Manual processing of benefit applications is slow, expensive, and subject to inconsistent application of rules — different human reviewers make different decisions on similar cases. AI systems can process applications faster, apply rules consistently, and identify potential fraud more accurately than manual review. Governments facing fiscal pressure and administrative backlogs have genuine incentives to deploy AI decision support.

The risks are equally real. When algorithmic systems make or strongly influence decisions about citizens' access to government benefits, those citizens may not know that an algorithm was involved, cannot understand the factors that drove the decision, and may lack effective mechanisms to challenge algorithmic outputs. The accountability principle that is foundational to democratic governance — that government decisions can be explained, challenged, and if wrong, reversed — is undermined by opaque algorithmic decision-making.

Automated Benefits Systems and Their Failures

Several documented failures of automated benefits systems illustrate the concrete risks. Michigan's MiDAS fraud detection system, deployed in 2013 to identify unemployment insurance fraud, used an algorithm that the state itself later acknowledged was wrong 93% of the time — flagging tens of thousands of legitimate claimants as fraudsters, imposing penalties that included repayment demands, fines, and collections actions. Many claimants had their bank accounts garnished or tax refunds seized based on algorithmic determinations that were overwhelmingly incorrect. A class action lawsuit resulted in a $20 million settlement.

The Netherlands' SyRI system (Systeem Risico Indicatie) used a risk model to identify potential social welfare fraud by scoring individuals based on data from multiple government databases — tax records, housing data, employment records, debt records. A Dutch court found SyRI violated human rights law in 2020, finding that the system was not sufficiently transparent to allow meaningful challenge of its outputs and that its profiling disproportionately targeted lower-income and migrant communities.

Australia's "Robodebt" scandal became the most extensively investigated case of automated government decision-making causing widespread harm. The system, operating from 2016 to 2019, automatically compared Centrelink benefit payment records against annual Australian Taxation Office income data averaged evenly across fortnights, sending hundreds of thousands of debt notices to benefit recipients, many of which rested on errors inherent in that averaging methodology. The system created the legal presumption that those who received notices had to prove they did not owe the debt, reversing the normal burden of proof. A royal commission found the scheme was unlawful, resulted in $1.7 billion in incorrect debt notices, and was associated with documented cases of severe psychological harm and several suicides among targeted individuals.
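The core arithmetic error is simple enough to reconstruct. The sketch below uses invented benefit rates and taper rules, not actual Centrelink parameters, for a person who worked ten fortnights, claimed benefits only while unemployed, and reported every fortnight correctly:

```python
# Toy reconstruction of the income-averaging flaw; all dollar figures and
# taper rules are illustrative stand-ins, not real Centrelink rates.
FORTNIGHTS = 26
RATE = 560.0       # full benefit per fortnight
FREE_AREA = 150.0  # fortnightly income allowed before the benefit tapers
TAPER = 0.6        # benefit lost per dollar earned above the free area

def entitlement(fortnight_income):
    reduction = max(0.0, fortnight_income - FREE_AREA) * TAPER
    return max(0.0, RATE - reduction)

# Actual year: 10 fortnights employed at $1,300 with no benefit claimed,
# then 16 fortnights unemployed at $0, correctly paid the full rate.
paid = entitlement(0.0) * 16        # $8,960 correctly paid while unemployed
annual_income = 1300.0 * 10         # what the ATO annual figure reports

# Robodebt-style check: smear the annual total evenly across all 26
# fortnights, as if $500 was earned every fortnight, even while unemployed.
average = annual_income / FORTNIGHTS
recomputed = entitlement(average) * 16
phantom_debt = paid - recomputed
print(phantom_debt)   # 3360.0 "owed" despite every fortnight being reported correctly
```

Averaging manufactures income in fortnights where none existed, so the recomputed entitlement falls short of what was correctly paid, and the difference is billed to the recipient as a debt.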


Section 6: AI for Civic Participation

The Positive Use Cases for AI in Democracy

Having catalogued AI's threats to democratic processes, it is important to engage seriously with AI's genuine potential to strengthen them. Not all AI applications in democratic contexts are threats; some represent meaningful opportunities.

AI-assisted translation enables multilingual democracy at scales previously impossible. A policy consultation document that reaches only English-speaking citizens systematically excludes linguistic minorities. Machine translation, which has improved dramatically through neural network methods, can make government documents, consultation processes, and political information accessible across language barriers. This is not hypothetical: the EU Parliament uses AI translation across 24 official languages; some municipal governments use AI translation to reach communities that speak languages not served by professional translators.

AI tools for policy analysis can, in principle, make technical government decisions more legible to ordinary citizens. Complex regulatory proposals — a telecommunications spectrum allocation, a climate policy mechanism, a financial regulation — can be summarized and explained by AI in ways that genuinely improve civic engagement with technical government. This is augmentation of democratic participation rather than its manipulation.

The vTaiwan Model

Taiwan's experimental vTaiwan deliberative platform, developed beginning in 2015 with the involvement of digital minister Audrey Tang, represents the most extensively studied use of AI tools for democratic deliberation. The platform uses Polis, an AI-assisted polling and clustering tool, to enable large-scale democratic conversations in which hundreds of thousands of citizens can participate and the AI identifies areas of genuine consensus that might otherwise be obscured by adversarial political dynamics.

The mechanism: participants respond to statements and generate their own; the AI clusters respondents by similarity of response patterns and identifies statements that generate high agreement across clusters — "bridging" positions that do not simply reflect majority opinion but rather reflect shared ground across divided groups. This approach has been used to develop policy on ride-sharing regulation, online alcohol sales, and financial services in Taiwan, with the AI-enabled process credited with surfacing genuine consensus that traditional parliamentary processes might have missed.
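The bridging computation can be sketched in a few lines. The toy version below uses invented votes and a crude one-statement split in place of the dimensionality-reduction-plus-clustering pipeline Polis actually runs, but it shows the key move: ranking statements by their minimum agreement rate across clusters rather than by overall majority support:

```python
# Toy Polis-style bridging sketch; votes and clustering are invented stand-ins.
# Each row is one respondent's votes on four statements (+1 agree, -1 disagree).
votes = [
    [+1, +1, -1, +1],
    [+1, +1, -1, +1],
    [+1, -1, -1, +1],
    [-1, -1, +1, +1],
    [-1, -1, +1, +1],
    [-1, +1, +1, +1],
]

# Crude clustering stand-in: split respondents by their vote on statement 0,
# the divisive one. (Polis clusters on the full vote matrix.)
clusters = [
    [r for r in votes if r[0] > 0],
    [r for r in votes if r[0] < 0],
]

def agreement(cluster, s):
    # fraction of a cluster agreeing with statement s
    return sum(1 for r in cluster if r[s] > 0) / len(cluster)

# A majority ranking would surface divisive statements; the bridging score
# instead rewards statements that every cluster agrees with.
bridging = {s: min(agreement(c, s) for c in clusters) for s in range(4)}
best = max(bridging, key=bridging.get)
print(best, bridging[best])   # statement 3 carries consensus across both camps
```

Statement 0 splits the groups perfectly and scores zero; statement 3, agreed to by everyone, is the "bridging" position that adversarial dynamics would otherwise obscure.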

The vTaiwan model is not a panacea — participation has been voluntary and has sometimes drawn unrepresentative samples — but it represents a documented example of AI tools genuinely expanding and improving the quality of democratic deliberation rather than manipulating or undermining it.


Section 7: Platform Power and Democratic Accountability

Content Moderation at Scale

Social media platforms that operate at global scale — Facebook, YouTube, Twitter/X, TikTok — face an impossible moderation challenge: they host billions of pieces of content daily, across hundreds of languages, touching every context of human communication, and different audiences expect them simultaneously to protect free expression, prevent incitement to violence, limit disinformation, enable political debate, and comply with divergent legal requirements across jurisdictions. No combination of human and AI content moderation can fully satisfy all of these demands at once.

The practice is therefore one of constant imperfect trade-offs, mediated largely by AI systems that classify content and make removal, demotion, and amplification decisions at scale with limited human review. These systems make consequential errors in both directions: they remove legitimate political speech (documented cases include removal of political satire, journalistic reports citing extremist statements, and minority-language content that AI misclassified as violating policies) and they fail to remove genuinely harmful content (documented cases include incitement to violence, coordinated disinformation campaigns, and harassment that met policy standards for removal but was not caught).

The Facebook Myanmar Case

The most severe documented consequence of platform content moderation failure in a democratic context is discussed in the case study accompanying this chapter. But the Myanmar case is not unique in type, only in scale of harm. Platform AI moderation systems have consistently performed worse in languages other than English, for cultural contexts outside the Western mainstream, and for types of speech that were underrepresented in the training data used to develop moderation models.

The political consequence is a systematic bias in global content governance: English-language, Western-context content receives more moderation attention (and more sophisticated moderation tools) than content in the majority of the world's languages and contexts. Democratic processes in non-English-speaking countries receive less protection from AI-enabled manipulation precisely because the AI systems are less capable in those contexts.


Section 8: The Disinformation Arms Race

AI Detection of AI-Generated Content

As AI-generated disinformation has become more sophisticated, investment in AI detection of AI content has grown. Watermarking, provenance attestation, and AI content classifiers represent the primary approaches.

Watermarking involves embedding signals in AI-generated content that allow detection — either invisible watermarks embedded in image pixel patterns, subtle word choice patterns in text, or metadata structures that survive reformatting. OpenAI and Google DeepMind have developed watermarking approaches for their respective systems. The problem is that watermarks are vulnerable to adversarial removal: a watermarked image run through a simple image processing step loses many watermark signals; watermarked text that is lightly paraphrased loses the linguistic pattern signatures. Watermarking works for honest actors; adversarial actors specifically seeking to produce undetectable AI content can circumvent current watermarking approaches.
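The idea behind statistical text watermarks, and their fragility, can be illustrated with a toy scheme in the spirit of published "green list" proposals. The vocabulary and key here are invented, and real schemes bias model logits rather than choosing from word lists: each previous token plus a secret key splits the vocabulary in half, generation only emits "green" tokens, and a detector counts the green fraction:

```python
import hashlib
import random

KEY = b"secret-watermark-key"           # held by the model provider
VOCAB = [f"w{i}" for i in range(100)]   # invented stand-in vocabulary

def is_green(prev, tok):
    # The previous token plus the secret key pseudo-randomly assigns each
    # vocabulary item to the "green" half for that context.
    h = hashlib.sha256(KEY + prev.encode() + b"|" + tok.encode()).digest()
    return h[0] % 2 == 0

def generate_watermarked(n, seed=1):
    rng = random.Random(seed)
    out = ["w0"]
    for _ in range(n):
        greens = [t for t in VOCAB if is_green(out[-1], t)]
        out.append(rng.choice(greens))   # only ever emit green tokens
    return out

def green_fraction(tokens):
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

rng = random.Random(2)
marked = generate_watermarked(200)
unmarked = ["w0"] + [rng.choice(VOCAB) for _ in range(200)]
print(green_fraction(marked), round(green_fraction(unmarked), 2))
```

Watermarked text scores 1.0 against an unmarked baseline near 0.5, but any paraphrase that substitutes tokens drags the fraction back toward 0.5, which is precisely the adversarial-removal vulnerability described above.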

The Coalition for Content Provenance and Authenticity (C2PA) is developing technical standards for content provenance — cryptographically signed metadata attached to content at creation that allows its origin and modification history to be verified. This approach is more robust than pure watermarking because it attests the origin rather than embedding a signal in the content itself. But it requires adoption across content creation tools, platforms, and media organizations — a coordination challenge that is proceeding slowly.
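A minimal sketch of the attestation idea follows. Real C2PA manifests are signed with X.509 certificate chains embedded in the media file; the shared HMAC key here is a simplifying stand-in for an actual signing credential:

```python
import hashlib
import hmac
import json

# Stand-in for a real signing credential; C2PA uses certificate chains.
KEY = b"creator-signing-key"

def attest(content: bytes, origin: str) -> dict:
    # Sign a manifest binding the content hash to its claimed origin.
    manifest = {"sha256": hashlib.sha256(content).hexdigest(), "origin": origin}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest,
            "signature": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify(content: bytes, attestation: dict) -> bool:
    manifest = attestation["manifest"]
    if manifest["sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was modified after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

photo = b"original pixels"
att = attest(photo, "Newsroom Camera 7")
print(verify(photo, att), verify(b"edited pixels", att))   # True False
```

Any modification to the content or the manifest breaks verification, which is why provenance attests origin rather than trying to hide a signal inside the content itself.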

AI content classifiers — models trained to distinguish AI-generated text or images from human-generated content — have faced the fundamental challenge that classifiers trained on existing AI outputs become obsolete when new AI systems emerge. OpenAI released and then withdrew an AI text classifier after finding its accuracy was insufficient to deploy at scale with acceptable false positive rates.

The Cat-and-Mouse Dynamic

The fundamental structure of the AI disinformation landscape is adversarial — a dynamic in which detection capabilities and evasion capabilities evolve together, with no equilibrium point in sight. Detection systems trained on current AI outputs are retrained on better AI outputs; AI systems are iteratively improved to evade detection. This is not a solvable technical problem; it is an ongoing competition between offense and defense.

This structural insight has important implications for regulatory and policy approaches. Technical solutions that aim to solve the disinformation problem by detecting AI content at the point of distribution will face permanent obsolescence as AI capabilities evolve. Effective responses require institutional approaches — media literacy, platform accountability, disclosure requirements, and democratic resilience — that do not depend on technical detection winning the arms race.


Section 9: Regulatory Responses

EU Digital Services Act

The EU Digital Services Act (DSA), which entered full enforcement in 2024, represents the most comprehensive regulatory framework currently applying to platform AI and democratic processes. The DSA requires very large online platforms (VLOPs) and very large online search engines (VLOSEs) to:

- Conduct systematic risk assessments for "systemic risks," including risks to electoral integrity, civic discourse, and the spread of disinformation.
- Implement reasonable risk mitigation measures, audited by approved external auditors.
- Provide researchers with access to platform data, including recommender system data, enabling independent analysis of algorithmic effects on information ecosystems.
- Give users meaningful choice between algorithmic recommendations and chronological feeds.
- Refrain from targeting advertising based on sensitive data, including political opinions.
- Provide transparency about content moderation decisions.

The DSA's application to the 2024 EU Parliament elections represented its first significant deployment in an electoral context. The European Commission opened formal proceedings against multiple VLOPs for potential DSA violations related to election-related systemic risk mitigation failures.

EU AI Act and Election AI

The EU AI Act, adopted in 2024, designates certain AI applications in electoral and democratic contexts as "high risk," requiring conformity assessments, transparency obligations, and human oversight. It prohibits certain AI applications including real-time remote biometric identification in public spaces for law enforcement (with limited exceptions) and social scoring by public authorities. For election-relevant AI systems — voter targeting, content ranking in electoral contexts, AI deployed by political campaigns — the Act creates specific requirements but stops short of comprehensive prohibition.

US Regulatory Landscape

The United States has taken a more fragmented regulatory approach. The Federal Election Commission has jurisdiction over paid political advertising, and in 2023 received petitions to regulate AI-generated content in political advertising — petitions that remained under deliberation through 2024. Several states enacted legislation specifically addressing AI in political advertising: California, Michigan, and Minnesota required disclosure when AI-generated imagery or audio was used in political ads. At the federal level, the DEFIANCE Act, which would create civil liability for non-consensual intimate deepfakes, passed the Senate in 2024; a broader federal framework for AI and elections remained absent through the 2024 election cycle.


Section 10: Building Resilient Democracy

AI Literacy for Citizens

The most fundamental long-term response to AI threats to democracy is civic AI literacy — the capacity of ordinary citizens to understand, at a practical level, how algorithmic systems shape their information environment and what techniques disinformation actors use. This is not primarily a technical literacy requirement; citizens do not need to understand transformer architectures. It is a form of critical information literacy — the ability to assess source credibility, recognize manipulation techniques, understand that emotionally provocative content may be specifically designed to trigger sharing behaviors, and seek multiple sources before treating information as established.

Media literacy education — traditionally focused on print and broadcast media — needs updating for the AI era. The question "who made this and why?" applies to content when the "who" might be an AI system and the "why" might be engagement optimization rather than truth communication. The visual manipulation that previous generations needed to understand (photo editing, staged imagery) has been extended to video, audio, and text in ways that require updated skeptical habits.

Institutional Resilience

Democratic institutions — electoral systems, courts, legislatures, regulatory agencies — can be designed with resilience features that reduce their vulnerability to AI-enabled manipulation. Resilient electoral systems invest in human-verified paper audit trails that cannot be altered by hacking of electronic systems; in decentralized administration that prevents single-point-of-failure attacks; in post-election audits that validate outcomes; and in transparent processes that allow independent verification.

Regulatory institutions can build capacity to investigate and respond to AI-enabled manipulation of democratic processes — but this requires institutional investment that has been slow to materialize. Most electoral regulatory bodies were designed for the analogue era and lack the technical capacity to analyze AI-enabled disinformation operations, assess algorithmic amplification effects, or evaluate the democratic impact of AI political advertising tools.

Democratic AI Governance

Perhaps the most important long-term structural response to AI's threats to democracy is ensuring that AI governance itself is democratic — that the rules governing AI deployment are made through legitimate democratic processes rather than through private corporate decision-making or closed regulatory negotiation.

The decisions that platforms make about content moderation, algorithmic amplification, and data collection are political decisions with profound effects on democratic outcomes. Making these decisions through opaque internal processes, accountable only to shareholders, is inconsistent with democratic principles. Regulatory frameworks that require transparency, mandate independent oversight, and create meaningful accountability for AI systems that shape democratic processes are necessary conditions for maintaining democratic self-governance in an AI-enabled information environment.


Section 11: AI and Political Polarization — The Structural Drivers

Beyond Bad Actors — The Systemic Problem

Discussion of AI and democratic processes frequently focuses on bad actors — foreign intelligence services, domestic extremist groups, political operatives willing to deploy disinformation. These actors are real and their activities are documented. But they are not the only or even the primary source of AI's democratic harms. The more pervasive and in some ways more difficult problem is systemic: AI systems deployed by legitimate, law-abiding commercial actors for entirely non-malicious purposes produce democratic harms as emergent consequences of their design.

The social media platforms that amplify polarizing political content are not trying to damage democracy; they are trying to maximize engagement because engagement drives advertising revenue. The political campaign that micro-targets voters with algorithmically optimized messaging is not trying to manipulate democracy; it is trying to win an election. The recommendation systems that surface increasingly extreme content to users who engage with political videos are not trying to radicalize viewers; they are trying to keep those viewers watching longer. In each case, the AI system is functioning exactly as designed, and democratic harm is an externality of commercially rational behavior.

This structural framing matters because it changes the appropriate regulatory response. Regulations that target bad actors — prohibiting disinformation, requiring disclosure of coordinated campaigns, mandating disclosure of AI-generated political content — are necessary but insufficient if the structural incentives of legally operating AI systems produce democratic harm as a byproduct. Structural regulatory responses — requiring platforms to assess and mitigate systemic risks, imposing algorithmic amplification standards, creating accountability for the political consequences of engagement optimization — are necessary to address the problem at its source.

The Polarization Evidence

The relationship between algorithmic content curation and political polarization has been studied extensively, with results that are nuanced and contested but broadly concerning. A 2023 set of studies conducted in collaboration with Meta, published in Science and Nature, provided the most rigorous evidence to date. Researchers randomized whether Facebook and Instagram users received algorithmically curated feeds or reverse-chronological feeds during the 2020 US election period. Key findings: chronological feeds reduced exposure to extreme content (content from highly partisan accounts outside the user's network); the reduction in extreme content exposure, however, did not produce measurable changes in political attitudes or engagement with political information.

This finding is important but easily misread. The absence of a measurable attitude change from a short-term experimental modification does not mean that algorithmic amplification of extreme content has no political effects; it means that attitude change is slow and not detectable within a short experimental window. The experiment also could not address cumulative effects over years of algorithmic curation, or network-level effects that operate through social norms rather than individual attitude change.

What the evidence more consistently supports is that algorithmic amplification accelerates the spread of emotionally provocative content, that politically extreme content is systematically more emotionally provocative (because it is produced by actors whose success depends on strong emotional engagement), and that this pattern rewards and therefore incentivizes increasingly extreme political content production. Whether this produces attitude change or primarily produces behavior change — more emotional engagement, more identity-based political behavior, less cross-partisan deliberation — is a genuine empirical uncertainty. But that it shapes the information environment in which democratic choices are made is not seriously contested.


Section 12: The Global Variation in AI Democratic Risks

Different Threats in Different Contexts

AI's threats to democratic processes look different depending on the democratic context in which they operate. The analysis in this chapter has focused primarily on established Western democracies — the United States, Europe, and similar contexts. But AI's democratic impacts operate globally, in contexts with very different institutional configurations.

In emerging democracies where institutional safeguards — independent courts, a free press, professional election administration — are weak, AI disinformation can have more decisive effects than in established democracies with redundant institutional resilience. The Myanmar case, examined in this chapter's case study, illustrates the extreme end of this spectrum: an AI-enabled information operation in a context with minimal institutional resilience, a vulnerable minority population, and political elites willing to exploit the information environment for genocidal ends. Myanmar is an extreme case, but the underlying pattern — AI disinformation being more effective where institutions are weaker — applies across the spectrum.

In competitive authoritarian regimes — governments that hold elections but use state power to tilt the playing field — AI tools can be deployed by incumbent governments to maintain power while maintaining the appearance of democratic legitimacy. AI-assisted disinformation targeting the political opposition, AI-enabled surveillance of civil society and political activists, and AI-assisted micro-targeting of electoral messaging for incumbent benefit are all available tools for incumbents with state resources. The political environments in Hungary, Turkey, Thailand, and multiple other countries illustrate how AI can be deployed to maintain nominally democratic forms while hollowing out their substance.

In fully authoritarian contexts — China, Russia, North Korea, Saudi Arabia — the democratic process threat is less about AI corrupting elections (elections in these contexts are already non-competitive) than about AI enabling the surveillance and control infrastructure that suppresses civil society, monitors dissent, and maintains authoritarian stability. The AI-enabled social credit systems, predictive policing, and mass surveillance infrastructure developed in these contexts represent a different dimension of AI's democratic challenge — the use of AI to prevent the conditions under which democratic participation could emerge.

Technology Transfer and Democratic Degradation

One of the most concerning global patterns in AI and democracy is the transfer of surveillance and information control technology from authoritarian contexts to others. Chinese AI companies have exported surveillance infrastructure — facial recognition, AI-assisted content moderation for the purpose of political censorship, social credit-adjacent scoring systems — to governments in Africa, the Middle East, and Southeast Asia. Russian information operations infrastructure, developed and refined in domestic contexts before deployment against Western democratic targets, represents a different form of technology transfer.

The institutional development needed to govern these technology transfers — export controls on dual-use surveillance technology, human rights due diligence requirements for technology companies operating in authoritarian contexts, and international norms against the use of AI for political repression — is nascent and inadequate relative to the pace of deployment.

The Complicity Question for Technology Companies

For business professionals in the technology sector, the global variation in AI democratic risks raises a direct ethical question: what obligations do technology companies have regarding how their products are used in different political contexts? A facial recognition system sold to a Western police department with rule-of-law constraints and civil liberties protections raises different ethical concerns than the same system sold to an authoritarian government for monitoring political dissidents. A content moderation AI deployed in a context with a free press and independent judiciary raises different concerns than the same system deployed to suppress opposition political speech in an authoritarian context.

The technology industry has historically been ambivalent about this question, sometimes invoking commercial neutrality ("we sell to all lawful customers") and sometimes imposing voluntary restrictions (Google's Project Maven withdrawal, Microsoft's facial recognition sales restrictions). The EU AI Act and analogous legislation in other democracies increasingly make these choices legal questions as well as ethical ones: certain AI applications are prohibited regardless of the customer's identity. The human rights due diligence requirements emerging in European corporate law require companies to assess and mitigate the human rights risks in their value chains — including risks arising from how their products are used by customers.

For AI companies specifically, the question of how products are used to affect democratic processes — in the home country and globally — is inescapably an ethical question that cannot be resolved by commercial neutrality. The recommendation algorithm that amplifies political disinformation in a Western democracy also amplifies it in an emerging democracy with weaker institutional resilience; the facial recognition system that enables biometric surveillance of citizens affects citizens differently depending on the legal protections available to them. Companies that design, deploy, and profit from AI systems have ongoing accountability for how those systems affect democratic participation — accountability that extends across the full range of contexts in which those systems operate.


Summary

AI's relationship to democracy is not reducible to a simple threat narrative or a simple opportunity narrative. The threats are real and serious: disinformation at scale, algorithmic amplification of division, micro-targeted manipulation of voters, deepfakes of political figures, and AI-assisted attacks on electoral infrastructure. These threats are already operational, not hypothetical, as documented cases from the 2016 US election through the 2024 global election cycle demonstrate.

The opportunities are also real: AI translation that enables multilingual democratic participation, AI analysis tools that can make complex policy more legible to ordinary citizens, AI-assisted deliberation platforms that enable genuine consensus-building at scale, and AI detection tools that can reduce the effectiveness of disinformation campaigns.

Navigating these tensions requires the same framework applicable to AI throughout this textbook: transparency, accountability, human oversight, and democratic governance of systems that have democratic consequences. Platforms that make decisions affecting democratic discourse must be accountable for those decisions; citizens need the literacy to recognize manipulation; institutions need the resilience to function despite AI-enabled attacks; and the governance of AI systems must itself be democratically legitimate.

Democracy is not simply a procedure — it is a commitment to the idea that collective self-governance requires citizens who can make informed choices about their collective future. AI that corrupts the information environment in which those choices are made does not merely break a rule; it undermines the precondition for democratic life.


Next: Chapter 30 examines AI across the criminal justice system — from predictive policing to sentencing algorithms — and asks whether algorithmic justice can be just at all.