Chapter 29: Quiz — AI and Democratic Processes

20 questions. Mix of multiple choice, true/false, and short answer.


Multiple Choice

1. Which of the following best describes the "filter bubble" hypothesis as proposed by Eli Pariser?

A) Social media algorithms make users more exposed to opposing political views
B) Algorithmic personalization creates information environments showing users primarily content that confirms their existing beliefs
C) Platform recommendation systems are designed to maximize political polarization
D) Internet users deliberately choose to consume only news that agrees with their political views

Answer: B — Pariser's filter bubble hypothesis holds that algorithmic personalization insulates users from disconfirming information and opposing views. The empirical evidence for this strong version of the thesis is more mixed than popular discourse suggests.


2. Facebook's internal research, disclosed by whistleblower Frances Haugen, found that posts with the "angry" reaction were how much more likely to be distributed by the algorithm compared to posts generating other reactions?

A) Twice as likely
B) Three times as likely
C) Five times as likely
D) Ten times as likely

Answer: C — Internal Facebook research found "angry" reactions were weighted five times more heavily than other reactions in distribution decisions, systematically amplifying outrage-inducing content.


3. The Cambridge Analytica scandal primarily involved:

A) Hacking into election administration systems to alter vote counts
B) Using Facebook user data harvested without proper consent to build psychographic voter profiles
C) Creating deepfake videos of political opponents
D) Using AI to generate mass quantities of fake news articles

Answer: B — Cambridge Analytica used data on approximately 87 million Facebook users, harvested through a quiz app without adequate consent, to build psychographic profiles for voter targeting.


4. The vTaiwan model uses the AI tool Polis to:

A) Monitor social media for disinformation during elections
B) Detect AI-generated political content
C) Enable large-scale deliberation that identifies areas of consensus across divided groups
D) Translate government documents into multiple languages

Answer: C — Polis clusters participants by similarity of response patterns and identifies "bridging" positions that generate agreement across otherwise divided groups, enabling genuine democratic consensus-building.
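The consensus-detection idea behind Polis can be illustrated with a toy sketch. This is not the real Polis algorithm (which derives opinion groups via dimensionality reduction and k-means clustering); here the group labels, participants, and statements are all invented, and a statement's "bridging" score is simply its minimum agreement rate across groups:

```python
# Toy sketch of Polis-style consensus detection (NOT the real Polis
# algorithm; group labels here are given rather than learned).
# Votes: +1 agree, -1 disagree, 0 pass, one entry per statement.

votes = {
    # participant: (opinion group, votes on statements s0..s3)
    "p1": ("A", [+1, +1, -1, +1]),
    "p2": ("A", [+1, +1, -1, +1]),
    "p3": ("B", [-1, +1, +1, +1]),
    "p4": ("B", [-1, +1, +1, 0]),
}

def bridging_scores(votes, n_statements):
    """Score each statement by its minimum agreement rate across groups.

    A high minimum means every camp agrees: a 'bridging' statement."""
    groups = {}
    for _, (g, vs) in votes.items():
        groups.setdefault(g, []).append(vs)
    scores = []
    for s in range(n_statements):
        per_group = []
        for members in groups.values():
            agrees = sum(1 for vs in members if vs[s] == +1)
            per_group.append(agrees / len(members))
        scores.append(min(per_group))
    return scores

print(bridging_scores(votes, 4))  # → [0.0, 1.0, 0.0, 0.5]
```

Statement 1 scores 1.0 because both groups unanimously agree on it, even though the groups split sharply on statements 0 and 2; surfacing such statements is what enables consensus-building across divided groups.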


5. The Interstate Crosscheck voter roll maintenance program was criticized primarily because:

A) It was operated by a foreign government
B) Its simple name-matching algorithm produced a very high false positive rate that disproportionately flagged voters with common names prevalent in minority communities
C) It required voters to re-register after every election
D) It shared voter data with private corporations

Answer: B — Crosscheck's matching methodology (first name, last name, date of birth) produced enormous false positive rates, with analysts estimating hundreds of false matches per true duplicate, disproportionately affecting minority voters.
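A back-of-envelope simulation shows why name-plus-birthdate matching collides so often. All numbers below are invented for illustration (they are not Crosscheck's actual data): given a few thousand voters sharing one common name in each of two states, birthdate collisions alone generate large numbers of false "duplicates":

```python
# Illustrative simulation (hypothetical numbers, not Crosscheck data):
# matching on name + date of birth alone collides frequently for
# common names, flagging distinct people as the same voter.
import random

random.seed(1)
DAYS = 365 * 60          # plausible birthdates spread over ~60 years
n_state1 = 2000          # voters with one common name in state 1
n_state2 = 2000          # voters with the same name in state 2

b1 = [random.randrange(DAYS) for _ in range(n_state1)]
b2 = set(random.randrange(DAYS) for _ in range(n_state2))

# Every birthdate collision is a "match" between two different people.
false_matches = sum(1 for b in b1 if b in b2)
print(false_matches)  # well over a hundred spurious "duplicates"
```

Because common names are not evenly distributed across demographic groups, these spurious matches fall disproportionately on voters in minority communities, which is the disparate-impact criticism noted above.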


6. The "liar's dividend" refers to:

A) Financial profits earned by disinformation producers
B) The legal immunity platforms receive under Section 230
C) The ability of political actors to disclaim authentic content by alleging it is AI-generated
D) Revenue earned by AI companies from political advertising

Answer: C — The liar's dividend is the epistemic advantage bad actors gain from the known existence of deepfake technology, which allows them to deny authentic compromising content as fabricated.


7. Which regulatory framework established the most comprehensive requirements for systemic risk assessment by very large online platforms related to electoral integrity?

A) The US Digital Millennium Copyright Act
B) The EU Digital Services Act
C) The UN Declaration on AI in Elections
D) The OECD AI Principles

Answer: B — The EU Digital Services Act requires VLOPs to conduct systemic risk assessments including for electoral integrity risks and to implement proportionate mitigation measures with external audit requirements.


8. The January 2024 New Hampshire Biden robocall was produced using:

A) Paid actors hired to impersonate Biden's voice
B) AI voice cloning technology
C) Repurposed audio from a genuine Biden speech edited to change context
D) A written script read by a professional voice actor

Answer: B — Analysis by audio forensics experts identified the voice as synthetically generated using AI voice cloning technology.


9. Australia's "Robodebt" automated debt recovery system was found problematic primarily because:

A) It was hacked by foreign actors
B) It targeted political opponents of the ruling party
C) Its income averaging methodology generated legally invalid debt notices on a massive scale, reversing the burden of proof to recipients
D) It used racial profiling to select audit targets

Answer: C — Robodebt generated approximately 470,000 debt notices using an income averaging methodology that was found to be legally invalid; the system placed the burden on benefit recipients to prove they did not owe the debt, reversing normal legal presumptions.
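The arithmetic failure at the heart of income averaging is easy to reproduce. The figures below are hypothetical, but the mechanism is the one described above: a person who earned all their income in half the year, then legitimately claimed benefits while unemployed, appears to have steady fortnightly income once an annual total is averaged:

```python
# Toy illustration (hypothetical figures) of why income averaging
# misfires: real income was lumpy, but averaging smears it across
# every fortnight, making legitimate benefit claims look like fraud.

annual_income = 26000        # earned entirely in fortnights 1-13
fortnights = 26
income_cutoff = 500          # hypothetical fortnightly eligibility cut-off

actual = [2000] * 13 + [0] * 13                       # real earnings
averaged = [annual_income / fortnights] * fortnights  # 1000 every fortnight

# Under the real figures, no benefit fortnight breached the cut-off...
real_breaches = sum(1 for i in actual[13:] if i > income_cutoff)
# ...but under averaging, every single fortnight appears to breach it.
apparent_breaches = sum(1 for i in averaged if i > income_cutoff)

print(real_breaches, apparent_breaches)  # → 0 26
```

Each apparent breach became a debt notice, and it then fell to the recipient to reconstruct years-old payslips to prove the averaged figure wrong, which is the reversal of the burden of proof the answer refers to.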


10. The US Supreme Court's decision in Rucho v. Common Cause (2019) held that:

A) Partisan gerrymandering is unconstitutional
B) Algorithmic tools cannot be used in redistricting
C) Federal courts cannot hear partisan gerrymandering claims
D) State legislative maps must be independently reviewed for partisan fairness

Answer: C — The Court held that partisan gerrymandering claims are non-justiciable political questions that federal courts cannot resolve, leaving challenges to state courts and state constitutional provisions.


True/False

11. Academic research consistently demonstrates that exposure to cross-cutting political content on social media reduces political polarization.

Answer: False — Research, including a significant 2018 study by Christopher Bail and colleagues, has found that exposure to opposing political views on social media can actually increase political polarization, not reduce it, particularly when that exposure occurs in adversarial contexts.


12. Facebook had Burmese-language content moderation capacity proportional to its user base in Myanmar before the 2017 violence.

Answer: False — Facebook had minimal Burmese-language content moderation capacity relative to its large Myanmar user base, a documented failure cited by the UN fact-finding mission and Facebook's own commissioned human rights assessment.


13. The EU Platform Work Directive (2024) only addresses gig worker classification and has no provisions related to algorithmic management transparency.

Answer: False — The directive includes provisions requiring that workers be informed about algorithmic management decisions affecting them and have the right to have significant algorithmic decisions reviewed by a human.


14. AI content detection technology is reliable enough to definitively identify AI-generated video or audio in real-time election contexts with minimal false positives.

Answer: False — Detection technology achieves high accuracy on training-similar content but struggles to generalize to content from new generation systems, making real-time high-confidence detection in election contexts currently unreliable.


15. Section 230 of the Communications Decency Act provides platforms complete immunity from any legal liability related to content on their platforms.

Answer: False — Section 230 provides broad but not complete immunity; it does not cover federal criminal liability or sex trafficking content (FOSTA-SESTA), and courts have not fully resolved whether it covers platform conduct in algorithmic amplification decisions.


Short Answer

16. Explain the difference between a "filter bubble" and an "echo chamber" in the context of political information consumption. Why does this distinction matter for policy responses to political polarization?

Model Answer: A filter bubble is primarily algorithmic — the platform system curates away content that doesn't match a user's prior interests, leaving them in an information bubble they didn't choose. An echo chamber is primarily behavioral — users actively select media and social connections that confirm their views, creating self-reinforcing information environments through their own choices. The empirical evidence suggests that algorithmically imposed filter bubbles are less hermetically sealed than Pariser originally suggested, while self-chosen echo chambers driven by selective engagement are a more significant factor in political polarization. The distinction matters for policy because restricting algorithmic personalization may not be the right intervention if the primary mechanism is selective engagement; more promising responses might address the incentive structures that make cross-cutting engagement feel threatening, or focus on building social trust that makes engagement across difference more rewarding.


17. What is the core accountability problem raised by the Myanmar genocide case for AI-powered social media platforms?

Model Answer: The Myanmar case reveals the accountability gap created by algorithmic amplification: Facebook's recommendation algorithm actively selected and amplified hate speech targeting the Rohingya because that content generated high engagement signals, making Facebook an active participant in spreading the content rather than a passive host. But existing legal frameworks (primarily Section 230 in the US context) were designed for passive hosting, not active algorithmic amplification, leaving platforms with little formal liability for the consequences. The ethical accountability problem is that no identifiable individual or decision-maker was clearly responsible for the amplification: the engineers designed the engagement algorithm without knowledge of Myanmar's specific human rights context; the business decision-makers chose engagement optimization without fully tracing downstream harms; and the content moderation team lacked the language capacity to intervene effectively. This diffusion of responsibility across organizational actors, combined with legal structures that limit formal accountability, allowed amplification that helped facilitate a genocide to proceed largely without consequence.


18. Why do academic researchers find it difficult to measure the actual electoral effectiveness of AI-enabled political micro-targeting?

Model Answer: Measuring micro-targeting effectiveness faces fundamental methodological obstacles. Controlled experiments on real elections — the gold standard for causal inference — are nearly impossible to conduct, because researchers cannot randomly assign voters to receive different information treatments, cannot blind researchers to treatment conditions in real political environments, and cannot observe the counterfactual (what would have happened without targeting). Observational studies face severe confounding: voter segments that receive targeted messages differ in systematic ways from those that don't, making it impossible to isolate the effect of the targeting from pre-existing differences. Platform data on what messages were sent to whom is proprietary and not available to researchers. And the mechanism of political persuasion is complex — individual message effects are small and aggregate effects are sensitive to multiple contextual factors that are difficult to model. The result is genuine scientific uncertainty about how large the effects are, even when the practices themselves are clearly documented.
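The confounding problem can be made concrete with a toy simulation. Every parameter below is invented: we assume a small true persuasion effect, and a campaign that (realistically) targets voters already leaning its way. A naive comparison of targeted versus untargeted voters then wildly overstates the effect:

```python
# Toy simulation (all parameters invented) of the confounding problem:
# campaigns target voters already sympathetic to them, so a naive
# targeted-vs-untargeted comparison conflates selection with persuasion.
import random

random.seed(0)
TRUE_EFFECT = 0.01   # assumed real persuasion effect of targeting

voters = []
for _ in range(100_000):
    lean = random.random()               # prior probability of support
    targeted = lean > 0.5                # campaign targets sympathizers
    p = min(1.0, lean + (TRUE_EFFECT if targeted else 0.0))
    voters.append((targeted, random.random() < p))

def support_rate(is_targeted):
    group = [voted for t, voted in voters if t == is_targeted]
    return sum(group) / len(group)

naive_estimate = support_rate(True) - support_rate(False)
print(f"naive estimate: {naive_estimate:.3f}, true effect: {TRUE_EFFECT}")
```

The naive difference lands around 0.5 — fifty times the assumed true effect — purely because of who was selected for targeting, which is why observational studies without randomization cannot isolate persuasion effects.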


19. Describe the C2PA (Coalition for Content Provenance and Authenticity) approach to addressing AI-generated disinformation and explain its main limitation.

Model Answer: The C2PA approach embeds cryptographically signed metadata in content at creation, documenting its origin and modification history. When content carries a valid C2PA provenance chain — signed by a verified camera, verified editing software, and a verified publisher — its authenticity can be confirmed. This is more robust than trying to detect AI characteristics in the content itself, because it attests origin rather than analyzing content. The main limitation is adoption: the approach only provides provenance assurance for content created with C2PA-compliant tools and distributed through C2PA-aware platforms. Content generated without C2PA compliance, or where provenance data is stripped (through compression, screenshots, or platform processing), carries no provenance information — and the absence of provenance data does not prove content is AI-generated. An adversarial actor specifically seeking to spread synthetic content without detection can simply avoid C2PA-compliant tools and distribute through channels that don't enforce provenance. The system works for authentic content provenance among honest actors but provides limited protection against deliberate evasion.
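The sign-at-creation, verify-at-consumption pattern can be sketched in a few lines. This is a simplification, not the C2PA specification: real C2PA manifests use X.509 certificate chains and a richer claim structure, whereas here a shared-key HMAC stands in for the signature and the manifest fields are invented:

```python
# Minimal provenance-by-signature sketch (HMAC stands in for C2PA's
# real X.509 signatures; manifest fields are simplified for illustration).
import hashlib
import hmac
import json

SIGNING_KEY = b"device-secret"   # hypothetical trusted-capture key

def sign_manifest(content: bytes, producer: str) -> dict:
    """Bind a provenance manifest to the content's hash at creation time."""
    manifest = {
        "producer": producer,
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content still matches it."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    ok_hash = claimed["content_hash"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

photo = b"raw image bytes"
m = sign_manifest(photo, "verified-camera-01")
print(verify(photo, m))            # True: provenance chain intact
print(verify(b"edited bytes", m))  # False: content no longer matches
```

Note what the sketch cannot do: content that simply arrives with no manifest at all verifies nothing, and that absence proves nothing about whether it is authentic or synthetic, which is exactly the evasion limitation described above.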


20. If technical detection of AI-generated political content is inadequate for the demands of election disinformation response, what non-technical approaches offer the most promise?

Model Answer: The most promising non-technical approaches operate through institutional resilience rather than technical detection. First, independent journalism with established credibility and resources for rapid fact-checking — newsrooms that can investigate viral content quickly, publish debunking effectively, and maintain sufficient public trust that their verification is credited by a substantial portion of the electorate. Second, civic education in critical information literacy — teaching citizens to evaluate source credibility, recognize emotionally manipulative content, and seek verification before sharing politically convenient claims. This does not require technical deepfake detection skills; it requires habitual skepticism toward unverified viral content. Third, legal frameworks that create ex ante deterrence — criminal and civil liability for election disinformation that makes the risk of being caught a meaningful deterrent even when technical attribution is difficult. Fourth, election administration resilience — systems with paper audit trails, post-election audits, and decentralized administration that can function despite information environment attacks. Fifth, international cooperation that enables faster attribution and diplomatic consequences for state-sponsored AI disinformation operations.