Quiz: The Failure Modes of the Future
Q1. "Algorithmic consensus" differs from human consensus because:
(a) It is always correct (b) Multiple AI systems converge on the same answer through shared training data and architecture — not through independent evaluation — creating synthetic agreement that resembles independent confirmation but is structurally hollow (c) It involves more people (d) It is slower than human consensus
Answer
**(b)** The danger: convergent AI outputs are mistaken for independent validation. Five AI systems saying the same thing is not five independent assessments — it is one statistical pattern reproduced five times.
Q2. "Confidence laundering" refers to:
(a) Making money from confidence (b) AI's confident presentation washing uncertainty out of information — presenting provisional, uncertain, or wrong claims as settled facts because the AI generates fluent, authoritative-sounding text regardless of accuracy (c) Building confidence in students (d) Confident public speaking
Answer
**(b)** AI systems generate equally confident outputs whether they are correct or hallucinating. The confident presentation launders uncertainty — the user cannot distinguish genuine knowledge from fluent pattern-matching.
Q3. "Model monoculture" is dangerous because:
(a) It makes AI boring (b) When a small number of foundational models underlies millions of applications, a systematic error in the foundation propagates simultaneously and invisibly across the entire information ecosystem — creating a single point of failure (c) There are too many models (d) Models are expensive
Answer
**(b)** Methodological diversity (Checklist D9) is one of the strongest protections against systematic error. AI monoculture reduces that diversity to an unprecedented degree.
Q4. "Epistemic pollution" is the feedback loop in which:
(a) Pollution affects thinking (b) AI-generated text enters the training data for future AI systems, which produce more AI-generated text, which enters the training data for the next generation — potentially amplifying errors and degrading information quality with no error-checking mechanism (c) Too much information exists (d) Information is censored
Answer
**(b)** Unlike human intellectual inheritance (which involves evaluation and critique), AI inheritance involves only statistical pattern reproduction — amplifying whatever patterns exist, whether correct or erroneous.
Q5. The chapter argues that the most important near-term action is:
(a) Banning AI (b) Applying existing tools (Scorecard, Checklist, design principles) to AI outputs with the same rigor as any other source — not exempting AI from critical evaluation because it is impressive or new (c) Building new AI systems (d) Ignoring AI
Answer
**(b)** The accelerated old failure modes are addressed by existing tools. The key is discipline: treat AI-generated claims as claims that need scoring, not as facts that need acceptance.
Q6. "Training data as fossilized bias" means:
(a) AI systems use old data (b) Historical biases, errors, and limitations are embedded in AI training data and reproduced by AI systems without evaluation or interrogation — perpetuating past errors not through deliberate choice but through statistical inheritance (c) Training data is incomplete (d) Bias can be removed easily
Answer
**(b)** Humans can interrogate their biases through reflection and education. AI systems cannot interrogate their training data — they can only reproduce patterns found within it. The bias is structural and often opaque.
Q7. The chapter identifies four new tools needed for AI-era failure modes. Which is NOT one of them?
(a) Model diversity requirements (b) Training data audits (c) AI output labeling (d) Banning all AI research
Answer
**(d)** The four tools are: model diversity requirements, training data audits, AI output labeling, and feedback loop monitoring. These are structural interventions, not prohibitions.
Q8. The existing failure mode toolkit (Scorecard, Checklist, design principles) can address AI-era challenges:
(a) Completely — no modifications needed (b) Partially — the tools work for accelerated old failure modes but need supplementation for genuinely new failure modes (algorithmic consensus, model monoculture, epistemic pollution) (c) Not at all — everything is different in the AI era (d) Only with government regulation
Answer
**(b)** The existing tools address the *human* side of AI use — evaluating claims, assessing institutional health, designing better systems. The genuinely new failure modes require new tools because they operate through mechanisms that have no historical precedent.
Scoring Guide
- 7-8 correct: Excellent. You can distinguish old failure modes (accelerated by AI) from new ones (created by AI) and apply the appropriate tools.
- 5-6 correct: Good. Review the four genuinely new failure modes and why they have no historical analogue.
- Below 5: Re-read the chapter, focusing on the structural differences between human and AI knowledge production.