Chapter 25 Quiz: Decision Support, Analysis, and Strategic Thinking


Question 1: According to the chapter, which of the following is AI's PRIMARY contribution to complex decision-making?

A) Providing the objectively correct answer based on large-scale data analysis
B) Helping you think more clearly by structuring analysis, surfacing options, and challenging reasoning
C) Replacing human judgment for well-defined decision types
D) Eliminating emotional bias from the decision process entirely

Answer **B** — Helping you think more clearly by structuring analysis, surfacing options, and challenging reasoning. The chapter is explicit that AI cannot and should not replace human judgment in complex decisions. Its value is amplifying the quality of your thinking — structuring messy problems, generating options you haven't considered, testing assumptions, and arguing against your preferred conclusions. Option A is incorrect because AI reflects average patterns and lacks your specific context. Option C is wrong for most professional decisions. Option D overstates AI's impact on emotional bias.

Question 2: What is the "AI echo chamber" risk in decision-making?

A) AI recommendations creating privacy issues when shared across an organization
B) Using AI to confirm decisions you've already made rather than genuinely challenging your thinking
C) AI systems that learn from each other and amplify existing biases
D) Repeating the same AI prompt multiple times and getting similar answers

Answer **B** — Using AI to confirm decisions you've already made rather than genuinely challenging your thinking. The echo chamber risk is that people frame their AI interactions to produce validation rather than genuine challenge. You can prompt AI to argue for your preferred option until it produces satisfying support, or run devil's advocate analysis but dismiss all the arguments reflexively. The warning sign: if AI analysis always confirms what you already believed, you may be curating inputs rather than thinking.

Question 3: When using the devil's advocate technique with AI, what framing produces the most useful output?

A) "What are the pros and cons of this option?"
B) "Please review this decision and identify potential issues"
C) "Make the strongest possible case AGAINST this choice. Don't be balanced."
D) "What are the three main risks of this decision?"

Answer **C** — "Make the strongest possible case AGAINST this choice. Don't be balanced." The adversarial framing is essential to the technique's value. When you ask for "pros and cons" or "potential issues," AI tends toward diplomatic balance — providing mild criticism alongside support. Explicitly instructing AI to argue against your position, without balance, produces the pointed, uncomfortable challenges that are most valuable for testing whether your reasoning holds up.

Question 4: What is a "critical assumption" in the context of decision analysis?

A) The assumption that is most likely to be false
B) The assumption that all stakeholders agree on
C) The assumption that, if false, would most completely undermine your decision
D) The assumption that was most carefully validated before the decision

Answer **C** — The assumption that, if false, would most completely undermine your decision. The critical assumption is not necessarily the most likely to be wrong (that would be the "riskiest assumption"). It's the one where falsity would be most consequential for your decision. Identifying it focuses your validation energy on what matters most. A decision can have many assumptions; most of them don't matter much if they're wrong. The critical assumption is the one that does.

Question 5: According to the chapter, in which type of decision does AI decision support add the most CONSISTENT value?

A) Novel strategic decisions in rapidly changing markets
B) Personnel and hiring decisions
C) Well-structured decisions with clear criteria, measurable outcomes, and historical data
D) Values-laden decisions involving trade-offs between competing priorities

Answer **C** — Well-structured decisions with clear criteria, measurable outcomes, and historical data. The research section of the chapter notes that AI decision support shows the clearest quality improvements in well-structured decisions — medical diagnostics, financial risk assessment, engineering design optimization — where parameters are clear and historical data provides calibration. For novel, ambiguous, or values-laden decisions, AI adds less consistent value and requires more critical evaluation.

Question 6: What distinguishes "scenario analysis" from a standard pros/cons analysis?

A) Scenario analysis uses quantitative scores while pros/cons is qualitative
B) Scenario analysis evaluates options across multiple plausible futures rather than a single assumed future
C) Scenario analysis is designed for decisions with more than five options
D) Scenario analysis requires more historical data to run effectively

Answer **B** — Scenario analysis evaluates options across multiple plausible futures rather than a single assumed future. The key insight of scenario analysis is that many complex decisions are made under genuine uncertainty about how external conditions will evolve. A pros/cons or matrix analysis implicitly assumes a single future state. Scenario analysis explicitly defines multiple possible futures and asks how each option performs across them — identifying "robust" choices that work reasonably well across scenarios vs. "bets" that are excellent in one scenario but poor in others.
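The robust-vs-bet distinction can be sketched as a small computation. Everything below — the option names, the three scenarios, the 1–10 scores, and the "robust" threshold — is a hypothetical illustration, not data from the chapter:

```python
# Scenario analysis sketch: score each option under several plausible
# futures, then label options "robust" (acceptable in every scenario)
# or "bet" (strong in some futures, weak in at least one).
scores = {
    # option: {scenario: score on a 1-10 scale} (illustrative values)
    "expand_now":   {"boom": 9, "flat": 4, "downturn": 2},
    "partner":      {"boom": 6, "flat": 6, "downturn": 5},
    "wait_and_see": {"boom": 4, "flat": 5, "downturn": 7},
}

ROBUST_FLOOR = 4  # minimum acceptable score in the worst scenario

classification = {}
for option, by_scenario in scores.items():
    worst = min(by_scenario.values())
    best = max(by_scenario.values())
    classification[option] = "robust" if worst >= ROBUST_FLOOR else "bet"
    print(f"{option}: worst={worst}, best={best} -> {classification[option]}")
```

The point of the exercise is the shape of the output, not the numbers: "expand_now" is a bet (excellent in a boom, poor in a downturn), while "partner" works tolerably well across all three futures.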

Question 7: Alex's case study in this chapter describes a situation where AI analysis led her toward a wrong recommendation. What was the primary cause of the failure?

A) AI used an outdated analytical framework for the decision type
B) The AI model was miscalibrated for market entry decisions
C) AI analyzed a generic version of her situation that lacked critical context she hadn't provided
D) Alex failed to run the devil's advocate prompt before accepting the recommendation

Answer **C** — AI analyzed a generic version of her situation that lacked critical context she hadn't provided. Alex's case study illustrates a key principle: AI analyzes the situation you describe, not your actual situation. When she provided the missing regulatory context, the AI's recommendation changed. This is one of the most important warnings in the chapter: the more your situation departs from the generic case, the more carefully you must validate AI analysis against the specific context AI doesn't have.

Question 8: What is the "outsource the decision" failure mode?

A) Delegating too many decisions to junior team members using AI tools
B) Using AI as a substitute for human judgment rather than as support for it
C) Relying on external consultants instead of AI for complex analysis
D) Outsourcing the data collection phase of decision analysis to AI

Answer **B** — Using AI as a substitute for human judgment rather than as support for it. The "outsource the decision" failure is using AI to avoid the discomfort of deciding: looking to its output for permission rather than for thinking tools. It's characterized by waiting for AI to tell you what to do, accepting recommendations without understanding the reasoning, and feeling relieved rather than clearer after reading the analysis. The problem isn't just decision quality; you also lose the learning and self-knowledge that come from making hard calls.

Question 9: What does Porter's Five Forces framework analyze?

A) The five key success factors for a specific project
B) The five stages of market development for a new product
C) The competitive forces that determine industry profitability and competitive intensity
D) The five stakeholder groups that influence strategic decisions

Answer **C** — The competitive forces that determine industry profitability and competitive intensity. Porter's Five Forces analyzes: (1) competitive rivalry among existing players, (2) threat of new entrants, (3) threat of substitute products, (4) bargaining power of suppliers, and (5) bargaining power of customers. It's most useful for market entry and competitive positioning decisions. The chapter notes it as one of several frameworks that AI can populate systematically given context, freeing the decision-maker to focus on which insights matter most.

Question 10: When should a decision record be written?

A) Only after a decision has gone wrong, as a post-mortem
B) At the end of the project, to document what decisions were made
C) At the time of the decision, capturing reasoning, options rejected, and review triggers
D) Before the decision, as a structured planning document

Answer **C** — At the time of the decision, capturing reasoning, options rejected, and review triggers. Decision records are most valuable when written at the time of decision — before the outcome is known and before memory of the reasoning has faded or been revised by post-hoc rationalization. Their primary uses are: preventing decision drift (re-litigating decisions without acknowledging they were made), enabling learning (comparing expected vs. actual outcomes), and clarifying accountability. A decision record written after the outcome is contaminated by outcome bias.
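A decision record can be as lightweight as a structured note. The field names below follow the elements the chapter mentions (reasoning, options rejected, review triggers); the specific values and the extra fields are hypothetical, shown only to make the shape concrete:

```python
# Minimal decision record, written at decision time, before the outcome
# is known. All values are invented for illustration.
decision_record = {
    "date": "2024-03-01",
    "decision": "Adopt vendor A for payment processing",
    "owner": "Alex",
    "reasoning": "Lowest integration cost; meets current compliance needs.",
    "options_rejected": {
        "vendor_b": "Stronger API, but roughly twice the cost",
        "build_in_house": "Estimated 6+ month timeline",
    },
    "assumptions": ["Transaction volume stays under 10k/day"],
    "review_triggers": [
        "Volume exceeds 10k/day",
        "Vendor A changes pricing",
    ],
}

print(decision_record["decision"])
```

Capturing rejected options and review triggers at decision time is what makes the record useful later: it prevents re-litigating the choice and gives you concrete conditions under which to revisit it.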

Question 11: The chapter warns that AI can generate "compelling but wrong" arguments. What is the best practice for guarding against this risk?

A) Only use AI for analysis when you have independent data to verify conclusions
B) Always have a human expert review AI-generated decision analysis
C) Explicitly identify what AI cannot know about your specific situation and stress-test the analysis against that context
D) Limit AI to generating options lists rather than evaluating them

Answer **C** — Explicitly identify what AI cannot know about your specific situation and stress-test the analysis against that context. The most reliable defense against compelling-but-wrong AI analysis is a systematic habit of asking: "What doesn't AI know about my specific situation that might change this analysis?" This forces you to surface the context gaps that generate the most dangerous errors. Options A and B are also useful but are reactive rather than preventive. Option D is too restrictive — option evaluation is where structured AI analysis adds most of its value.

Question 12: What is the Socratic method as used in AI-assisted decision support?

A) Presenting multiple philosophical perspectives on a decision
B) Using AI to question assumptions through dialogue rather than providing recommendations
C) Applying classical logical analysis to decision frameworks
D) Building decision trees based on ancient Greek philosophical categories

Answer **B** — Using AI to question assumptions through dialogue rather than providing recommendations. The Socratic technique as described in the chapter asks AI to generate questions — not answers — that challenge your reasoning and surface your assumptions. The key instruction is "ask only questions; no answers, no suggestions." This is valuable because it forces you to think through the implications yourself, rather than accepting AI's conclusions. The dialogue structure — your answers prompting follow-up questions — often surfaces the real crux of a decision that wasn't obvious in the initial framing.

Question 13: According to the chapter's ethics section, what is the fundamental test for whether a decision should be made by a human vs. supported by AI?

A) Whether the decision involves more than three options
B) Whether the decision has financial implications
C) Whether there is a person who should be accountable for the consequences if the decision goes wrong
D) Whether the decision involves information that is publicly available

Answer **C** — Whether there is a person who should be accountable for the consequences if the decision goes wrong. The chapter states: "If this decision turns out badly, is there a person who should face the consequences? If yes, that person (not AI) should make the decision." Accountability is irreducibly human — it requires a person who learns from consequences, adjusts their judgment, and is responsible to others. This test doesn't exclude AI from supporting the decision; it determines that a human, not AI, must be the final decision-maker.

Question 14: What is the "average judgment" problem with AI in decision-making?

A) AI takes too long to generate analysis for time-sensitive decisions
B) AI's training reflects collective patterns that may be wrong for decisions requiring contrarian or context-specific insight
C) AI produces average-quality analysis compared to specialized decision support tools
D) AI weights all decision criteria equally rather than reflecting their true importance

Answer **B** — AI's training reflects collective patterns that may be wrong for decisions requiring contrarian or context-specific insight. AI models learn from vast amounts of text that reflects the aggregate of human thinking — including consensus views, conventional wisdom, and common patterns. For decisions where the right answer requires thinking that breaks from the average (category-creating products, unusual market situations, organizations with non-standard constraints), AI will be poorly calibrated in ways that expert human judgment might not be. The chapter notes this is especially acute for novel situations without good historical analogues.

Question 15: Which of the following is a warning sign that you are "outsourcing the decision" to AI rather than using AI as decision support?

A) Running more than one decision framework for the same decision
B) Feeling relieved after reading AI output rather than clearer
C) Disagreeing with AI's recommended option
D) Taking longer than expected to complete the AI analysis

Answer **B** — Feeling relieved after reading AI output rather than clearer. Relief is the emotional signature of outsourcing — you feel better because someone (something) else has taken on the burden of deciding. Clarity is the appropriate outcome of good decision support — you feel more certain about your own reasoning, better able to articulate why you're making the choice you're making. Other warning signs mentioned in the chapter: accepting AI recommendations without being able to explain why you agree, and framing prompts as "What should I do?" rather than "Help me think through X."