Chapter 38 Quiz: AI, Society, and the Future of Work
Question 1
The Luddites (1811--1816) are often invoked as an example of irrational resistance to technology. What is a more accurate characterization of their movement?
A) They were unskilled workers who could not adapt to new technology and resisted all forms of mechanization
B) They opposed the specific deployment of machinery to replace skilled artisans with cheaper unskilled labor while eliminating customary worker protections
C) They were factory owners who feared competition from more advanced mills
D) They were academic economists who predicted permanent mass unemployment from automation
Question 2
Frey and Osborne's widely cited 2013 study estimated that 47% of US jobs were at "high risk" of computerization. What is the most common misinterpretation of this finding?
A) That the study examined only manual labor occupations
B) That the study predicted 47% of workers would lose their jobs, when the authors actually estimated that 47% of jobs had task profiles susceptible to automation
C) That the study was funded by technology companies with a vested interest in promoting automation
D) That the study only examined the United States, when it actually covered all OECD countries
Question 3
When Arntz, Gregory, and Zierahn (2016) reanalyzed the Frey and Osborne methodology at the task level rather than the occupation level, what happened to the estimate of jobs at high risk of automation?
A) It increased from 47% to 62%
B) It remained approximately the same at 47%
C) It decreased dramatically to approximately 9%
D) It increased to 80% when accounting for AI advances
Question 4
What is the key distinction between "task displacement" and "job displacement"?
A) Task displacement affects manual tasks; job displacement affects cognitive tasks
B) Task displacement occurs when specific tasks within a job are automated (changing the job's composition), while job displacement occurs when enough tasks are automated that the position is no longer economically justified
C) Task displacement is temporary; job displacement is permanent
D) Task displacement only occurs in manufacturing; job displacement occurs across all sectors
Question 5
The Eloundou et al. (2023) study on LLM exposure found a striking inversion of previous automation patterns. What was the inversion?
A) LLMs primarily affect manual labor rather than cognitive work
B) Higher-wage workers with more education were more exposed to LLM automation than lower-wage workers --- the opposite of previous automation waves
C) Workers in developing countries were less affected than workers in developed countries
D) Older workers were less exposed than younger workers
Question 6
The "centaur model" of human-AI collaboration originated from which context?
A) A McKinsey consulting framework for enterprise AI deployment
B) Garry Kasparov's proposal for "Advanced Chess" in which human players could use computer assistance, with human-computer teams outperforming both humans and computers alone
C) A US Department of Defense program for human-AI teaming in military operations
D) An academic study of human-robot collaboration in manufacturing
Question 7
According to the chapter, effective human-AI augmentation requires redesigning three things simultaneously. Which three?
A) The technology, the training data, and the evaluation metrics
B) The organizational chart, the compensation structure, and the performance review process
C) The technology (what the AI does), the workflow (how work is organized), and the role (what the human is responsible for)
D) The hardware, the software, and the user interface
Question 8
The Autor, Levy, and Murnane (2003) framework demonstrated that computers substitute for workers performing _____ tasks while complementing workers performing _____ tasks.
A) cognitive ... manual
B) routine ... non-routine
C) expensive ... inexpensive
D) simple ... complex
Question 9
What makes current AI fundamentally different from previous waves of automation, according to the chapter?
A) AI is more expensive to deploy than previous automation technologies
B) AI is the first technology with the demonstrated capacity to perform non-routine cognitive tasks, expanding the frontier of automation into domains previously considered safe
C) AI eliminates jobs faster than any previous technology
D) AI only affects white-collar workers, while previous automation affected blue-collar workers
Question 10
Which of the following best characterizes the "judgment dimension" of tasks?
A) Tasks requiring advanced mathematical computation
B) Tasks involving the evaluation of trade-offs among competing values where the criteria for "good" are themselves contested
C) Tasks requiring rapid decision-making under time pressure
D) Tasks requiring integration of information from multiple data sources
Question 11
In Athena's customer service transformation, the AI chatbot now handles 65% of customer inquiries. Which of the following accurately describes the workforce outcome?
A) 400 agents were laid off and the remaining 800 handle all inquiries at lower pay
B) The team was reduced from 1,200 to 800: 200 agents were retrained into new roles, 200 positions were eliminated through attrition, and remaining agents handle more complex cases at higher pay
C) All 1,200 agents were retained but moved to part-time schedules
D) The team was outsourced to a third-party contractor at lower cost
Question 12
Grace Chen says Athena "chose the harder path --- managing the transition rather than optimizing the headcount." What was the approximate cost of this choice?
A) $500,000 in reskilling investment
B) $4 million in reskilling investment, with all reductions achieved through attrition over 18 months rather than immediate layoffs
C) $20 million in severance payments
D) No additional cost --- the transition was cost-neutral
Question 13
The chapter identifies the geographic concentration of AI employment as a source of inequality. According to the Brookings Institution research cited, what percentage of AI jobs are concentrated in just 15 metro areas?
A) 20%
B) 40%
C) 60%
D) 90%
Question 14
Lena Park's "AI Governance Triangle" identifies three dimensions that governance systems must balance. What are they?
A) Speed, accuracy, and cost
B) Innovation, protection, and participation
C) Privacy, security, and transparency
D) Regulation, enforcement, and compliance
Question 15
The chapter discusses universal basic income (UBI) as a potential response to AI-driven displacement. What does the evidence from UBI pilot programs generally show?
A) Recipients stop working entirely, confirming the incentive concern
B) Recipients do not stop working; they tend to invest in education, health, and entrepreneurship, though pilots are small and short-term
C) UBI is prohibitively expensive at any scale
D) UBI has no measurable effect on recipients' behavior or well-being
Question 16
According to the Card, Kluve, and Weber (2022) meta-analysis of government-sponsored retraining programs, what are the typical effects on earnings?
A) Large positive effects: 25--40% increases in earnings
B) Modest positive effects: typically 5--10% increases, with outcomes varying enormously by program quality and context
C) No measurable effects: retraining has no impact on earnings
D) Negative effects: retrained workers earn less than non-retrained peers
Question 17
The chapter states that AI safety and existential risk involve a debate between which two broad positions?
A) Whether AI should be open-source or closed-source
B) Whether increasingly capable AI systems could pose fundamental risks to humanity (the concern) vs. whether such concerns are speculative and distract from concrete present-day AI harms (the skeptical response)
C) Whether AI should be regulated by governments or by industry
D) Whether AI will achieve consciousness within 10 years or 100 years
Question 18
Which of the following is NOT one of the six principles in the chapter's "Framework for Responsible Leadership"?
A) Be honest about the impact
B) Design for augmentation by default
C) Maximize shareholder value from AI deployments
D) Accept that you will get it wrong
Question 19
The chapter's Principle 1 ("Be Honest About the Impact") specifically warns against which practice?
A) Deploying AI without board approval
B) Using open-source AI models instead of proprietary ones
C) Telling employees that AI will only "help them do their jobs better" while also planning to reduce headcount
D) Investing in reskilling programs that may not succeed
Question 20
NK closes the chapter by asking: "We spent thirty-seven chapters learning how to build AI. Now we need to ask: build it for whom?" Which broader theme does this question most directly connect to?
A) The Build-vs-Buy Decision (whether to develop AI internally or purchase vendor solutions)
B) The Hype-Reality Gap (distinguishing genuine AI capability from marketing)
C) Responsible Innovation (ethics as a component of sustainable innovation, not a constraint)
D) Data as a Strategic Asset (data quality and governance)
Answer Key
1. B --- The Luddites opposed the specific deployment of machinery to replace skilled artisans while eliminating worker protections, not technology per se.
2. B --- Frey and Osborne estimated susceptibility to automation, not actual job loss. The distinction between technical automatability and actual automation (which depends on economic, legal, social, and organizational factors) was widely overlooked.
3. C --- Task-level analysis showed that most occupations contain a mix of automatable and non-automatable tasks, dramatically reducing the estimate of jobs at "high risk."
4. B --- Task displacement changes job composition; job displacement eliminates positions entirely. Most automation produces task displacement, but sustained task displacement can eventually lead to job displacement.
5. B --- LLMs perform non-routine cognitive tasks (writing, analysis, communication), which are disproportionately concentrated in higher-wage, higher-education occupations --- reversing the historical pattern.
6. B --- Kasparov proposed Advanced Chess after losing to Deep Blue, and the resulting human-computer teams demonstrated the power of complementary collaboration.
7. C --- Deploying AI without redesigning the technology, workflow, and role typically produces either technology rejection or automation bias, not genuine augmentation.
8. B --- The routine/non-routine distinction, rather than cognitive/manual, is the key dimension determining susceptibility to computer automation.
9. B --- Previous automation waves were limited to routine tasks. AI's ability to perform non-routine cognitive tasks (understanding language, generating creative content, making judgments) represents a qualitative shift.
10. B --- The judgment dimension captures normative complexity --- tasks where the criteria for success are contested --- which resists automation not because of computational limits but because there is no clear optimization target.
11. B --- Athena's approach combined retraining (200 agents moved to new roles), attrition (200 positions eliminated without layoffs), and role enhancement (remaining agents handle more complex, higher-paid work).
12. B --- $4 million in reskilling, with attrition-based reductions over 18 months. The approach was more expensive in the short term but preserved institutional knowledge and workforce morale.
13. C --- 60% of AI jobs are concentrated in 15 metro areas containing only 28% of the US population, representing extreme geographic concentration.
14. B --- Innovation (continued technical progress), Protection (preventing harm), and Participation (affected communities having a voice). The three dimensions are in genuine tension.
15. B --- Pilot evidence generally contradicts the work-incentive concern, but pilots are limited in scale and duration, making generalization uncertain.
16. B --- Retraining produces modest positive effects that vary dramatically by context, highlighting the gap between reskilling rhetoric and retraining reality.
17. B --- The debate involves genuine disagreement among serious researchers about whether advanced AI poses fundamental risks (Russell, Bengio) or whether such concerns distract from concrete present-day harms (LeCun, Ng).
18. C --- The framework emphasizes honesty, transition investment, augmentation design, worker inclusion, systemic thinking, and humility --- not shareholder value maximization.
19. C --- Using augmentation language to describe an automation strategy is specifically identified as a trust-destroying practice that the customer service representative's letter illustrates.
20. C --- NK's question is the culmination of the Responsible Innovation theme: ethics and human impact are not constraints on innovation but integral to determining what innovation is worth pursuing.