Chapter 28: Quiz — AI and Employment

20 questions. Mix of multiple choice, true/false, and short answer.


Multiple Choice

1. The 2013 Frey and Osborne study estimated what percentage of US jobs were at "high risk" of computerization?

A) 14% B) 25% C) 47% D) 62%

Answer: C — Frey and Osborne's Oxford Martin School study estimated 47% of US jobs were at high risk, though this figure has been contested on methodological grounds.


2. The key methodological difference between the Frey-Osborne estimate and the OECD 2018 estimate was:

A) The OECD studied more countries B) The OECD analyzed tasks within occupations rather than treating occupations as units C) The OECD focused only on manufacturing jobs D) The Frey-Osborne study used more recent data

Answer: B — The OECD's lower 14% estimate resulted from analyzing specific tasks within jobs, recognizing that most occupations contain both automatable and non-automatable components.


3. Which of the following occupational categories is MOST resistant to current AI automation?

A) Routine data processing B) Basic legal document review C) Skilled trades requiring physical dexterity in unpredictable environments D) Standard financial reporting

Answer: C — Skilled trades (plumbing, electrical, HVAC) require physical manipulation in highly variable, unpredictable environments that general-purpose AI and robotics cannot yet navigate effectively.


4. The "centaur" model of human-AI collaboration refers to:

A) AI systems that can independently solve complex problems B) Human-AI teams that outperform either alone in specific domains C) AI systems trained to mimic human decision-making D) Regulatory frameworks that combine human oversight with AI efficiency

Answer: B — The centaur metaphor, from chess, describes human-AI teams that leverage the complementary strengths of each to achieve better outcomes than either alone.


5. California's Assembly Bill 5 (AB5) and the subsequent Proposition 22 battle was primarily about:

A) Mandatory AI impact assessments for employers B) Data privacy rights for gig workers C) Classification of gig workers as employees vs. independent contractors D) Minimum wage requirements for AI-managed workers

Answer: C — AB5 codified the broad "ABC test" for employee classification, under which most gig workers would have been classified as employees; Proposition 22, backed by roughly $200 million in spending from the platform companies, created an exemption for app-based drivers.


6. The EU Platform Work Directive (2024) established what key presumption?

A) Platform workers are presumed to be independent contractors unless they prove otherwise B) Platform workers are presumed to be employees, with platforms able to rebut this presumption C) All platform work must be conducted through licensed employment agencies D) Algorithmic management is prohibited in the European Union

Answer: B — The directive reversed the burden of proof, presuming employment and requiring platforms to demonstrate genuine contractor independence.


7. Amazon's internal performance management metric "TOT" stands for:

A) Total Output Targets B) Time Off Task C) Task Optimization Threshold D) Total Operator Throughput

Answer: B — Time Off Task (TOT) tracks periods when workers' scanners are not recording productive activity, with automated alerts when thresholds are exceeded.


8. The WGA strike settlement included all of the following AI protections EXCEPT:

A) AI cannot write or rewrite literary material covered by WGA agreements B) AI-generated content cannot be used as "source material" to reduce writer rates C) Studios are prohibited from using AI to generate content in pre-development stages before writers are hired D) Writers must be told when AI-generated material is given to them

Answer: C — The WGA did not win restrictions on AI use in pre-development stages before WGA agreements apply. The other three provisions were included in the settlement.


9. Which government's workforce management model involves works councils (Betriebsrat) with genuine co-governance rights over workplace technology changes?

A) United States B) United Kingdom C) Singapore D) Germany

Answer: D — Germany's works councils have codetermination rights — not merely consultation but co-governance — over workplace changes including AI-driven monitoring systems.


10. What does the research evidence show about generic government retraining programs for displaced workers?

A) They consistently produce job outcomes equivalent to the displaced jobs B) They have poor track records, with weak job placement success C) They are most effective for older workers with extensive experience D) They work well when combined with UBI payments

Answer: B — Programs like US Trade Adjustment Assistance have shown limited effectiveness, often placing workers in jobs that pay less than the positions they lost.


True/False

11. The historical evidence from previous automation waves shows that new job categories reliably emerge quickly enough to prevent significant unemployment during technological transitions.

Answer: False — While new job categories have historically emerged, they have done so over generational timescales and in different geographic locations than the displaced work, resulting in prolonged transition periods with significant human cost.


12. AI automation primarily threatens lower-skilled, lower-wage occupations, leaving higher-educated professionals largely unaffected.

Answer: False — AI capabilities now extend to many tasks requiring significant education, including legal analysis, financial reporting, code generation, and medical diagnosis support, extending disruption risk up the credential ladder.


13. The WGA settlement prohibited studios from training AI models on writers' existing work without compensation.

Answer: False — The WGA did not win training data protections; studios retained the ability to use existing scripts and literary material as AI training data, which remains a significant concern for writers.


14. Algorithmic deactivation — the removal of gig workers from platforms by automated systems — is functionally equivalent to termination without cause, but occurs without the employment law protections that apply to employees.

Answer: True — Because gig workers are classified as independent contractors, they do not receive the notice, severance, and unemployment insurance protections that apply when employees are terminated.


15. The McKinsey Global Institute's 2017 analysis estimated that 49% of jobs would be fully automated.

Answer: False — McKinsey's estimate applied to 49% of work activities (tasks within jobs), not 49% of jobs. This is an important distinction, as eliminating some tasks from a job does not eliminate the job.


Short Answer

16. Explain the "augmentation vs. replacement" distinction in AI deployment. Why is this distinction ethically significant?

Model Answer: Augmentation means using AI to amplify the productivity or capabilities of human workers: an AI contract review system, for example, lets each lawyer handle more matters. Replacement means using AI to eliminate the need for human workers: using that same system to reduce the number of lawyers employed. The distinction is ethically significant because it is determined by organizational choice, not technical necessity. The moral responsibility for workforce displacement attaches to the humans making deployment decisions, which means organizations cannot ethically treat workforce reduction as a "technological inevitability." The choice between augmentation and replacement reflects values about what organizations owe workers and how productivity gains should be shared.


17. What is the "accountability gap" in algorithmic management, and why is it ethically problematic?

Model Answer: The accountability gap refers to the diffusion of responsibility when algorithmic systems make consequential decisions about workers. When a human manager disciplines a worker, the manager is identifiable and accountable: they can be questioned, challenged, and held responsible. When an algorithm generates discipline recommendations, accountability is distributed among the engineers who designed the system, the managers who approved its deployment, and the executives who set performance standards, with no single actor clearly responsible for the specific decision affecting the specific worker. This is ethically problematic for several reasons: workers cannot effectively challenge decisions when no identifiable human is responsible; organizational learning from wrongful or harmful decisions is impaired; and the diffusion of accountability can be used strategically to insulate organizations from the moral and legal consequences of their management practices.


18. What distinguishes a "just transition" framework from standard market adjustment in the context of AI-driven employment disruption?

Model Answer: A just transition framework explicitly acknowledges that AI deployment imposes costs on specific workers and communities — costs that are not self-inflicted but are the consequence of deliberate organizational and policy choices. Standard market adjustment treats displacement as a background economic process to which individuals must adapt, with public policy playing a minimal role. A just transition requires: advance notice, so workers can plan; meaningful income support during the transition; genuine retraining investment with actual job placement outcomes; community-level economic development where concentrated displacement creates local shocks; and worker and community voice in the decisions that create the displacement. The ethical basis is that those who benefit from the transition (capital owners and the organizations that capture the productivity gains) have obligations to those who bear its costs.


19. Why might AI's labor market impact be qualitatively different from previous automation waves, rather than simply another cycle of disruption and recovery?

Model Answer: Proponents of this view point to two key differences. First, AI's domain generality: previous automation was domain-specific (a robot replaces assembly workers, a spreadsheet replaces bookkeepers), but AI systems generalize across many domains simultaneously, potentially disrupting job categories faster than new ones emerge. Second, AI is automating cognitive and communicative work — the very kinds of tasks that previous waves created to absorb displaced workers. When the knowledge-economy jobs that absorbed industrial displacement are themselves subject to AI disruption, the historical pattern of displaced workers moving into higher-skilled roles may not hold, because those higher-skilled roles face disruption at the same time. Whether the pace of this transition will outrun institutional and policy responses is genuinely uncertain.


20. An organization is deploying AI in its customer service department in ways that will result in eliminating approximately 200 positions over 18 months. Beyond legal compliance with WARN Act requirements, what ethical obligations does the organization have to the affected workers?

Model Answer: Ethical obligations exceeding legal minimums include: transparent communication about the AI displacement driving the restructuring (rather than opaque "organizational change" language); advance notice substantially exceeding the 60-day WARN Act minimum, providing real time to seek alternative employment; severance calibrated to the genuine disruption imposed, not the minimum legally required; genuine retraining investment — not resume workshops but employer-partnered skill development with job placement commitments; outplacement support with real networks and connections; and, where 200 displaced workers represent a significant economic shock to a specific community, engagement with local economic development responses. The organization may also have obligations regarding transparency to remaining workers about AI deployment plans, to avoid a climate of ongoing uncertainty that impairs engagement and retention.