Chapter 38 Further Reading: AI, Society, and the Future of Work
The Automation Debate and Labor Economics
1. Autor, D. H., Levy, F., & Murnane, R. J. (2003). "The Skill Content of Recent Technological Change: An Empirical Exploration." Quarterly Journal of Economics, 118(4), 1279--1333. The foundational paper establishing the task-based framework for understanding automation's impact on employment. Autor, Levy, and Murnane demonstrated that computers substitute for routine tasks (both cognitive and manual) while complementing non-routine tasks. This framework --- analyzing tasks rather than occupations --- became the standard approach for subsequent research on automation and employment, including the studies by Frey and Osborne, the OECD, and McKinsey discussed in the chapter. Essential reading for anyone who wants to understand the analytical foundations of the automation debate.
2. Frey, C. B., & Osborne, M. A. (2017). "The Future of Employment: How Susceptible Are Jobs to Computerisation?" Technological Forecasting and Social Change, 114, 254--280. The paper behind the "47 percent" headline. Originally circulated as a working paper in 2013, the published version includes additional analysis and context. The methodology --- expert classification of 70 occupations, generalized to 702 via machine learning --- is debatable, but the paper's influence on the public debate is undeniable. Read alongside the OECD reanalysis (Arntz et al., 2016) for the task-level critique, and alongside Autor (2015) for the broader economic context.
3. Arntz, M., Gregory, T., & Zierahn, U. (2016). "The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis." OECD Social, Employment and Migration Working Papers, No. 189. The critical reanalysis that reduced the "high risk" estimate from 47 percent to 9 percent by examining individual tasks within occupations rather than classifying entire occupations. The paper demonstrated that most jobs contain a mix of automatable and non-automatable tasks, and that the Frey and Osborne approach overstated risk by treating mixed-task jobs as fully automatable. A model of careful methodological critique that improved the entire field's analytical framework.
4. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). "GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models." arXiv preprint arXiv:2303.10130. The study that identified the inversion of the traditional automation pattern: higher-wage, higher-education workers are more exposed to LLM automation than lower-wage workers. The methodology (combining human annotation with GPT-4 self-assessment) is novel and raises its own questions, but the finding is consequential. Particularly valuable for the appendix detailing task-level exposure assessments for specific occupations.
5. Acemoglu, D., & Restrepo, P. (2019). "Automation and New Tasks: How Technology Displaces and Reinstates Labor." Journal of Economic Perspectives, 33(2), 3--30. The most rigorous economic framework for understanding how automation affects employment through two channels: the displacement effect (machines replace workers in existing tasks) and the reinstatement effect (new tasks are created that require human labor). Acemoglu and Restrepo show that the historical balance between displacement and reinstatement is not guaranteed --- it depends on the rate and direction of innovation, the flexibility of labor markets, and institutional factors. More technically demanding than the other readings on this list, but essential for a complete understanding.
The Future of Work
6. Autor, D. H. (2015). "Why Are There Still So Many Jobs? The History and Future of Workplace Automation." Journal of Economic Perspectives, 29(3), 3--30. The best single overview of the automation debate from the perspective of labor economics. Autor explains why predictions of mass unemployment have historically been wrong (new tasks emerge, consumer demand shifts, complementary skills increase in value) while identifying the conditions under which the current wave might be different. Written before the generative AI revolution but still remarkably relevant. The ideal starting point for anyone approaching this topic for the first time.
7. Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton. The book that introduced the concept of a "second machine age" in which digital technologies do for mental power what the steam engine did for muscle power. Brynjolfsson and McAfee argue that the economic impact of digital technologies is just beginning and that the coming decades will bring both enormous prosperity and wrenching inequality. More optimistic than many accounts, but honest about the distributional challenges. The companion volume, Machine, Platform, Crowd (2017), extends the analysis to platform economics and AI.
8. Susskind, D. (2020). A World Without Work: Technology, Automation, and How We Should Respond. Metropolitan Books. The most thoughtful book-length treatment of the possibility that AI-driven automation could permanently reduce the demand for human labor. Susskind, an Oxford economist, takes the "this time is different" argument seriously without becoming alarmist. His analysis of how economic theory needs to adapt to a world of increasingly capable machines is rigorous and accessible. Particularly valuable for the policy chapters, which discuss UBI, retraining, and alternative economic models.
9. Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio. A practical guide to human-AI collaboration from Wharton professor Ethan Mollick. Drawing on extensive classroom experimentation with generative AI, Mollick provides concrete frameworks for augmentation --- how to use AI as a creative partner, analytical assistant, and provocateur of new thinking without surrendering human judgment. Directly relevant to the centaur model discussion and the question of how to design for augmentation rather than automation.
AI and Inequality
10. Muro, M., Maxim, R., & Whiton, J. (2019). "Automation and Artificial Intelligence: How Machines Are Affecting People and Places." Brookings Institution. The definitive analysis of the geographic distribution of AI's employment impact in the United States. The report demonstrates that AI employment is far more geographically concentrated than technology employment generally, with implications for regional inequality, political polarization, and the viability of pro-technology policies in communities that bear the costs of automation without sharing in its benefits. The data visualizations are particularly effective.
11. Korinek, A., & Stiglitz, J. E. (2021). "Artificial Intelligence, Globalization, and Strategies for Economic Development." NBER Working Paper No. 28453. Nobel laureate Joseph Stiglitz and AI economist Anton Korinek examine how AI affects developing countries --- a perspective too often missing from analyses centered on wealthy nations. The paper argues that AI may undermine the labor-cost advantages on which many developing countries have built their growth strategies, and proposes policy responses including technology transfer, capacity building, and international coordination. Essential reading for anyone concerned about the global equity dimensions of AI.
12. Autor, D. H. (2019). "Work of the Past, Work of the Future." AEA Papers and Proceedings, 109, 1--32. Autor's analysis of labor market polarization --- the hollowing out of middle-skill, middle-income occupations --- and its implications for inequality, social mobility, and democratic stability. The paper demonstrates that the polarization pattern that began with IT automation has intensified, and that AI is likely to accelerate it further. A concise, data-rich treatment that connects technology economics to broader social and political dynamics.
Policy Responses
13. Madsen, P. K. (2006). "How Can It Possibly Fly? The Paradox of a Dynamic Labour Market in a Scandinavian Welfare State." In National Identity and the Varieties of Capitalism: The Danish Experience, ed. J. L. Campbell et al. McGill-Queen's University Press. The classic academic treatment of Denmark's flexicurity model. Madsen explains how a system of high labor market flexibility combined with generous social protection produces paradoxically high levels of worker security and rapid job transitions. The analysis is specific to the Danish context but identifies principles --- distributed risk, active labor market policies, tripartite governance --- that are relevant to AI transition policy anywhere.
14. Card, D., Kluve, J., & Weber, A. (2018). "What Works? A Meta-Analysis of Recent Active Labor Market Program Evaluations." Journal of the European Economic Association, 16(3), 894--931. The most comprehensive meta-analysis of government-sponsored retraining and employment programs, covering over 200 studies from multiple countries. The findings are sobering: retraining programs produce modest average effects on employment and earnings, with significant variation by program type, target population, and economic context. Essential reading for anyone involved in designing or evaluating reskilling programs --- it provides a reality check on the rhetoric of "retraining" as a solution to AI displacement.
15. Standing, G. (2017). Basic Income: And How We Can Make It Happen. Pelican Books. The most accessible and comprehensive case for universal basic income. Guy Standing, a development economist and founding member of the Basic Income Earth Network (BIEN), addresses the major objections (cost, work incentives, political feasibility) while drawing on evidence from pilot programs worldwide. Useful for understanding the UBI debate discussed in the chapter, even for readers who ultimately disagree with Standing's conclusions.
AI Safety and Governance
16. Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. Stuart Russell, one of the most respected AI researchers in the world, makes the case that advanced AI systems pose genuine risks and that the AI research community needs to rethink its approach to building AI from the ground up. Russell does not predict doom; he argues that the current framework (build AI that optimizes for specified objectives) is fundamentally unsafe, and proposes an alternative (build AI that is uncertain about human objectives and defers to human judgment). The most rigorous and measured treatment of AI safety from a leading researcher.
17. Dafoe, A. (2018). "AI Governance: A Research Agenda." Centre for the Governance of AI, Future of Humanity Institute, University of Oxford. A comprehensive mapping of the AI governance landscape: who makes decisions about AI development and deployment, what mechanisms exist for accountability, and what gaps remain. Dafoe identifies key governance challenges including the concentration of AI capability in a small number of actors, the difficulty of international coordination, and the tension between innovation and precaution. Useful as a framework for thinking about the democratic governance questions raised in the chapter.
18. Floridi, L., & Cowls, J. (2019). "A Unified Framework of Five Principles for AI in Society." Harvard Data Science Review, 1(1). Synthesizes the principles common to multiple AI ethics frameworks (including those from the EU, OECD, IEEE, and various corporate codes) into five core principles: beneficence, non-maleficence, autonomy, justice, and explicability. Provides a shared vocabulary for discussing AI's societal impact and evaluating governance proposals. Connects directly to the responsible leadership framework in this chapter.
AI and Education
19. Aoun, J. E. (2017). Robot-Proof: Higher Education in the Age of Artificial Intelligence. MIT Press. Northeastern University president Joseph Aoun argues that education must evolve to emphasize "humanics" --- the uniquely human literacies (data literacy, technological literacy, and human literacy) that AI cannot replicate. The book proposes a model of experiential, lifelong learning designed to produce graduates who can work alongside AI rather than compete with it. Directly relevant to the chapter's discussion of how AI changes what we need to teach.
20. Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign. A balanced assessment of AI's potential to transform education, covering AI tutoring systems, adaptive learning platforms, and automated assessment. The authors identify both the promise (personalized instruction at scale) and the risks (algorithmic bias in educational AI, the reduction of education to measurable outcomes, the digital divide in access to AI-powered learning tools). Essential reading for anyone interested in the intersection of AI and education policy.
Business Leadership and Organizational Change
21. Davenport, T. H., & Kirby, J. (2016). Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. Harper Business. Davenport and Kirby identify five strategies for humans to remain valuable in an AI-augmented workplace: stepping up (providing big-picture thinking), stepping aside (doing work that AI is not suited for), stepping in (monitoring and adjusting AI systems), stepping narrowly (specializing in areas too narrow for AI investment), and stepping forward (building the next generation of AI tools). The framework is practical and applicable, connecting directly to the skills premium discussion in this chapter.
22. Edmondson, A. C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley. While not specifically about AI, Edmondson's work on psychological safety is directly relevant to managing AI transitions. Workers who fear for their jobs are unlikely to collaborate with AI deployment, share concerns about AI failures, or participate honestly in retraining programs. Creating the psychological safety that enables productive adaptation to AI --- rather than fearful resistance --- is a leadership challenge that Edmondson addresses with both research rigor and practical guidance.
Societal Perspectives
23. Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. Kate Crawford examines AI not as a technological achievement but as a political and economic system that extracts resources --- data, labor, minerals, energy --- from communities worldwide. The book traces the full supply chain of AI, from the mines that produce the minerals in AI hardware to the low-wage workers who label training data. Essential for understanding the global North-South dynamics and the labor exploitation dimensions discussed in the chapter.
24. O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown. Mathematician Cathy O'Neil documents how algorithms --- in hiring, credit scoring, criminal justice, and education --- can systematically disadvantage vulnerable populations. O'Neil coined the influential term "weapons of math destruction" for models that are opaque, unaccountable, and operate at scale. While published before the generative AI wave, the book's core arguments about algorithmic inequality remain urgently relevant and connect to the bias and inequality themes of Chapters 25 and 38.
25. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. Shoshana Zuboff's landmark analysis of how technology companies extract and monetize human behavioral data, creating a new form of capitalism that threatens autonomy, democracy, and human agency. While focused on surveillance rather than automation, Zuboff's analysis of the concentration of power in technology companies is directly relevant to the chapter's discussion of democratic governance and algorithmic sovereignty. The book is long (700+ pages) but rewards engagement with its core thesis.