Appendix I: Bibliography
This bibliography uses a three-tier citation system. Tier 1 sources are verified publications that exist and can be looked up. Tier 2 sources are attributed claims — findings or arguments attributed to a named person, organization, or widely reported event, where the specific citation details may be approximate. Tier 3 entries identify the illustrative and composite examples used in the textbook. All tiers are clearly labeled to maintain intellectual honesty about the evidence base.
Citation Tier System
| Tier | Label | What It Means | How to Use It |
|---|---|---|---|
| Tier 1 | Verified Source | A specific publication that exists and can be retrieved. Full bibliographic details provided. | Cite directly in academic work |
| Tier 2 | Attributed Claim | A finding, argument, or data point attributed to a named source. The claim is widely reported but we cannot guarantee the exact phrasing or page number. | Verify before citing in academic work; cite as "attributed to" if verification is not possible |
| Tier 3 | Illustrative Example | A composite, fictional, or pedagogical example created for this textbook. Based on real patterns and documented phenomena but not a report of a specific real event. | Do not cite as evidence; reference as "illustrative example from [textbook name]" |
Tier 1: Verified Sources
Foundational AI Research
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Referenced: Ch. 9 (§9.3, §9.4), Ch. 17 (§17.2)
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 610–623. Referenced: Ch. 5 (§5.6)
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*), Proceedings of Machine Learning Research, 81, 77–91. Referenced: Ch. 6 (§6.4), Ch. 9 (§9.4)
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. Referenced: Ch. 10 (§10.2, §10.3)
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25. Referenced: Ch. 2 (§2.4), Ch. 6 (§6.2)
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. Referenced: Ch. 9 (§9.2), Ch. 15 (§15.3)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252. Referenced: Ch. 2 (§2.4), Ch. 6 (§6.2)
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. Referenced: Ch. 2 (§2.1)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. Referenced: Ch. 2 (§2.5), Ch. 5 (§5.2)
Bias, Fairness, and Ethics
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732. Referenced: Ch. 9 (§9.4), Ch. 17 (§17.3)
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. Referenced: Ch. 9 (§9.3) — establishes the mathematical impossibility of satisfying multiple fairness criteria simultaneously when base rates differ across groups
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. Proceedings of Innovations in Theoretical Computer Science (ITCS). Referenced: Ch. 9 (§9.3) — one of the foundational papers proving the impossibility of simultaneously satisfying calibration and equal false positive/negative rates across groups
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19), 220–229. Referenced: Ch. 4 (§4.5), Ch. 13 (§13.6)
Privacy and Surveillance
Hill, K. (2020, January 18). The secretive company that might end privacy as we know it. The New York Times. Referenced: Ch. 6 (§6.4), Ch. 12 (§12.3)
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. Referenced: Ch. 12 (§12.1)
AI History and Philosophy
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955/2006). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. Referenced: Ch. 1 (§1.2), Ch. 2 (§2.1)
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4), 115–133. Referenced: Ch. 2 (§2.1)
Minsky, M. L., & Papert, S. A. (1969). Perceptrons: An introduction to computational geometry. MIT Press. Referenced: Ch. 2 (§2.2)
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536. Referenced: Ch. 2 (§2.3), Ch. 3 (§3.6)
AI and Work
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30. Referenced: Ch. 10 (§10.2)
AI Governance and Regulation
European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. Referenced: Ch. 13 (§13.2)
AI and Environment
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. Referenced: Ch. 18 (§18.1) — one of the first papers to quantify the carbon cost of training large NLP models
Healthcare AI
Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books. Referenced: Ch. 15
Tier 2: Attributed Claims
The following are findings, arguments, or data points attributed to named individuals, organizations, or widely reported events. They are included because they are substantive and well-known, but the specific publication details may be approximate or the claims may come from public statements, interviews, or reports rather than peer-reviewed publications.
Herbert Simon's 1965 prediction. Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do." This prediction is widely cited in AI history and attributed to a 1965 speech. It is referenced in Ch. 2 (§2.2) as an example of over-optimistic AI predictions.
Larry Tesler's formulation of the AI effect. The quip "AI is whatever hasn't been done yet" is widely attributed to computer scientist Larry Tesler. The exact source and date of the original statement are uncertain. Referenced in Ch. 1 (§1.4).
OECD estimates of automation susceptibility. Various OECD reports (notably Arntz, Gregory, & Zierahn, 2016, and OECD Employment Outlook 2019) have estimated 9–14% of jobs as at high risk of automation — substantially lower than the Frey & Osborne estimate. Referenced in Ch. 10 (§10.2).
Amazon hiring tool scrapped due to gender bias. Amazon's experimental AI recruiting tool that discriminated against women was first reported by Reuters in October 2018 (Dastin, J. "Amazon scraps secret AI recruiting tool that showed bias against women"). According to Amazon, the tool was never used to make actual hiring decisions. Referenced in Ch. 9 (§9.1).
ChatGPT reaching 100 million users. The widely reported claim that ChatGPT reached 100 million monthly active users in approximately two months after its November 2022 launch was initially reported by UBS analysts in February 2023. Referenced in Ch. 2 (§2.5), Appendix C.
Fei-Fei Li's contributions to ImageNet. Fei-Fei Li led the creation of ImageNet and has been a prominent advocate for human-centered AI. Her public statements and Stanford HAI leadership are widely documented. Referenced in Ch. 6.
Biden Executive Order on AI (October 2023). Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," was signed on October 30, 2023. Referenced in Ch. 13, Appendix C.
Hollywood writers' and actors' strikes (2023). The WGA strike (May–September 2023) and SAG-AFTRA strike (July–November 2023) concluded with agreements that included provisions addressing AI use in writing and acting. Referenced in Ch. 10, Ch. 11, Appendix C.
Geoffrey Hinton's departure from Google. Hinton left Google in 2023 to speak freely about AI risks. His public statements about AI safety have been widely reported. Referenced in Ch. 20.
Expert surveys on AGI timelines. Various surveys of AI researchers (including surveys by Katja Grace, AI Impacts, and others) have reported wide disagreement on AGI timelines, with median estimates often ranging from 2040 to 2060 or later, and significant probability mass on "never." The specific surveys cited vary. Referenced in Ch. 1 (§1.3), Ch. 20 (§20.3).
Lee Sedol's retirement. The Go world champion retired in 2019, citing AI as a factor. He stated: "With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one." Referenced in Ch. 2 (§2.4), Appendix C.
Tier 3: Illustrative Examples
The following are composite, fictional, or pedagogical examples created for this textbook. They are based on documented real-world patterns and technologies but do not describe specific real systems, people, or events.
| Example | Description | Based On | Chapters |
|---|---|---|---|
| ContentGuard | A composite social media content moderation system that uses ML classifiers and rule-based filters to moderate content across 47 countries and 12 languages. Based on publicly documented challenges of content moderation at scale. | Real challenges reported at Meta, YouTube, Twitter/X, and other platforms | Ch. 1, 4, 7, 9, 13, 17, 19, 21 |
| MedAssist AI | A composite hospital AI diagnostic tool that performs differently across patient demographics. Based on documented disparities in medical AI systems. | Research on diagnostic AI accuracy disparities, including Obermeyer et al. (2019) and documented FDA-cleared diagnostic tools | Ch. 1, 6, 8, 9, 12, 15, 18, 21 |
| Priya's Semester | A composite character — a college student navigating generative AI use in coursework. Illustrates academic integrity dilemmas and the line between tool use and dishonesty. | Widely reported experiences of students and educators since ChatGPT's release in 2022 | Ch. 1, 5, 8, 11, 14, 17, 21 |
| CityScope Predict | A composite predictive policing system under consideration by a city government. Based on documented predictive policing programs and their controversies. | Real programs such as PredPol/Geolitica, documented LAPD and Chicago PD predictive policing initiatives, and academic critiques thereof | Ch. 1, 7, 9, 12, 13, 17, 19, 21 |
| Jordan | A brief character used in Ch. 1 to illustrate the invisibility of AI in daily life. | Common experiences of technology use | Ch. 1 |
Recommended Further Reading
The following 15 sources provide excellent entry points for deeper exploration of the topics covered in this textbook. They are listed in the order most useful for a reader working through the book sequentially.
- Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. A sweeping examination of AI's material, social, and political infrastructure. Excellent companion to Chapters 4, 10, 12, 18, and 19. Crawford traces AI from lithium mines to data centers to show that AI is not just code — it is a system of extraction.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown. An accessible introduction to how algorithms can encode and amplify inequality. Particularly relevant to Chapters 7, 9, and 17. O'Neil's concept of "weapons of math destruction" — opaque, unaccountable, and destructive mathematical models — is foundational to AI ethics discourse.
- Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs. The definitive work on how personal data became a commodity and surveillance became a business model. Essential reading for Chapter 12, with implications for Chapters 13 and 19.
- Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity Press. Examines how racial bias is embedded in technology design and how algorithms can reproduce racial hierarchy. Particularly relevant to Chapters 9 and 17.
- Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press. Investigates how automated decision systems target and harm poor and working-class communities. Directly relevant to Chapters 7, 9, and 17.
- Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press. A clear-eyed assessment of what computers can and cannot do, pushing back against techno-solutionism. Relevant to Chapters 1, 3, and 8.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. Examines how search algorithms produce racist and sexist results. Relevant to Chapters 7 and 9.
- Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books. An optimistic but evidence-based assessment of AI in healthcare. The primary companion for Chapter 15.
- Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux. One of the best general introductions to AI for non-technical readers. Covers the topics of Chapters 1–6 with clarity and nuance.
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. Examines the opacity of algorithmic systems in finance and media. Relevant to Chapters 7, 12, and 13.
- Susskind, D. (2020). A world without work: Technology, automation, and how we should respond. Metropolitan Books. A thoughtful analysis of automation's impact on employment. The primary companion for Chapter 10.
- Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford University Press. A philosophical examination of AI ethics by a leading philosopher of information. Relevant to Chapters 9, 13, 17, and 20.
- Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking. An AI researcher's argument for why the alignment problem is real and how to approach it. The primary companion for Chapter 20.
- Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. Develops the concept of data colonialism. Relevant to Chapters 12 and 19.
- Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt. An insider's perspective on the U.S.-China AI competition. The primary companion for Chapter 19, though some analysis may be dated.
A Note on Currency
AI is a fast-moving field. Sources published even a few years ago may not reflect the current state of the technology or the regulatory landscape. The recommended readings above were selected for the durability of their analytical frameworks, not for the recency of their technical details. When using any source — including this textbook — always check when it was written and consider what may have changed since.