Appendix C: Bibliography

This bibliography compiles all primary sources cited throughout the textbook, organized into four parts: books; academic articles; reports and official documents; and journalism. Citations follow APA 7th edition format, and DOIs and URLs are provided where available. Works relevant to more than one part are cross-referenced with bracketed notes (e.g., [See Part 2]). URLs for online sources were verified as of the manuscript's completion date; web resources are subject to link rot and should be re-verified before use.


Part 1: Books

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv. https://arxiv.org/abs/1606.06565

Aristotle. (2009). Nicomachean ethics (D. Ross, Trans.; L. Brown, Ed.). Oxford University Press. (Original work c. 350 BCE)

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press. https://fairmlbook.org

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.

Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House.

Calo, R., Froomkin, A. M., & Kerr, I. (Eds.). (2016). Robot law. Edward Elgar Publishing.

Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.

Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. W. W. Norton.

Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York University Press.

Christian, B. (2020). The alignment problem: Machine learning and human values. W. W. Norton.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Danaher, J., & McArthur, N. (Eds.). (2017). Robot sex: Social and ethical implications. MIT Press.

Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407. https://doi.org/10.1561/0400000042

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.

Fineman, M. A. (2008). The vulnerable subject: Anchoring equality in the human condition. Yale Journal of Law and Feminism, 20(1), 1–23.

Floridi, L. (Ed.). (2015). The onlife manifesto: Being human in a hyperconnected era. Springer.

Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.

Floridi, L. (Ed.). (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford University Press.

Fogg, B. J. (2002). Persuasive technology: Using computers to change what we think and do. Morgan Kaufmann.

Frank, R. H. (2016). Success and luck: Good fortune and the myth of meritocracy. Princeton University Press.

Gilligan, C. (1982). In a different voice: Psychological theory and women's development. Harvard University Press.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. https://www.deeplearningbook.org

Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.

Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. Harper.

Hardt, M., & Recht, B. (2022). Patterns, predictions, and actions: A story about machine learning. Princeton University Press. https://mlstory.org

Helbing, D. (Ed.). (2019). Towards digital enlightenment: Essays on the dark and light sides of the digital revolution. Springer.

Hill, K. (2023). Your face belongs to us: A secretive startup's quest to end privacy as we know it. Random House.

Kant, I. (1998). Groundwork of the metaphysics of morals (M. Gregor, Trans.). Cambridge University Press. (Original work published 1785)

Koerner, B. (2016). The skies belong to us: Love and terror in the golden age of hijacking. Broadway Books. [Referenced for historical context on security theater]

Kolber, A. J. (2006). Therapeutic forgetting: The legal and ethical implications of memory dampening. Vanderbilt Law Review, 59(5), 1561–1626.

Lessig, L. (1999). Code and other laws of cyberspace. Basic Books.

Lum, K., & Chowdhury, R. (2021). What is an "algorithm"? It depends whom you ask. MIT Technology Review.

MacAskill, W. (2022). What we owe the future. Basic Books.

Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments. Surveillance & Society, 1(3), 331–355.

Mill, J. S. (2001). Utilitarianism (G. Sher, Ed.). Hackett Publishing. (Original work published 1863)

Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

Ngo, R., Chan, L., & Mindermann, S. (2022). The alignment problem from a deep learning perspective. arXiv. https://arxiv.org/abs/2209.00626

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. University of California Press.

Nussbaum, M. C. (1990). Love's knowledge: Essays on philosophy and literature. Oxford University Press.

Nussbaum, M. C. (2011). Creating capabilities: The human development approach. Harvard University Press.

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.

Perez, C. C. (2019). Invisible women: Data bias in a world designed for men. Abrams Press.

Peters, J. D. (2015). The marvelous clouds: Toward a philosophy of elemental media. University of Chicago Press.

Piketty, T. (2014). Capital in the twenty-first century (A. Goldhammer, Trans.). Harvard University Press.

Rawls, J. (1971). A theory of justice. Harvard University Press.

Rawls, J. (2001). Justice as fairness: A restatement (E. Kelly, Ed.). Harvard University Press.

Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2).

Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.

Scanlon, T. M. (1998). What we owe to each other. Harvard University Press.

Sejnowski, T. J. (2018). The deep learning revolution. MIT Press.

Sen, A. (1999). Development as freedom. Oxford University Press.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.

Srnicek, N. (2016). Platform capitalism. Polity Press.

Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. [See Part 2]

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., Myers West, S., Richardson, R., Schultz, J., & Schwartz, O. (2018). AI now report 2018. AI Now Institute. [See Part 3]

Wrangham, R. (2019). The goodness paradox: The strange relationship between virtue and violence in human evolution. Pantheon Books.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.


Part 2: Academic Articles

Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. Proceedings of the 35th International Conference on Machine Learning (ICML), 60–69. https://proceedings.mlr.press/v80/agarwal18a.html

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. [See Part 4]

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Barocas, S., & Hardt, M. (2017). Fairness in machine learning [Tutorial]. NeurIPS 2017. https://fairmlbook.org

Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 149–159. https://proceedings.mlr.press/v81/binns18a.html

Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29. https://arxiv.org/abs/1607.06520

Bostrom, N., & Cirkovic, M. M. (Eds.). (2008). Global catastrophic risks. Oxford University Press.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230

Campolo, A., & Crawford, K. (2020). Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society, 6, 1–19. https://doi.org/10.17351/ests2020.277

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047

Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint. https://arxiv.org/abs/1808.00023

Crawford, K., & Paglen, T. (2019). Excavating AI: The politics of images in machine learning training sets. Excavating AI. https://excavating.ai

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. [See Part 4]

Datta, A., Sen, S., & Zick, Y. (2016). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. Proceedings of the 2016 IEEE Symposium on Security and Privacy, 598–617. https://doi.org/10.1109/SP.2016.42

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226. https://doi.org/10.1145/2090236.2090255

Eubanks, V. (2014). Want to predict the future of surveillance? Ask poor communities. The American Prospect. [See Part 4]

Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic fairness from a non-ideal perspective. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 57–63. https://doi.org/10.1145/3375627.3375828

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning. https://arxiv.org/abs/1803.09010

Goodfellow, I., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. Proceedings of the International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1412.6572

Green, B., & Viljoen, S. (2020). Algorithmic realism: Expanding the boundaries of algorithmic accountability. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 19–31. https://doi.org/10.1145/3351095.3372840

Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29. https://arxiv.org/abs/1610.02413

Hirsch, T., Merced, K., Narayanan, S., Imel, Z., & Atkins, D. (2017). Designing contestability: Interaction design, machine learning, and mental health. Proceedings of the 2017 ACM Conference on Designing Interactive Systems, 95–99. https://doi.org/10.1145/3064663.3064703

Imai, K., & Jiang, Z. (2020). Discussion of "prediction, estimation, and attribution." Journal of the American Statistical Association, 115(530), 536–540. https://doi.org/10.1080/01621459.2020.1731261

Jacobs, A. Z., & Wallach, H. (2021). Measurement and fairness. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 375–385. https://doi.org/10.1145/3442188.3445901

Kearns, M., Neel, S., Roth, A., & Wu, Z. S. (2018). Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. Proceedings of the 35th International Conference on Machine Learning, 2564–2572. https://proceedings.mlr.press/v80/kearns18a.html

Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human decisions and machine predictions. Quarterly Journal of Economics, 133(1), 237–293. https://doi.org/10.1093/qje/qjx032

Kleinberg, J., Ludwig, J., Mullainathan, S., & Rambachan, A. (2018). Algorithmic fairness. AEA Papers and Proceedings, 108, 22–27. https://doi.org/10.1257/pandp.20181018

Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. Advances in Neural Information Processing Systems, 30. https://arxiv.org/abs/1703.06856

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x

Lipton, Z. C. (2018). The mythos of model interpretability. Queue, 16(3), 31–57. https://doi.org/10.1145/3236386.3241340

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30. https://arxiv.org/abs/1705.07874

Madaio, M. A., Stark, L., Vaughan, J. W., & Wallach, H. (2020). Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3313831.3376445

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596

Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency, 279–288. https://doi.org/10.1145/3287560.3287574

Morley, J., Cowls, J., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 260, 113172. https://doi.org/10.1016/j.socscimed.2020.113172

Narayanan, A. (2018). 21 fairness definitions and their politics [Tutorial]. FAT* 2018. https://fairmlbook.org/tutorial2.html

Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–157.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. [See Part 1]

Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429–435. https://doi.org/10.1145/3306618.3314244

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 33–44. https://doi.org/10.1145/3351095.3372873

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778

Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, 94, 192–233.

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x

Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114. https://doi.org/10.1609/aimag.v36i4.2577

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598

Sjoding, M. W., Dickson, R. P., Iwashyna, T. J., Gay, S. E., & Valley, T. S. (2020). Racial bias in pulse oximetry measurement. New England Journal of Medicine, 383(25), 2477–2478. https://doi.org/10.1056/NEJMc2029240

Solon, O. (2018). The rise of "pseudo-AI": How tech firms quietly use humans to do bots' work. The Guardian. [See Part 4]

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1355

Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3), 10–29. https://doi.org/10.1145/2460276.2460278

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–887. https://arxiv.org/abs/1711.00399

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. R. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding. Proceedings of the 2018 EMNLP Workshop BlackboxNLP, 353–355. https://doi.org/10.18653/v1/W18-5446

Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., Ringel Morris, M., Rankin, J., Rogers, E., Salas, M., & West, S. M. (2019). Disability, bias, and AI. AI Now Institute. https://ainowinstitute.org/disabilitybiasai-2019.pdf

Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist — it's time to make it fair. Nature, 559, 324–326. https://doi.org/10.1038/d41586-018-05707-8


Part 3: Reports and Official Documents

Algorithmic Justice League. (2021). Safe face pledge: Industry commitment to prohibit harmful uses of facial recognition. https://www.safefacepledge.org

Biden, J. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110). White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Bureau of Consumer Financial Protection. (2022). CFPB warns companies against illegal use of artificial intelligence, including in credit decisions. Consumer Financial Protection Bureau. https://www.consumerfinance.gov/about-us/newsroom/cfpb-warns-companies-against-illegal-use-of-artificial-intelligence-including-in-credit-decisions/

Cunneen, M., Mullins, M., Murphy, F., & Shannon, S. (2019). Artificial driving intelligence and moral agency: Examining the decision ontology of unavoidable road traffic accidents through the lens of the trolley problem. Applied Artificial Intelligence, 33(3). https://doi.org/10.1080/08839514.2018.1560124

Equal Employment Opportunity Commission. (2023). Select issues: Assessing adverse impact in software, algorithms, and artificial intelligence used in employment selection procedures. EEOC. https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines-0

European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (COM/2021/206 final). Publications Office of the EU. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

European Parliament and Council of the EU. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679

European Parliament and Council of the EU. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (AI Act). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689

Federal Trade Commission. (2022). Bringing dark patterns to light [Staff report]. FTC. https://www.ftc.gov/system/files/ftc_gov/pdf/Bringing_Dark_Patterns_to_Light-508.pdf

Federal Trade Commission. (2023). Generative AI raises competition concerns [Blog post]. FTC. https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns

Food and Drug Administration. (2021). Artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) action plan. FDA. https://www.fda.gov/media/145022/download

Food and Drug Administration. (2023). Marketing submission recommendations for a predetermined change control plan for artificial intelligence-enabled device software functions [Draft guidance]. FDA. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence

G7 Hiroshima AI Process. (2023). Hiroshima process international guiding principles for all AI actors and international code of conduct for organizations developing advanced AI systems. G7. https://www.g7hiroshima.go.jp/en/

Government Accountability Office. (2021). Artificial intelligence: An accountability framework for federal agencies and other entities (GAO-21-519SP). GAO. https://www.gao.gov/products/gao-21-519sp

High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

House of Lords Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing and able? House of Lords. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

International Committee of the Red Cross. (2021). ICRC position on autonomous weapon systems. ICRC. https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems

National Institute of Standards and Technology. (2021). Four principles of explainable artificial intelligence (NISTIR 8312). NIST. https://doi.org/10.6028/NIST.IR.8312

National Institute of Standards and Technology. (2019). Face recognition vendor test (FRVT) Part 3: Demographic effects (NISTIR 8280). NIST. https://doi.org/10.6028/NIST.IR.8280

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). NIST. https://doi.org/10.6028/NIST.AI.100-1

Office of Science and Technology Policy. (2022). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. White House. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Organisation for Economic Co-operation and Development. (2019). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). OECD. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

Partnership on AI. (2019). Report on algorithmic risk assessment tools in the U.S. criminal justice system. Partnership on AI. https://partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-us-criminal-justice-system/

Senate Select Committee on Intelligence (U.S.). (2019). Report on Russian active measures campaigns and interference in the 2016 U.S. election, Volume 2: Russia's use of social media. U.S. Senate. https://www.intelligence.senate.gov/publications/report-select-committee-intelligence-united-states-senate-russian-active-measures

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137

United Nations Secretary-General. (2020). Roadmap for digital cooperation. United Nations. https://www.un.org/en/content/digital-cooperation-roadmap/

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., Myers West, S., Richardson, R., Schultz, J., & Schwartz, O. (2018). AI now report 2018. AI Now Institute. https://ainowinstitute.org/AI_Now_2018_Report.pdf

Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., Ringel Morris, M., Rankin, J., Rogers, E., Salas, M., & West, S. M. (2019). AI now report 2019. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf

World Economic Forum. (2018). The new physics of financial services: How artificial intelligence is transforming the financial ecosystem. WEF. https://www3.weforum.org/docs/WEF_New_Physics_of_Financial_Services.pdf

World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. WHO. https://www.who.int/publications/i/item/9789240029200


Part 4: Journalism and Long-Form Reporting

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Angwin, J., & Tobin, A. (2016, October 28). Facebook lets advertisers exclude users by race. ProPublica. https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race

Angwin, J., Tobin, A., & Varner, M. (2017, September 14). Facebook (still) letting housing advertisers exclude users by race. ProPublica. https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin

Chin, J., & Wong, C. (2016, November 28). China's new tool for social control: A credit rating for everything. The Wall Street Journal. https://www.wsj.com/articles/chinas-new-tool-for-social-control-a-credit-rating-for-everything-1480351590

Dastin, J. (2018, October 9). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Eubanks, V. (2014, January 15). Want to predict the future of surveillance? Ask poor communities. The American Prospect. https://prospect.org/power/want-predict-future-surveillance-ask-poor-communities/

Fussell, S. (2019, November 6). Why do Amazon's Alexa and other digital assistants default to being female? The Atlantic. https://www.theatlantic.com/technology/archive/2019/11/why-are-digital-assistants-always-female/601330/

Giansiracusa, N. (2020, March 1). How algorithms can fight bias instead of creating it. Wired. https://www.wired.com/story/how-algorithms-can-fight-bias-instead-of-creating-it/

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation." AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741

Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here's what it says. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/

Hill, K. (2020, January 18). The secretive company that might end privacy as we know it. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Hill, K. (2020, June 24). Wrongfully accused by an algorithm. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Isaac, M., & Frenkel, S. (2018, March 19). Facebook's data privacy scandal: Cheat sheet. The New York Times. https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.html

Ivory, D., Protess, B., & Bennett, K. (2015, April 7). In American lending, signs of a shift toward fairness. The New York Times. https://www.nytimes.com/interactive/2015/04/02/business/dealbook/the-race-gap-in-american-mortgage-lending.html

Kirchner, L., & Goldstein, M. (2015, October 25). How a largely invisible system is determining who gets a loan in America. ProPublica. https://www.propublica.org/article/how-a-largely-invisible-system-is-determining-who-gets-a-loan-in-america

Lecher, C. (2018, March 21). What happens when an algorithm cuts your health care. The Verge. https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy

Lohr, S. (2018, February 9). Facial recognition is accurate, if you're a white guy. The New York Times. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html

Mac, R., & Tiffany, K. (2021, March 9). Facebook still can't figure out what it's doing with facial recognition. BuzzFeed News. https://www.buzzfeednews.com/article/ryanmac/facebook-facial-recognition-tool

Metz, C. (2021, March 15). Who is making sure the AI machines aren't racist? The New York Times. https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html

Noble, S. U. (2012, May). Missed connections: What search engines say about women. Bitch. https://www.bitchmedia.org/article/missed-connections

Obermeyer, Z., & Mullainathan, S. (2019, September 19). Diagnosing bias in algorithms used to manage the health of populations. STAT News. https://www.statnews.com/2019/09/19/machine-learning-clinical-prediction/

O'Neil, C. (2014, March 26). The danger of big data in the justice system. Bloomberg. https://www.bloomberg.com/opinion/articles/2014-03-26/the-danger-of-big-data-in-the-justice-system

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 469–481. https://doi.org/10.1145/3351095.3372828 [Cross-listed with Part 2]

Sinders, C. (2019, February 7). The four key things I learned designing for AI. UX Collective. https://uxdesign.cc/the-four-key-things-i-learned-designing-for-ai-5fcc97d66cd2

Temple-Raston, D. (2021, May 11). A 'ridiculously easy' test shows how flawed facial recognition is. NPR. https://www.npr.org/2021/05/11/995165881/a-ridiculously-easy-test-shows-how-flawed-facial-recognition-is

Martinez, E., & Kirchner, L. (2021, August 25). The secret bias hidden in mortgage-approval algorithms. The Markup. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms

The Markup. (2021, November 16). Millions of Black Americans are being misdiagnosed by algorithms. https://themarkup.org/news/2021/11/16/millions-of-black-americans-are-being-misdiagnosed-by-algorithms

Thompson, S. A., & Warzel, C. (2019, December 19). Twelve million phones, one dataset, zero privacy. The New York Times. https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html

Valentino-DeVries, J., Singer, N., Keller, M. H., & Krolik, A. (2018, December 10). Your apps know where you were last night, and they're not keeping it secret. The New York Times. https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html


Part 5: Websites and Online Resources

AI Fairness 360 (AIF360). IBM Research. A comprehensive open-source toolkit containing bias detection and mitigation algorithms. https://aif360.mybluemix.net

AI Incident Database. Partnership on AI. A crowdsourced repository of documented AI failures and harms. https://incidentdatabase.ai

AI Now Institute. New York University. Research center studying the social implications of artificial intelligence. https://ainowinstitute.org

Algorithmic Justice League. Founded by Joy Buolamwini. Organization challenging harmful biases in AI systems through research, advocacy, and art. https://www.ajl.org

Allen Institute for Artificial Intelligence (AI2). Research institute focused on beneficial AI, with resources including the Semantic Scholar AI paper database. https://allenai.org

ArXiv.org — Computer Science: Artificial Intelligence. Free preprint server hosting a large proportion of AI research. https://arxiv.org/list/cs.AI/recent

Bias in AI — Google PAIR. People + AI Research at Google, with resources on human-AI interaction and fairness. https://pair.withgoogle.com

Center for AI Safety. Organization focused on reducing societal-scale risks from AI. https://www.safe.ai

Cybersecurity and Infrastructure Security Agency (CISA) AI Guidance. U.S. government guidance on AI security risks. https://www.cisa.gov/ai

Deon: An ethics checklist for data scientists. DrivenData. A practical pre-deployment checklist for ML projects. https://deon.drivendata.org

Distill.pub. A machine learning research journal prioritizing clarity and interactive visualization. https://distill.pub

Electronic Frontier Foundation — AI. EFF resources on AI and civil liberties. https://www.eff.org/issues/ai

European Data Protection Board. Body ensuring consistent application of GDPR across EU member states. https://edpb.europa.eu

Fairlearn. Microsoft Research open-source toolkit for assessing and improving fairness of AI systems in Python. https://fairlearn.org

Future of Life Institute. Organization working to reduce existential risks from AI. https://futureoflife.org

Google Model Cards. Documentation framework and examples from Google. https://modelcards.withgoogle.com/about

Hugging Face — Model Hub. Open-source platform hosting AI models with documentation, including model cards and bias disclosures. https://huggingface.co

LIME (Local Interpretable Model-agnostic Explanations). GitHub repository for the LIME explainability library. https://github.com/marcotcr/lime

MLCommons — Croissant. Standard for machine learning dataset documentation. https://mlcommons.org

NIST AI Resource Center. National Institute of Standards and Technology hub for AI standards, the AI RMF, and related resources. https://airc.nist.gov

OECD.AI Policy Observatory. OECD platform monitoring AI policy developments globally and providing comparative policy analysis. https://oecd.ai

Partnership on AI. Multi-stakeholder organization bringing together academics, civil society organizations, and technology companies to develop best practices for AI. https://partnershiponai.org

Responsible AI Institute. Certification and assessment programs for responsible AI governance. https://www.responsible.ai

SHAP (SHapley Additive exPlanations). GitHub repository for the SHAP library, a widely used model explanation tool. https://github.com/slundberg/shap

Stanford Encyclopedia of Philosophy — Artificial Intelligence. Authoritative philosophical reference entries on AI ethics, consciousness, and related topics. https://plato.stanford.edu/entries/artificial-intelligence/

The Alan Turing Institute — Data Ethics Group. UK national AI institute resources on data ethics and responsible AI. https://www.turing.ac.uk/research/research-programmes/data-ethics-group

UNESCO AI Ethics Dashboard. UNESCO platform tracking implementation of the 2021 Recommendation on the Ethics of AI globally. https://www.unesco.org/ethics-ai

What-If Tool. Google PAIR interactive visualization tool for understanding ML model behavior, including fairness analysis. https://pair-code.github.io/what-if-tool/


Bibliographic note: This bibliography represents sources cited and consulted across the textbook's 39 chapters. In compiling it, we have endeavored to include only real, verifiable sources. For the most current versions of regulatory documents, readers are advised to consult official government and institutional websites directly. Academic articles are cited with DOIs where available; DOIs provide persistent access independent of URL changes. For books, the most recent edition is cited unless the original publication date is historically significant to the discussion.

The pace of AI development means that some sources — particularly regulatory frameworks and empirical studies of specific systems — will have been superseded by subsequent developments by the time this textbook reaches readers. We encourage instructors to supplement this bibliography with current scholarship and regulatory developments, and to consult the AI Incident Database and AI Now Annual Reports for ongoing documentation of AI-related harms and governance developments.