Appendix G: Index

Alphabetical index of topics, names, organizations, concepts, laws, and technical terms with chapter references. Cross-references are indicated with "see" and "see also." Section numbers (e.g., §7.2) indicate the primary discussion; chapter numbers alone indicate broader treatment.


A

Accountability, algorithmic, Ch. 17; §17.4; see also governance
Accuracy-interpretability trade-off, §7.5
Adaptive learning, §16.3; see also personalized learning
Adversarial examples, §6.6; §8.1; see also robustness
AGI, see artificial general intelligence
AI Act (EU), see EU AI Act
AI agents, §21.1
AI effect, §1.4; §2.6
AI geopolitics, Ch. 19
AI literacy, defined §1.6; as civic skill, Ch. 1, 21; framework, §21.4
AI safety, Ch. 20; see also alignment problem
AI winters, §2.2, §2.3; timeline, Appendix C
AlexNet, §2.4; §6.2; Appendix B
Alignment problem, §20.1; see also specification gaming, reward hacking, value alignment
AlphaGo, §2.4; Appendix C
Amazon hiring tool, §9.1
Angwin, Julia, see ProPublica COMPAS analysis
Annotation, §4.3; see also data labeling
Anthropic, §20.4; see also constitutional AI
Artificial general intelligence (AGI), §1.3; §21.1; expert predictions, §1.3
Artificial intelligence, definition, §1.2; as spectrum of techniques, §1.3; history, Ch. 2, Appendix C
Attention mechanism, see self-attention, transformer
"Attention Is All You Need," §2.5; §5.2; Appendix B
Augmentation (vs. automation), §10.1
Automated essay scoring, §16.4
Automation anxiety, historical, §10.1
Automation bias, §8.4; §15.4

B

Backpropagation, §2.3; §3.6
Benchmark studies, §2.4; limitations, Appendix A
Bender, Emily, see stochastic parrots
Bennis, Warren, Ch. 10
Bias, algorithmic, Ch. 9; in computer vision, §6.4; in content moderation, §9.5; in healthcare, §15.3; in hiring, §9.1; in policing, §17.1; pipeline model, §9.1; see also individual bias types
Bias audit, §9.5; framework, Appendix D
Biometric data, §12.3
Boden, Margaret, Ch. 11
Brussels effect, §19.3
Buolamwini, Joy, §6.4; §9.4; Appendix B; see also Gender Shades

C

Calibration (confidence), §8.4
Calibration (fairness), §9.3
Carbon footprint of AI, §18.1; §18.2
Cascading failure, §8.5
CCPA (California Consumer Privacy Act), §12.5
Chain-of-thought prompting, §14.2
ChatGPT, §2.5; §5.1; Appendix C
Chilling effect, §12.4
China, AI strategy, §13.4; §19.2
CityScope Predict (anchor example), §1.5; Ch. 7, 9, 12, 13, 17, 19, 21
Clearview AI, §6.4; §12.3; Appendix B
Clinical decision support, §15.1
Collaborative filtering, §7.2
COMPAS, §9.3; §17.2; Appendix B; see also ProPublica COMPAS analysis
Compute divide, §19.4
Confidence scores, §8.4; vs. accuracy, §8.4
Conformity assessment, §13.6
Consent fatigue, §12.2
Constitutional AI, §20.4
Content-based filtering, §7.2
ContentGuard (anchor example), §1.5; Ch. 4, 7, 9, 13, 17, 19, 21
Convolutional neural network (CNN), §6.2
Copyright and AI, §11.4
Correlation vs. causation, Appendix A
Creative industries, AI impact, §11.5
Creativity, AI and, Ch. 11; philosophical perspectives, §11.3
Criminal justice, AI in, Ch. 17; see also predictive policing, risk assessment
Cyc project, §2.3; Appendix C

D

DALL-E, §11.1; Appendix C
Dartmouth Conference (1956), §1.2; §2.1; Appendix C
Data, as AI foundation, Ch. 4
Data bias pipeline, §9.1
Data brokers, §12.2
Data colonialism, §19.5
Data labeling, §4.3; §4.6; working conditions, §4.6
Data minimization, §12.5
Data provenance, §4.5
Datasheets for datasets, §4.5
Deep Blue, §2.4; Appendix C
Deep learning, §2.4; §3.6; revolution, Appendix C
Deepfakes, §6.5
Demographic parity, §9.3
Deskilling, §16.6
Diagnostic AI, §15.1; see also MedAssist AI
Diffusion models, §11.2
Digital divide (educational), §16.5
Digital footprint, §12.2
Digital sovereignty, §19.5
Dijkstra, Edsger, Ch. 1
Disparate impact, §9.4; §17.3
Distributional shift, §8.3
Due process, §17.3
Durable frameworks, §21.4

E

Edge cases, §8.1
Education, AI in, Ch. 16; see also intelligent tutoring systems, academic integrity
ELIZA, §2.2; Appendix C
Embodied carbon, §18.2
Environmental impact of AI, Ch. 18
Equal protection, §17.3
Equalized odds, §9.3
EU AI Act, §13.2; risk categories, §13.2; conformity assessment, §13.6; Appendix C
E-waste, §18.2
Existential risk (x-risk), §20.3
Expert systems, §2.3; collapse, §2.3; see also XCON

F

Face embedding, §6.4
Facial recognition, §6.4; §12.3; disparities, §6.4; bans, §12.3; see also Gender Shades, Clearview AI
FACTS Framework, §1.6; application, Appendix D
Fairness, definitions, §9.3; impossibility theorem, §9.3; in criminal justice, §17.2; see also demographic parity, equalized odds, calibration
Fairness through unawareness, §9.2
Fairness-accuracy trade-off, §9.5
FDA clearance vs. approval, §15.5
Feature (machine learning), §3.1
Feedback loops, §7.6; §17.1; in policing, §17.1
Few-shot prompting, §14.2
Fifth Generation Computer Project (Japan), §2.3; Appendix C
Fine-tuning, §5.4
Frey, Carl Benedikt, and Osborne, Michael, §10.2; Appendix B

G

GANs, see generative adversarial networks
Gardner, Howard (multiple intelligences), §1.2
Gebru, Timnit, §5.6; §6.4; §9.4; see also Gender Shades, stochastic parrots
Gender Shades study, §6.4; §9.4; Appendix B
General AI, see artificial general intelligence
Generalization, §3.4
Generative adversarial networks (GANs), §11.2
Generative AI, Ch. 11; in education, §16.2; environmental cost, §18.1
Ghost data, §4.4
Gig economy, §10.5
Global South, AI and, §19.4
Google, §5.6; BERT, §5.2; Gemini, §5.1
Governance, AI, Ch. 13; global approaches, §13.2–13.4; Ch. 19
GPT (series), §2.5; §5.1; Appendix C
Graceful degradation, §8.6
Green AI, §18.5
Ground truth, §4.3

H

Hallucination (AI), §8.2; in LLMs, §5.5; verification, §8.6; §14.3
Health equity, §15.3
Healthcare AI, Ch. 15; bias in, §15.3; regulation, §15.5; see also MedAssist AI
Hinton, Geoffrey, §2.4; §3.6; Appendix C
Historical bias, §9.2
Hollywood strikes (2023), §10.4; §11.5; Appendix C
Human-AI collaboration, §14.6; in creative work, §11.6
Hype cycle, §2.6; Appendix C

I

ImageNet, §2.4; §6.2; controversies, §6.2; Appendix B, C
Inference (privacy), §12.2
Inference energy, §18.1
Innovation principle, §13.5
Intelligent tutoring systems, §16.1
Interpretability, §7.5; see also accuracy-interpretability trade-off

J

Jevons paradox, see rebound effect
Jobs, AI impact on, Ch. 10; task-based framework, §10.2

K

Kasparov, Garry, §2.4; Appendix C
Knowledge engineering, §2.3
Krizhevsky, Alex, see AlexNet

L

Labels (data), §3.1; §4.3
Large language models, Ch. 5; mechanism, §5.1; limitations, §5.5; environmental cost, §18.1; see also GPT, transformer
Learning analytics, §16.4
Li, Fei-Fei, Ch. 6; ImageNet, §6.2; Appendix C
Lighthill Report, §2.2; Appendix C
LISP machines, §2.3
Luddites, §10.1

M

Machine learning, §1.2; Ch. 3; types (supervised, unsupervised, reinforcement), §3.1–3.3; pipeline, §9.1
McCarthy, John, §1.2; §2.1
Measurement bias, §9.2
MedAssist AI (anchor example), §1.5; Ch. 6, 8, 9, 15, 18, 21
Meta-analysis, Appendix A
Metadata, §12.2
Minsky, Marvin, §2.1; §2.2; Perceptrons, §2.2
Model (machine learning), §3.4
Model efficiency, §18.5
Multimodal AI, §21.1

N

Narrow AI, §1.3; all current AI as, §1.3
Neural networks, §3.6; convolutional, §6.2; see also deep learning
Next-token prediction, §5.1
Northpointe, see COMPAS

O

Obermeyer, Ziad, et al., §9.2; §15.3; Appendix B
Object detection, §6.3
OpenAI, §5.1; GPT series, §2.5; ChatGPT, Appendix C
Osborne, Michael, see Frey and Osborne
Overfitting, §3.5

P

Pacing problem, §13.1
Panopticon effect, §12.4
Parameter, §3.6
Perceptron, §2.1; Appendix C
Personalized learning, §16.3
Pixel, §6.1
Precautionary principle, §13.5
Predictive policing, §17.1; feedback loops, §17.1; CityScope Predict, §1.5
Pre-training, §5.3
Privacy, Ch. 12; as power, §12.1; regulatory approaches, §12.5; personal strategies, §12.6; see also surveillance, GDPR
Priya's Semester (anchor example), §1.5; Ch. 5, 8, 11, 14, 17, 21
Proctoring, AI, §16.4
Prompt engineering, §14.2
ProPublica COMPAS analysis, §9.3; §17.2; Appendix B
Protected class, §9.4
Proxy variable, §9.2; in healthcare, §15.3

R

Rebound effect, §18.6
Recidivism prediction, §17.2; see also COMPAS, risk assessment
Red-teaming, §20.4
Regulatory capture, §13.5
Regulatory sandbox, §13.2
Reinforcement learning, §3.3; reward hacking, §20.2
Reinforcement learning from human feedback (RLHF), §5.4; §20.4
Replication crisis, AI, Appendix A
Representation bias, §9.2
Right to be forgotten, §12.5
Right to explanation, §17.3
Risk assessment instruments (criminal justice), §17.2
Risk-based regulation, §13.2
RLHF, see reinforcement learning from human feedback
Robustness, §20.2
Rosenblatt, Frank (Perceptron), §2.1; Appendix C
Runaway feedback loop, §17.1; see also feedback loops

S

Scenario planning, §21.3
Searle, John (Chinese Room), §5.6
Selection bias, §4.4
Self-attention, §5.2; see also transformer
Self-regulation (industry), §13.5
Sociotechnical system, §21.5
Soft law, §13.5
Specification gaming, §20.2
Stable Diffusion, §11.1; Appendix C
Statistical significance, Appendix A
Sternberg, Robert (triarchic intelligence), §1.2
Stochastic parrot, §5.6; Appendix B
Style transfer, §11.1
Superintelligence, §20.3
Supervised learning, §3.1
Surveillance, AI-powered, Ch. 12; in education, §16.4; in policing, §17.1
Surveillance capitalism, §12.1

T

Task-based framework (automation), §10.2
Technological determinism, §21.5
Techno-nationalism, §19.2
Temperature (LLM), §5.1
Tesler, Larry (AI effect), §1.4
Test set, §3.4
Training data, §1.2; Ch. 4; representativeness, §4.4
Training-validation-testing pipeline, §3.4
Transfer learning, §5.4
Transformer architecture, §2.5; §5.2; Appendix B, C
Turing, Alan, §2.1; "Computing Machinery and Intelligence," §2.1; Appendix C
Turing Test, §2.1

U

Underfitting, §3.5
Unsupervised learning, §3.2

V

Validation set, §3.4
Value alignment, §20.1
Vaswani, Ashish, et al., see transformer, "Attention Is All You Need"
Verification (of AI outputs), §8.6; §14.3

W

Watson (IBM), §2.4; Appendix C
Weizenbaum, Joseph, see ELIZA
Work, AI and, Ch. 10; see also automation, augmentation, algorithmic management

X

XCON (R1), §2.3; Appendix C
X-risk, see existential risk

Z

Zero-shot prompting, §14.2
Zuboff, Shoshana, §12.1; see also surveillance capitalism