
"Education is not the filling of a pail, but the lighting of a fire." — Attributed to W. B. Yeats (though the attribution is contested — a fitting irony for a chapter about verifying claims)


Chapter Overview

It is 8:15 on a Wednesday morning. In a high school classroom in suburban Atlanta, 28 students sit at individual laptops. Each screen shows a different math problem. One student is working on quadratic equations; another, three seats away, is still on linear functions. A third student, who has already mastered both topics, is exploring systems of equations. The adaptive learning platform adjusts in real time, serving easier problems when a student struggles and harder ones when they demonstrate mastery. The teacher circulates, stopping to help students whose frustration levels — flagged by the software — exceed a threshold.

Fifteen hundred miles away, at a community college in New Mexico, a 34-year-old returning student named Elena takes an online exam. She is alone in her apartment. A webcam monitors her face, and AI proctoring software analyzes her eye movements, head turns, and ambient sounds. Elena glances at the ceiling to think through a problem, and the software flags the movement as a potential cheating indicator. She scratches her face and the software logs an "anomaly." She shares a small apartment with her three children, and when her seven-year-old walks into the background, the software flags a "third party in the testing area." Elena finishes the exam feeling more anxious about the surveillance than about the test itself.

And in a dorm room at Priya's university, a first-year student opens a chatbot, pastes an essay prompt, and receives a polished 800-word response in eleven seconds. He reads it, changes the font, and submits it. He will receive a B+. He will learn nothing.

Three scenes. Three very different versions of AI in education. One is potentially transformative. One is potentially invasive. One is a straightforward case of academic dishonesty. The challenge — for educators, students, policymakers, and you — is developing the literacy to tell them apart and the frameworks to respond.


In this chapter you will learn to:

  1. Identify the major applications of AI in education — tutoring, assessment, personalization, and administration
  2. Evaluate claims about AI's potential to improve learning outcomes
  3. Analyze the risks of AI in education — surveillance, equity, and deskilling
  4. Assess academic integrity challenges in the age of generative AI
  5. Formulate a position on appropriate AI use in educational settings

Learning Paths

Fast Track (50 minutes): Read sections 16.1, 16.2, and 16.6. Complete the Check Your Understanding prompts and the Project Checkpoint.

Deep Dive (2.5–3 hours): Read all sections, complete the debate framework and scenario walkthroughs, read both case studies, and work through the exercises.


Spaced Review — Concepts from Earlier Chapters

🔁 From Chapter 5 (Large Language Models): LLMs generate text by predicting the next token. They do not understand the material they produce. This is why a student can submit an AI-generated essay that is fluent and well-structured but contains subtle errors — and why AI detection tools face fundamental limitations.

🔁 From Chapter 9 (Bias and Fairness): AI systems reflect the biases in their training data. In education, this means that AI tutoring systems, automated grading tools, and proctoring software may work differently for students from different backgrounds, potentially widening existing achievement gaps.

🔁 From Chapter 12 (Privacy and Surveillance): AI surveillance tools collect intimate data about behavior, and the knowledge of being watched changes how people act (the Panopticon effect). AI proctoring in education is a direct application of these concepts.


16.1 Intelligent Tutoring Systems: A Long History

AI in education is not new. If you think the conversation started with ChatGPT, you are off by about five decades.

The first intelligent tutoring systems (ITS) emerged in the 1970s and 1980s. These were software programs designed to simulate a one-on-one tutor — providing individualized instruction, adapting to the student's level, and offering immediate feedback. The idea was grounded in a well-known finding from education research: the "two-sigma problem," identified by educational psychologist Benjamin Bloom in 1984.

Bloom found that students who received one-on-one tutoring performed, on average, two standard deviations above students who received conventional group instruction. That is a massive effect — it means the average tutored student outperformed 98 percent of students in a traditional classroom. The problem? One-on-one tutoring for every student is prohibitively expensive. Bloom's challenge to the field was clear: find instructional methods that achieve the same results at scale.
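The "98 percent" figure is simply a property of the normal curve: a score two standard deviations above the mean sits at roughly the 97.7th percentile. A quick check in Python:

```python
from statistics import NormalDist

# A two-standard-deviation improvement on a normally distributed
# score moves a student from the 50th percentile to roughly the
# 98th: the cumulative normal probability at z = 2.
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.3f}")  # 0.977, i.e. about 98 percent
```

This is where Bloom's "two sigma" label comes from: sigma is the standard deviation, and the tutored students were, on average, two of them ahead.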

Intelligent tutoring systems were one response to that challenge. Early systems like SCHOLAR (1970s, geography), SOPHIE (1970s, electronics troubleshooting), and later Carnegie Learning's Cognitive Tutor (1990s, mathematics) attempted to model the student's understanding and adapt instruction accordingly.

These systems were not chatbots. They did not generate free-form text. They worked within defined domains — typically mathematics, science, or programming — where problems had clear right and wrong answers and where the reasoning process could be modeled computationally. They asked structured questions, analyzed student responses, identified misconceptions, and selected the next appropriate problem or explanation.
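The select-the-next-problem loop at the heart of these systems can be sketched in a few lines. This is a deliberately naive illustration, not the logic of any real product; the function name and the one-step difficulty rule are invented for this example (real ITS maintain much richer models of what the student knows).

```python
# A simplified sketch of an adaptive tutor's core cycle: raise the
# difficulty after a correct answer, lower it after a mistake.
# All names here are illustrative, not from any actual system.

def next_difficulty(current: int, was_correct: bool,
                    lowest: int = 1, highest: int = 10) -> int:
    """Step the difficulty level up or down, clamped to the range."""
    step = 1 if was_correct else -1
    return max(lowest, min(highest, current + step))

# Simulated session: the student answers two items correctly,
# then misses one, so the tutor backs off.
level = 3
for correct in [True, True, False]:
    level = next_difficulty(level, correct)
print(level)  # 3 -> 4 -> 5 -> 4
```

Even this toy version shows the key design choice: the system's response to a wrong answer is to adjust the *task*, not merely to mark the answer wrong.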

What the Evidence Shows

The evidence on intelligent tutoring systems is substantial and generally positive. A widely cited 2011 review by Kurt VanLehn, published in Educational Psychologist, examined studies of ITS across multiple domains. The key finding: ITS were almost as effective as human tutors, and significantly more effective than conventional classroom instruction, for well-defined problem-solving tasks.

The caveat — "well-defined problem-solving tasks" — is important. ITS have been most successful in mathematics and science, where problems have clear solution paths. They have been less successful in domains that require open-ended reasoning, creative thinking, or the kind of contextual judgment that resists formalization.

💡 Key Insight: The history of intelligent tutoring systems teaches an important lesson: AI in education is most effective when the domain is well-structured and the learning objectives are clearly defined. As we move into less structured domains — writing, critical thinking, ethical reasoning — the challenges multiply.

🔄 Check Your Understanding: What is Bloom's "two-sigma problem"? Why was it significant for the development of educational AI?


16.2 Generative AI in the Classroom: The New Reality

The arrival of large language models in 2022 and 2023 changed the educational AI landscape overnight. Unlike intelligent tutoring systems, which operate within structured domains, generative AI can produce text on any topic — essays, reports, code, creative writing, problem solutions, analysis — at a level that is often indistinguishable from student work.

This created an immediate crisis in education. Not because the technology itself was harmful, but because the education system was built on an assumption that is no longer true: that if a student submits polished, competent work, the student must have produced that work through a process of learning.

The Scale of the Disruption

Consider what generative AI can now do for a student:

  • Write an essay that receives a passing or better grade in most undergraduate courses
  • Solve math problems and show the work
  • Write functional computer code for introductory and intermediate programming assignments
  • Summarize readings, generate discussion responses, and draft research proposals
  • Translate between languages
  • Answer exam questions in free-response format

This is not a niche capability. It affects virtually every field and every level of education. A 2023 survey by the Stanford Internet Observatory found that among college students who reported using AI tools, the most common uses were writing essays, completing homework, and studying for exams.

Three Institutional Responses

Educational institutions have responded to generative AI in broadly three ways:

1. Prohibition. Some institutions banned AI tools outright. This was the most common initial response. The problem: enforcement is extremely difficult. AI detection tools — software that claims to identify AI-generated text — have significant limitations. They produce both false positives (flagging human-written text as AI-generated, which can be devastating for the wrongly accused student) and false negatives (missing AI-generated text that has been lightly edited). A 2023 study found that AI detectors were significantly more likely to flag text written by non-native English speakers as AI-generated — raising serious equity concerns.

2. Integration. Other institutions embraced AI tools and redesigned assignments around them. Some professors now require students to use AI as part of the assignment — generating a first draft with AI, then critically analyzing and improving it. This approach treats AI as a tool to be mastered rather than a temptation to be resisted.

3. Adaptation. A middle path involves maintaining traditional learning objectives while changing how they are assessed. This might mean more in-class writing, oral exams, project-based assessment, or process-focused evaluation (grading the research and revision process, not just the final product).

📊 Scenario Walkthrough: The Professor's Dilemma

Professor Okonkwo teaches an introductory political science course. She assigns a 1,500-word essay analyzing the causes of a recent international conflict. Since ChatGPT launched, she has suspected that a significant portion of the submitted essays are AI-generated or AI-assisted.

Consider three possible responses:

Response A: She buys an AI detection tool and runs all essays through it. Students flagged by the tool are called in for academic integrity hearings.

Response B: She redesigns the assignment. Students must submit a research log showing their sources, their evolving thesis, and three drafts showing how their argument developed. The essay itself is 30% of the grade; the process documentation is 70%.

Response C: She makes AI use part of the assignment. Students must generate an AI draft, then write a 1,000-word critical analysis identifying the draft's strengths, weaknesses, factual errors, and missing perspectives, with a final 500-word rewrite incorporating their analysis.

Which response best serves the learning objectives? Which is most practical? Which is most equitable? There may not be a single right answer — but the reasoning process is what matters.

Priya's Perspective

We have followed Priya through Chapter 14 as she developed a personal AI policy. Now let us see how that policy interacts with institutional reality.

Priya is taking five courses this semester. Each has a different AI policy:

  • Political Science 201: "AI tools may be used for research and brainstorming but not for drafting. All submitted work must be your own writing."
  • Statistics 305: "AI calculators and code assistants are encouraged. Explain your reasoning in your own words."
  • Philosophy 210: "AI use of any kind is prohibited on all assignments."
  • Communications 150: "You may use AI, but you must disclose how you used it and include your prompts as an appendix."
  • History 101: No AI policy mentioned in the syllabus. The professor has not discussed it.

Priya finds this landscape confusing but navigable — because she has a personal AI policy to guide her through the ambiguity. The student in the dorm room from our opening, who submitted an AI-generated essay without reflection, does not have that framework. He is not making a principled decision. He is taking the path of least resistance.

🔄 Check Your Understanding: What are the three main institutional responses to generative AI in education? For each, identify one advantage and one limitation.


16.3 Personalized Learning: Promise and Evidence

"Personalized learning" is one of the most frequently invoked — and most loosely defined — promises of educational AI. The basic idea is appealing: every student learns differently, at different paces, with different strengths and gaps. AI could, in theory, create an individualized learning path for each student, adjusting content, pace, difficulty, and style in real time.

This is the vision behind the Atlanta classroom from our opening. It is the vision behind products like Khan Academy's Khanmigo (an AI tutoring assistant), IXL, DreamBox, and dozens of other adaptive learning platforms.

What Does the Evidence Actually Show?

The honest answer: the evidence is mixed, and much of it is weaker than the marketing claims suggest.

What works: Adaptive math platforms have shown modest positive effects in several studies, particularly for students who are behind grade level. A large-scale randomized controlled trial of Carnegie Learning's math software, for example, found small but statistically significant improvements in algebra performance. Khan Academy's personalized practice has shown benefits in some studies, particularly when combined with teacher support.

What is unclear: Whether the gains come from personalization specifically (the AI adapting to the individual student) or from increased practice time (students using the software simply spend more time on math). This distinction matters. If the benefit comes from more practice, a non-AI workbook would work just as well. If the benefit comes from intelligent adaptation, then the AI is adding something unique.

What is concerning: Several large-scale implementations of personalized learning have produced disappointing results. The RAND Corporation studied schools using personalized learning technologies and found "no significant effects" on academic achievement in many cases. The Gates Foundation, which invested heavily in personalized learning, commissioned evaluations that found uneven results — some positive, some neutral, some negative.

🔬 Research Spotlight: The Personalization Paradox

Here is an underappreciated finding: some research suggests that too much personalization can actually harm learning. When an adaptive system removes all struggle — always presenting problems at just the right difficulty level, always providing hints at just the right moment — it can prevent students from developing the persistence and problem-solving strategies that come from wrestling with challenging material.

The educational psychologist Robert Bjork coined the term "desirable difficulties" to describe challenges that slow initial learning but enhance long-term retention and transfer. A system optimized for immediate performance (getting the right answer now) may undermine the kind of effortful processing that produces durable learning.

This does not mean personalization is bad. It means that naive personalization — always making things easier, always reducing friction — may be counterproductive. The best educational AI would need to model not just what the student knows, but what kind of challenge would be most beneficial right now.

The Teacher's Role

One of the most important — and most frequently overlooked — factors in whether educational AI works is the teacher. Technology advocates sometimes frame AI as a replacement for traditional instruction. The evidence points in a different direction: AI works best as a supplement to skilled teaching, not a substitute for it.

In the most successful implementations of adaptive learning, teachers used data from the platform to identify struggling students, adjust their instruction, and provide targeted human support. The AI handled the practice and feedback; the human handled the motivation, the context, the relationship. When AI was deployed instead of teacher-led instruction, results were generally worse.

💡 Key Insight: The most effective educational AI is not the one that eliminates the need for teachers. It is the one that gives teachers better information about their students and more time to use it.


16.4 The Surveillance Classroom: Proctoring, Monitoring, and Student Privacy

Let us return to Elena, the community college student from our opening, sweating through an AI-proctored exam while the software catalogs her eye movements.

AI proctoring is software that monitors students during online exams, using webcams, microphones, and screen-sharing to detect potential cheating. These systems use AI to analyze facial movements, eye tracking, background noise, and typing patterns, flagging "suspicious behavior" for review by a human proctor or an instructor.

The market for AI proctoring exploded during the COVID-19 pandemic, when millions of students shifted to online learning. Companies like Proctorio, ExamSoft, Respondus, and ProctorU saw enormous growth. By some estimates, AI proctoring was used for tens of millions of exams during 2020 and 2021 alone.

The Problems

AI proctoring has generated significant controversy, and the concerns are well-documented:

Racial bias. Multiple reports and studies have documented that facial recognition components of proctoring software perform worse for students with darker skin tones. Students have reported being told to "increase lighting" or reposition themselves because the software could not detect their face. In extreme cases, students have been locked out of exams entirely because the software could not verify their identity.

Disability discrimination. Students with conditions that cause involuntary movements (such as Tourette syndrome), students who use screen readers or other assistive technologies, and students with attention conditions that cause frequent eye movement may be disproportionately flagged by proctoring software.

Environmental assumptions. AI proctoring assumes that the student has a private, quiet, well-lit space with reliable internet and a functioning webcam. Elena — who shares a small apartment with three children — does not meet this assumption. Neither do students in multigenerational homes, students living in domestic violence situations, or students in housing-insecure conditions. The software penalizes them for their circumstances.

The anxiety effect. Research has shown that the knowledge of being surveilled during an exam can increase test anxiety, particularly for students who are already anxious or who belong to groups that are more likely to be falsely flagged. A 2021 study found that students who knew they were being AI-proctored reported higher anxiety levels and lower performance than students taking the same exam without proctoring.

Accuracy and false flags. The systems generate a high volume of "flags" — eye movements, head turns, background sounds — most of which are innocent. Instructors who review these flags face a version of the base rate problem: when most flags are false positives, the system generates noise rather than useful signal.
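To see why a high flag volume overwhelms instructors, it helps to run the base-rate arithmetic with concrete numbers. The figures below are entirely hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative base-rate arithmetic with made-up numbers: suppose
# 2% of test-takers actually cheat, the proctoring software flags
# 90% of cheaters, and it also flags 15% of honest students
# (eye movements, background noise, and so on).
cheat_rate = 0.02
flag_if_cheating = 0.90
flag_if_honest = 0.15

# Probability that a randomly chosen flagged student actually
# cheated, by Bayes' rule.
p_flagged = cheat_rate * flag_if_cheating + (1 - cheat_rate) * flag_if_honest
p_cheat_given_flag = (cheat_rate * flag_if_cheating) / p_flagged
print(f"{p_cheat_given_flag:.2f}")  # 0.11
```

Under these assumptions, only about one flagged student in nine actually cheated; the rest are false positives. This is the same base-rate logic from earlier chapters, now applied to a classroom.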

⚖️ Debate Framework: AI Proctoring

Position A: AI proctoring is a necessary tool for maintaining academic integrity in online education.

  • Online exams without proctoring are vulnerable to widespread cheating.
  • Students who cheat devalue the degrees of students who do not.
  • AI proctoring is less invasive than requiring in-person testing, which creates access barriers.
  • The technology is imperfect but improvable.

Position B: AI proctoring is a surveillance technology that harms students and should be banned.

  • It disproportionately affects students of color, students with disabilities, and low-income students.
  • It increases test anxiety, potentially lowering performance for the students it claims to protect.
  • It assumes guilt and forces students to prove their innocence.
  • Alternative assessment methods (oral exams, project-based assessment, open-book exams) can maintain integrity without surveillance.

Your position: What safeguards, if any, would make AI proctoring acceptable to you? Is there a middle ground, or is the technology fundamentally flawed?

The Broader Surveillance Landscape

AI proctoring is the most visible form of educational surveillance, but it is not the only one. Schools are also using AI to:

  • Monitor students' emails and chat messages for signs of self-harm, bullying, or violence
  • Track students' physical movements through school buildings
  • Analyze students' online activity on school-issued devices
  • Predict which students are "at risk" of dropping out or failing

Each of these applications raises legitimate safety concerns — schools do have a responsibility to protect students — and legitimate privacy concerns — students are not prisoners, and treating them like suspects can undermine the trust that learning requires.

🔄 Check Your Understanding: Identify three specific populations of students who might be disproportionately harmed by AI proctoring software. For each, explain the mechanism of harm.


16.5 Digital Divide: Who Benefits from Educational AI?

If you have reliable internet, a quiet space to study, a personal laptop, and parents with college degrees, educational AI probably looks like an exciting set of tools. If you lack any of those things, it may look more like another barrier.

The digital divide in education is not just about access to devices and internet — though those gaps remain significant. It is about:

Infrastructure access. AI-powered educational tools require reliable internet connections and modern devices. In the United States, approximately 17 million children lack home internet access, according to Federal Communications Commission estimates. In low-income countries, the gap is far wider.

Digital literacy. Using AI tools effectively requires skills that are unevenly distributed. The prompting techniques we covered in Chapter 14 are a form of literacy. Students who develop these skills — often those with more educated parents, better-resourced schools, and greater exposure to technology — will benefit more from AI tools than students who lack them.

Language and cultural bias. Most AI tools are trained primarily on English-language data and reflect the cultural contexts of wealthy, English-speaking countries. Students who speak other languages, use non-standard English dialects, or come from underrepresented cultural backgrounds may receive lower-quality AI assistance.

The tutoring gap, amplified. One of the promises of educational AI is to provide tutoring at scale. But if wealthy families combine AI tutoring with human tutoring, private schools integrate AI into already-excellent instruction, and under-resourced schools use AI as a replacement for human instruction, the technology will widen rather than narrow the achievement gap.

👁️ Perspective-Taking: Consider two students, both using the same AI tutoring platform.

Student A attends a well-funded suburban school. She has a quiet bedroom, a fast laptop, and parents who help her with homework. Her teacher uses the AI platform's data to identify her weak areas and provides additional instruction during office hours.

Student B attends an underfunded urban school. He does homework at the kitchen table while his younger siblings watch TV nearby. His phone is his primary device. His school adopted the AI platform to compensate for a teacher shortage — there is no teacher reviewing his data or providing follow-up instruction.

Both students are "using AI in education." Their experiences could not be more different.

CityScope Predict Parallel

The equity dynamics of educational AI mirror those of CityScope Predict, the predictive policing system we have been following since Chapter 1. Just as CityScope Predict risks directing more police resources to already-over-policed communities, educational AI risks directing better resources to already-better-served students. And just as CityScope Predict uses historical data that reflects systemic inequality, educational AI uses achievement data that reflects existing disparities in school funding, family resources, and community support.

The tools are different. The pattern is the same.


16.6 Redesigning Education for the AI Era

So what should education look like when students have access to tools that can produce competent work on their behalf? This is not a question about cheating. It is a question about purpose.

If the purpose of education is to produce correct answers to known questions, then AI has made much of education obsolete. A chatbot can answer most test questions, write most student essays, and solve most problem sets faster and more consistently than a student can.

But if the purpose of education is to develop the capacity to think — to analyze, question, create, reason, argue, empathize, persist through difficulty, and form independent judgments — then AI has not made education obsolete. It has made it more important than ever. Because those capacities cannot be developed by a machine on your behalf. They can only be developed through the struggle of doing the work yourself.

Five Principles for AI-Era Education

Based on the evidence and the debates surveyed in this chapter, here are five principles that could guide the redesign of education for the AI era:

1. Teach the thinking, not the product. If a chatbot can produce the product (an essay, a solution, a report), then the product alone is no longer sufficient evidence of learning. Educators need to assess the thinking process — through drafts, process documentation, oral defense, or collaborative work that demonstrates engagement.

2. Make AI literacy a foundational skill. Just as students learn to read critically, evaluate sources, and construct arguments, they should learn to prompt effectively, verify outputs, recognize bias, and develop personal AI policies. This is not a separate course. It is a dimension of every course.

3. Preserve desirable difficulties. Not all struggle is bad. The struggle of writing a first draft, debugging code, or wrestling with a difficult text is where deep learning happens. Educational AI should be designed to support this struggle, not eliminate it.

4. Address equity first. Before deploying any educational AI tool, ask: Who has access? Who does not? Will this tool narrow or widen existing gaps? If the answer is "it will widen gaps," either redesign the deployment or do not deploy.

5. Keep humans at the center. The most effective educational technology is a tool in the hands of a skilled teacher, not a replacement for one. AI should give teachers better information, more time, and greater capacity — not make teachers obsolete.

📊 Scenario Walkthrough: Redesigning an Assignment

Here is a traditional assignment: "Write a 2,000-word research paper on the causes of climate change."

A student can paste this prompt into a chatbot and receive a competent paper in seconds. The assignment, in its traditional form, no longer works.

Here is one way to redesign it:

  1. Week 1: Students identify three specific questions about climate change that they find genuinely interesting. They explain why each question matters to them personally. (AI can help brainstorm but not substitute for personal interest.)
  2. Week 2: Students find five sources — at least two that disagree with each other — and write a 500-word annotated bibliography explaining how each source relates to their question. (This requires actual reading, not summarization.)
  3. Week 3: Students write a 1,000-word first draft. They are required to submit it alongside a "process memo" describing what was hardest, where they got stuck, and how they worked through the difficulty.
  4. Week 4: Students use an AI tool to critique their draft (they submit the AI's feedback alongside their response to it). They then revise and submit a final draft with a 300-word reflection on how the feedback (both human and AI) changed their thinking.

This assignment is AI-resistant — not because AI cannot contribute, but because the assessment is designed to make the student's thinking visible. The product is no longer the only thing being graded. The process is.

🔄 Check Your Understanding: Explain the concept of "desirable difficulties" in education. Why might an AI system that removes all struggle from learning actually harm long-term skill development?


16.7 Chapter Summary

Education is one of the domains where AI's potential and AI's risks are most tightly intertwined — because the people affected are learners, many of them young, and the stakes involve not just knowledge but identity, equity, and opportunity.

AI in education has a long history. Intelligent tutoring systems have been studied since the 1970s, and the evidence shows they can be effective — particularly in structured domains like mathematics, and particularly when combined with skilled teaching.

Generative AI has disrupted the classroom. The ability to produce student-level work in seconds challenges traditional assessment, raises academic integrity concerns, and forces a rethinking of what education is for.

Personalized learning is promising but oversold. The evidence is mixed. The most successful implementations combine adaptive technology with strong human instruction. Personalization without teacher support often fails. And naive personalization — always reducing difficulty — may undermine the "desirable difficulties" that produce deep learning.

AI proctoring raises serious equity and privacy concerns. These systems disproportionately harm students of color, students with disabilities, and students in non-ideal home environments. The surveillance model conflicts with the trust that effective education requires.

The digital divide shapes who benefits. Access to devices, internet, quiet spaces, and digital literacy skills determines whether educational AI narrows or widens existing gaps. Without deliberate equity-focused deployment, AI risks creating a two-tiered educational system.

Education must be redesigned, not abandoned. If AI can produce the product, then the product alone is no longer sufficient evidence of learning. Educators need to assess thinking processes, teach AI literacy as a foundational skill, preserve desirable difficulties, and keep human teachers at the center.

📋 Key Concepts Introduced in This Chapter

  • Intelligent tutoring system (ITS): Software designed to provide individualized instruction, adapting to the student's level
  • Bloom's two-sigma problem: The finding that one-on-one tutoring produces two standard deviations of improvement over group instruction
  • Adaptive learning: Educational technology that adjusts content and difficulty based on student performance
  • Personalized learning: Tailoring educational paths to individual student needs, strengths, and interests
  • AI proctoring: Software that monitors students during exams using webcams and AI analysis
  • Automated essay scoring: AI systems that evaluate written work, often using NLP to assess structure, content, and style
  • Desirable difficulties: Challenges that slow initial learning but enhance long-term retention and transfer
  • Digital divide (educational): Disparities in access to technology, internet, and digital skills that affect educational outcomes
  • Learning analytics: Data-driven analysis of student behavior and performance to inform instruction
  • Deskilling: The erosion of human skills through over-reliance on automated systems

🎯 Project Checkpoint: AI Audit Report — Step 16

Your task: Analyze the educational implications of the AI system you are auditing.

Instructions:

  1. Consider how your system relates to education. Address at least two of the following:
     • Could your system be used in educational contexts? How? What would the benefits and risks be?
     • Does your system raise academic integrity concerns? (For example, could students use it to complete assignments?)
     • Does your system have implications for the skills that future workers and citizens will need?
     • Does your system contribute to or address the digital divide?

  2. Connect to this chapter's themes. In 300–400 words, analyze your system using at least two of the following concepts: personalized learning, surveillance, deskilling, desirable difficulties, the digital divide, or AI literacy.

  3. Reflection question: Based on your analysis, do you think AI literacy courses like this one should be required for all students? Why or why not? How does your experience with your audit system inform your answer?

Deliverable: 300–400 words added to your AI Audit portfolio.


What's Next

In Chapter 17: AI and Justice — Criminal Justice, Civil Rights, and Accountability, we turn to one of the most consequential domains of AI application. CityScope Predict, the predictive policing system that has been a recurring example since Chapter 1, takes center stage. We will examine how AI is used in criminal justice — from predictive policing to risk assessment in sentencing — and wrestle with the constitutional, ethical, and practical questions these uses raise. If healthcare AI tests our commitment to equity, and educational AI tests our understanding of learning, justice AI tests our deepest beliefs about fairness, freedom, and what it means to be held accountable.