Learning Objectives
- Define artificial intelligence and distinguish it from popular culture depictions
- Identify examples of AI systems in daily life
- Explain the difference between narrow AI and general AI
- Evaluate AI claims in media using a critical thinking framework
- Begin the AI Audit Report by selecting a system to analyze
In This Chapter
- Chapter Overview
- 1.1 AI Is Everywhere (And You Might Not Notice)
- 1.2 What Do We Mean by "Intelligence"?
- 1.3 Narrow AI vs. General AI: Managing Expectations
- 1.4 The AI Effect: Why We Keep Moving the Goalposts
- 1.5 Introducing Our Four AI Systems
- 1.6 Your AI Literacy Toolkit: A Framework for Critical Thinking
- 1.7 Chapter Summary
- 🎯 Project Checkpoint: AI Audit Report — Step 1
- What's Next
> "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." — Edsger Dijkstra, computer scientist
Chapter Overview
You probably interacted with artificial intelligence before you finished breakfast today. Your phone's alarm adapted to your sleep cycle. Your email app sorted spam from real messages. A recommendation engine chose which news stories appeared at the top of your feed. And if you asked a voice assistant for the weather forecast, a cascade of AI systems converted your speech to text, interpreted your intent, fetched the data, generated a natural-language answer, and converted it back to speech — all in under two seconds.
And yet, if someone stopped you on the street and asked, "What is artificial intelligence?", you might struggle to give a confident answer. That is not your fault. The term "artificial intelligence" has been stretched, distorted, and weaponized by marketers, journalists, filmmakers, and politicians until it can seem to mean everything and nothing at the same time. One headline promises AI will cure cancer by 2030; the next warns it will destroy all jobs by 2028. A movie shows a sentient robot falling in love; your actual experience is a chatbot that cannot figure out you want to cancel your subscription.
This chapter is where we start sorting fact from fiction. We will not ask you to learn to code or memorize algorithms. Instead, we will build something more valuable: a clear mental model of what AI actually is, what it can and cannot do today, and — most importantly — a framework for thinking critically about every AI claim you encounter from this point forward.
Learning Paths
Fast Track (45 minutes): Read sections 1.1, 1.3, 1.5, and 1.6. Complete the Self-Assessment and Project Checkpoint.
Deep Dive (2.5–3 hours): Read all sections, complete the Check Your Understanding prompts, attempt the Productive Struggle exercise, read both case studies, and begin your AI Audit Report.
1.1 AI Is Everywhere (And You Might Not Notice)
Consider a typical weekday morning for a college sophomore named Jordan. Jordan wakes up when their phone's adaptive alarm rings during a light sleep phase. They scroll through social media — the posts they see have been curated by recommendation algorithms that predict what will keep them scrolling. They open a navigation app to check their commute; the app's traffic predictions rely on machine learning models trained on millions of trips. At the campus coffee shop, they pay with a tap of their phone; a fraud-detection system silently verifies the transaction is legitimate. In their first class, the professor mentions that the university's new advising tool uses AI to suggest course schedules.
Jordan has encountered at least six AI systems before 10 a.m., and they were fully aware of exactly zero of them.
This is the first important thing to understand about modern AI: most of it is invisible. It is not the humanoid robot of science fiction. It is a layer of software woven into the infrastructure of daily life — sorting, predicting, recommending, filtering, and deciding things on your behalf, often without your knowledge or consent.
Here is an incomplete list of places AI is working right now, as you read this sentence:
- Your email inbox. Spam filters use machine learning to classify messages.
- Search engines. The results you see (and the order you see them in) are shaped by AI ranking systems.
- Streaming services. Netflix, Spotify, YouTube, and TikTok all use recommendation algorithms to decide what you see next.
- Banking. Fraud detection systems monitor transactions in real time.
- Healthcare. Some hospitals use AI to help radiologists read X-rays and CT scans.
- Agriculture. Computer vision systems monitor crop health from drone and satellite imagery.
- Hiring. Some companies use AI to screen resumes before a human ever reads them.
- Criminal justice. Risk-assessment tools estimate the likelihood that a defendant will reoffend.
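To make the first item concrete: "machine learning to classify messages" usually means the system learns word statistics from messages humans have already labeled. The sketch below is a bare-bones naive Bayes classifier with an invented six-message training set — real spam filters learn from millions of examples and far richer signals than bare words, so treat this as a minimal illustration of the idea, not a working filter.

```python
from collections import Counter
import math

# Invented six-message training set; real filters learn from millions of
# human-labeled messages and far richer features than bare words.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("limited offer win cash", "spam"),
    ("meeting moved to noon", "ham"),
    ("lecture notes attached", "ham"),
    ("see you at the library", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for message, label in training:
    label_counts[label] += 1
    word_counts[label].update(message.split())

def classify(message):
    """Naive Bayes: score each label by log P(label) plus the sum of
    log P(word | label), with add-one smoothing so an unseen word
    does not zero out a label entirely."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(training))
        for word in message.split():
            count = word_counts[label][word] + 1  # add-one smoothing
            score += math.log(count / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("free prize money"))        # "spam"
print(classify("notes from the meeting"))  # "ham"
```

Notice that the classifier never "understands" what a prize is. It has only learned that certain words co-occur with the label "spam" — a small preview of the pattern-versus-understanding distinction discussed in section 1.2.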
📊 Real-World Application: Pick any three items from the list above. For each one, ask yourself: (a) Did I know AI was involved? (b) Who benefits from this system? (c) Who might be harmed? You have just performed a rough version of the power analysis we will develop throughout this book.
The sheer range of that list reveals something crucial. "Artificial intelligence" is not one thing. It is an umbrella term covering dozens of different techniques applied to thousands of different problems. A spam filter and a self-driving car are both called "AI," but they have about as much in common as a bicycle and a Boeing 747 — both are vehicles, but they work in fundamentally different ways, at fundamentally different scales.
This brings us to our first big question: if AI is not one single thing, then what exactly do we mean when we use the term?
1.2 What Do We Mean by "Intelligence"?
Here is a disarmingly simple definition of artificial intelligence:
Artificial intelligence is the design of computer systems that perform tasks typically associated with human intelligence — such as recognizing patterns, making predictions, understanding language, or making decisions.
Read that definition again. Notice what it does not say. It does not say the computer thinks. It does not say the computer understands. It says the computer performs tasks that we associate with intelligence. This distinction matters enormously, and we will return to it throughout the book.
But why is the definition slippery in the first place? The problem is the word "intelligence." Psychologists have debated the definition of human intelligence for over a century without reaching consensus. Howard Gardner proposed multiple intelligences — linguistic, logical-mathematical, spatial, musical, interpersonal, and more. Robert Sternberg argued for analytical, creative, and practical intelligence. Some researchers question whether "intelligence" is a single thing at all, or just a convenient label we paste over a collection of loosely related abilities.
If we cannot agree on what human intelligence is, defining artificial intelligence becomes a moving-target problem.
💡 Intuition: Think of "intelligence" not as a single dial that goes from 0 to 100, but as a mixing board in a recording studio — dozens of separate sliders, each controlling a different ability. A system can have one slider pushed very high (like playing chess) while every other slider is at zero (it cannot tell you what chess is, or why people enjoy it, or what a chess piece looks like).
The field of AI was formally born at a 1956 workshop at Dartmouth College, where computer scientist John McCarthy coined the term. The original proposal was breathtaking in its ambition: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Seven decades later, we have made staggering progress on specific aspects of intelligence — but the grand vision of a machine that can match the full breadth and flexibility of human cognition remains unrealized.
This gap between the original ambition and the current reality is one reason AI conversations get so confused. When a tech CEO says "AI," they might mean a narrow pattern-recognition system that took five years and $100 million to build. When a journalist says "AI," they might be imagining something closer to the Dartmouth dream. When a moviegoer says "AI," they are probably picturing a conscious robot. These three meanings are wildly different, but they all travel under the same two-letter label.
🔄 Check Your Understanding: In your own words, explain why defining "artificial intelligence" is difficult. What makes the word "intelligence" the source of the problem?
Throughout this book, we will be precise about which kind of AI we are discussing. That precision starts with the single most important distinction in the field.
1.3 Narrow AI vs. General AI: Managing Expectations
If you take only one concept from this chapter, make it this one.
Narrow AI (also called weak AI) is a system designed to perform a specific task or a small set of related tasks. It can often perform that task at superhuman levels — but it cannot do anything outside its designated lane.
Examples of narrow AI:
- A chess engine that can defeat any human grandmaster — but cannot play checkers unless separately programmed to do so.
- A medical imaging system that can detect certain tumors in radiology scans — but cannot tell you the patient's name, recommend a treatment, or comfort a frightened family.
- A language translation system that can convert English to French — but does not "know" what the words mean in any human sense.
General AI (also called strong AI or artificial general intelligence / AGI) would be a system with the flexibility, adaptability, and breadth of human cognition. It could learn a new task without being specifically trained for it. It could transfer knowledge from one domain to another. It could reason about novel situations it has never encountered.
Here is the critical fact: as of 2026, general AI does not exist. Every AI system you interact with today — every single one — is narrow AI. The voice assistant on your phone is narrow AI. The chatbot that writes surprisingly coherent essays is narrow AI (it is very good at one thing: predicting the next word in a sequence). The self-driving car prototype is narrow AI. The AI that generates images from text prompts is narrow AI.
This does not mean these systems are unimpressive. Some of them are extraordinary. But they are all specialists, not generalists. They excel within the boundaries they were designed for and fail — sometimes spectacularly — outside those boundaries.
🚪 Threshold Concept: AI is a spectrum of techniques, not a single technology.
This is one of those ideas that, once you truly grasp it, changes how you see everything else. There is no single "AI" sitting inside your phone or your hospital's diagnostic system. There are many different techniques — rule-based systems, statistical models, neural networks, reinforcement learning, and more — each suited to different kinds of problems. Calling all of them "AI" is like calling a hammer, a screwdriver, a laser cutter, and a 3D printer all "tools." It is technically accurate but almost useless for understanding what any of them actually does.
From this point forward, whenever you hear someone say "AI can do X" or "AI will do Y," train yourself to ask: Which AI? Which technique? Trained on what data? Designed by whom? Tested how?
Why does the narrow-vs.-general distinction matter so much? Because almost all of the fear, hype, and confusion around AI comes from conflating the two. When someone warns that "AI will take all the jobs," they are (usually unconsciously) imagining general AI — a system that can do everything a human can do. When you actually look at the evidence, what you find is narrow AI automating specific tasks within jobs, which changes those jobs but does not simply delete them. The real story is more nuanced, more interesting, and more actionable — but you can only see it once you stop treating AI as a monolith.
📊 Real-World Application: Large language models (LLMs) like ChatGPT, Claude, and Gemini are sometimes described as steps toward general AI because they can handle a wide range of language tasks. But notice: they are still fundamentally doing one thing — predicting the next token in a sequence. They do not have goals, desires, sensory experiences, or an understanding of the physical world. Their versatility is impressive, but it is breadth within a single modality, not the cross-domain flexibility that would characterize true general intelligence.
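"Predicting the next token" can itself be made concrete. The toy model below simply counts which word followed which in a 14-word made-up corpus, then always picks the most frequent continuation. A real LLM uses a neural network trained on trillions of tokens and predicts from context far longer than one word — this sketch shows only the core loop: predict, append, repeat.

```python
from collections import Counter, defaultdict

# Toy corpus — real language models train on trillions of tokens.
corpus = "the cat sat on the mat and the cat sat on the sofa".split()

# Bigram table: for each word, count which words followed it in training.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

def generate(start, n):
    """Repeatedly predict the next word — the loop at the heart of
    text generation."""
    words = [start]
    for _ in range(n):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the", 4))  # "the cat sat on the"
```

The output is fluent-looking text produced with no goals, no understanding, and no model of the world — only frequency statistics. The gap between this toy and a modern LLM is enormous in scale and sophistication, but the task being performed is the same kind of task.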
🔄 Check Your Understanding: A company advertises that its product uses "advanced AI." Based on what you have learned so far, what follow-up questions should you ask before evaluating that claim?
1.4 The AI Effect: Why We Keep Moving the Goalposts
In the 1960s, researchers predicted that a computer capable of playing chess at a grandmaster level would be a clear sign of artificial intelligence. In 1997, IBM's Deep Blue defeated world champion Garry Kasparov. The reaction? Many people shrugged and said, "That's not really AI. It's just brute-force computation."
In the 2000s, people said that if a computer could understand natural language well enough to win a quiz show, that would be real AI. In 2011, IBM's Watson crushed the best human players on Jeopardy!. The reaction? "Impressive, but it's just search and statistical correlation."
In the 2010s, people said that if a computer could generate original text, art, or music that was indistinguishable from human-created work, that would finally be AI. By 2023, generative AI systems were doing all three. The reaction? "It's just autocomplete on steroids."
This pattern has a name: the AI effect. The computer scientist Larry Tesler once quipped, "AI is whatever hasn't been done yet." The moment a machine accomplishes something we used to call intelligent, we redefine intelligence to exclude it.
The AI effect reveals something deep about human psychology: we have a powerful need to believe that real intelligence is special — that it is something uniquely ours, something no machine can touch. Every time AI encroaches on territory we thought was exclusively human, we retreat to higher ground and declare, "Well, that doesn't count."
There is nothing wrong with this instinct. In fact, it points to genuinely important philosophical questions about consciousness, understanding, and meaning that we will explore in later chapters. But the AI effect also creates a practical problem: it makes it very hard to have an honest conversation about what AI systems can actually do right now, because we are perpetually busy either overhyping the future or dismissing the present.
💡 Intuition: The AI effect works like a ratchet. Every time AI achieves something, the definition of "real" intelligence clicks one notch higher. This means that by definition, current AI will always seem unimpressive compared to the imagined AI of the future. Keep this ratchet in mind whenever you read a headline that says AI "still can't" do something — the goalposts may have just moved.
⚖️ Myth vs. Reality
| Myth | Reality |
|---|---|
| AI is a single, powerful technology | AI is an umbrella term for dozens of different techniques |
| AI systems understand what they are doing | Current AI systems process patterns; they do not "understand" in the human sense |
| We will have human-level AI within a few years | Expert surveys show wide disagreement; median estimates range from 2040 to "never" |
| AI will replace all human workers | AI automates specific tasks, changing jobs rather than simply eliminating them |
| If it works well, it must not be "real" AI | The AI effect — we redefine intelligence to exclude what machines can already do |
1.5 Introducing Our Four AI Systems
Throughout this book, we will return again and again to four AI systems. Each one is a composite example — based on real technologies and real documented issues, but assembled into a single, coherent scenario for clarity. (We label these Tier 3: illustrative composites.) Think of them as case studies you will get to know deeply over the coming chapters, examining each one from multiple angles as you build your AI literacy.
ContentGuard: The Social Media Moderator
Imagine a major social media platform — one with hundreds of millions of users posting text, images, and video every day. The platform employs a content moderation system called ContentGuard. It uses a combination of machine learning classifiers and rule-based filters to automatically flag and remove content that violates the platform's policies: hate speech, harassment, graphic violence, misinformation, and more.
ContentGuard processes over 10 million posts per day. No team of human moderators could handle that volume, so the AI system makes the first pass, and human reviewers handle appeals and edge cases. The system is trained on historical data: millions of posts that human moderators previously labeled as either "violating" or "not violating" the platform's rules. But here is the tension: language is contextual, cultural, and constantly evolving. A phrase that is a slur in one community may be a term of empowerment when used by members of that community. Satire looks a lot like the thing it satirizes. ContentGuard regularly makes errors — and those errors are not random. Research has shown that automated moderation systems disproportionately flag content from certain racial and linguistic communities while missing coded hate speech used by others.
We will use ContentGuard to explore questions about bias, free speech, scale, accountability, and what happens when you automate decisions that require deep cultural understanding.
MedAssist AI: The Diagnostic Partner
At a large teaching hospital, the radiology department has adopted MedAssist AI, a diagnostic tool that analyzes medical images — chest X-rays, mammograms, skin lesion photographs — and highlights potential abnormalities for physicians to review. In clinical trials, MedAssist performed impressively: it matched or exceeded the accuracy of experienced radiologists on certain types of findings.
But after six months of real-world deployment, troubling patterns emerged. MedAssist's accuracy was significantly lower for patients with darker skin tones when analyzing dermatological images. It performed less well on chest X-rays from patients with certain body types. And the hospital discovered that some physicians were becoming over-reliant on the tool — spending less time on their own analysis and deferring to MedAssist's judgment, even in ambiguous cases where clinical experience should have taken precedence.
MedAssist will help us explore healthcare equity, the problem of biased training data, the psychology of automation trust, and the question of who is responsible when an AI-assisted diagnosis goes wrong.
Priya's Semester: The Student and the Machine
Priya Sharma is a second-year political science major juggling five courses, a part-time job, and a student organization. When generative AI tools became widely available, Priya — like millions of students — started using them. She uses a chatbot to brainstorm essay outlines and to explain concepts from her statistics course in plain language. She uses an AI writing assistant to check her grammar and suggest clearer phrasing. For one particularly overwhelming week, she asked a chatbot to draft a rough first paragraph of a policy brief, which she then rewrote substantially.
Priya is not trying to cheat. She is trying to survive. But she is genuinely unsure where the line is. Is using AI to brainstorm different from using it to draft? Is an AI grammar checker different from AI-generated text? Her university's academic integrity policy mentions "unauthorized assistance" but does not specifically address AI tools. Her professors have different rules: one bans AI entirely, another encourages it, and a third has not mentioned it at all.
We will follow Priya throughout the book to explore questions about education, academic integrity, skill development, and what it means to learn in an era when a machine can produce passable work on your behalf.
CityScope Predict: The Algorithmic Beat Cop
The city of Millhaven (population 340,000) is considering adopting CityScope Predict, a predictive policing system. The system analyzes historical crime data — types of offenses, locations, times, and demographic information — to generate "heat maps" predicting where crimes are most likely to occur in the coming week. Police commanders would use these maps to allocate patrol resources.
Proponents argue that CityScope Predict will make policing more efficient and could actually reduce biased policing by replacing subjective "gut feelings" with data-driven analysis. Critics counter that the historical crime data reflects decades of racially biased policing — if certain neighborhoods were over-policed in the past, the data will show more crime there, and the algorithm will recommend more policing, creating a feedback loop. The city council is deeply divided, and community members are demanding a voice in the decision.
CityScope Predict will guide our exploration of criminal justice, civil liberties, algorithmic feedback loops, public accountability, and the question of whether a system can be "objective" when it is trained on data produced by a biased process.
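The feedback loop the critics describe can be sketched in a few lines. The numbers below are entirely hypothetical, and the model makes one stylized assumption — that recorded incidents grow in proportion to patrol presence — but it isolates the key dynamic: two neighborhoods with identical true crime rates, where only the historical record differs.

```python
# All numbers are hypothetical. Two neighborhoods have the SAME true crime
# rate, but neighborhood A starts with more recorded crime because it was
# over-policed in the past.
recorded = {"A": 200, "B": 100}   # historical recorded incidents
TRUE_RATE = 50                    # actual incidents per week, identical in both
PATROLS = 10                      # patrol units to allocate each week

for week in range(1, 6):
    total = sum(recorded.values())
    for hood in recorded:
        # The algorithm allocates patrols in proportion to RECORDED crime...
        patrols = PATROLS * recorded[hood] / total
        # ...and more patrol presence means more incidents get observed
        # and recorded (the stylized assumption noted above).
        detection_rate = min(1.0, 0.1 * patrols)
        recorded[hood] += TRUE_RATE * detection_rate
    share_a = recorded["A"] / sum(recorded.values())
    print(f"week {week}: A's share of recorded crime = {share_a:.1%}")
# Prints 66.7% every week: the disparity never corrects itself, even though
# the two neighborhoods' true crime rates are identical.
```

In this simple model the bias does not even need to grow to do harm: the system perpetually "confirms" the skewed record it started with, and nothing in the loop ever lets it discover that the underlying rates are equal.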
🔄 Check Your Understanding: Choose one of the four systems above. Without looking back at the text, describe (a) what the system does and (b) one potential concern associated with it. Then check your answer against the description.
1.6 Your AI Literacy Toolkit: A Framework for Critical Thinking
You now know that AI is not one technology but a spectrum, that every current system is narrow AI, and that our tendency to move the goalposts (the AI effect) distorts public conversation. You have met four AI systems that will serve as our ongoing laboratories. Now let us equip you with a practical tool you can use starting today.
We call it the FACTS Framework — five questions to ask whenever you encounter an AI system or an AI claim.
| Letter | Question | What it helps you assess |
|---|---|---|
| F — Function | What specific task does this system perform? | Separates the actual capability from the marketing language |
| A — Accuracy | How well does it work, and for whom? | Catches uneven performance across different groups |
| C — Consequences | Who benefits and who might be harmed? | Identifies power dynamics and equity implications |
| T — Training | What data was it trained on, and who curated that data? | Reveals potential sources of bias and limitation |
| S — Stewardship | Who is responsible when it goes wrong? | Clarifies accountability and governance |
Let us test-drive the FACTS Framework on a familiar example. Suppose you read a headline: "New AI System Can Detect Depression from Social Media Posts with 87% Accuracy."
- F — Function: The system analyzes social media text to predict whether the author shows signs of depression. It does not diagnose depression — it predicts it from text patterns.
- A — Accuracy: 87% sounds high, but accuracy alone is incomplete. What is the false-positive rate? (How often does it flag someone who is not depressed?) What is the false-negative rate? (How often does it miss someone who is depressed?) Does it work equally well across languages, dialects, and cultural communication styles?
- C — Consequences: If the system is used by a health service to offer resources, the consequences of a false positive might be mild (someone gets offered support they do not need). But if it is used by an employer or insurance company, a false positive could cost someone a job or raise their premiums. Same technology, radically different stakes.
- T — Training: What social media posts was it trained on? Posts from clinically diagnosed individuals? Self-reported survey respondents? English-speaking users from a specific demographic? The training data shapes everything the system can and cannot see.
- S — Stewardship: If the system flags someone incorrectly and that information is shared with third parties, who is responsible? The platform? The AI company? The entity that purchased the tool?
Notice how five straightforward questions transform you from a passive consumer of AI headlines into an active, critical evaluator. You do not need a computer science degree. You need the right questions.
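The accuracy question (the A in FACTS) rewards a closer look at the arithmetic. The confusion matrix below is hypothetical — chosen so that overall accuracy works out to the headline's 87% — yet the very same numbers imply the system misses most of the people it is meant to help.

```python
# Hypothetical confusion matrix, chosen so overall accuracy is the headline's
# 87%. Suppose 1,000 users were screened and 100 actually met the criteria.
tp, fn = 40, 60      # of the 100 affected users: 40 flagged, 60 missed
tn, fp = 830, 70     # of the 900 unaffected users: 70 wrongly flagged

total = tp + fn + tn + fp
accuracy = (tp + tn) / total           # 0.87 — the headline number
false_negative_rate = fn / (tp + fn)   # 0.60 — misses most affected users
false_positive_rate = fp / (fp + tn)   # ~0.08
precision = tp / (tp + fp)             # ~0.36 — most flags are wrong

print(f"accuracy: {accuracy:.0%}, but the system misses "
      f"{false_negative_rate:.0%} of affected users and only "
      f"{precision:.0%} of its flags are correct")
```

Because only a small fraction of users are affected, a system can score high on accuracy mostly by being right about the large unaffected majority — which is exactly why "87% accurate" tells you almost nothing until you see the error rates broken out.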
🧩 Productive Struggle: Try applying the FACTS Framework to one of the four anchor systems (ContentGuard, MedAssist AI, Priya's Semester, or CityScope Predict). You will find that some questions are harder to answer than others — and that the difficulty of answering is itself informative. Which question gave you the most trouble? Why? Write down your attempt even if it feels incomplete. We will revisit and refine your analysis in later chapters.
🪞 Self-Assessment: Where Do You Stand?
Rate yourself on a scale of 1 (not at all confident) to 5 (very confident) for each statement. There are no wrong answers — this is a baseline you will return to at the end of the course.
- I can explain what AI is to a friend in plain language.
- I can tell the difference between narrow AI and general AI.
- I can identify AI systems I interact with in my daily life.
- I know how to evaluate AI claims I see in the news.
- I understand how AI might affect different communities differently.
- I feel confident participating in conversations about AI policy.
Save your scores. We will ask you to retake this assessment at the midpoint and end of the course.
👁️ Perspective-Taking: Imagine you are a member of a community that has historically been subject to biased policing. How might your reaction to CityScope Predict differ from that of someone who has always felt protected by law enforcement? Now imagine you are a police chief under pressure to reduce crime rates with a shrinking budget. How does the system look from that vantage point? Being AI-literate means holding multiple perspectives simultaneously — not to say "everyone has a point," but to understand why people with different experiences reach different conclusions about the same technology.
1.7 Chapter Summary
Let us step back and review the ground we have covered.
AI is not one technology — it is many. The term "artificial intelligence" encompasses dozens of different techniques applied to thousands of different problems. Treating AI as a single, monolithic entity leads to confusion, hype, and poor decision-making.
Every AI system you use today is narrow AI. These systems can be extraordinarily powerful within their specific domain, but they cannot generalize, adapt, or understand in the way humans do. General AI — a system with human-like cognitive flexibility — does not currently exist.
The AI effect keeps moving the goalposts. We have a persistent habit of redefining intelligence to exclude whatever machines can already do. This makes it hard to accurately assess the present state of the technology.
AI is already deeply embedded in daily life. From email filters to medical diagnostics to criminal justice risk assessments, AI systems are making decisions that affect real people — often invisibly.
Critical thinking is your most important tool. The FACTS Framework (Function, Accuracy, Consequences, Training, Stewardship) gives you a structured way to evaluate any AI system or claim. You do not need technical expertise. You need the right questions.
AI literacy is a civic skill. As AI systems increasingly shape hiring, healthcare, education, policing, and information access, understanding these systems is not optional — it is a prerequisite for meaningful participation in democratic society.
📋 Key Concepts Introduced in This Chapter
| Concept | Definition |
|---|---|
| Artificial intelligence | Computer systems that perform tasks typically associated with human intelligence |
| Narrow AI (weak AI) | AI designed to perform a specific task or narrow set of tasks |
| General AI (AGI) | A hypothetical AI with human-like cognitive flexibility across domains |
| The AI effect | The tendency to redefine "intelligence" to exclude what machines can already do |
| FACTS Framework | A five-question critical-thinking tool for evaluating AI systems and claims |
🎯 Project Checkpoint: AI Audit Report — Step 1
Your task: Choose a real AI system that you will analyze throughout this course in your AI Audit Report. This is a cumulative project — each chapter will add a new layer of analysis.
For this chapter, complete the following:
1. Select your system. Choose a real AI system you can research. It could be a recommendation algorithm (Spotify, YouTube, TikTok), a hiring tool (HireVue, Pymetrics), a healthcare tool, a generative AI system, a credit-scoring model, or anything else that qualifies as AI. The only rule: it must be a real, currently deployed system, not a fictional one.
2. Write an initial profile (200–300 words):
   - What is the system called?
   - What company or organization created it?
   - What task does it perform?
   - Who uses it?
   - Who is affected by its outputs?
3. Apply the FACTS Framework. Answer each of the five FACTS questions to the best of your current ability. It is fine to write "I don't know yet" for some answers — identifying what you don't know is a key part of the process.
4. Reflection question: Based on your initial analysis, what is the single most important question you want to answer about this system by the end of the course?
Deliverable: 1–2 pages. Submit as the first entry in your AI Audit portfolio.
What's Next
In Chapter 2: A Brief History of AI — Booms, Busts, and Breakthroughs, we will trace the arc of artificial intelligence from its optimistic beginnings at the 1956 Dartmouth workshop through the "AI winters" when funding dried up and promises went unfulfilled, to the deep learning revolution that powers today's systems. Understanding where AI has been — including its failures and false starts — is essential for evaluating where it might be going. You will discover that many of today's "revolutionary" ideas are actually decades old, and that the history of AI is as much a story about human ambition, funding cycles, and shifting cultural anxieties as it is about technology.
You will also apply the FACTS Framework to a historical AI system, giving you practice with the tool while building your sense of how the field has evolved.