Case Study 2: Learning to Code by Building
Project-Based Learning in Action
Background
Priya Desai is a 28-year-old marketing analyst who has decided to learn to code. She has no computer science background. Her college degree is in communications. But her job increasingly requires her to work with data — pulling reports, cleaning spreadsheets, running basic analyses — and she's noticed that the colleagues who can write even simple scripts in Python finish in twenty minutes what takes her two hours of manual clicking.
So Priya signs up for an online Python course. It has twelve modules, each with video lectures, reading material, and coding exercises. She starts in January, full of enthusiasm.
(Priya Desai is a composite character based on common patterns in adult self-directed learning — Tier 3, illustrative example.)
By March, Priya is stuck. Not because the course is too hard — she's completed eight of the twelve modules and scored above 90% on every quiz. The exercises ask her to write short code snippets: "Write a function that takes a list of numbers and returns the average." "Write a loop that prints every other item in a list." She can do these. She's proven it.
But when she opens a blank file and tries to write a script to automate one of her actual work tasks — pulling data from three spreadsheets, cleaning it, and producing a summary report — she has no idea where to start. The gap between "write a function that returns the average" and "build something useful from scratch" feels like a canyon.
Priya's problem isn't a lack of knowledge. She knows variables, loops, functions, conditionals, lists, and dictionaries. She's passed every quiz. The problem is that her knowledge is fragmented — each concept exists as an isolated skill in her memory, disconnected from the others and from any real-world context. She's never had to decide which tool to use, let alone how to combine them.
This is the gap between exercise-based learning and project-based learning. And it's one of the most common frustrations in self-directed skill development.
The Diagnosis: Missing Kolb's Full Cycle
Let's analyze Priya's learning through the lens of this chapter's frameworks.
Kolb's cycle: Priya has been doing Phase 1 (concrete experience — completing exercises) and Phase 4 (active experimentation — trying different code snippets). But she's been skipping Phase 2 (reflective observation — stepping back to think about what she's doing and why) and Phase 3 (abstract conceptualization — building mental models about how the pieces fit together). The exercises don't require reflection because they have single correct answers. The course doesn't ask her to conceptualize broader principles because each module covers one topic in isolation.
Practice level: Priya is doing purposeful practice — she has goals (complete the module), focuses her attention (on the current exercise), and pushes beyond her comfort zone (each new concept is genuinely new). But she's missing the expert feedback and real-world application that would make it deliberate practice. The autograder tells her whether her code runs correctly, but it doesn't tell her whether her approach was efficient, whether her code is readable, or whether she's developing good problem-solving habits.
Transfer: This connects directly to Chapter 11. Priya has near-transfer skills — she can write a loop that looks like the example loop she just studied. But she has almost no far-transfer capability — she can't take the abstract concept of "iteration" and apply it to a novel problem in a different context. Her knowledge is bound to the surface features of the exercises rather than the structural principles underneath.
The Intervention: Building a Real Project
In April, Priya changes her approach. Instead of continuing through modules 9-12, she sets aside the course and decides to build something real: an automated reporting tool for her job.
The project is simple in concept — pull data from three CSV files, clean and merge the data, calculate key metrics, and produce a formatted summary — but complex in execution, because it requires integrating multiple skills simultaneously and making dozens of decisions that no exercise has prepared her for.
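In outline, the tool Priya is attempting might look like the sketch below. This is a minimal illustration using Python's standard csv module; the file layout and the "amount" column name are invented for this example, not taken from the case study.

```python
import csv

def load_rows(path):
    """Read one CSV file into a list of dictionaries, one per row."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def clean(rows):
    """Drop rows with a blank 'amount' and convert the rest to numbers."""
    cleaned = []
    for row in rows:
        if (row.get("amount") or "").strip():
            row["amount"] = float(row["amount"])
            cleaned.append(row)
    return cleaned

def summarize(rows):
    """Calculate simple key metrics over the cleaned, merged rows."""
    total = sum(row["amount"] for row in rows)
    return {"rows": len(rows), "total": total,
            "average": total / len(rows) if rows else 0.0}

def report(paths):
    """Pull data from several CSV files, clean and merge, summarize."""
    merged = []
    for path in paths:
        merged.extend(clean(load_rows(path)))
    s = summarize(merged)
    return f"{s['rows']} rows, total {s['total']:.2f}, average {s['average']:.2f}"
```

Even at this toy scale, the decisions no exercise prepared her for are visible: what counts as a "bad" row, where cleaning ends and analysis begins, and what the output should actually look like.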
Week 1: The Struggle
Priya opens a blank Python file and stares at it. For the first time in her learning journey, nobody is telling her what to write. There's no starter code, no function signature to fill in, no expected output to match. She has to make decisions: What should her program do first? How should she structure it? Should she use functions or write everything in sequence?
She writes thirty lines of code. It doesn't work. She rewrites it. It works but produces wrong numbers. She finds a bug — she was merging the spreadsheets on the wrong column. She fixes it. Now the numbers are right, but the output is a mess of raw data instead of a formatted report.
At the end of week 1, Priya has a working (if ugly) script. It took her twelve hours to build something a skilled Python developer could write in one. She feels frustrated and behind.
But something interesting has happened. In the process of building this script, she has learned more about how Python actually works — not as a collection of isolated concepts, but as an integrated system — than she learned in eight modules of the online course. Specific discoveries:
- She now understands why functions exist (because writing the same data-cleaning logic three times was painful and error-prone)
- She now understands error handling (because her script crashed when a CSV file had missing values, and she had to figure out how to deal with that)
- She now understands data types in a visceral way (because she spent forty-five minutes debugging an error that turned out to be caused by treating a string as a number)
- She now understands the difference between "working" and "good" code (because her script works but is so tangled that she can barely understand it herself)
None of these are new concepts. She studied all of them in her course. But studying them as isolated topics and encountering them as problems she needed to solve are fundamentally different experiences. The course taught her the facts. The project taught her the understanding.
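None of these discoveries requires exotic code. The missing-values crash, for instance, reduces to a tiny defensive pattern built on try/except, a construct Priya had studied in isolation but only understood once her script died on real data. The function below is illustrative, not Priya's actual code:

```python
def to_number(value, default=None):
    """Convert a spreadsheet cell to a float, tolerating blanks,
    missing values (None), and junk like '1,204' or 'N/A'."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

# The string-vs-number trap that cost Priya forty-five minutes:
# "10" + "20" is "1020" (string concatenation), not 30.
```

Calling `to_number("")` returns None rather than crashing, so downstream code can decide whether to skip the row or substitute a default.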
Weeks 2-3: The Iteration
Priya doesn't just build the tool and move on. She applies the Reflection Loop Protocol from Section 21.6:
What happened? "I built a working script, but it's messy, fragile, and hard to modify. If the spreadsheet format changes even slightly, the whole thing breaks."
What surprised me? "I was surprised by how much time I spent on data cleaning versus actual analysis. Maybe 70% of my code is just handling messy data. The course never mentioned this."
What principle can I extract? "Real-world coding is mostly about handling edge cases and unexpected inputs, not about writing elegant algorithms. I need to think about what can go wrong, not just what should go right."
What will I do differently? "I'm going to rewrite the script using functions for each step. That way, if the spreadsheet format changes, I only have to fix one function instead of hunting through a hundred lines of spaghetti code."
She rewrites. The second version is cleaner. She shows it to a colleague, Raj, who has more coding experience. Raj gives her feedback she never would have gotten from an autograder:
"Your variable names are terrible. What does df2 mean? Three months from now, you won't know what this code does. Name it cleaned_sales_data."
"You're loading all three files at the beginning. What if one of them doesn't exist? Your script will crash with a confusing error. Add a check."
"This section where you calculate the metrics — pull it into its own function. Right now it's buried in the middle of the data cleaning, and it's hard to find."
This is expert feedback — specific, actionable, targeted at her actual weaknesses. Not "your code is wrong" (an autograder can do that) but "your code works, and here's why it's fragile, hard to read, and difficult to maintain." Feedback of this kind is what moves Priya from purposeful practice toward something closer to deliberate practice.
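Raj's three comments translate into small, concrete changes. The sketch below is hypothetical (the names cleaned_sales_data and calculate_metrics simply follow his suggestions; the details are invented):

```python
from pathlib import Path

def load_required_csv(path):
    """Raj's second point: check that an input file exists and fail
    early with a clear message instead of a confusing crash later."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"Required input file is missing: {p}")
    return p.read_text()

def calculate_metrics(cleaned_sales_data):
    """Raj's third point: metrics live in their own function instead
    of being buried in the middle of the data cleaning."""
    total = sum(cleaned_sales_data)
    count = len(cleaned_sales_data)
    return {"total": total, "average": total / count if count else 0.0}

# Raj's first point is visible in the names themselves:
# cleaned_sales_data instead of df2, calculate_metrics instead of an
# anonymous block of arithmetic in the middle of the script.
```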
Week 4: The Expansion
Priya's tool works. She runs it every Monday morning and produces the report that used to take her two hours in about thirty seconds. Her manager notices and asks if she can build similar tools for two other reports.
Now something remarkable happens: the second tool takes her five hours instead of twelve. The third takes three. She's not just getting faster — she's building reusable components. The data-cleaning functions she wrote for the first tool work, with minor modifications, for the second. The reporting format she developed transfers to the third.
This is transfer in action — the very concept she would have encountered in Chapter 11. Her knowledge is no longer bound to the surface features of individual exercises. She's extracted structural principles (how to structure a data pipeline, how to write modular functions, how to handle errors) that apply across projects.
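"Reusable components" can be made concrete: cleaning logic written for the first tool becomes reusable the moment its hard-coded assumptions (which column holds the value) become parameters. The names below are illustrative, not from the case study:

```python
def clean_rows(rows, value_column):
    """Generic version of a data-cleaning step: keep only rows whose
    value_column parses as a number, converting it in place."""
    cleaned = []
    for row in rows:
        try:
            row[value_column] = float(row[value_column])
        except (TypeError, ValueError):
            continue  # skip rows with a blank or malformed value
        cleaned.append(row)
    return cleaned

# Tool 1 cleans sales figures; tool 2 reuses the same function for hours.
sales = clean_rows([{"amount": "10"}, {"amount": ""}], "amount")
hours = clean_rows([{"hours": "7.5"}, {"hours": "n/a"}], "hours")
```

The function no longer knows it is about sales data at all; that indifference to surface features is exactly what lets it transfer.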
Analysis: Why Project-Based Learning Transformed Priya's Skill
1. The project forced integration. Exercises test one concept at a time. Projects require combining concepts simultaneously. This integration is the cognitive work that builds deep, flexible understanding. Priya didn't just learn about functions — she learned why functions exist, because she experienced the pain of not using them.
2. The project generated authentic problems. Nobody had to manufacture a reason for Priya to learn error handling. The first time her script crashed on missing data, she had a genuine, urgent need to understand try/except blocks. The motivation was intrinsic — she wanted her tool to work — rather than extrinsic (passing a quiz). This connects to Chapter 17's discussion of intrinsic motivation and self-determination theory.
3. The project completed Kolb's cycle. Week 1 was concrete experience. The Reflection Loop was reflective observation and abstract conceptualization. Weeks 2-4 were active experimentation informed by genuine insight. The cycle repeated naturally because the project kept generating new challenges.
4. Expert feedback elevated the practice. Raj's code review provided feedback that no autograder could offer — feedback on code quality, readability, and maintainability rather than just correctness. This is the difference between "right answer" feedback and "expert process" feedback. Priya's code worked before the review and after the review — but after the review, it was better in ways she couldn't have identified on her own.
5. Transfer emerged from repeated application. Building three tools instead of one forced Priya to abstract from specific solutions to general patterns. By the third tool, she was thinking in terms of reusable components and common structures — exactly the abstract schemas that Chapter 11 identifies as the foundation of transfer.
6. The "tutorial hell" trap was broken. Many self-directed learners fall into what's colloquially called "tutorial hell" — endlessly consuming tutorials and courses without ever building anything independently. Each new tutorial feels productive (illusion of competence from Chapter 1), but the learner never develops the ability to work without the scaffold. Priya broke this cycle by shifting from consumption to creation.
The Broader Lesson: Beyond Coding
Priya's story is about coding, but the principles apply to any skill where exercises and theory are necessary but insufficient:
In writing: You can study grammar, read about essay structure, and complete fill-in-the-blank exercises. But learning to write requires writing — real essays on real topics, with real feedback from readers who tell you not just what's technically correct but what actually communicates.
In science: You can memorize the steps of the scientific method and ace a quiz about experimental design. But learning to do science requires designing and running experiments, handling the mess of real data that doesn't match your hypothesis, and figuring out what went wrong.
In music: You can practice scales and etudes and pass technique exams. But learning to perform — to interpret a piece, to communicate with an audience, to recover from mistakes in real time — requires performing.
In leadership: You can read books about management and take leadership courses. But learning to lead requires leading — making real decisions, handling real conflicts, receiving real feedback from people who depend on you.
In every case, the pattern is the same: structured knowledge provides the foundation, but building something real provides the understanding. And the learning is maximized when the building is accompanied by reflection (Kolb's cycle), expert feedback (cognitive apprenticeship), and progressive challenge (deliberate practice).
Discussion Questions
1. Priya scored above 90% on every module quiz but couldn't build a simple script from scratch. Using concepts from this chapter and Chapter 10 (desirable difficulties), explain what her quiz scores were actually measuring and why they failed to predict her real-world capability.
2. The phrase "tutorial hell" describes learners who endlessly consume learning content without building anything independently. Using Chapter 1's concept of illusions of competence and Chapter 10's storage strength vs. retrieval strength framework, analyze why tutorial hell feels productive even though it isn't.
3. Raj's feedback focused on code quality (readability, maintainability, error handling) rather than correctness. Why is this type of feedback difficult to get from automated systems? What does this tell us about the role of human experts in deliberate practice?
4. Priya's second tool took five hours instead of twelve, and her third took three. Using Chapter 11's transfer framework, explain what was transferring. Was it near transfer or far transfer? What evidence would you need to distinguish between the two?
5. Design a project-based learning experience for a subject you're currently studying (academic, professional, or personal). What would the project be? What concepts would it force you to integrate? How would you build in reflection and expert feedback?
Your Turn
If you're stuck in "exercise mode" — completing assignments, passing quizzes, but not sure you could actually do the thing you're learning to do — try this project-based pivot:
1. Identify a real problem. Not a textbook problem, but something in your actual life that your learning could solve. It doesn't have to be big. Priya started with one automated report.
2. Build a first version. Accept that it will be ugly, slow, and incomplete. The goal isn't perfection — it's integration. You'll learn more from your messy first attempt than from another ten exercises.
3. Apply the Reflection Loop. After your first version, step back: What happened? What surprised me? What principle can I extract? What will I do differently?
4. Find an expert reviewer. Show your work to someone more skilled and ask for process feedback, not just correctness feedback: "What would you do differently? Where is this fragile? What am I not seeing?"
5. Build version two. Incorporate the feedback and your own reflections. Notice what's easier the second time — that ease is the signature of genuine learning.
This case study connects to: Chapter 1 (illusions of competence), Chapter 7 (retrieval practice — building from scratch is retrieval of integrated knowledge), Chapter 10 (desirable difficulties — the struggle of building from scratch builds storage strength), Chapter 11 (transfer from exercises to projects, surface vs. structural similarity), Chapter 14 (planning a learning project), Chapter 17 (intrinsic motivation and the self-determination theory connection), Chapter 25 (from novice to expert through progressively harder challenges).