
Chapter 26: Learning to Code — The Deliberate Practice Path Through Tutorial Hell


David's laptop has a folder called "ML_Projects." Inside it are fourteen directories.

He knows what's in each one without opening them. There's the neural-net-from-scratch notebook he built alongside a YouTube series — complete, working, and understood at the time. He tried to modify it two weeks later for a different dataset and couldn't figure out where to start. There's the Kaggle Titanic competition notebook he copied from a tutorial: he matched the tutorial's 82% accuracy and never understood why the feature engineering choices were made the way they were. There's the gradient descent implementation he built from the O'Reilly book, which works on the toy example and fails on anything else because he never understood the assumptions well enough to know when they were violated.

There are eleven others in a similar condition.

David is not unintelligent. He has twenty years of experience as a software architect. He can read Python fluently. He understands, at a conceptual level, what a neural network is doing. He is genuinely motivated to learn machine learning.

He is, however, stuck in tutorial hell. And he knows it. The symptoms are specific and diagnostic: he can follow along and produce working code; he cannot build anything new. He can explain what gradient descent is doing conceptually; he cannot build a working implementation on a dataset he has not seen before. He can recognize a correctly structured training loop; he cannot write one from scratch without a reference.

The graveyard of half-finished Jupyter notebooks is not evidence of failure to try. It is evidence that the method of trying is wrong.


Why Tutorial Hell Exists

Tutorial hell is not a personal failing. It is the predictable result of a learning method that feels productive but is not, applied to a domain that requires a specific kind of skill.

Here is the mechanism.

Programming tutorials are, by design, optimized to minimize learner struggle. The instructor knows where the hard parts are, explains them clearly, and writes code that works. The learner follows along, understands each step as it is explained, and produces code that produces correct output. The experience feels like learning. And at one level, it is: you are building recognition of patterns, acquiring vocabulary, developing familiarity with the domain.

But this kind of learning — following a demonstration with comprehension — produces a specific, limited kind of knowledge. It builds the ability to recognize a solution being presented to you. It does not build the ability to generate solutions from problems.

This is the declarative-procedural gap applied to programming. You can have rich declarative knowledge about what gradient descent does and why — knowledge that would let you pass a quiz, follow along with an implementation, and explain it clearly to someone else — while having essentially zero ability to implement it correctly on a new problem from scratch.

The tutorial-following experience is almost entirely declarative. The thing you actually need — the ability to write original code to solve problems you have not seen before — is procedural. And the gap between them is not small. It is vast. And it is non-obvious, because tutorial-following feels like programming.

The Psychology of Tutorial Seduction

Tutorial hell isn't just a technical problem. It's a psychological one, and understanding the psychology is essential for escaping it.

Tutorials are engineered to feel productive. You're actively engaged: pausing the video, typing code, watching the output run. You're making progress through the material. You're keeping up. At the end of each section, the code works, the output matches, the lesson is complete. This produces a continuous stream of small successes that feels remarkably like learning.

What you're actually experiencing is guided discovery with the hard parts removed. The instructor has already figured out what to do. The code structure is already designed. The debugging has already been done. You're following a clearly marked path through territory that has been fully prepared for you, and you are mistaking the ease of the path for skill.

Video tutorials are particularly dangerous for this reason, and it's worth being specific about why. With a written tutorial, you at least have to read and mentally process each step. With a video, you can watch passively with a background sense of comprehension, letting the instructor carry you through every decision point without your brain ever having to work hard. The sense of understanding is real; the actual learning is minimal.

There's also the comfort gradient to contend with. When you're following a tutorial, you know — consciously or not — that success is guaranteed if you keep following. The tutorial exists to produce working code, and you will produce working code if you follow it. This is the opposite of what building original software feels like. Building original software involves sustained uncertainty, decisions with no clear right answer, code that doesn't work and error messages that aren't helpful, and extended periods of not knowing what to do next. Tutorials completely avoid this experience. So every time you hit the discomfort of not knowing where to start on something original, the tutorial represents relief from that discomfort. And relief feels good. And so you open another tutorial.

This isn't weakness. This is a normal human response to the structure of the rewards. The solution is not willpower. The solution is redesigning the practice so that the comfortable path leads through struggle rather than around it.


The Knowledge-Execution Gap

Here is the specific cognitive phenomenon at the heart of tutorial hell, stated plainly:

Reading a correct solution and understanding it is not the same skill as generating a correct solution from a problem statement.

These feel similar from the inside because both involve "understanding the code." But they are genuinely different cognitive operations, supported by different kinds of knowledge, and one of them transfers to real programming while the other mostly does not.

When you read a gradient descent implementation and understand it, you are using recognition memory: you encounter a pattern, your brain matches it to stored representations, and you experience understanding. This is a real cognitive event. It just does not build the neural structures for generation.

When you implement gradient descent from scratch on a problem you have not seen before, you are using generative recall: you have to activate your knowledge of what gradient descent needs to do, translate that into an algorithm, translate that algorithm into code, execute it mentally to check whether it does what you intend, and debug it when it does not. This process — messy, slow, error-prone — is what builds the knowledge that transfers.
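To make the distinction concrete, this is the kind of thing generative recall has to produce: a bare-bones gradient descent for linear regression, written here as an illustrative sketch only (fixed learning rate, no convergence check, and it assumes X already carries a bias column and reasonably scaled features).

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=500):
    """Illustrative sketch: gradient descent for linear regression."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        residual = X @ w - y                 # current prediction error
        grad = (2 / n) * (X.T @ residual)    # gradient of mean squared error
        w -= lr * grad                       # step against the gradient
    return w

# Recover y = 1 + 2x from five points:
X = np.c_[np.ones(5), np.arange(5.0)]
w = gradient_descent(X, X @ np.array([1.0, 2.0]))
```

Being able to read this and nod is recognition. Being able to produce it, unaided, on a dataset you have not seen before is generation.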

The generation effect (covered in Chapter 7) is relevant here with full force: generating information yourself produces dramatically better retention and understanding than receiving the same information from an external source. In programming, the generation event is writing original code — not copying code, not filling in blanks, not running someone else's implementation. Writing your own, from the problem statement, with no help for as long as you can manage it.

David had been almost entirely on the receiving end of programming knowledge for four years of ML study. He had read many explanations, run a lot of provided code, followed along with many implementations. He had generated almost nothing from scratch. And his inability to build his own ML projects was the direct consequence.


The Deconstruction Method

There's a middle ground between "follow a tutorial step by step" and "build something entirely from scratch with no guidance" that is underused and remarkably effective. Call it the deconstruction method.

Here's the approach. Find a small, working project in your domain — a hundred to three hundred lines, doing something real, written reasonably well. A small web scraper. A data processing pipeline. A machine learning model trained on a simple dataset. Clone it. Study it until you understand every line. Then delete all the logic, leaving only the structure and comments. And rebuild it from scratch, from memory and your own understanding, without looking at the original.

The deletion-and-rebuild step is where the learning happens. When you read a complete implementation, you get the comfortable sense of understanding each part as you encounter it. When you try to rebuild it from an empty file, you immediately discover which parts you actually understood (you can reconstruct them) and which parts you followed along with without internalizing (you can't reconstruct them and don't know where to start).
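Concretely, the post-deletion skeleton of a small project might look like this. The example is a hypothetical web scraper; every name below is illustrative, not taken from any particular codebase.

```python
# Skeleton left after deleting the logic from a small (hypothetical) scraper.
# Each comment marks a piece you must now regenerate from understanding alone.

def fetch_page(url):
    # TODO: request the page, handle the non-200 case, return the HTML
    ...

def extract_items(html):
    # TODO: parse the HTML and return a list of (title, link) pairs
    ...

def save_items(items, path):
    # TODO: write the items out as CSV, one row per item
    ...

def main():
    # TODO: wire the three stages together for a list of start URLs
    ...
```

Any function you cannot refill from memory is a precisely located gap, which is exactly what the next study session should target.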

Each part you can't reconstruct is a precise location of a learning gap. Not a vague sense of "I need to understand this better" — a specific place where your understanding broke down. That precision is valuable. You know exactly what to study.

After the rebuild attempt — successful or not — compare your version to the original line by line. What did you do the same? What did you do differently? Was your different approach equally valid, or did you miss something the original handled? What does the original do that you didn't think to do?

This comparison step often produces more learning than the rebuild itself. You're looking at the gap between how you understood the problem and how an experienced developer understood it. Those gaps are a map of what to develop next.

The deconstruction method works because it turns reading code — a mostly passive activity — into an active generation task. It forces the transition from recognition to production in a context where you have a working reference to check against and learn from.


Deliberate Practice for Programmers

If tutorial-following is the wrong approach, what is the right one? The answer is the same as in every other skill domain: deliberate practice — working at the edge of your current ability, on specific sub-skills, with clear goals and immediate feedback.

But deliberate practice looks different for programming than it does for swimming or playing a musical instrument. The sub-skills are different, the feedback mechanisms are different, and so is the challenge of identifying exactly where your edge is.

Identifying Your Specific Sub-Skills

Programming is not a single skill. It is a cluster of related skills that develop somewhat independently:

Problem decomposition: Looking at a complex problem and figuring out how to break it into smaller pieces that can be solved independently.

Algorithm design: Knowing what algorithmic approaches exist, when to apply which one, and how to adapt general approaches to specific problems.

Implementation: Translating an algorithm into working code in a specific language.

Debugging: Diagnosing why code is not doing what you intended and fixing the cause rather than the symptom.

Reading code: Understanding what existing code does, including code you did not write and code in unfamiliar styles.

System design: Making good architectural decisions about how to structure code for maintainability, extensibility, and correctness.

API fluency: Knowing what tools, libraries, and functions exist and how to use them effectively.

David's weakest sub-skills were problem decomposition and algorithm design in ML contexts. He was actually quite good at implementation once he understood what algorithm he was implementing. His debugging was decent. His system design was excellent — that was his day job.

The deliberate practice principle: identify the specific sub-skill that is your current limiting factor, and design practice that targets it specifically. Do not work on everything at once. Work on the thing that matters most right now.


Error Messages as Teachers

Here's a diagnostic that cleanly separates beginners from developers who are actually learning: beginners fear error messages. Developers who are learning well love them.

This isn't a personality difference. It's a mindset difference that can be deliberately cultivated, and cultivating it changes the entire experience of learning to code.

Beginner mindset: an error message means something went wrong. You failed. The code is broken. The error message is an obstacle between you and a working program, and the goal is to make it go away as quickly as possible, ideally by undoing whatever you just did.

Learning-oriented mindset: an error message is free, immediate, precise information. The computer just told you, for free, that something specific is wrong, often pointing you to the exact line and the exact nature of the problem. This is incredibly valuable feedback compared to most learning environments, where feedback is delayed, expensive, or vague. A good error message is a gift.
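As a concrete illustration (a hypothetical function, not one of the chapter's projects), notice how much a single TypeError hands you for free:

```python
def total_price(prices):
    """Sum a list of price strings like "3.50"."""
    return sum(float(p) for p in prices)

try:
    total_price(["3.50", None, "2.00"])
except TypeError as exc:
    # The message names the failing operation and the offending type,
    # e.g. "float() argument must be a string or a real number, not
    # 'NoneType'" (exact wording varies by Python version), and the
    # traceback points at the exact line. Precise facts, delivered instantly.
    print(f"{type(exc).__name__}: {exc}")
```

Compare that to a math course, where finding out which step of your proof is wrong can take a week. The compiler's complaint is the fastest feedback loop most learners will ever have.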

[Evidence: Moderate] The research on productive failure — encountering problems and errors before receiving instruction — shows that structured struggle produces deeper understanding than smooth, error-free instruction. Errors aren't disruptions to learning. For programming specifically, they're central to it.

The Debugging Mindset as Learning Stance

There is a specific way of approaching bugs that is optimally set up for learning, and it looks like this.

When your code doesn't work, before touching anything, write down your hypothesis. Not "the code is broken" — a specific hypothesis: "I believe the error on line 47 is occurring because I'm passing an integer where the function expects a string, and the conversion is happening later in the pipeline than I assumed." Then investigate: what evidence supports or refutes this hypothesis? What additional information would help you determine whether you're right?

This is hypothesis-driven debugging, and it's both better at actually fixing bugs and far better as a learning practice than the alternative — trial-and-error modification where you change things until the error disappears.

Trial-and-error fixing produces fragile, poorly understood code and no learning about why the error occurred. Hypothesis-driven debugging builds your mental model of how the language, the libraries, and the specific system you're working with actually behave. Over time, your hypotheses become more accurate. Your mental model of the system improves. You get faster and better at debugging, and your code improves because you understand its behavior.

The specific protocol: when you hit an error, before searching for a solution, spend five minutes trying to predict exactly what the error means and where in the logic it originates. Write it down. Then investigate. Even if you're wrong, this exercise builds the causal reasoning about code behavior that distinguishes experienced developers from perpetual beginners.
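Here is a worked miniature of the protocol, using a made-up bug (the data and names are hypothetical):

```python
# Symptom: age-based filtering behaves oddly downstream.
row = {"name": "Ada", "age": "9"}   # loaded from a CSV

# Hypothesis, written down BEFORE changing anything: the CSV loader
# returns ages as strings, so comparisons are lexicographic and
# "9" > "10" evaluates as True.

# Investigate the hypothesis with targeted checks, not blind edits:
assert isinstance(row["age"], str)   # evidence: it really is a string
assert ("9" > "10") is True          # evidence: lexicographic ordering

# Fix the cause at the boundary, not the symptom downstream:
age = int(row["age"])
assert (age > 10) is False
```

The hypothesis turned out to be right here, but even a wrong hypothesis pays off: checking it teaches you something true about how the system behaves.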


Building Projects You Actually Care About

The project-based learning literature is clear on one thing: motivation is not a nice-to-have in project learning. It's load-bearing.

When you're following a tutorial, motivation barely matters. The tutorial structure carries you through whether you care about the subject or not. But when you're building something original — making decisions, solving problems without a guide, pushing through the inevitable stuck points — caring about what you're building is often the only thing that keeps you going.

[Evidence: Moderate] Self-determination theory research shows that intrinsic motivation — doing something because it's genuinely interesting or meaningful to you — produces dramatically better persistence and deeper learning than extrinsic motivation. Projects you genuinely care about produce this kind of intrinsic motivation. Tutorial exercises almost never do.

The practical implication: when choosing projects to escape tutorial hell, optimize for things you actually want to exist. Not the project that seems most instructive. Not the project that's recommended in a curriculum. The project that, if it worked, would genuinely delight you.

David wanted an ML model that could predict which of his current tickets were likely to become blockers based on characteristics of past tickets. This was not the canonical beginner ML project. It was his actual problem. He cared about it. When the feature engineering didn't work and he had to figure out why, he didn't quit — because he wanted the thing to work. The caring is what made the struggling worthwhile.

Choosing Projects in Your Zone of Proximal Development

The right project difficulty is the same concept as i+1 in language learning: above your current ability, but not so far above that it becomes paralyzing. A project that requires exactly the skills you already have produces comfort but not growth. A project that requires you to figure out five completely unfamiliar things simultaneously produces paralysis.

The sweet spot is a project that mostly makes sense to you — you can sketch the architecture, you can identify the main components, you understand what inputs go in and outputs come out — but that requires you to figure out several specific things you haven't done before. One or two unknowns is ideal. Five is too many.

If you can't figure out your Zone of Proximal Development from self-assessment alone, the deconstruction method helps. Build something small in an area adjacent to what you want to build. If you can rebuild it comfortably, you're ready for the next level. If you can't, you've found your current edge.


The Hello World to Real Project Gap

The biggest obstacle between "I have done some tutorials" and "I can build things" is the gap between structured exercises with clear answers and real projects with real ambiguity.

Tutorial exercises have answer keys. Real projects do not. Tutorial exercises have clearly defined requirements. Real projects have vague requirements that change. Tutorial exercises end when the code works. Real projects have maintenance, debugging, and evolution.

Most learners try to jump this gap by picking a big, interesting project and building it. They immediately encounter ambiguity at every level — what exactly should this do? how should it be structured? what libraries should I use? — and get stuck. The project is abandoned.

The issue is not that projects are the wrong approach — projects are the right approach. The issue is the size of the jump. You need scaffolding between the tutorial level and the real project level.

Bridging the Gap Deliberately

Stage 1: Tutorial completion, then solo reconstruction. When you finish a tutorial section, close the tutorial completely and implement what you just learned on a different problem from scratch. Not a variation of the tutorial's problem — a genuinely different application of the same concept. Set a timer for twenty to thirty minutes and work without assistance. Then compare what you built to the tutorial's approach.

This is the minimum viable escape from tutorial hell. Even this small change — close the tutorial and rebuild on a fresh problem — immediately reveals what you actually understood versus what you merely followed along with.

Stage 2: Guided projects with deliberate constraints. Build small projects — fifty to two hundred lines — where you have deliberately restricted your use of references. The constraint: you can look up syntax and documentation, but not solutions to the specific problem you are solving. This forces you to think about the problem yourself while allowing you to get unstuck on implementation details.

Stage 3: Real projects with real ambiguity. Projects where you do not fully know what you are building at the start, where requirements are unclear, where there is no correct answer to look up. These are the hardest, most uncomfortable, and most valuable learning experiences available.

The key at every stage is the period of independent struggle before consulting references or assistance. That struggle period — the frustrating, uncertain, often unproductive-feeling time when you are trying to figure it out yourself — is when the deepest learning happens.


Using AI Coding Assistants Without Short-Circuiting Your Learning

This is the most current and pressing challenge facing programming learners, so it deserves direct, expanded treatment. AI coding assistants are genuine game changers for professional developers. For learners, they are simultaneously the most useful tool available and the most dangerous trap in the ecosystem.

The danger is precise. If you ask AI to write code that you then read and understand, you have done exactly what tutorial-following does. You have activated recognition ("I see how this works") without practicing generation ("I need to figure out how to build this"). The code is generated for you; the struggle of figuring out how to generate it is bypassed; and that struggle is the learning event.

[Evidence: Preliminary for the AI-specific context; Strong for the underlying generation effect principle] The generation effect — generating information yourself produces far better retention than receiving it — applies directly. Generating your own solution, even an incorrect one, before consulting AI produces substantially more learning than consulting AI first.

What Actually Happens When You Ask AI to Write Your Code

Here's the specific sequence to understand. You encounter a coding problem. You feel the discomfort of not knowing where to start. You ask AI for the code. AI produces fluent, working code in seconds. You read it. It makes sense — you can follow the logic, you understand what each part does. You feel like you've learned something. You move on.

Three days later, you encounter a similar problem. The AI-produced code is not in your memory — it was in the chat window, not in your head. You feel just as stuck as before. You ask AI again. The cycle repeats.

You are not getting more capable. You are getting better at asking AI for code. Those are different things, and only one of them is durable.

Now here's the contrast. You encounter a coding problem. You feel the discomfort of not knowing where to start. You spend twenty minutes genuinely trying. Your attempt is incomplete and has bugs. You ask AI: "I tried to implement X this way, and I got stuck at Y. What am I missing?" AI explains the specific gap in your approach. You fix it yourself, using the explanation. Your brain has now worked through the problem, generated a solution, encountered an obstacle, received targeted feedback, and revised. That sequence produces retention and generalization.

The Protocol: Generate First, Then Consult

Step 1: Attempt the problem yourself. Set a timer for twenty to thirty minutes. Work on the problem with no AI assistance. Write code. Make it wrong. Debug it. Look up syntax and documentation — not solutions. Generate your best attempt.

Step 2: If genuinely stuck, consult AI as a question-and-answer tool, not a solution generator. Instead of "write code that implements X," ask: "I am trying to implement X. I have tried approach Y and it is failing because Z. What am I missing conceptually?" You are asking for understanding, not for the answer.

Step 3: Understand any AI-generated code before using it. Read every line. If you cannot explain every line aloud, you do not understand it. Better: close the AI output, write your own implementation based on your new understanding, then compare to what AI generated.

Step 4: Test your understanding. After any AI-assisted learning, close the conversation and try to implement the same functionality from memory. If you cannot, you read it but did not learn it.

The Specific Danger: Understanding Code You Don't Actually Understand

There's a particular failure mode with AI-generated code that's worth naming explicitly. AI produces code that is well-structured, well-commented, and internally consistent. This means it's easy to read and feel like you understand it, even when you don't.

Reading comprehension and actual understanding are different things in code, just as they are in language. You can read a Python function using list comprehensions and dictionary unpacking, follow the logic, see that it produces the right output, and feel like you've understood it — while having no ability whatsoever to write a similar function yourself because you haven't internalized the underlying concepts.
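For instance, a reader can follow a function like this one (a hypothetical example), verify its output, and still be unable to produce anything similar cold:

```python
def summarize(records, **overrides):
    # Dict comprehension: one entry per named record with a usable score.
    summary = {r["name"]: r["score"] for r in records if r["score"] is not None}
    # Dict unpacking: caller-supplied overrides win over computed values.
    return {**summary, **overrides}

result = summarize(
    [{"name": "a", "score": 1}, {"name": "b", "score": None}],
    b=0,
)
# result == {"a": 1, "b": 0}
```

The chapter's test applies exactly here: close the file and rewrite summarize from the problem statement alone. If you reach for the original within a minute, you read it without learning it.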

The test is always: close the code and reproduce it. Or better: close the code and solve a similar problem from scratch. If you can't do either, you have read the code without learning from it.

Productive Uses of AI for Programming Learning

As a Socratic tutor: "Ask me questions about my approach to this problem instead of giving me the answer." AI that interrogates your thinking rather than replaces it produces learning.

As a code reviewer: "Here is my implementation. What are the problems with it, and what would an experienced developer do differently?" This is critique-then-learn, which is much more effective than solution-then-copy.

For explaining code you have already written: "Explain this code to me step by step and identify any potential issues or edge cases I might have missed." AI reviewing your code — not writing it — is a genuinely good use case.

For generating practice problems: "Generate five practice problems at intermediate difficulty testing dynamic programming in Python. Give me the problem statements now and solutions only when I ask." AI as a source of deliberate practice material is excellent and underused.

For explaining error messages: "Here is the error message I'm getting and the relevant code. Don't fix the code — explain what this error means and why this type of error occurs, so I understand it better." This builds debugging knowledge rather than just resolving the immediate problem.

The fundamental principle: use AI to challenge and extend your thinking, not to replace it. Every time AI writes code you could have written yourself, it has taken a learning opportunity from you.


The Programmer's Learning Plateau

David's problem wasn't just that he was stuck in tutorial hell. It was that even after escaping it — even after he could build things — he stagnated at a certain level and stopped growing.

This is the programmer's learning plateau, and it affects intermediate developers more than beginners. Beginners know they're beginners. Intermediate developers know they can build things and often stop actively trying to improve, because the capability they have is sufficient for most of what they need to do. They become comfortable.

Comfortable is where growth stops.

[Evidence: Moderate] Research on expert development in programming-adjacent cognitive skills consistently shows that experts who stop deliberately stretching their abilities plateau at a functional but sub-expert level. Continued development requires continued deliberate practice — not just continued practice.

What Intermediate Programmers Do Wrong

The pattern is consistent. An intermediate programmer develops a comfortable set of approaches — the libraries they know, the patterns they like, the types of problems they solve well — and gravitates toward work that fits those approaches. Their execution within this comfort zone improves marginally. Their overall capability stops expanding.

They avoid the parts of programming that feel hard: unfamiliar algorithmic territory, unfamiliar codebases, code review that exposes gaps, design decisions on systems they haven't built before. These uncomfortable areas are exactly where growth lives, and the avoidance pattern keeps them from getting to it.

Identifying What Sub-Skills Are Holding You Back

The diagnostic question is: where in my work do I feel genuinely uncertain? Not "I need to look up the syntax for this" uncertain — that's normal. But "I don't know how to approach this class of problem" uncertain, or "I often produce solutions that work but that more experienced developers point out are structurally wrong" uncertain.

These uncertainty zones are your development frontier. They're not something to route around. They're what to practice.

For David, the frontier was ML problem formulation — not implementing known algorithms but figuring out which algorithm approach suited a given problem. He had been routing around this uncertainty by always starting with tutorials that told him what approach to use. His deliberate practice started with the one step before implementation: taking a business problem and figuring out the ML framing himself, before looking at how others had approached similar problems.

Deliberate Practice at the Intermediate Level

At the intermediate level, deliberate practice often looks like going into environments that provide expert feedback: code review from more experienced developers, open source contribution to well-maintained projects, code challenges where you can see others' solutions after submitting yours, pairing with developers more experienced than you.

The feedback from an experienced developer looking at your code — "this works, but this is how I'd approach it structurally and here's why" — is exactly the kind of high-quality feedback that deliberate practice requires. It's hard to manufacture this feedback. You mostly have to go find environments where it occurs naturally.


Learning Different Languages and Paradigms

One of the most effective ways to deepen your programming mental models is to learn a second programming language — and particularly to learn one that uses a different programming paradigm.

This might seem counterintuitive. If you're trying to get better at Python for ML, why spend time learning Haskell? The answer is about what learning a second paradigm teaches you about the first one.

[Evidence: Moderate] Research on second language learning (the linguistic kind, covered in Chapter 25) shows that learning a second language deepens your understanding of your first, because it makes visible the assumptions that were invisible when the first language was all you knew. The same mechanism operates in programming languages and paradigms.

When you know only object-oriented programming, you don't think about the fact that you're thinking object-orientally. Object-oriented thinking seems like the natural way code works. When you learn a functional language, you discover that there's another coherent way to structure programs — where you avoid mutable state, pass functions as arguments, and compose operations rather than inheriting behavior. This forces you to examine the assumptions you'd been making without knowing it.

The specific benefits: learning a functional language like Haskell or Elixir makes you write better object-oriented code, because you become aware of which state is necessary and which is incidental, and you start minimizing the latter. Learning a logic language like Prolog changes how you think about problem decomposition. Learning SQL's declarative approach sharpens your sense of when imperative thinking and when declarative thinking is the more natural fit for a given problem.
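The shift is visible even without leaving Python. Both snippets below compute the same sum of squared odd numbers; the first mutates an accumulator, the second composes stateless transformations:

```python
from functools import reduce

data = [3, 1, 4, 1, 5]

# Imperative / OO habit: mutate an accumulator step by step.
total = 0
for x in data:
    if x % 2 == 1:
        total += x * x

# Functional habit: no mutable state, compose transformations.
total_fp = reduce(lambda acc, x: acc + x,
                  map(lambda x: x * x,
                      filter(lambda x: x % 2 == 1, data)),
                  0)
```

Neither version is "correct"; the point is that once you can write both fluently, every loop you write becomes a choice rather than a reflex.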

You don't have to become expert in every paradigm. The benefit comes from genuine engagement with the core ideas of the new paradigm — enough that you start thinking in it, even a little. Two to three months of serious engagement with a new paradigm is often enough to produce durable improvements in how you use the paradigms you already know.

The second language also accelerates dramatically compared to the first. David found that learning a new language after twenty years of programming took weeks, not months, to reach productive competence — because the concepts transferred. He wasn't learning what a loop is. He was learning how this language expresses looping. The underlying concepts were already internalized. Only the syntax and idioms were new.


Building in Public and Learning in Public

One underused strategy for accelerating programming learning: making your work visible.

Code that only you see gets evaluated only by you. Code that others see gets evaluated by others — which means feedback you could not generate yourself, from perspectives you do not have.

Code review. Participating in code review — giving and receiving — is among the most effective professional learning activities in programming. When you review someone else's code, you have to understand it well enough to evaluate its quality. When your code is reviewed, you get specific feedback on specific decisions from people who might approach the problem differently.

You do not need to be in a professional context to get code review. GitHub allows anyone to submit pull requests to open-source projects, and many maintainers give substantive feedback. Online communities — certain subreddits, Discord servers, forums like Code Review on Stack Exchange — provide structured feedback on code you post.

Open source contribution. Contributing to open source projects — even small contributions like fixing documentation, reproducing bugs, or making minor code improvements — puts you in a feedback-rich environment with real codebases and real review from maintainers who care about code quality.

The learning that happens from having an experienced maintainer point out that your proposed change does not handle an edge case, or that there is a more idiomatic approach, is qualitatively different from anything a tutorial can provide. It is situated, specific, and connected to real code that real people use.

Writing about what you are learning. Explaining programming concepts in writing forces a level of clarity that passive learning does not. The Feynman technique applies directly: if you cannot explain it simply, you do not understand it yet.


Reading Code as a Learning Strategy

One of the most undervalued programming learning techniques is also one of the simplest: read other people's code.

Writing code gets most of the attention in programming education. Reading code is treated as something you do when you have to understand existing code — not as a deliberate learning activity. This framing is backwards.

Expert programmers read code constantly. They read library source code when they want to understand what a function actually does. They read open-source projects to see how experienced developers structure systems. They read colleagues' code in review. Reading good code does for programming intuition what reading good writing does for writing skill: it exposes you to patterns, idioms, and approaches you would not generate yourself.

How to read code actively:

Predict before you read. Look at a function signature and docstring, then predict what the function does before reading the body. Compare your prediction to the implementation. The gap between your prediction and the actual code is where learning happens.

Explain as you read. Read each block of code and explain to yourself what it is doing and why. If you cannot explain a line, look it up, understand it, and continue.

Identify patterns. What idioms and patterns does this code use that you have not seen before? Collect these. They expand your programming vocabulary.
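The predict-before-you-read loop works well on standard library source, since Python ships the source of its pure-Python modules. This sketch uses `textwrap.dedent` as the target, but any small function you use without ever having read it works:

```python
import inspect
import textwrap

# Step 1: read only the signature and the first docstring line.
print(inspect.signature(textwrap.dedent))
print(textwrap.dedent.__doc__.splitlines()[0])

# Step 2: write down your prediction of the implementation,
# THEN read the real body and compare.
print(inspect.getsource(textwrap.dedent))
```

The gap between what you predicted and what `dedent` actually does, for example how it treats lines that are entirely whitespace, is where the learning happens.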

Where to find code worth reading: Library source code for tools you use regularly. Small, well-regarded open-source projects under a thousand lines on GitHub. The GitHub Trending section in your target language. Production-quality codebases from companies that have published engineering blogs alongside open-source work.


The Mental Models of Programming

What distinguishes expert programmers from novices is not primarily speed or knowledge of more syntax. It is the mental models they use while programming.

Mental execution. Expert programmers can trace code execution in their minds — following the state of variables, predicting the output of functions, seeing which branch control flow will take — without running the code. This mental simulation ability is developed through the deliberate practice of tracing code by hand before running it, predicting outputs before checking them, and building detailed mental representations of how programs behave.

When you encounter a bug, mental execution lets you reason about what the program is doing versus what you intended it to do. This is the basis of expert debugging: diagnosing the cause rather than fixing symptoms.
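A concrete way to practice: write or find a short snippet, predict its final state on paper, then run it. This example is a toy (the Collatz step function), but the discipline is the point:

```python
# Before running this, trace it by hand: what is `history` at the end?
def step(x):
    return x // 2 if x % 2 == 0 else 3 * x + 1

history = []
n = 6
while n != 1:
    history.append(n)   # n is recorded before it is updated
    n = step(n)

print(history)  # [6, 3, 10, 5, 16, 8, 4, 2]
```

If your prediction disagrees with the output, the disagreement tells you exactly which rule of the language (integer division, loop exit timing, append-before-update ordering) your mental model has wrong.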

Abstraction. Expert programmers think at multiple levels of abstraction simultaneously. They can zoom out to the system architecture, zoom in to the specific function, and move between levels fluidly. Novices often get stuck at one level — either too abstract ("I need to process the data somehow") or too concrete ("what does this specific function do?").

Debugging as hypothesis testing. Expert programmers approach bugs as scientists: form a hypothesis about what is wrong, design an investigation that would confirm or refute the hypothesis, and update based on results. Novices approach bugs as technicians: change something and see if the error goes away.

The explicit practice: when you encounter a bug, before touching the code, write down your hypothesis. "I believe the issue is X in location Y because Z." Then investigate. Over months of this practice, your hypotheses become more accurate and your debugging becomes dramatically faster.
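Here is the practice applied to a small, hypothetical bug. The function and its fix are illustrative; what matters is that the hypothesis is written down and tested before the code is changed:

```python
def moving_sum(xs, k):
    # Intended: the sum of every length-k window of xs.
    return [sum(xs[i:i + k]) for i in range(len(xs) - k)]

# Symptom: moving_sum([1, 2, 3, 4], 2) returns [3, 5], but three
# windows of length 2 exist.
#
# Hypothesis, written before touching the code:
#   "I believe the issue is the range bound in moving_sum, because the
#    output is exactly one window short on every input I have tried."
#
# Investigation: change only the bound and re-test.
def moving_sum_fixed(xs, k):
    return [sum(xs[i:i + k]) for i in range(len(xs) - k + 1)]

print(moving_sum_fixed([1, 2, 3, 4], 2))  # [3, 5, 7]
```

If the fixed version still came up short, the hypothesis would be refuted, and that refutation would itself narrow the search, which is exactly what "change something and see" never does.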


David Breaks Out of Tutorial Hell

The change David made was not adding more practice time. He did not have more time. He changed the structure of the time he already spent.

He stopped opening new tutorials until he had demonstrated to himself that he could implement the concepts from the previous one. The test was simple: close the tutorial, open a blank notebook, implement the same concept on a different problem. If he could not do it in thirty minutes of genuine effort, he was not ready to move on.

The first time he tried this with gradient descent, he could not do it. He had thought he understood the tutorial completely. He found out he had understood the tutorial's explanation of gradient descent — he had not built an independent understanding he could execute on his own.

He went back. Read the chapter again, but this time wrote his own notes rather than just reading. Then tried to implement it again. This time he got about seventy percent of the way through before getting stuck on the learning rate update. The error messages were more diagnostic than before. He knew enough to ask a more specific question.

Three weeks after starting this approach, he implemented a working gradient descent from scratch on a dataset from his own work. It took him four hours. He made seven mistakes he caught and fixed himself, and two he had to look up. The implementation worked.

That working implementation had cost him more time and frustration than any tutorial he had ever followed. It had also produced more genuine learning than the previous four years of tutorial-following combined.
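For scale, a from-scratch gradient descent of the kind David's test demands fits in a short file. This sketch fits a one-variable linear model to toy data; it is an illustration of the exercise, not his code:

```python
# Gradient descent for y ≈ w*x + b, minimizing mean squared error.
def fit(xs, ys, lr=0.05, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Analytic gradients of MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # generated by y = 2x + 1
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))  # approximately 2.0 1.0
```

The point is not this particular implementation. It is that David could now produce something like it, on his own data, with the tutorial closed.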


Try This Right Now: The Tutorial Escape Challenge

If you are currently following a programming tutorial or course, do this immediately after reading this section.

Step 1: Finish your current section or module normally.

Step 2: Close the tutorial completely. No peeking.

Step 3: In a blank editor, implement what you just learned on a problem that is NOT the tutorial's example. A similar problem, but your own version. It can be simpler — but it has to be entirely yours.

Step 4: Spend at least twenty minutes genuinely attempting this before looking anything up.

Step 5: After your attempt — successful or not — open the tutorial and compare what you wrote to the tutorial's approach. What did you do the same? What did you do differently? What did you discover you did not actually understand?

This twenty-minute exercise tells you more accurately what you learned from the tutorial than any quiz the tutorial provides. If your mind went blank at step 3, you have identified the problem. The tutorial produced recognition, not generation. Now you know what to work on.


The Progressive Project: Escape Tutorial Hell in 12 Weeks

This is a concrete protocol for moving from tutorial-dependent to genuinely capable over twelve weeks. It requires commitment to uncomfortable practice, but the structure is designed to make that discomfort productive rather than paralyzing.

Weeks 1-2: Audit and baseline

List every tutorial, course, and project you've started in your domain. For each, rate yourself honestly: could you rebuild the core concepts from scratch right now, without the tutorial? How many of your "completed" tutorials have actually produced durable capability? This audit is usually sobering. It's also clarifying — you're identifying specifically where the recognition-without-generation gap exists.

Pick one concept from a tutorial you "completed" and try to rebuild it from scratch right now. This is your baseline. It reveals your actual starting point.

Weeks 3-5: The reconstruction habit

For every tutorial section or lesson you engage with, enforce a reconstruction step before moving on. Close the tutorial, open a blank file, implement the concept on a different problem. Spend at least twenty minutes before looking at anything. Track how often you succeed versus how often you discover you didn't understand what you thought you did.

Begin a practice problem habit: three sessions per week, twenty to thirty minutes each, on Exercism, LeetCode, or equivalent. Don't aim to solve them optimally. Aim to solve them yourself, without help, even if your solution is awkward.

Weeks 6-8: First original project

Identify a small project you actually want to exist — something that would be genuinely useful or interesting to you, in the zone of proximal development described earlier. Build it. You will get stuck. Stay stuck for at least thirty minutes before looking anything up. Track what you had to figure out versus what you already knew. This project is the transition from tutorial-follower to builder.

Apply the "generate first" protocol rigorously with AI assistance. Twenty to thirty minutes of your own attempt before any AI consultation. When you do consult AI, ask for explanations, not solutions.

Weeks 9-11: Read and deconstruct

Find one open-source project in your domain that's small enough to understand completely — under five hundred lines. Spend two weeks reading it actively, predicting before you read each function, explaining each section. Then delete the logic and try to rebuild it. This is the deconstruction method applied to real code.

Submit your code for feedback at least once — through a code review request in an appropriate community, through a pull request to a project, or through a trusted developer colleague. The feedback, however uncomfortable, is the most valuable thing available at this stage.

Week 12: Assess and plan forward

Return to the concept you tested in week one. Implement it again from scratch. How does this attempt compare? What can you do now that you couldn't do then? What remains your limiting factor?

Design your next twelve-week cycle based on what you've identified as your current growth frontier.

Minimum program:
- Complete one section of a tutorial or course, then immediately attempt a solo reconstruction on a different problem. Spend a minimum of twenty minutes before looking anything up.
- Spend twenty minutes per week on deliberate problem-solving that is not tutorial-following.

Developing program:
- All of the above, plus: read one piece of open-source code per week. Explain every line before moving on.
- Apply the "generate first" protocol consistently with AI.
- Practice explicit debugging hypothesis formation: before making any change to buggy code, write down your diagnosis.

Full program:
- Deliberate problem-solving practice three times per week, twenty to thirty minutes each
- Two original mini-projects per month that are not tutorial replications
- Weekly code reading session: library source code, open-source project, or similar
- Rubber duck debugging habit: articulate full diagnosis before making any change
- One code review experience per month: either submit your code for review, or review someone else's code and give substantive written feedback
- Active engagement with at least one programming community where your work can be seen and evaluated by others