Case Study 2: David Finds a Mentor
David had been trying to learn machine learning independently for two years when he ran into a wall.
The wall wasn't conceptual — or not purely. He was developing real understanding, particularly after his tutorial-hell escape (Chapter 26). His understanding of gradient descent, regularization, and the mechanics of neural network training had become genuinely solid.
The wall was practical. He understood the theory of machine learning well enough to read papers and follow implementations. But when his team was actually building ML systems at work, he consistently found himself out of his depth on the questions that actually determined whether systems worked in production: how to handle distribution shift, when to trust model evaluation metrics, how to design data pipelines that wouldn't degrade over time, how to explain model behavior to stakeholders who didn't understand ML.
These weren't things he could learn from textbooks, because they were mostly tacit knowledge — the accumulated judgment of practitioners who had built and operated real systems, experienced their failures, and developed intuitions that couldn't be fully articulated.
He needed someone who had built things and broken things.
Finding the Right Person
David worked at a mid-sized software company with a data science team of twelve. One member, Fatima, had a reputation that David had noticed from a distance: she was the person other data scientists went to when they were stuck. Not just stuck on code — stuck on the judgment calls that determined whether a project was set up to succeed.
He'd never spoken to her directly. She was senior to him and in a different reporting chain.
He began by doing what the chapter recommends: he watched and listened. Over about six weeks, he paid attention to what Fatima said in the few all-hands meetings where she presented, read a blog post she'd written about feature engineering for time series data, and looked at the documentation of a pipeline she'd led the development of.
He noticed several things:

- She had specific views about when to use certain approaches
- She thought carefully about failure modes before building
- She consistently talked about how to make systems maintainable by people who weren't the original builder

These were exactly the dimensions of practical ML craft he needed to develop.
The Ask
David composed a message to Fatima that took him three drafts to get right. The key elements:
- Specific about what he was struggling with: not "machine learning in general" but "building ML systems that remain reliable and interpretable after they're deployed"
- Specific about why he thought she could help: "I've read your post on time series feature engineering and the documentation on the recommendation pipeline — you clearly think about these problems in a way I'd like to understand better"
- Bounded ask: "Would you be willing to have coffee once or twice? I have a few specific questions about how you approach reliability in production ML."
Fatima said yes, replying within fifteen minutes: "Happy to chat. I always have opinions about this."
The Structure of the Relationship
Their first coffee was a thirty-minute conversation. David came with five specific questions, written down. Not generic questions ("what do you think about X") but questions grounded in specific situations he'd encountered:
"I have a model that's performing well on our evaluation set but we've seen a few cases where it's making obviously wrong predictions on recent data. My first instinct was to retrain more frequently. But I've been reading about distribution shift and now I'm not sure retraining is the right answer. How do you think about this problem?"
Fatima spent twenty minutes on this question alone — not just answering it but explaining how she thought about it, what signals she looked for, what she had seen go wrong in similar situations, and what monitoring she would set up. David took notes on his phone.
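In code terms, the kind of check involved in this conversation might look like the following sketch: a two-sample Kolmogorov–Smirnov test comparing recent production feature values against a training-time reference sample. The function name, structure, and threshold here are illustrative assumptions, not Fatima's actual monitoring setup.

```python
# Minimal distribution-shift check: compare recent production feature
# values against a training-time reference. An illustrative sketch only.
from scipy.stats import ks_2samp

def drift_report(reference, recent, alpha=0.01):
    """Flag features whose recent distribution differs from the reference.

    reference, recent: dicts mapping feature name -> list of values.
    Returns feature -> (KS statistic, p-value) for flagged features.
    """
    drifted = {}
    for name, ref_values in reference.items():
        stat, p_value = ks_2samp(ref_values, recent[name])
        if p_value < alpha:  # small p-value: distributions likely differ
            drifted[name] = (round(stat, 3), round(p_value, 4))
    return drifted
```

A check like this runs on a schedule against each day's inference inputs; the point is not the specific test but having any quantitative comparison against a reference, rather than trusting the evaluation-set metric alone.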
At the end of the first meeting, David did something that would determine whether this became a real mentoring relationship or a one-time coffee: he asked for a follow-up. "Would it be okay if I came back to you when I run into specific situations? I learn much better when I can anchor to concrete examples."
Fatima agreed.
The Ongoing Practice
Over the following six months, David had eight conversations with Fatima — about once every three weeks. Each one was structured the same way:
Before the meeting: David prepared by:
1. Writing down the specific situation or question he wanted to discuss
2. Writing his current best thinking on the issue
3. Writing two or three follow-up questions in case the main question got answered quickly
During the meeting: He let Fatima talk, and he listened. He asked follow-up questions to understand her reasoning, not just her conclusions: "Why that approach rather than X?" and "What would tell you this wasn't working?" He took notes.
After the meeting: He wrote a brief summary of what he'd learned — not just the answers Fatima had given but the reasoning framework behind them. He filed these in a "what I learned from Fatima" document that grew to about twenty pages over six months.
The document became a reference he returned to repeatedly. When he encountered the same type of situation at work, he'd check: did Fatima's framework apply here? Was he thinking about it the same way?
What David Learned
The content of what he learned from Fatima is less important than the form. Fatima gave him three things that no textbook had:
Failure patterns. She had specific knowledge of how ML systems fail that comes only from having built them and watched them fail. "Distribution shift in production is always worse than you think because your monitoring is almost always measuring the wrong things — here's what I've learned to monitor instead." This wasn't available in any paper he'd read.
Judgment heuristics. When do you need to worry about this? When is this a sign of a real problem vs. normal variance? When should you invest in a more sophisticated approach vs. when is the simple thing good enough? These heuristics are the product of years of practical experience, and they can be transmitted in conversation in a way they can't be transmitted in text.
The reasoning process. Perhaps most valuable: watching how Fatima thought through problems. Not just her conclusions but how she got there — what questions she asked first, what assumptions she checked, how she weighed competing considerations. This is the tacit knowledge that makes experts expert, and it's largely invisible from outside.
The Outcome
At the six-month mark, David was leading ML work on a project that, under his previous approach, he would have handled technically competently but practically poorly — good model, bad deployment.
Under his post-Fatima approach, he set up monitoring that caught a data quality issue three weeks before it would have degraded the model; he structured the feature engineering pipeline in a way that his colleagues could maintain and extend; and he documented his decisions in a way that made the system's failure modes explicitly known rather than discovered in production.
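A minimal version of the kind of data-quality gate described above might look like the following sketch. The field names, the null-rate threshold, and the function itself are hypothetical illustrations, not the checks from David's actual pipeline.

```python
# Illustrative data-quality gate for a feature pipeline: reject a batch
# before it reaches training or serving if basic invariants are violated.
# Field names and the 5% null-rate threshold are hypothetical examples.

def validate_batch(rows, max_null_rate=0.05):
    """Return a list of human-readable problems; an empty list means the batch passes."""
    problems = []
    if not rows:
        return ["batch is empty"]
    for field in rows[0]:
        null_rate = sum(1 for r in rows if r[field] is None) / len(rows)
        if null_rate > max_null_rate:
            problems.append(
                f"{field}: null rate {null_rate:.0%} exceeds {max_null_rate:.0%}"
            )
    return problems
```

The value of a gate like this is that a failure mode is named and checked explicitly, so it is caught at ingestion rather than discovered weeks later as degraded model performance.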
His manager noticed. "Whatever you've been doing for your ML education, it's showing up in the work."
David pointed to the mentoring relationship as the single biggest factor: "You can read papers all day and develop conceptual understanding, but the practical judgment — knowing what to worry about, when to trust what you're seeing, how to build things that last — that comes from people who have actually done it. There's no substitute."
What Makes This Generalizable
David's mentoring relationship was effective for specific reasons:
He asked for something specific. Vague requests for mentoring are easy to decline. A specific, bounded ask in an area where the prospective mentor has demonstrated competence is much harder to decline and much easier to fulfill.
He prepared for every interaction. Coming with specific questions, with his own current thinking articulated, made every conversation more productive. It also signaled to Fatima that her time was being used well, which sustained the relationship.
He asked for reasoning, not just answers. "Why that approach?" and "What would tell you it wasn't working?" produced the framework, not just the conclusion. Frameworks transfer; conclusions are situational.
He documented and reviewed. The twenty-page learning document was the reflection practice applied to mentoring. Without it, the conversations would have left useful impressions. With it, they left a transferable record.
The structure is replicable in any professional context with any mentor. What matters is not finding the perfect mentor — it's bringing the preparation and orientation that makes any expert's time worth giving.