Chapter 40 Exercises: Staying Ahead of the Curve
These exercises help you build and test your personal system for staying current with AI developments.
Exercise 1: Source Audit
List all the sources you currently use to stay informed about AI:
- Newsletters
- Social media accounts
- Podcasts
- Websites
- Colleagues
- Other
For each source, answer:
- How often do I engage with it?
- What percentage of the content is signal (actionable, relevant to my work) vs. noise?
- Would I notice if this source disappeared?
Based on your audit, identify which sources to keep, which to reduce, and which to add. Aim for a total that you can realistically maintain in 45-60 minutes per week.
Exercise 2: Build Your Capability Testing Battery
Following Raj's model, develop a personal capability testing battery for AI tools in your domain.
Define 5-8 test tasks that:
- Represent work you actually do
- Span a range from simple and well-defined to complex and nuanced
- Cover the quality dimensions most important in your work
For each test task, define what "excellent," "acceptable," and "poor" output looks like.
Run your battery on the primary AI tool you currently use. Then run it on one alternative tool you haven't tried, or haven't tried recently. What do you find?
Exercise 3: The Capabilities Map
Create a simple map of AI capabilities relevant to your work:
Current capabilities (mature and reliable in your use): List the AI capabilities you currently rely on regularly and find reliable.
Developing capabilities (useful but inconsistent in your use): List capabilities you've tried but find inconsistent — where AI sometimes performs well and sometimes doesn't.
Emerging capabilities (heard about but not yet tried): List capabilities you've read about or heard from colleagues that you haven't tried yet.
Unknown territory (may exist but you haven't explored): Based on Chapter 40's capability trajectories, list one or two capabilities that may exist or be emerging that could be relevant to your work.
Review this map quarterly. Which items have moved from emerging to developing? From developing to mature? What new items have appeared in the emerging category?
Exercise 4: The First-Hand Assessment
Choose one capability from the "developing" or "emerging" column of your map from Exercise 3. Spend 90 minutes doing a first-hand assessment:
- Set up the test
- Run your capability battery (or a simplified version)
- Form your own view: how good is this, actually?
- Compare your assessment to what you'd read or heard about it
Write a 300-word assessment. Is it better or worse than the coverage suggested? What surprised you? What would you use it for, and what wouldn't you use it for?
Exercise 5: The Principles Mapping Exercise
Take three principles from this book that you've found most valuable:
For each principle, answer:
- How does this principle apply to the AI capabilities you currently use?
- How would this principle apply to the emerging capabilities described in this chapter (agentic AI, reasoning models, multimodal input, large context windows)?
- Does any new capability challenge the principle, or does the principle apply without modification?
The goal of this exercise is to confirm that your principles are durable across capability changes — and to identify any edge cases where new capabilities require updating your mental model.
Exercise 6: The "What Would Change My Practice?" Exercise
Think through the emerging capabilities described in this chapter. For each, write one to two sentences about what would have to be true about that capability for it to meaningfully change how you work:
- Agentic AI: What would it have to reliably do for you to delegate a workflow to it?
- Reasoning models: What level of analytical reliability would change which analysis tasks you delegate to AI?
- Large context windows: At what context size would you change how you use AI for document-heavy work?
- Memory and persistence: What would a reliable memory system change about how you manage context across projects?
This exercise builds a mental model of when to update your practice: rather than reacting to each announcement, you define in advance the capability thresholds that matter to your specific work.
Exercise 7: Design Your Staying-Current System
Using the framework from this chapter, design your personal staying-current system:
Tier 1 (Weekly, ~45 minutes):
- Source 1: [Name, why you chose it, how long it typically takes]
- Source 2: [Name, why you chose it]
- Any other weekly touchpoints
Tier 2 (Monthly, ~90 minutes):
- What kind of deeper reading will you do?
- How will you find it?
Tier 3 (Quarterly, ~2-3 hours):
- What does your quarterly capability exploration session look like?
- When specifically will you schedule it?
Your noise filter:
- List five types of AI coverage you will consciously skip
Implement this system for one month and then assess: Is it sustainable? Is the signal-to-noise ratio high enough? What would you adjust?
Exercise 8: The Adjacent Domain Scout
Identify a professional domain adjacent to yours where AI may be more or less advanced than in your own field.
Research how AI is being used in that domain: What use cases are established? What challenges are they encountering? What quality standards have emerged?
Write a brief (250-word) "what can I learn from this adjacent domain?" reflection. What approaches or lessons from that domain might transfer to yours?
Exercise 9: The Skeptical Read
Find one piece of thoughtful, well-argued AI skepticism — not reflexive "AI is bad" coverage, but a substantive critical analysis of AI capabilities, limitations, or risks.
Read it carefully and write a response: Where do you agree? Where do you disagree? Has it changed how you think about any aspect of AI use?
The goal is to test your views against the strongest critical argument, not to confirm your existing positions.
Exercise 10: The Technology Adoption Self-Assessment
Using the technology adoption curve concept from this chapter, honestly assess where you sit:
- Are you an early adopter who found AI tools useful before most people in your field?
- Are you in the early majority — using AI tools now that they've proven themselves?
- Are you in the late majority — still developing an AI practice after most peers have adopted?
- Are you a laggard — still resistant or very slow to adopt?
There's no wrong answer. But the honest answer tells you something about your natural relationship with technology change.
Follow-up questions:
- What would it take for you to move one step earlier on the adoption curve?
- Is your current position serving you professionally? What's the cost/benefit of your adoption pace?
Exercise 11: The "Principles Over Features" Reflection
Identify one AI feature or capability that you initially thought was essential but later found less important than you expected, and one that you initially underestimated.
Reflect: What does this tell you about the relationship between features and principles in your AI practice?
Exercise 12: The Future Practice Letter
Write a short letter (300-500 words) to yourself to be read in two years.
Describe:
- Where your AI practice is today
- What capabilities you think will have matured by then
- What questions about AI's evolution you're most uncertain about
- What you hope your AI practice looks like two years from now
Store this letter somewhere accessible. When two years have passed, read it and reflect on what you got right, what you got wrong, and what surprised you.