Chapter 22: Key Takeaways

Debugging and Troubleshooting with AI — Summary Card

  1. Follow the four-phase debugging cycle. Reproduce and document the bug, present structured information to AI, evaluate and apply the suggested fix with understanding, then learn from the experience and document the resolution.

  2. Use the DESCRIBE framework for debugging conversations. Structure your AI prompts with: Describe expected behavior, Error message (complete), Source code (relevant portions), Context (environment, recent changes), Reproduce steps, Investigated already, Behavior observed, and Environment details.
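
The eight DESCRIBE fields can be assembled mechanically. A minimal sketch (the helper name `build_describe_prompt` and the field keys are illustrative, not from the chapter):

```python
def build_describe_prompt(fields: dict) -> str:
    """Assemble a DESCRIBE-structured debugging prompt from named fields.

    Missing fields are marked explicitly, so gaps in your bug report
    are visible to you before you send the prompt.
    """
    sections = [
        ("Describe expected behavior", "describe"),
        ("Error message (complete)", "error"),
        ("Source code (relevant portions)", "source"),
        ("Context (environment, recent changes)", "context"),
        ("Reproduce steps", "reproduce"),
        ("Investigated already", "investigated"),
        ("Behavior observed", "behavior"),
        ("Environment details", "environment"),
    ]
    parts = [
        f"## {heading}\n{fields.get(key, '(not provided)')}"
        for heading, key in sections
    ]
    return "\n\n".join(parts)

prompt = build_describe_prompt({
    "describe": "parse_date() should return a datetime for ISO strings",
    "error": "ValueError: Invalid isoformat string: '2024/01/15'",
})
print(prompt)
```

Keeping the section order fixed makes your reports comparable across debugging sessions and makes omissions obvious.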

  3. Always include complete error context. Full stack traces, relevant source code, environment details, and what you have already tried. The quality of AI debugging assistance is directly proportional to the quality of information you provide.

  4. Read stack traces bottom-up for the error, but look mid-trace for the cause. The error type and location are at the bottom, but the root cause is often in an earlier frame where incorrect data or logic initiated the failure chain.
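
A small illustration of the bottom-versus-mid-trace distinction (the functions are invented for the example): the `ValueError` is raised in the lowest frame, but the unvalidated data enters one frame up.

```python
import traceback

def compute_total(prices):
    # The ValueError is *raised* here -- the bottom of the trace.
    return sum(float(p) for p in prices)

def build_report(rows):
    # The *root cause* is here: prices pass through without validation.
    prices = [row["price"] for row in rows]
    return compute_total(prices)

try:
    build_report([{"price": "12.50"}, {"price": "n/a"}])
except ValueError:
    tb = traceback.format_exc()

print(tb)
# Bottom line names the error (ValueError) and where it surfaced
# (compute_total); the frame above points at build_report, where the
# bad value entered the failure chain.
```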

  5. Present both successful and failed operations when sharing logs. AI identifies bugs by comparing what differs between working and failing cases. Include temporal context, filter to relevant components, and annotate suspicious entries.
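
One way to present paired logs so the differences stand out (the log lines and pairing logic are illustrative): interleave the working and failing runs and flag pairs whose level or component diverges.

```python
working = [
    "INFO  auth: token issued user=42",
    "INFO  cart: checkout started user=42 items=3",
    "INFO  cart: payment ok user=42",
]
failing = [
    "INFO  auth: token issued user=99",
    "INFO  cart: checkout started user=99 items=0",  # items=0 is suspicious
    "ERROR cart: payment failed user=99 (empty cart)",
]

# Flag pairs where the level/component prefix diverges between runs.
for ok, bad in zip(working, failing):
    marker = "!!" if ok.split(":")[0] != bad.split(":")[0] else "  "
    print(f"{marker} OK : {ok}")
    print(f"{marker} BAD: {bad}")
```

Annotations like the `items=0` comment above are exactly the kind of human hint that focuses AI analysis on the right entries.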

  6. Combine interactive debugging with AI analysis. Capture pdb sessions, variable states, and execution paths, then share them with AI. Use AI to suggest what to inspect next, creating a feedback loop between debugger and AI.
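
At an interactive pdb prompt you would inspect state with commands like `where`, `p`, and `pp`; to hand that state to AI, it helps to capture it as text. A lightweight sketch (the `snapshot_locals` helper is illustrative):

```python
import json

def snapshot_locals(frame_locals: dict) -> str:
    """Serialize local variables (via repr) so the state you would
    inspect in a pdb session can be pasted into an AI prompt."""
    return json.dumps({k: repr(v) for k, v in frame_locals.items()}, indent=2)

def checkout(order):
    total = sum(item["qty"] * item["price"] for item in order)
    # At a pdb breakpoint you would run `pp order` and `p total`;
    # this captures the same information non-interactively.
    return snapshot_locals({"order": order, "total": total})

state = checkout([{"qty": 2, "price": 9.99}])
print(state)
```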

  7. Profile before and after optimization. Use cProfile for CPU bottlenecks and memory_profiler for memory issues. Always measure the impact of AI-suggested optimizations rather than assuming they help. Trust the profiler over intuition.
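
A minimal cProfile measurement, using only the standard library (the deliberately slow function is invented for the example):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately wasteful: recomputes a growing sum on every iteration.
    total = 0
    for i in range(n):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(500)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
report = stream.getvalue()
print(report)
```

Run the same measurement before and after applying an AI-suggested optimization; only the delta in the profile, not the plausibility of the suggestion, tells you whether it helped.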

  8. Provide full environment details for configuration issues. Python version, installed packages, virtual environment status, OS information, and environment variables are all critical for diagnosing environment and configuration bugs.
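
Most of these details can be gathered with the standard library. A sketch of an environment report (the function name is illustrative; pair its output with `pip freeze` for the package list):

```python
import os
import platform
import sys

def environment_report() -> str:
    """Collect the environment facts most useful for diagnosing
    configuration bugs, in a paste-ready format."""
    lines = [
        f"Python: {sys.version.split()[0]}",
        f"Executable: {sys.executable}",
        f"OS: {platform.system()} {platform.release()}",
        f"Virtualenv: {os.environ.get('VIRTUAL_ENV', '(none active)')}",
    ]
    return "\n".join(lines)

report = environment_report()
print(report)
```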

  9. Approach dependency conflicts with complete information. Share your requirements file, currently installed packages, the full error message, and what features you need from each conflicting package. AI can suggest compatible versions or alternative packages.
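
The "currently installed packages" part of that information can be collected in-process, without shelling out to pip. A sketch using the standard library's `importlib.metadata`:

```python
from importlib import metadata

def installed_packages() -> dict:
    # Name -> version for every installed distribution; roughly what
    # `pip freeze` reports. Share this alongside the conflict error.
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    }

pkgs = installed_packages()
print(f"{len(pkgs)} packages installed")
```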

  10. Evaluate AI suggestions critically before applying them. Understand why a fix should work, test it in isolation, and verify it does not introduce new issues. As Chapter 14 warns, AI can confidently suggest incorrect solutions.

  11. Write regression tests after every bug fix. A bug fix without a test is an invitation for the bug to return. Tests document the bug, prevent regression, and serve as executable specifications (see Chapter 21).
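
A sketch of the pattern, written pytest-style (the `page_count` function and its off-by-one bug are invented for illustration): the test names the bug, pins the fixed behavior, and fails loudly if the bug returns.

```python
def page_count(total_items: int, page_size: int) -> int:
    # Fixed implementation. The original (hypothetical) bug used plain
    # floor division, which under-counted when the last page was partial.
    return -(-total_items // page_size)  # ceiling division

def test_page_count_includes_partial_last_page():
    # Regression test: 25 items at 10 per page once reported 2 pages.
    assert page_count(25, 10) == 3

def test_page_count_exact_multiple():
    assert page_count(20, 10) == 2

test_page_count_includes_partial_last_page()
test_page_count_exact_multiple()
print("regression tests passed")
```

Note how the first test's comment documents the original failure: the test is both a guard and a record of what went wrong.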

  12. Build your debugging intuition through the learning loop. After each AI-assisted debugging session, understand the root cause, identify the pattern, learn the diagnostic approach, and update your mental model of the system.

  13. Maintain a personal bug database. Record the symptom, root cause, fix, pattern category, and prevention strategy for bugs you encounter. Review periodically to identify recurring themes and knowledge gaps.
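
Anything that records those five fields works, from a spreadsheet to a text file. A minimal sketch as a dataclass, with the periodic-review step as a group-by over pattern categories (the example record is invented):

```python
import json
from dataclasses import dataclass

@dataclass
class BugRecord:
    # Fields mirror the takeaway: symptom, root cause, fix,
    # pattern category, and prevention strategy.
    symptom: str
    root_cause: str
    fix: str
    pattern: str       # e.g. "off-by-one", "stale cache", "type coercion"
    prevention: str

db = []
db.append(BugRecord(
    symptom="Checkout total off by one cent on discounted orders",
    root_cause="float arithmetic on currency values",
    fix="switched price math to decimal.Decimal",
    pattern="floating-point currency",
    prevention="lint rule banning float literals in the billing module",
))

# Periodic review: group records by pattern to spot recurring themes.
by_pattern = {}
for rec in db:
    by_pattern.setdefault(rec.pattern, []).append(rec.symptom)
print(json.dumps(by_pattern, indent=2))
```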

  14. Know when AI will not help. Visual bugs, timing-sensitive issues requiring real-time observation, data-dependent bugs requiring production data access, and systemic architecture problems all resist prompt-based debugging. In those areas, human judgment and specialized tools are primary, with AI playing a supporting role.

  15. Progress from "what is wrong?" to "which approach has better tradeoffs?" Your relationship with AI should evolve over time: beginners ask for diagnosis, intermediate developers ask for confirmation, and advanced developers use AI as a sounding board for comparing solutions.