Further Reading: Chapter 7 — Understanding AI-Generated Code
An annotated bibliography of resources for deepening your code reading, review, and analysis skills.
Books
1. Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin (2008)
The classic text on writing readable, maintainable code. While focused on Java, the principles of naming, function design, error handling, and code organization are universally applicable. Chapters 2 (Meaningful Names), 3 (Functions), and 7 (Error Handling) are especially relevant to evaluating AI-generated code. A foundational read for anyone who reviews code regularly.
2. Code Reading: The Open Source Perspective by Diomidis Spinellis (2003)
One of the few books dedicated specifically to the skill of reading code rather than writing it. Spinellis walks through real open-source codebases, teaching systematic approaches to understanding unfamiliar code. Though the code examples are dated, the techniques for structural analysis and code navigation remain highly relevant.
3. Refactoring: Improving the Design of Existing Code by Martin Fowler (2018, 2nd Edition)
While primarily about changing code, this book's catalog of code smells trains you to recognize problematic patterns — exactly the skill needed when reviewing AI-generated code. The second edition uses JavaScript examples but the concepts apply to any language.
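To make the connection concrete, here is a small illustrative sketch (names are hypothetical, not from Fowler's book) of one smell from the catalog, Duplicated Code, and the Extract Function refactoring that removes it — exactly the kind of pattern worth spotting in AI-generated code:

```python
# Smell: Duplicated Code — the same summing loop appears twice
# under different names. Illustrative example only.

def total_invoice(items):
    total = 0
    for price, qty in items:
        total += price * qty
    return total

def total_order(lines):
    total = 0
    for price, qty in lines:
        total += price * qty
    return total

# After Extract Function: one shared helper removes the duplication.
def line_total(pairs):
    return sum(price * qty for price, qty in pairs)
```

AI assistants frequently regenerate near-identical logic in separate functions; recognizing the smell is the first step to consolidating it.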
4. The Art of Readable Code by Dustin Boswell and Trevor Foucher (2011)
A concise, practical guide to writing code that is easy to understand. The emphasis on naming, commenting, and simplifying logic aligns directly with the quality evaluation skills covered in this chapter. At under 200 pages, it is an efficient read with high information density.
5. Secure Coding in Python (OWASP Foundation, various contributors)
The OWASP Python security resources provide practical guidance on the security vulnerabilities discussed in section 7.9. Covers injection attacks, authentication flaws, and secure coding patterns with Python-specific examples. Available freely online through the OWASP website.
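As a taste of the authentication guidance, here is a minimal sketch of the safer password-handling pattern using only the standard library (the iteration count and function names are illustrative, not a production configuration):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Key-stretching with PBKDF2 instead of storing plaintext.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hash_password(password, salt)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, stored)

salt = os.urandom(16)
stored = hash_password("s3cret", salt)
```

AI assistants sometimes generate naive `password == stored` comparisons; this is one of the patterns the OWASP material teaches you to catch.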
Articles and Online Resources
6. Google Engineering Practices: Code Review Guidelines
Google's publicly shared code review documentation describes how one of the world's largest engineering organizations approaches code review. The section on "What to look for in a code review" closely parallels the checklist approach in section 7.10. Available at https://google.github.io/eng-practices/review/.
7. How to Read Code by Aria Stewart (2019)
A concise article that introduces systematic approaches to understanding unfamiliar codebases. Stewart's method of "expanding circles" — starting from a small piece you understand and gradually widening — is a practical complement to the structural analysis technique in section 7.2.
8. PEP 8 — Style Guide for Python Code
The official Python style guide that defines the naming conventions, import ordering, and formatting standards referenced throughout this chapter. Every Python developer should read this at least once. Available at https://peps.python.org/pep-0008/.
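A brief before-and-after sketch of the conventions PEP 8 defines (the function and class names here are hypothetical examples, not drawn from the guide):

```python
# Non-conforming: CamelCase function name, ambiguous parameter name.
def GetUserCount(l):
    return len(l)

# PEP 8: snake_case for functions, descriptive argument names.
def get_user_count(users):
    return len(users)

# PEP 8: CapWords (PascalCase) for class names.
class UserRegistry:
    def __init__(self):
        self.users = []
```

AI-generated code usually follows PEP 8, but mixed conventions within one file are a common tell that code was assembled from multiple sources.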
9. Common Python Security Vulnerabilities and How to Avoid Them by Anthony Shaw
A practical survey of security issues specific to Python, including injection attacks, deserialization risks, and dependency vulnerabilities. The article provides code examples of both vulnerable and secure patterns, making it an excellent complement to section 7.9.
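One deserialization risk of the kind the article surveys can be sketched in two lines: `pickle.loads()` on untrusted bytes can execute arbitrary code via `__reduce__`, while `json.loads()` parses data only. The payload below is illustrative:

```python
import json

# For external input, prefer a data-only format: json.loads() parses
# values but never executes code, unlike unpickling untrusted bytes.
payload = '{"user": "alice", "role": "admin"}'
record = json.loads(payload)
```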
10. Big-O Complexity Cheat Sheet
An accessible reference for algorithmic complexity that covers the common data structure operations and sorting algorithms. Useful when you need to quickly verify the performance characteristics of AI-generated code. Available at https://www.bigocheatsheet.com/.
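One entry from the cheat sheet is easy to verify empirically: membership testing is O(n) for lists but O(1) on average for sets. A quick sketch using the standard library (sizes and repeat counts are arbitrary):

```python
import timeit

n = 100_000
data_list = list(range(n))
data_set = set(data_list)

# Worst case for the list: the sought element is at the end.
list_time = timeit.timeit(lambda: n - 1 in data_list, number=100)
set_time = timeit.timeit(lambda: n - 1 in data_set, number=100)

# The set lookup should be dramatically faster for large n.
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

This kind of two-minute measurement is often the fastest way to settle whether an AI-generated lookup will scale.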
Research Papers and Technical Reports
11. Do Users Write More Insecure Code with AI Assistants? by Perry et al. (2023)
A Stanford research study examining whether developers using AI coding assistants produce more security vulnerabilities than those coding without AI help. The findings are nuanced and provide empirical backing for the security review practices advocated in this chapter.
12. Empirical Studies of AI-Generated Code Quality (various researchers)
Multiple studies have examined the quality of code produced by language models. Common findings include that AI-generated code tends to be functionally correct but may have subtle quality issues, confirming the need for the review skills taught in this chapter. Search for recent publications on arXiv for the latest findings.
Tools
13. Pylint and Flake8
Python static analysis tools that can automate many of the checks discussed in this chapter, including unused imports (Flake8's F401; Pylint reports W0611), naming conventions, code complexity, and potential bugs. Integrating these tools into your workflow provides an automated first pass before manual review.
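An illustrative snippet containing issues these linters flag, with the corresponding message codes in comments (the function is hypothetical):

```python
import os          # F401 / W0611: imported but unused
import sys

def check(value):
    if value == None:   # E711: comparison to None should use "is None"
        return False
    return value in sys.path

# Cleaned-up version: no unused import needed, identity check for None.
def check_path(value):
    if value is None:
        return False
    return value in sys.path
```

Running `flake8` or `pylint` over AI-generated files before manual review lets you spend your attention on logic rather than mechanical issues like these.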
14. Bandit — Python Security Linter
A tool specifically designed to find common security issues in Python code, including use of eval(), hardcoded passwords, SQL injection patterns, and insecure deserialization. Running Bandit on AI-generated code automates many of the security checks from section 7.9.
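Two of the patterns Bandit looks for, with safer alternatives, can be sketched as follows (the check IDs in comments are from Bandit's documentation; the data is illustrative):

```python
import ast
import sqlite3

# B307: eval() on untrusted input can execute arbitrary code.
# Safer for parsing data: ast.literal_eval accepts literals only.
user_input = "[1, 2, 3]"
values = ast.literal_eval(user_input)

# B608: SQL built by string formatting invites injection.
# Safer: parameterized queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))
row = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)
).fetchone()
```

Bandit reports each finding with a severity and confidence level, which helps triage AI-generated code quickly.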
15. Radon — Python Complexity Metrics
A tool for computing cyclomatic complexity, maintainability index, and other code quality metrics for Python code. Useful for objectively measuring the complexity of AI-generated functions and identifying those that need simplification or refactoring.
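As a sketch of what the metric captures: cyclomatic complexity grows with each branch, so an `if`/`elif` chain scores higher than an equivalent dictionary dispatch. Running `radon cc` on a file containing each version would show the difference (the functions and values below are hypothetical):

```python
# Branchy version: complexity rises with every elif.
def discount_branchy(tier):
    if tier == "gold":
        return 0.20
    elif tier == "silver":
        return 0.10
    elif tier == "bronze":
        return 0.05
    else:
        return 0.0

# Dispatch version: one lookup, flat complexity, same behavior.
_DISCOUNTS = {"gold": 0.20, "silver": 0.10, "bronze": 0.05}

def discount_dispatch(tier):
    return _DISCOUNTS.get(tier, 0.0)
```

When Radon flags an AI-generated function, a table-driven rewrite like this is often the simplest remedy.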