Chapter 30: Further Reading
Annotated Bibliography
1. Software Engineering at Google: Lessons Learned from Programming Over Time — Titus Winters, Tom Manshreck, and Hyrum Wright (O'Reilly, 2020)
Chapter 9 ("Code Review") provides an in-depth look at how Google conducts code review at scale, including the cultural norms, tool support, and process design that make review effective across thousands of engineers. The discussion of readability reviewers and the emphasis on review as mentorship are particularly relevant to teams adopting AI-assisted development.
2. Cognitive Complexity: A New Way of Measuring Understandability — G. Ann Campbell (SonarSource white paper, 2018)
The foundational paper defining cognitive complexity as a metric. Campbell explains why cyclomatic complexity fails to capture human readability and presents the cognitive complexity scoring rules used by SonarQube and other tools. Essential reading for understanding the "why" behind complexity metrics beyond simply applying thresholds.
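To see the distinction Campbell draws, consider two functions with the same cyclomatic complexity but different cognitive complexity. This is a sketch; the function names are invented, and the scores are my own application of the paper's rules (flat if/elif branches cost +1 each, while nested structures pay an additional increment per level of nesting):

```python
def categorize_flat(x: int) -> str:
    # Flat if/elif chain: each branch adds +1, no nesting penalty.
    # Cognitive complexity: 1 + 1 + 1 = 3. Cyclomatic complexity: 4.
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    elif x < 10:
        return "small"
    return "large"


def categorize_nested(x: int) -> str:
    # Same logic expressed with nesting: each deeper `if` pays an extra
    # nesting increment (+1, +2, +3), for a cognitive complexity of 6,
    # even though cyclomatic complexity is still 4.
    if x >= 0:
        if x > 0:
            if x >= 10:
                return "large"
            return "small"
        return "zero"
    return "negative"
```

Cyclomatic complexity counts both functions as equally complex; cognitive complexity scores the nested version higher, matching the extra effort a human reader spends tracking nesting depth.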
3. Effective Python: 125 Specific Ways to Write Better Python — Brett Slatkin (Addison-Wesley, 3rd Edition, 2024)
While not specifically about code review, this book defines many of the Pythonic patterns that reviewers should look for and that AI-generated code often misses. Items on generators, context managers, and descriptive naming are particularly useful as a shared vocabulary for review feedback.
4. The Checklist Manifesto: How to Get Things Right — Atul Gawande (Metropolitan Books, 2009)
Gawande's exploration of how checklists improve outcomes in aviation, medicine, and construction provides powerful arguments for using review checklists in software development. The book's discussion of "do-confirm" versus "read-do" checklists directly informs how to design effective code review checklists.
5. Accelerate: The Science of Lean Software and DevOps — Nicole Forsgren, Jez Humble, and Gene Kim (IT Revolution Press, 2018)
This research-backed book identifies the practices that predict high-performing software delivery teams. The chapters on continuous integration, trunk-based development, and team culture provide empirical support for many of the quality practices recommended in this chapter, including fast feedback loops and quality gates.
6. Managing Technical Debt: Reducing Friction in Software Development — Philippe Kruchten, Robert Nord, and Ipek Ozkaya (Addison-Wesley, 2019)
The most comprehensive treatment of technical debt in the literature. The authors provide frameworks for identifying, measuring, and prioritizing technical debt that go well beyond the introductory coverage in this chapter. Particularly valuable for teams trying to quantify and communicate debt to non-technical stakeholders.
7. Ruff Documentation and Rule Reference — Astral (https://docs.astral.sh/ruff/)
The official Ruff documentation is remarkably well-organized and serves as both a configuration guide and a catalog of Python anti-patterns. The rule reference, which explains the rationale behind each lint rule with examples, is an excellent learning resource for understanding what static analysis can catch.
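As a taste of what the documentation covers, here is a minimal pyproject.toml sketch. The rule-family selectors (E/W = pycodestyle, F = Pyflakes, B = flake8-bugbear, SIM = flake8-simplify) are real Ruff codes, but this particular selection is illustrative, not a recommendation from the docs:

```toml
# pyproject.toml — illustrative Ruff configuration, not a prescribed setup
[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
# E/W = pycodestyle, F = Pyflakes, B = flake8-bugbear, SIM = flake8-simplify
select = ["E", "W", "F", "B", "SIM"]
# Example: skip line-length errors if a formatter already enforces them
ignore = ["E501"]
```

Each code in `select` links to a rule page in the reference explaining what the rule catches and why, which is what makes the documentation useful as a learning resource rather than just a configuration manual.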
8. Best Kept Secrets of Peer Code Review — Jason Cohen (SmartBear, 2006)
Despite its age, this short book remains one of the best practical guides to code review. Based on a study of 2,500 code reviews at Cisco, it provides data-driven recommendations on review size, duration, and effectiveness. The finding that review effectiveness drops dramatically after 200-400 lines of code continues to be validated by modern studies.
9. Building Secure and Reliable Systems — Heather Adkins, Betsy Beyer, Paul Blankinship, et al. (O'Reilly, 2020)
Written by Google security and reliability engineers, this book covers how to integrate security review into the development process. Chapters on design review, code review for security, and automated analysis provide a framework for the security-focused review practices discussed in this chapter.
10. Refactoring: Improving the Design of Existing Code — Martin Fowler (Addison-Wesley, 2nd Edition, 2018)
Fowler's catalog of refactoring patterns is the essential reference for addressing the code quality issues that reviews identify. When a reviewer says "this function is too complex" or "this code has feature envy," Fowler's book provides the specific refactoring techniques to resolve the issue. The second edition uses JavaScript examples, but the patterns are language-agnostic.
11. Mypy Documentation: Type Checking Python — Jukka Lehtosalo et al. (https://mypy.readthedocs.io/)
The mypy documentation covers not just tool usage but the broader philosophy of gradual typing in Python. The sections on strict mode, type narrowing, and protocol types are especially useful for teams implementing type checking as a quality gate. The documentation's guidance on incrementally adopting type hints aligns well with the progressive quality gate strategy.
12. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation — Jez Humble and David Farley (Addison-Wesley, 2010)
The foundational text on CI/CD pipelines. While the tools have evolved since publication, the principles of automated quality gates, deployment pipelines, and fast feedback remain definitive. Chapters on commit stage, automated acceptance testing, and the deployment pipeline directly inform the quality gate design discussed in this chapter.
13. The DevOps Handbook — Gene Kim, Jez Humble, Patrick Debois, and John Willis (IT Revolution Press, 2nd Edition, 2021)
This practical companion to Accelerate provides detailed guidance on implementing the practices that drive software delivery performance. The chapters on creating fast feedback loops, integrating security into daily work, and creating a learning culture are directly applicable to building the quality-first culture described in Section 30.10.
14. Clean Code: A Handbook of Agile Software Craftsmanship — Robert C. Martin (Prentice Hall, 2008)
Martin's classic defines many of the code quality standards that review checklists enforce: meaningful names, small functions, the Single Responsibility Principle, and the "Boy Scout Rule" (leave code cleaner than you found it). While some advice is dated and the examples are in Java, the core principles remain foundational for code review practice.
15. Working Effectively with Legacy Code — Michael Feathers (Prentice Hall, 2004)
Feathers' book is essential reading for teams dealing with AI-generated code that has accumulated technical debt. His techniques for getting legacy code under test—identifying seams, breaking dependencies, and characterization testing—are directly applicable when improving code quality in existing AI-assisted codebases.