Chapter 8: Further Reading

Annotated Bibliography

1. "Prompt Engineering Guide" — DAIR.AI (2024)

URL: https://www.promptingguide.ai/
Relevance: A comprehensive, community-maintained guide to prompt engineering techniques. While not specific to coding, it covers foundational techniques (zero-shot, few-shot, chain-of-thought) that directly apply to code generation. Start with the "Basics" section and then explore the "Techniques" section for strategies that extend beyond this chapter.

2. "Best Practices for Prompt Engineering with the OpenAI API" — OpenAI Documentation (2024)

URL: https://platform.openai.com/docs/guides/prompt-engineering
Relevance: OpenAI's official guide to prompt engineering includes specific recommendations for structured outputs, system messages, and iterative refinement. The strategies described are applicable across AI models, not just OpenAI's. Particularly useful for understanding how token limits and temperature settings affect code generation quality.

3. "The Art of the Prompt: How to Get the Best Out of AI Code Assistants" — Addy Osmani (2024)

Relevance: Written by a senior engineering leader at Google, this resource focuses specifically on prompting AI coding tools in professional software development contexts. It provides practical, experience-based advice on structuring code prompts, managing multi-file context, and iterating on AI output. Excellent complement to the foundational principles covered in this chapter.

4. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" — Wei et al. (2022)

URL: https://arxiv.org/abs/2201.11903
Relevance: This influential research paper introduces chain-of-thought prompting, in which asking the model to "think step by step" dramatically improves its performance on complex tasks. For coding, this technique helps with algorithmic problems, debugging, and architectural decisions. The concept is extended in Chapter 12 (Advanced Prompting Techniques).
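In practice, the technique amounts to appending an explicit reasoning directive to the task before sending it to the model. A minimal sketch of such a wrapper for coding prompts (the function name and the exact wording of the directive are illustrative, not taken from the paper):

```python
def with_chain_of_thought(task: str) -> str:
    """Wrap a coding task in a chain-of-thought directive.

    The appended instruction asks the model to reason before coding,
    the pattern studied by Wei et al. (2022).
    """
    return (
        f"{task}\n\n"
        "Before writing any code, think step by step:\n"
        "1. Restate the problem in your own words.\n"
        "2. Outline the algorithm and its edge cases.\n"
        "3. Only then write the implementation."
    )

prompt = with_chain_of_thought(
    "Write a Python function that merges overlapping intervals."
)
print(prompt)
```

The same wrapper can be reused across tasks, which is why chain-of-thought pairs naturally with the template approach discussed later in the chapter.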

5. "Language Models are Few-Shot Learners" — Brown et al. (2020)

URL: https://arxiv.org/abs/2005.14165
Relevance: The foundational GPT-3 paper demonstrates how providing examples in a prompt (few-shot learning) dramatically improves output quality. This research underpins the template approach discussed in Section 8.9 — by providing an example of the desired output format, you leverage few-shot learning to guide the model's generation.
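The few-shot pattern is mechanical enough to sketch: show the model a handful of input/output pairs in the format you want, then present the real query in the same format. A minimal illustration (the `Input:`/`Output:` labels are one common convention, not prescribed by the paper):

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: each (input, output) pair demonstrates
    the desired format before the real query is posed."""
    parts = [
        f"Input: {source}\nOutput: {result}"
        for source, result in examples
    ]
    # End with the query and a dangling "Output:" for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("get_user_name", "def get_user_name(user_id: int) -> str:"),
    ("delete_session", "def delete_session(session_id: str) -> None:"),
]
demo = few_shot_prompt(examples, "create_order")
print(demo)
```

Two or three examples are usually enough to pin down naming conventions and type-hint style, which is exactly the leverage the Section 8.9 templates rely on.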

6. "GitHub Copilot: Best Practices and Prompt Crafting" — GitHub Blog (2024)

URL: https://github.blog/developer-skills/github/how-to-write-better-prompts-for-github-copilot/
Relevance: GitHub's official guidance on writing effective prompts specifically for Copilot, one of the most widely used AI code completion tools. Covers inline prompting, chat prompting, and workspace context. While Copilot-specific, the principles of providing context through comments and function signatures apply universally.

7. "Anthropic's Claude Prompt Engineering Guide" — Anthropic Documentation (2024)

URL: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview
Relevance: Anthropic's official guide to prompting Claude, covering system prompts, structured output, and chain-of-thought techniques. Includes specific advice on XML-tagged sections for organizing complex prompts, a technique particularly effective for code generation tasks with multiple requirements.
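The XML-tagging technique separates the parts of a complex prompt so the model cannot confuse instructions with the code they apply to. A small sketch of a helper that assembles such a prompt (the tag names here are illustrative; Anthropic's guide shows similar but not identical examples):

```python
def xml_prompt(sections: dict[str, str]) -> str:
    """Wrap each prompt section in matching XML-style tags so the
    model can distinguish task, input code, and constraints."""
    return "\n".join(
        f"<{tag}>\n{body}\n</{tag}>" for tag, body in sections.items()
    )

prompt = xml_prompt({
    "task": "Refactor the function below to remove duplication.",
    "code": "def f(x): return x * 2\ndef g(x): return x * 2",
    "constraints": "Keep the public function names unchanged.",
})
print(prompt)
```

Because each section is delimited, you can grow the prompt (add a `<tests>` or `<style_guide>` section) without the requirements bleeding into one another.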

8. "A Survey of Large Language Models for Code" — Zheng et al. (2024)

URL: https://arxiv.org/abs/2311.10372
Relevance: A comprehensive academic survey of how large language models handle code generation, completion, repair, and review. Provides the theoretical foundation for understanding why certain prompting strategies work better than others for code tasks. Recommended for readers who want to understand the "why" behind prompt engineering, building on the concepts from Chapter 2.

9. "Effective Python: 90 Specific Ways to Write Better Python" — Brett Slatkin (3rd Edition, 2024)

Relevance: While not about prompt engineering, this book is invaluable for knowing what to ask for in your Python prompts. Understanding Python best practices helps you write constraints like "use the walrus operator for assignment expressions in while loops" or "prefer dataclasses over named tuples for this use case." The more you know about good code, the better your prompts for generating it.

10. "Clean Code: A Handbook of Agile Software Craftsmanship" — Robert C. Martin (2008)

Relevance: The classic reference on code quality. Clean Code principles translate directly into prompt constraints: meaningful names, small functions, single responsibility, and clear abstractions. When your prompt asks for "clean code," having read this book helps you specify exactly what that means — which is critical for the specificity pillar.

11. "Structured Generation with LLMs" — Willison (2024)

URL: https://simonwillison.net/series/prompt-engineering/
Relevance: Simon Willison's blog series on prompt engineering includes practical, hands-on experiments with structured output generation. His work on getting LLMs to produce consistent JSON, markdown, and code structures directly supports the output formatting pillar discussed in Section 8.6.
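A common way to apply this idea is to embed an example of the desired JSON shape in the prompt, then validate the reply before using it. A minimal sketch of both halves (the helper names are hypothetical; only `json.loads` is the standard-library call it appears to be):

```python
import json

def request_json(task: str, shape_example: dict) -> str:
    """Ask for a reply as JSON only, with an inline example showing
    the exact shape expected."""
    return (
        f"{task}\n\n"
        "Respond with JSON only, matching this shape exactly:\n"
        f"{json.dumps(shape_example, indent=2)}"
    )

def parse_reply(reply: str) -> dict:
    """Parse the model's reply; raises json.JSONDecodeError if the
    model wrapped the JSON in prose or produced malformed output."""
    return json.loads(reply)

prompt = request_json(
    "List the public functions in the module below.",
    {"functions": ["name1", "name2"]},
)
print(prompt)
```

The validation step matters as much as the prompt: failing loudly on malformed JSON is what makes structured output usable in a pipeline.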

12. "The Pragmatic Programmer (20th Anniversary Edition)" — Thomas & Hunt (2019)

Relevance: The chapter on "Programming by Coincidence" is especially relevant to vibe coding. Thomas and Hunt argue that you should understand why your code works, not just that it works. This principle applies to prompts: understand why a prompt produces good results so you can replicate the approach. The book's emphasis on DRY (Don't Repeat Yourself) also motivates the template library approach from Case Study 2.

13. "Reflexion: Language Agents with Verbal Reinforcement Learning" — Shinn et al. (2023)

URL: https://arxiv.org/abs/2303.11366
Relevance: This paper introduces the concept of AI agents that reflect on their own output to improve it — essentially automating the prompt-response feedback loop described in Section 8.10. Understanding this research helps you appreciate why iterative refinement works and previews the agentic coding approaches covered in Chapter 36.
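The feedback loop the paper automates can be sketched abstractly: critique the current draft, revise if the critique found a problem, and stop when it passes. This is a toy illustration of the loop's shape only, not the Reflexion algorithm itself; the critique and revise stand-ins below take the place of real model calls:

```python
def refine(draft: str, critique_fn, revise_fn, max_rounds: int = 3) -> str:
    """Iteratively critique and revise a draft until the critique
    passes (returns None) or the round budget is exhausted."""
    for _ in range(max_rounds):
        critique = critique_fn(draft)
        if critique is None:  # no issues found; accept the draft
            return draft
        draft = revise_fn(draft, critique)
    return draft

# Toy stand-ins for model calls, purely illustrative:
critique = lambda code: "missing docstring" if '"""' not in code else None
revise = lambda code, note: f'"""Fixed: {note}"""\n{code}'

result = refine("def add(a, b): return a + b", critique, revise)
print(result)
```

When you run the prompt-response loop of Section 8.10 by hand, you are playing both the critique and revise roles yourself; Reflexion's contribution is showing that a model can play them too.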

14. "Writing for Software Developers" — Philip Kiely (2020)

Relevance: Clear writing and clear prompting share the same foundations: precise language, logical structure, awareness of your audience, and elimination of ambiguity. Kiely's advice on technical writing transfers directly to prompt writing. Particularly relevant to the clarity pillar (Section 8.2) and the principle that structure beats length.