Chapter 12: Further Reading — Advanced Prompting Techniques

An annotated bibliography of resources for deepening your understanding of the prompting techniques covered in this chapter. Entries are organized by topic, with notes on their relevance to vibe coding practice.


Chain-of-Thought and Reasoning

1. Wei, J., et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." Advances in Neural Information Processing Systems (NeurIPS), 2022.

The foundational paper that formalized chain-of-thought prompting. Wei and colleagues demonstrated that providing a few exemplars whose answers include intermediate reasoning steps dramatically improved performance on arithmetic, commonsense, and symbolic reasoning tasks. While the paper focuses on general reasoning rather than coding specifically, the underlying principle — that eliciting explicit reasoning improves output quality on complex tasks — applies directly to algorithm design and multi-step coding problems. Essential reading for understanding why the technique works.
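A few-shot chain-of-thought prompt in this style can be sketched as plain string construction. The exemplar text below is illustrative, not taken from the paper; the point is that the exemplar's answer includes worked reasoning, and the prompt ends mid-pattern so the model continues by reasoning first.

```python
# Illustrative few-shot chain-of-thought exemplar: the answer is preceded
# by explicit intermediate reasoning, which the model is nudged to imitate.
COT_EXEMPLAR = (
    "Q: A function must merge two sorted lists into one sorted list. "
    "What is an efficient approach?\n"
    "Reasoning: Both inputs are already sorted, so we can walk them with "
    "two pointers, always taking the smaller head element. Each element "
    "is visited once, giving O(n + m) time.\n"
    "A: Use a two-pointer merge, as in merge sort's merge step.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar, then stop at 'Reasoning:' so the model
    produces its own reasoning before the final answer."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nReasoning:"

prompt = build_cot_prompt("How should I deduplicate a large list of URLs?")
```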

2. Kojima, T., et al. "Large Language Models are Zero-Shot Reasoners." Advances in Neural Information Processing Systems (NeurIPS), 2022.

This paper introduced "zero-shot chain-of-thought" — the discovery that simply appending "Let's think step by step" to a prompt (without any examples) significantly improves reasoning performance. For vibe coders, this is the minimal version of chain-of-thought prompting: even a brief instruction to reason before coding improves output quality, though more structured reasoning prompts produce even better results.
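The zero-shot variant is even simpler to apply: no exemplars, just the appended cue. A minimal sketch (the cue wording is the one reported in the paper; the helper name is illustrative):

```python
# Zero-shot chain-of-thought: append the reasoning trigger to an
# otherwise plain prompt, with no examples at all.
ZERO_SHOT_COT_CUE = "Let's think step by step."

def add_reasoning_cue(prompt: str) -> str:
    """Append the zero-shot reasoning cue after a blank line."""
    return f"{prompt.rstrip()}\n\n{ZERO_SHOT_COT_CUE}"
```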

3. Zhou, D., et al. "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models." International Conference on Learning Representations (ICLR), 2023.

Introduces a prompting strategy where complex problems are first decomposed into simpler sub-problems, each of which is solved in sequence. This is closely related to the decomposition prompting technique in Section 12.5 and provides empirical evidence for why breaking complex coding tasks into smaller pieces improves overall solution quality.
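Least-to-most prompting is a two-stage process: first ask the model to decompose the task, then solve each sub-problem in order, carrying earlier answers forward. A sketch of the two prompt builders (function names and wording are illustrative):

```python
# Stage 1: ask the model to decompose the task into ordered sub-problems.
def decomposition_prompt(task: str) -> str:
    return (
        f"Task: {task}\n"
        "Before solving, list the simpler sub-problems this task reduces "
        "to, ordered so that each builds on the previous ones."
    )

# Stage 2: solve one sub-problem, with earlier solutions as context.
def subproblem_prompt(task: str, subproblem: str, solved: list[str]) -> str:
    context = "\n".join(f"Already solved: {s}" for s in solved)
    return (
        f"Overall task: {task}\n{context}\n"
        f"Now solve this sub-problem: {subproblem}"
    )
```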


Few-Shot Learning and In-Context Learning

4. Brown, T., et al. "Language Models are Few-Shot Learners." Advances in Neural Information Processing Systems (NeurIPS), 2020.

The GPT-3 paper that popularized few-shot prompting. While the paper covers few-shot learning broadly, the sections on in-context learning are directly relevant to understanding why providing examples in your prompt works so effectively. The paper demonstrates that large language models can learn patterns from just a few examples without any parameter updates — the theoretical foundation for few-shot prompting in vibe coding.

5. Min, S., et al. "Rethinking the Role of Demonstrations in In-Context Learning." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.

A surprising and important paper that investigates which aspects of few-shot examples actually matter. The authors found that the format and label space of examples matter more than the correctness of individual example labels. For vibe coders, this means that the structure of your few-shot examples (how they are formatted, what fields they include) may be more important than having perfectly representative content in each example.
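One practical consequence is to render every demonstration through a single template, since the consistent format is the part of the signal their findings highlight. A minimal sketch (the Input/Output template is an assumption, not from the paper):

```python
# Render all (input, output) pairs in one uniform template, then the query
# in the same template with the output left blank for the model to fill.
def format_examples(examples: list[tuple[str, str]], query: str) -> str:
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)
```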


Prompt Engineering Practices

6. White, J., et al. "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT." arXiv preprint arXiv:2302.11382, 2023.

A systematic catalog of prompt patterns organized by category: output customization, error identification, prompt improvement, interaction, and context control. Several patterns in this catalog directly correspond to techniques in this chapter, including the "persona pattern" (role-based prompting), the "template pattern" (prompt libraries), and the "recipe pattern" (decomposition). Useful as a reference for expanding your prompting vocabulary beyond the ten techniques covered here.
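The persona pattern from this catalog can be sketched as a simple prompt prefix; the exact wording below is illustrative, not quoted from the paper:

```python
# Persona pattern: prefix the request with a role that constrains the
# model's tone, expertise, and assumptions.
def persona_prompt(role: str, request: str) -> str:
    return (
        f"You are {role}. Answer from that perspective, flagging anything "
        f"outside your expertise.\n\n{request}"
    )
```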

7. Zamfirescu-Pereira, J.D., et al. "Why Johnny Can't Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI), 2023.

A research study on how non-experts approach prompt writing and where they struggle. Key findings include: people tend to under-specify their needs, they do not iterate enough on prompts, and they struggle to provide the right level of context. This paper provides evidence-based motivation for the meta-prompting and prompt library techniques in Sections 12.4 and 12.10 — systematic approaches that address exactly the failures non-experts exhibit.


Software Engineering with AI

8. Vaithilingam, P., Zhang, T., and Glassman, E. "Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models." CHI Conference on Human Factors in Computing Systems (CHI), 2022.

An empirical study of how developers actually use AI code generation tools, including their prompting strategies and the types of errors they encounter. The paper identifies common failure modes — including the lack of explicit constraint specification and the tendency to accept initial output without refinement — that the techniques in this chapter directly address. Particularly relevant to understanding why constraint satisfaction prompting (Section 12.6) and iterative approaches improve real-world outcomes.
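Addressing the missing-constraints failure mode amounts to listing constraints explicitly and asking the model to confirm each one. A sketch of such a constraint-satisfaction prompt (wording and structure are assumptions, not from the study):

```python
# Constraint-satisfaction prompt: enumerate hard constraints up front so
# none can be silently dropped, and ask for an explicit check of each.
def constrained_prompt(task: str, constraints: list[str]) -> str:
    lines = [f"Task: {task}", "Hard constraints (all must hold):"]
    lines += [f"  {i}. {c}" for i, c in enumerate(constraints, 1)]
    lines.append(
        "Before answering, restate each constraint and confirm your "
        "solution satisfies it."
    )
    return "\n".join(lines)
```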

9. Barke, S., James, M.B., and Polikarpova, N. "Grounded Copilot: How Programmers Interact with Code-Generating Models." Proceedings of the ACM on Programming Languages (OOPSLA), 2023.

Investigates how programmers interact with code-generating AI in practice, identifying two primary modes: "acceleration" (using AI to write code faster for known tasks) and "exploration" (using AI to learn and discover approaches for unfamiliar tasks). The Socratic prompting technique (Section 12.8) directly supports the exploration mode, while techniques like few-shot and constraint satisfaction support the acceleration mode. Reading this helps you understand which technique to reach for based on your current mode of work.


Prompt Optimization and Meta-Prompting

10. Zhou, Y., et al. "Large Language Models Are Human-Level Prompt Engineers." International Conference on Learning Representations (ICLR), 2023.

Introduces Automatic Prompt Engineer (APE), a method for using language models to generate and select effective prompts. This is the research foundation for the meta-prompting technique in Section 12.4. The paper demonstrates that AI-generated prompts often outperform human-written prompts, supporting the case for using AI to improve your prompting practice rather than relying solely on manual prompt crafting.
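The generation half of an APE-style workflow can be sketched as a meta-prompt that asks the model to propose candidate instructions; scoring and selection among the candidates would follow as separate steps. The wording below is illustrative, not APE's actual templates:

```python
# Meta-prompt: ask the model to generate candidate instruction prompts
# for a task, to be scored and selected in a later step (not shown).
def ape_generation_prompt(task_description: str, n_candidates: int = 5) -> str:
    return (
        "I need an instruction prompt for the following task:\n"
        f"{task_description}\n\n"
        f"Write {n_candidates} candidate instructions, numbered, each a "
        "single paragraph, varying in phrasing and level of detail."
    )
```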

11. Pryzant, R., et al. "Automatic Prompt Optimization with 'Gradient Descent' and Beam Search." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.

Proposes a systematic method for iteratively improving prompts using feedback from the model's outputs. While the paper describes an automated system, the underlying principle — evaluate prompt output, identify weaknesses, and refine the prompt accordingly — is exactly the iterative meta-prompting process described in Section 12.4. Useful for understanding the theory behind why iterative prompt refinement converges on better results.
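The evaluate-and-refine loop can be sketched in schematic form. Here `ask_model` and `score` are stand-ins for a real model call and a real evaluation metric, so the sketch runs offline; the control flow, not the stubs, is what the principle describes:

```python
# Iterative prompt refinement: ask the model to rewrite the current best
# prompt, score the result, and keep whichever prompt scores higher.
def refine_prompt(prompt, ask_model, score, rounds=3):
    """Return the best-scoring prompt found over a fixed number of rounds."""
    best, best_score = prompt, score(ask_model(prompt))
    for _ in range(rounds):
        candidate = ask_model(
            f"Here is a prompt:\n{best}\n"
            "Rewrite it to fix its weakest point, keeping its intent."
        )
        s = score(ask_model(candidate))
        if s > best_score:
            best, best_score = candidate, s
    return best
```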


Practical Guides and Applied Resources

12. Anthropic. "Prompt Engineering Guide." docs.anthropic.com, 2024-2025.

Anthropic's official documentation on effective prompting for Claude models. Covers techniques including role prompting, chain-of-thought, and few-shot examples with specific guidance for Claude's behavior and capabilities. Regularly updated to reflect the latest model capabilities. Essential reference for vibe coders using Claude Code or Claude-based tools.

13. OpenAI. "Prompt Engineering Guide." platform.openai.com, 2024-2025.

OpenAI's official guide to prompt engineering for GPT models. Includes practical advice on structuring prompts, using system messages, and getting consistent outputs. While model-specific in some details, the general principles align closely with the techniques in this chapter. Useful as a complementary reference, especially for teams using multiple AI tools.

14. Saravia, E. "Prompt Engineering Guide." promptingguide.ai, 2023-2025.

A comprehensive, community-maintained resource that catalogs prompting techniques with examples. Covers many of the techniques in this chapter (chain-of-thought, few-shot, role-based) plus additional techniques not covered here. The site includes a useful taxonomy of techniques and practical examples across multiple domains. Good as an ongoing reference after completing this chapter.

15. Microsoft. "Prompt Engineering Techniques." learn.microsoft.com, 2024-2025.

Microsoft's documentation on prompt engineering for Azure OpenAI Service. Includes practical guidance on few-shot learning, chain-of-thought, and system message design. Particularly useful for teams working in Microsoft's ecosystem with GitHub Copilot and Azure-hosted models. Includes enterprise-oriented advice on prompt management and governance that complements Section 12.10 on prompt libraries.


For readers who want to go deeper, here is a suggested order:

  1. Start with Wei et al. (2022) (#1) for the theoretical foundation of chain-of-thought prompting.
  2. Read White et al. (2023) (#6) for a broader catalog of prompt patterns.
  3. Review your chosen AI tool's official guide (#12 or #13) for model-specific best practices.
  4. Read Zhou et al. (2023) (#10) for the research behind meta-prompting.
  5. Explore Zamfirescu-Pereira et al. (2023) (#7) to understand common prompting failures and how to avoid them.
  6. Dive into the remaining papers based on which techniques you use most in your daily practice.