Chapter 7 Further Reading: Prompting Fundamentals
The following resources extend the principles covered in Chapter 7 across academic research, practitioner guides, and foundational writing clarity literature. Resources are organized by category and annotated for relevance.
Foundational Prompt Engineering
1. "Prompt Engineering Guide" — DAIR.AI (promptingguide.ai) A comprehensive open-source guide maintained by the AI research community. Covers zero-shot, few-shot, chain-of-thought, and advanced techniques with examples across multiple models. Regularly updated. Best used as a reference after reading this chapter — the concepts will make more sense with the foundational framework in place. Recommended for: Anyone who wants a technical deep dive into prompt techniques beyond the fundamentals covered here.
2. "The Art of Asking" — OpenAI Cookbook (platform.openai.com/docs/guides/prompt-engineering) OpenAI's official prompt engineering guide, emphasizing their models' specific behaviors. Includes worked examples for a range of task types: classification, summarization, code generation, and structured data extraction. Practical and well-annotated. Recommended for: Regular ChatGPT users who want platform-specific guidance grounded in official documentation.
3. "Anthropic's Claude Prompting Guide" — Anthropic Documentation (docs.anthropic.com) Anthropic's official guide to prompting Claude effectively. Covers XML tagging for complex prompts, system prompt design, long-context best practices, and model-specific behaviors. Includes a prompt library with annotated examples. Recommended for: Regular Claude users, or anyone who wants to understand the nuances of how well-engineered system prompts differ from conversational prompts.
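To make the XML-tagging approach mentioned above concrete, a prompt can wrap each distinct input in a labeled tag so the model can separate instructions from the data they apply to. This is a minimal sketch; the tag names and helper function here are illustrative, not prescribed by the guide.

```python
def build_tagged_prompt(instructions: str, document: str, question: str) -> str:
    """Wrap each input in an XML-style tag so the model can
    distinguish the instructions from the material they operate on."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n\n"
        f"<document>\n{document}\n</document>\n\n"
        f"<question>\n{question}\n</question>"
    )

prompt = build_tagged_prompt(
    "Answer using only the document below.",
    "Q3 revenue rose 12% year over year.",
    "What happened to revenue in Q3?",
)
```

The benefit is not the specific tags but the unambiguous boundaries: the model no longer has to guess where the instructions end and the source material begins.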
Research and Academic Foundation
4. "Language Models are Few-Shot Learners" — Brown et al. (2020), arXiv The GPT-3 paper that introduced and named few-shot prompting. Demonstrates empirically how in-context examples (providing examples within the prompt itself) improve model performance across dozens of tasks. This is the foundational academic work behind Section 7.8 of this chapter. Recommended for: Readers interested in the research basis for why examples in prompts work so reliably.
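The in-context learning idea from the paper can be sketched as a prompt that prepends a few labeled examples before the unlabeled new input. The task, format, and examples below are hypothetical, chosen only to show the pattern.

```python
def few_shot_prompt(examples, new_input):
    """Build a few-shot prompt: each example becomes a labeled
    input/label pair, followed by the unlabeled new input."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("Loved every minute of it.", "positive"),
    ("A complete waste of time.", "negative"),
]
prompt = few_shot_prompt(examples, "Surprisingly good.")
```

The trailing "Sentiment:" with no label is what invites the model to continue the established pattern rather than answer in free form.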
5. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" — Wei et al. (2022), Google Brain Demonstrates that prompting AI to "reason step by step" before providing an answer dramatically improves performance on complex reasoning tasks. This underlies the tip in Section 7.10 about asking the AI to reason through the problem before writing. Recommended for: Anyone who uses AI for analysis, problem-solving, or complex synthesis tasks.
6. "Large Language Models are Zero-Shot Reasoners" — Kojima et al. (2022), arXiv The paper that popularized the "Let's think step by step" prompt addition. Shows that this simple addition consistently improves output quality on reasoning-heavy tasks without requiring few-shot examples. Recommended for: Practitioners who want the research basis for "think step by step" type instructions.
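The intervention Kojima et al. study is literally a one-line suffix. A hedged sketch of the prompt construction (the example task is made up, and the model call itself is omitted):

```python
def with_step_by_step(task: str) -> str:
    """Append the zero-shot chain-of-thought trigger phrase
    to an otherwise unchanged task prompt."""
    return f"{task}\n\nLet's think step by step."

prompt = with_step_by_step(
    "A store sells pens in packs of 12. How many packs are "
    "needed for 150 pens?"
)
```

Because nothing else about the prompt changes, the technique costs almost nothing to try, which is part of why the paper's finding was so widely adopted.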
7. "Calibrate Before Use: Improving Few-Shot Performance of Language Models" — Zhao et al. (2021), arXiv Examines how the order and selection of examples in few-shot prompts affect output quality. Relevant to Chapter 10 (few-shot prompting) but provides useful grounding for why example quality matters, not just quantity. Recommended for: Readers who want to understand the mechanics behind why some examples work better than others.
Writing Clarity and Communication Design
8. "On Writing Well" — William Zinsser The classic guide to clear nonfiction writing. Zinsser's principles — brevity, clarity, active verbs, eliminating clutter — apply directly to prompt writing. The chapter on "Simplicity" alone is worth the book. Everything Zinsser says about writing for readers translates remarkably well to writing for AI. Recommended for: Anyone who wants their prompt-writing to improve alongside their general writing ability.
9. "The Elements of Style" — Strunk and White Still the most concise reference on clear English prose. The preference for active voice, the elimination of unnecessary words, and the commitment to concrete rather than abstract language are all principles that improve prompts as much as they improve essays. Recommended for: Practitioners who find their prompts tend to be wordy, vague, or passive.
10. "Writing to Be Understood: Clarity in Professional Communication" — Plain language research (various authors) The plain language movement in government and professional communication offers substantial evidence that clarity is a skill, not a talent — and that specific techniques (short sentences, active voice, concrete nouns, defined terms) measurably improve comprehension. The same techniques improve AI outputs. Recommended for: Practitioners in regulated industries (healthcare, legal, finance) where clarity has compliance implications.
Practical AI Productivity
11. "Superhuman AI Prompting Newsletter" — Various issues (lennysnewsletter.com and similar) Product and marketing communities have produced some of the most practically useful prompt libraries for specific professional tasks. Lenny's Newsletter and similar practitioner-focused publications regularly publish prompt collections, walkthroughs, and case studies. Recommended for: Practitioners who learn best from specific, real-world examples in their domain.
12. "The Prompt Report" — Schulhoff et al. (2024), arXiv A comprehensive survey of 58 prompting techniques with meta-analysis of their effectiveness. One of the most thorough academic treatments of the field. Dense reading, but invaluable as a reference if you want to systematically explore the landscape of advanced prompting methods. Recommended for: Readers who want a systematic inventory of prompting techniques beyond what any single chapter or guide covers.
13. "AI for Humans: How to Leverage Artificial Intelligence Without Losing Your Human Edge" — Various practitioner accounts A growing body of practitioner-written guides focusing on how to think about AI collaboration from a workflow and cognitive perspective. The best of these balance technical prompting guidance with the higher-order question of where human judgment remains essential. Recommended for: Managers and team leads who are thinking about AI adoption at the organizational level, not just individual practice.
Specificity and Precision in Communication
14. "Made to Stick: Why Some Ideas Survive and Others Die" — Chip Heath and Dan Heath The concept of the "curse of knowledge" — the difficulty of communicating clearly when you already know something deeply — directly applies to the Assumption Gap failure mode discussed in Section 7.9. Heath and Heath's framework for making ideas concrete and specific is immediately applicable to prompt design. Recommended for: Readers who struggle with the Assumption Gap failure mode — who know their domain so well they forget what needs to be explained.
15. "Thinking in Systems: A Primer" — Donella Meadows While not directly about prompting, Meadows' approach to describing complex systems with precision and economy is highly applicable to the challenge of context-loading in prompts. The ability to describe a complex situation in a way that captures its essential dynamics without requiring the reader to understand the whole — that is a core prompting skill. Recommended for: Practitioners who work with complex systems (technical, organizational, financial) and need to describe them to AI tools efficiently.