Chapter 11 Key Takeaways: Prompt Engineering Patterns
- Ad hoc prompting is wasteful. Every time you reconstruct the optimal prompt for a recurring task from scratch, you are re-solving a problem you have already solved. Prompt patterns capture the solution once and apply it repeatedly.
- A prompt pattern is a reusable template with variable placeholders. The structural elements — role, instruction format, output specification, quality criteria — are fixed. The task-specific elements — the specific document, the specific audience, the specific goal — become [BRACKET VARIABLES].
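This idea can be sketched in a few lines of Python. The pattern text and the `fill` helper below are illustrative, not prescribed by the chapter — the point is that the structure is a constant string and the bracket variables are the only parts that change per task:

```python
import re

# A prompt pattern as a template string: structure fixed, [BRACKET VARIABLES] filled per task.
SUMMARIZER_PATTERN = """\
You are [ROLE - the expert perspective the summary should take].
Summarize the document below for [AUDIENCE - who will read this and why].
Output format: [FORMAT - e.g. three bullet points or a one-paragraph abstract].
Quality criteria: [CRITERIA - what must be preserved or emphasized].

Document:
[DOCUMENT]
"""

def fill(pattern: str, **variables: str) -> str:
    """Replace each [NAME] or [NAME - description] placeholder with its value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing variable: {name}")
        return variables[name]
    return re.sub(r"\[([A-Z_]+)(?: - [^\]]*)?\]", substitute, pattern)

prompt = fill(
    SUMMARIZER_PATTERN,
    ROLE="a senior financial analyst",
    AUDIENCE="executives deciding next quarter's budget",
    FORMAT="three bullet points",
    CRITERIA="keep all figures exact",
    DOCUMENT="(the quarterly report text goes here)",
)
```

Raising on a missing variable is deliberate: a placeholder that silently survives into the prompt is exactly the kind of defect the bracket convention is meant to make visible.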
- Good patterns have three properties: generalizable, parameterized, and documented. Generalizable means it works for multiple instances of the same task type. Parameterized means it uses explicit [bracket variables] so the variables are clearly distinguishable from the constants. Documented means it has a name, use case, and example that make it findable and usable after a gap.
- The test for a good pattern: can you use it next week for a different specific instance? If not, it's a prompt, not a pattern. Generalize until it works for any instance of the task type.
- The 15 essential patterns cover most professional recurring tasks. Summarizer, Transformer, Analyzer, Generator, Critic/Reviewer, Explainer, Comparator, Planner, Extractor, Classifier, Rewriter, Brainstormer, Responder, Checker, and Scaffolder collectively address the vast majority of professional AI use cases.
- The Extractor's most critical rule: only include information explicitly stated in the source. Without this constraint, models fill in unspecified fields by inference, producing extracted data that was never in the source document. Always specify what to write when a field is absent ("Not specified").
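A minimal sketch of an Extractor template with this guardrail written directly into the rules. The field names and exact wording are illustrative, not from the chapter:

```python
# Extractor template with the anti-inference guardrail baked into the rules.
FIELDS = ["vendor_name", "contract_start_date", "renewal_terms", "termination_clause"]

EXTRACTOR_PATTERN = (
    "Extract the following fields from the document below:\n"
    + "".join(f"- {field}\n" for field in FIELDS)
    + "\nRules:\n"
      "- Only include information explicitly stated in the source.\n"
      "- Do not infer, estimate, or fill in missing values.\n"
      '- If a field is absent, write exactly "Not specified".\n'
      "\nDocument:\n[DOCUMENT]\n"
)

prompt = EXTRACTOR_PATTERN.replace("[DOCUMENT]", "(the contract text goes here)")
```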
- The Critic/Reviewer pattern's most important variable is the role. The role determines which evaluative framework and quality standards are applied. "A security researcher" and "a potential customer" will critique the same code or content very differently. Choose the role that represents the most important perspective you need to address.
- The Generator and Brainstormer patterns need explicit diversity requirements. Without them, models cluster around the most common solution type for the task, producing variations rather than genuinely different options. Specifying constraints like "at least 2 unconventional approaches" or "range from quick wins to 6-month initiatives" forces the model to cover the full solution space.
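As a sketch, diversity requirements can live in the template itself so they are applied every time rather than remembered per use (wording illustrative):

```python
# Brainstormer template with explicit diversity constraints built in.
BRAINSTORMER_PATTERN = """\
Generate [N] distinct approaches to the problem below.

Diversity requirements:
- At least 2 must be unconventional approaches.
- Cover the full range from quick wins to 6-month initiatives.
- No two approaches may rely on the same core mechanism.

Problem: [PROBLEM]
"""

prompt = (BRAINSTORMER_PATTERN
          .replace("[N]", "8")
          .replace("[PROBLEM]", "reduce customer churn in the first 30 days"))
```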
- The Rewriter pattern requires both change and preserve instructions. Specifying what you want changed accomplishes little unless you also specify what must be preserved. Improving clarity can sacrifice precision; shortening can eliminate critical caveats. The preserve section prevents the model from solving the stated problem while creating a new one.
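A minimal sketch of a Rewriter template with the paired sections (section names and wording are illustrative):

```python
# Rewriter template: every change instruction is paired with a preserve instruction.
REWRITER_PATTERN = """\
Rewrite the text below.

Change: [CHANGE_GOAL]
Preserve (must survive the rewrite unchanged): [PRESERVE_LIST]

Text:
[TEXT]
"""

prompt = (REWRITER_PATTERN
          .replace("[CHANGE_GOAL]", "cut the length roughly in half")
          .replace("[PRESERVE_LIST]", "every caveat and all exact figures")
          .replace("[TEXT]", "(the draft goes here)"))
```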
- Bracket variable names should be descriptive, not minimal. [AUDIENCE — role, knowledge level, and what they'll do with this output] communicates far more than [AUDIENCE]. The variable name is documentation for the person filling in the template — including yourself six months from now.
- Pattern composition is more powerful than any individual pattern. Chaining Scaffolder → Generator → Critic, or Extractor → Classifier → Summarizer, produces results that no single pattern achieves alone. The output of one pattern becomes the input to the next.
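Composition can be sketched as a simple loop in which each pattern's output is substituted into the next pattern's input slot. The `call_model` stub and the `[INPUT]` convention below are assumptions for illustration — a real version would call whatever AI platform you actually use:

```python
# Pattern composition: each pattern's output becomes the next pattern's input.
def call_model(prompt: str) -> str:
    # Stub so the sketch runs; replace with a real API call.
    return f"<model output for: {prompt.splitlines()[0]}>"

def run_chain(patterns: list[str], initial_input: str) -> str:
    text = initial_input
    for pattern in patterns:
        text = call_model(pattern.replace("[INPUT]", text))
    return text

# Extractor -> Classifier -> Summarizer, as in the chapter's example chain.
chain = [
    "Extract every customer complaint from the text below:\n[INPUT]",
    "Classify each complaint below by product area:\n[INPUT]",
    "Summarize the classified complaints below for the VP of Product:\n[INPUT]",
]
result = run_chain(chain, "(raw support tickets go here)")
```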
- Building a personal pattern library is one of the highest-leverage AI investments you can make. Alex's library reduced her weekly recurring task time by 85%. Elena's library enables her to produce consistent, high-quality consulting deliverables faster than competitors. The initial investment pays back within days and compounds indefinitely.
- Start with the five most time-consuming recurring tasks, not the most interesting ones. The interesting strategic tasks are usually unique enough that patterns help less. The repetitive, structural tasks — weekly reports, recurring reviews, standard document types — are where patterns produce the most dramatic time savings.
- Store patterns in a location you will actually access during work. The best storage format is the one you will use. A pattern in a document you never open is a waste of the time spent building it. Choose your tool based on access speed, searchability, and ability to paste from.
- Document patterns immediately after building them, not later. The detail you need for good documentation — what the key variables do, what failure modes to watch for, why the pattern works — is freshest right after you build and test it. Documentation deferred is documentation lost.
- Pattern failure modes are as important as pattern strengths. Knowing that the Analyzer pattern produces generic output when criteria are vague (generic fill-in), that context override can affect Extractor and Rewriter patterns with long content, and that role drift can affect Responder patterns in long exchanges tells you where to add guardrails to your templates.
- Patterns improve over time with intentional iteration. Elena's Interview Analyzer improved from v1 to v3 because she reviewed it after each engagement. Post-engagement pattern review — 30 minutes to note what worked, what failed, and what new patterns to build — compounds the library's value with every use.
- Team pattern libraries produce consistency benefits that individual libraries don't. When a team shares patterns, the same creative brief goes through the same validation, the same code review covers the same criteria, the same competitive analysis has the same structure. This consistency reduces quality variance — which matters as much as average quality in professional contexts.
- Patterns encode expertise in transferable form. Elena's Slide Checker embeds her quality standards for consulting deliverables in a form a junior consultant can use. The pattern is a mechanism for expertise transfer — much like legal document templates or engineering checklists, but for AI-assisted work.
- The pattern discovery process is: track → reconstruct → generalize → test → document. Track every AI task for a week to identify recurring types. Reconstruct your best prompt for each type. Replace specific elements with [BRACKET VARIABLES]. Test on a new instance. Document with use case and example.
- Patterns are most valuable for tasks with high repeatability AND high structure. The ideal pattern candidate is a task you do weekly (high repeatability) that has a consistent format requirement (high structure). Tasks that are frequent but highly variable in structure, or infrequent but highly structured, are lower-value pattern candidates.
- Include an example in every pattern documentation. An example showing a filled-in template and the resulting output communicates what the pattern produces more clearly than any description. Future users — including yourself — should be able to look at the example and understand what to expect.
- Platform-specific pattern variants are sometimes necessary. Patterns built for one AI platform may need adjustment for another. If you use multiple tools, note which platforms a pattern works well on, and maintain variants if needed. Don't assume a pattern that works perfectly on ChatGPT will transfer identically to Claude.
- The value of a pattern library compounds with library size, up to a point. A library of 30 well-organized, documented patterns is dramatically more valuable than 30 independent prompts — because you can find the right one quickly, combine them, and improve them systematically. Beyond about 50 patterns, maintenance overhead starts to grow; prune obsolete patterns regularly.
- Advanced techniques from Chapter 10 should be embedded in patterns, not applied ad hoc. Alex's Brand Copy Writer pattern includes her few-shot reference library automatically. Raj's Debug Tracer pattern includes the CoT debugging structure. When you figure out that CoT or few-shot examples improve output for a specific pattern, add them to the template so they're applied every time — not just when you remember.