
Chapter 11: Prompt Engineering Patterns for Recurring Tasks

From Ad Hoc to Systematic

Most people who use AI tools start the same way: they encounter a task, they think about what they want, they type something into the chat box, and they evaluate what comes back. If it's good, they use it. If it's not, they try again. The process is intuitive, responsive, and largely ad hoc.

This works. It is, in fact, how most professionals use AI tools every day. But it has a significant inefficiency baked into it: the best thinking about how to prompt for a given task type — the specific role, context, instruction structure, and format that produces great results — is done once and then lost. The next time you encounter the same type of task, you start over.

Multiply this across 20 task types, each recurring weekly, over a year of regular AI use, and the cumulative waste is substantial. You've solved the prompting problem for "summarize a long report for an executive audience" a hundred times without ever capturing the solution.

Prompt engineering patterns are the solution to this waste. A pattern is a reusable prompt template: a general structure with variable placeholders that you can fill in for any specific instance of a task type. Once you've done the work of finding what works for a given task, you capture it as a pattern — and from then on, you apply it rather than reinventing it.

This chapter catalogs 15 essential patterns that cover the most common professional recurring tasks. Each pattern includes a complete template with [bracket] variables you can customize, a worked example showing the template in action, and notes on when to use it and how to adapt it. The chapter also covers how to build and maintain your own pattern library — one of the highest-leverage investments any regular AI user can make.


1. Why Prompt Patterns Matter

The Compound Effect of Reuse

A well-constructed prompt template for a recurring task produces two kinds of value:

Efficiency: You stop spending 5-10 minutes constructing the right prompt from scratch each time. You open your template, fill in the variables, and launch. For someone who runs 10 AI tasks per day, eliminating even 3 minutes of prompt construction per task saves 30 minutes daily — over 100 hours per year.
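That estimate is easy to sanity-check. A quick calculation, assuming a typical 250-day working year:

```python
# Time saved by reusing a prompt template instead of writing from scratch.
minutes_saved_per_task = 3
tasks_per_day = 10
working_days_per_year = 250  # assumption: a typical work year

daily_savings_min = minutes_saved_per_task * tasks_per_day
yearly_savings_hr = daily_savings_min * working_days_per_year / 60

print(daily_savings_min)  # 30 minutes per day
print(yearly_savings_hr)  # 125.0 hours, i.e. "over 100 hours per year"
```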

Consistency: Because you're using the same structure each time, the outputs are more consistent. When you need 20 customer emails, 15 competitive briefs, or 30 code review comments to all follow the same format and quality standard, a shared template enforces that standard automatically.

The Shift from User to Designer

Using prompt patterns changes your relationship with AI tools. Instead of being a user who interacts ad hoc, you become a designer who has engineered the interaction. This shift has downstream effects on how you think about AI at work: you start seeing tasks as belonging to task types, and task types as matching patterns, rather than treating every task as a unique challenge.


2. What Makes a Good Reusable Pattern

Not all prompts make good patterns. A good reusable pattern has three properties:

Generalizable

The pattern works for many specific instances of the task type, not just one. "Summarize this 10-page Q3 earnings report for the CFO" is a prompt, not a pattern. "Summarize [document] for [audience] in [format] highlighting [focus areas]" is a pattern — it works for any document, any audience, any format.

The test: if you can use the same structure next week for a different specific instance, it's a pattern.

Parameterized

Good patterns use [bracket variables] for the elements that change between instances, and hard-code the elements that stay constant. The constant elements — the role, the instruction verb, the output format specification, the quality criteria — represent your accumulated knowledge of what works. The variables are everything that's specific to the current task.

Documented

A pattern you can't find or can't interpret is useless. Good documentation means: the pattern name (so you can find it), the use case (when to apply it), the template itself (with clear variable labels), an example (so you can see what "filled in" looks like), and notes on variations or cautions.


3. The 15 Essential Patterns

Pattern 1: The Summarizer

Use case: Condensing long content for a specific audience and purpose.

Template:

Summarize [DOCUMENT/CONTENT] for [AUDIENCE] in [FORMAT].

The summary should:
- Be [LENGTH — e.g., "no more than 200 words" or "3 bullet points"]
- Emphasize [FOCUS AREA — e.g., "financial implications" or "action items"]
- [OPTIONAL: Use [TONE — e.g., "plain language" or "executive tone"]]
- [OPTIONAL: Omit [EXCLUDED ELEMENTS — e.g., "technical details" or "historical background"]]

Worked Example:

Summarize the attached Q3 board report for our VP of Sales in a bulleted list.

The summary should:
- Be no more than 5 bullets, each under 25 words
- Emphasize revenue performance, pipeline health, and deal velocity
- Use plain, direct language — no finance jargon
- Omit operational metrics unrelated to the sales function

Notes: The Summarizer is the most frequently used pattern in most professionals' libraries. Invest time in calibrating the length, tone, and focus parameters for your specific audiences. A VP-of-Sales version and a Board-of-Directors version should have different parameter values.


Pattern 2: The Transformer

Use case: Converting content from one format or medium to another.

Template:

Take the following [INPUT DESCRIPTION] and convert it into [OUTPUT FORMAT].

[OPTIONAL: Keep [PRESERVED ELEMENTS — e.g., "all factual claims" or "the original structure"]]
[OPTIONAL: Adjust [CHANGED ELEMENTS — e.g., "the tone to be more formal" or "the length to be 50% shorter"]]
[OPTIONAL: The output will be used for [PURPOSE/CONTEXT]]

[INPUT CONTENT]

Worked Example:

Take the following meeting transcript excerpt and convert it into a structured
action item list in this format:
- Action: [specific task]
- Owner: [name if mentioned]
- Deadline: [date or "Not specified"]
- Priority: [High/Medium/Low based on discussion emphasis]

Keep all commitments made, even implicit ones. Do not add actions not in the transcript.

[TRANSCRIPT]

Notes: The Transformer is especially powerful for content repurposing (article → social post → email → slide bullets), format conversion (prose → table, table → prose, notes → document), and media conversion (transcript → action items, recording → summary).


Pattern 3: The Analyzer

Use case: Structured analysis of content against specified criteria.

Template:

Analyze [CONTENT/SUBJECT] for [ANALYSIS CRITERIA].

For each criterion, provide:
- [ELEMENT 1 — e.g., "Current state"]
- [ELEMENT 2 — e.g., "Key finding"]
- [ELEMENT 3 — e.g., "Recommendation"]

Format: [TABLE / NUMBERED LIST / SECTION HEADERS]

[OPTIONAL: Prioritize [PRIORITY FOCUS] in your analysis]
[OPTIONAL: Context: [RELEVANT BACKGROUND]]

Worked Example:

Analyze this product landing page for conversion effectiveness.

For each of the following criteria, provide: Current State (1 sentence),
Key Finding (1 sentence), Recommendation (1 sentence):

1. Headline clarity and value proposition
2. Social proof and credibility elements
3. Call-to-action visibility and specificity
4. Page load friction indicators
5. Mobile layout considerations

Format as a table with columns: Criterion | Current State | Key Finding | Recommendation

Context: This is a B2B SaaS product targeting operations managers. Average deal value: $15K/year.

[PAGE CONTENT]

Notes: The Analyzer is most powerful when you define exactly what you want for each criterion — not just "evaluate" but specifically "current state / finding / recommendation" or "score 1-5 / evidence / next step." Vague criteria produce vague analysis.


Pattern 4: The Generator

Use case: Creating multiple options for a given task.

Template:

Generate [NUMBER] [OUTPUT TYPE — e.g., "options" or "variations" or "ideas"] for [TASK DESCRIPTION].

Constraints:
- [CONSTRAINT 1 — e.g., "Each option must be under 10 words"]
- [CONSTRAINT 2 — e.g., "No overlap in approach between options"]
- [CONSTRAINT 3 — e.g., "All must be appropriate for a B2B audience"]

For each option, include: [OPTION ELEMENT 1], [OPTION ELEMENT 2 if needed]

Context: [RELEVANT BACKGROUND]

Worked Example:

Generate 8 email subject line options for our new product launch announcement.

Constraints:
- Each subject line must be under 50 characters
- No two options should use the same hook type (curiosity, urgency, benefit, social proof, etc.)
- All must be appropriate for a professional B2B audience

For each option, include: the subject line and a one-word label for the hook type used.

Context: We are announcing an AI-powered expense reporting tool that saves finance teams
an average of 4 hours per week. Target audience: CFOs and VP Finance at mid-market companies.

Notes: Always request more options than you think you need — having 8 choices is better than having 3. Specifying that options should be diverse in approach prevents the model from producing minor variations of the same idea. The label for approach type (hook type, method, angle) helps you select across a real range.


Pattern 5: The Critic/Reviewer

Use case: Structured critique of content against explicit standards.

Template:

Review [CONTENT TYPE] as [ROLE — e.g., "a senior editor" or "a potential customer"].

Identify [ISSUE TYPE — e.g., "weaknesses" or "risks" or "gaps"] in:
- [AREA 1]
- [AREA 2]
- [AREA 3]

For each issue identified:
- Quote the specific text or element with the issue
- Explain why it's a problem
- Suggest a specific improvement

[OPTIONAL: Also note what is working well in a separate section]
[OPTIONAL: Prioritize issues by impact, with the highest-impact issue first]

Worked Example:

Review this job posting as a talented senior software engineer who is evaluating
whether to apply.

Identify weaknesses or red flags in:
- Compensation and benefits clarity
- Role description specificity and interest
- Company and team context
- Growth and advancement signals
- Technical environment description

For each issue: quote the relevant text, explain how it reads to a strong candidate,
and suggest a specific improvement.

Prioritize issues by their likely impact on application rates, highest impact first.

[JOB POSTING]

Notes: The choice of role for the Critic is critical. "A potential customer," "an experienced editor," "a security researcher," and "a skeptical board member" will critique the same content very differently. Choose the role that represents the most important perspective you need to address.


Pattern 6: The Explainer

Use case: Converting complex content into accessible explanations for a specific audience.

Template:

Explain [CONCEPT/TOPIC] to [AUDIENCE DESCRIPTION].

Use [APPROACH — e.g., "an everyday analogy" or "a step-by-step walkthrough" or "concrete examples"].

[OPTIONAL: Assume the audience knows [PRIOR KNOWLEDGE].]
[OPTIONAL: Avoid [EXCLUDED ELEMENTS — e.g., "technical jargon" or "mathematical notation"]]
[OPTIONAL: Length: [TARGET LENGTH]]
[OPTIONAL: Start with [STARTING POINT — e.g., "why this matters to them" or "the core idea in one sentence"]]

Worked Example:

Explain machine learning model fine-tuning to a marketing director who has basic
familiarity with AI tools but no technical background.

Use a concrete business analogy (not a "brain" or "learning" metaphor — those
are overused).

Assume she understands that AI models are trained on data, but does not know
what "weights," "parameters," or "gradients" mean.

Start with why fine-tuning matters for her use case (getting more brand-consistent
output from AI tools), then explain how it works conceptually.

Length: 150-200 words.

Notes: The most powerful variables in the Explainer are audience knowledge level and the prohibition on certain explanation approaches. "Don't use the brain analogy" is more specific and useful than "use accessible language." The more you know about what hasn't worked for this audience before, the better you can specify the explanation approach.


Pattern 7: The Comparator

Use case: Structured comparison of two or more options across defined criteria.

Template:

Compare [OPTION A] and [OPTION B] [OPTIONAL: and [OPTION C]] on the following criteria:
- [CRITERION 1]
- [CRITERION 2]
- [CRITERION 3]
- [CRITERION 4]

Format: [TABLE with options as columns and criteria as rows / SECTION PER CRITERION / PROS-CONS LIST]

[OPTIONAL: For each criterion, indicate which option performs better and why]
[OPTIONAL: Conclude with a recommendation given [DECISION CONTEXT]]
[OPTIONAL: Note any significant uncertainties in the comparison]

Worked Example:

Compare Notion, Confluence, and Coda as team documentation platforms for a
50-person startup engineering team on the following criteria:
- Ease of setup and administration (no dedicated IT staff)
- Real-time collaboration quality
- Code snippet and technical documentation support
- Integration with GitHub, Jira, and Slack
- Pricing at 50 users

Format: table with platforms as columns and criteria as rows.

For each cell, provide a 1-2 sentence assessment and indicate the relative ranking
(Best / Good / Adequate / Poor) for that criterion.

Conclude with a recommendation for an engineering team that prioritizes ease of
maintenance over advanced features.

Notes: The Comparator produces its best results when the criteria are specific and evaluable, not generic ("quality" tells the model nothing; "real-time collaboration quality for documents edited simultaneously by 5 people" gives it something to assess). The recommendation request at the end forces the model to synthesize rather than just list, which often produces more useful output.


Pattern 8: The Planner

Use case: Creating structured plans, roadmaps, or project plans.

Template:

Create a [PLAN TYPE — e.g., "project plan" or "launch plan" or "learning roadmap"] for [GOAL].

Structure the plan as [FORMAT — e.g., "phases with milestones" or "weekly tasks" or "sprints"].

Constraints:
- [CONSTRAINT 1 — e.g., "Timeline: 6 weeks"]
- [CONSTRAINT 2 — e.g., "Team size: 3 people, 2 of whom are part-time"]
- [CONSTRAINT 3 — e.g., "Budget: $10,000"]

For each [PHASE/WEEK/SPRINT], include:
- [ELEMENT 1 — e.g., "Key deliverable"]
- [ELEMENT 2 — e.g., "Dependencies or risks"]
- [ELEMENT 3 — e.g., "Success criteria"]

Context: [RELEVANT BACKGROUND ABOUT THE PROJECT/GOAL]

Worked Example:

Create a product launch plan for a new B2B software feature rollout.

Structure the plan as 4 phases with milestones and owners.

Constraints:
- Timeline: 8 weeks from today to launch
- Team: 1 product manager, 1 developer (part-time on this), 1 marketing manager
- The feature is built — this is launch planning only (no dev time in plan)

For each phase, include:
- Phase name and objective
- Key tasks (list of 3-5)
- Deliverable due at phase end
- Primary owner
- Key risk to watch

Context: Feature is an AI-powered reporting dashboard for our HR software. Target
customers: existing enterprise clients. Launch includes: documentation, internal
training, email announcement, webinar.

Notes: The Planner pattern works best when you provide the constraints explicitly — timeline, team, budget, and what's in-scope vs. out-of-scope. Without constraints, the model produces an idealized plan that bears no relation to your actual situation. The "plan then execute" approach from Chapter 10 applies here: treat the first output as a draft to review and revise before acting on it.


Pattern 9: The Extractor

Use case: Pulling specific structured information from unstructured source content.

Template:

Extract [INFORMATION TYPE] from the following [SOURCE TYPE].

Output format:
[FIELD 1]: [description or example]
[FIELD 2]: [description or example]
[FIELD 3]: [description or example]

Rules:
- Only include information explicitly stated in the source
- If [FIELD] is not mentioned, write "Not specified"
- [OPTIONAL: For [FIELD], consolidate multiple mentions into a single entry]

[SOURCE CONTENT]

Worked Example:

Extract key commitments and action items from the following sales call transcript.

Output format (one entry per commitment):
- Commitment: [exact quote or close paraphrase]
- Made by: [speaker name or role]
- Deadline: [if mentioned, otherwise "Not specified"]
- Follow-up required: [Yes/No and brief description]

Rules:
- Only include explicit commitments, not general statements of interest
- If the same commitment is restated, list it once
- Include commitments made by both our team and the prospect

[TRANSCRIPT]

Notes: The Extractor's most important rule is the "only include what's explicitly stated" instruction. Without it, models will fill in gaps by inference, which often produces fabricated "extractions" that were never in the source. Always specify what to do with missing fields ("write Not specified") to prevent invented content.


Pattern 10: The Classifier

Use case: Sorting items from a list into defined categories.

Template:

Classify each of the following [ITEM TYPE] as [CATEGORY LIST].

Classification criteria:
- [CATEGORY A]: [definition]
- [CATEGORY B]: [definition]
- [CATEGORY C]: [definition]

Output format: [ITEM] → [CATEGORY]: [one-sentence rationale]

[OPTIONAL: If an item could fit multiple categories, assign the primary category
and note the secondary fit in parentheses]
[OPTIONAL: Items that do not clearly fit any category should be flagged as "Needs review"]

Items to classify:
[LIST OF ITEMS]

Worked Example:

Classify each of the following customer feedback comments as:
- Feature Request: asks for new functionality
- Bug Report: describes something broken or not working as expected
- Complaint: expresses dissatisfaction without requesting specific action
- Compliment: positive feedback about existing functionality
- Question: seeks information or clarification

Output: [Comment excerpt] → [Category]: [one-sentence rationale]

If a comment clearly fits two categories, assign the primary one and note
the secondary in parentheses.

Comments to classify:
1. "The export button doesn't work in Safari — nothing happens when I click it"
2. "Would love to be able to filter by date range, not just by status"
3. "Your onboarding team was incredible. Genuinely the best setup experience I've had"
4. "I can't figure out how to add a second user to my account"
5. "The app is so slow it's basically unusable on Mondays"
6. "It would be great if the reports could be automatically emailed to my team"

Notes: The Classifier pattern requires clear, non-overlapping category definitions. If the definitions overlap, the model will assign borderline cases inconsistently. Include the rationale in the output (even a brief one) so you can spot misclassifications quickly — it is much faster to review "Bug Report: describes a Safari-specific behavior" than a bare "Bug Report."


Pattern 11: The Rewriter

Use case: Improving or transforming existing content while preserving its substance.

Template:

Rewrite the following [CONTENT TYPE] to be [TARGET QUALITY — e.g., "clearer and more direct" or "more persuasive" or "appropriate for a general audience"].

Target audience: [AUDIENCE DESCRIPTION]
Target use: [WHERE/HOW IT WILL BE USED]

Specific changes to make:
- [CHANGE 1 — e.g., "Reduce length by 30%"]
- [CHANGE 2 — e.g., "Replace passive voice with active voice"]
- [CHANGE 3 — e.g., "Remove all acronyms and spell out technical terms"]

Preserve:
- [PRESERVE 1 — e.g., "All factual claims"]
- [PRESERVE 2 — e.g., "The existing section structure"]

[ORIGINAL CONTENT]

Worked Example:

Rewrite this internal technical specification to be understandable for senior
business stakeholders who will use it to make a funding decision.

Target audience: VPs and C-suite with business backgrounds, no engineering knowledge
Target use: Appendix to a business case document

Specific changes to make:
- Replace all technical terms with plain-language explanations or remove
- Convert implementation details into capability descriptions ("what it does" not "how it works")
- Add a plain-English summary of each section (2-3 sentences) before the technical content

Preserve:
- All timeline estimates and resource requirements
- All stated risks and dependencies
- The overall section structure

[TECHNICAL SPECIFICATION]

Notes: The Rewriter is improved significantly by specifying both what to change AND what to preserve. Without the "preserve" section, the model may improve clarity by oversimplifying or omitting important content. The most common Rewriter failure: the rewrite is clearer but has lost critical specificity or nuance from the original.


Pattern 12: The Brainstormer

Use case: Generating a diverse, creative set of ideas for open-ended challenges.

Template:

Generate [NUMBER] ideas for [CHALLENGE DESCRIPTION].

Context: [RELEVANT BACKGROUND — problem, constraints, audience, current situation]

Diversity requirements:
- Cover [DIMENSION 1 — e.g., "both high-cost and low-cost approaches"]
- Include [DIMENSION 2 — e.g., "at least 2 unconventional or surprising ideas"]
- Range from [DIMENSION 3 — e.g., "quick wins to longer-term strategic moves"]

For each idea:
- [Element 1 — e.g., "Idea title (3-5 words)"]
- [Element 2 — e.g., "Brief description (2-3 sentences)"]
- [Element 3 — e.g., "Primary benefit"]
- [Element 4 — e.g., "Main challenge or cost"]

Worked Example:

Generate 12 ideas for reducing customer churn in our B2B SaaS product.

Context: Monthly churn rate is 3.5% (industry average is 2%). Main churn drivers
from exit surveys: "didn't use it enough," "too complex," and "found a simpler tool."
Customer segment: operations managers at 50-200 person companies. Product is a
workflow automation platform.

Diversity requirements:
- Cover both product/UX improvements and customer success/relationship approaches
- Include at least 2 ideas focused on the onboarding period (first 30 days)
- Range from things we could test in 2 weeks to strategic 6-month initiatives

For each idea:
- Name (3-5 words)
- Description (2-3 sentences)
- How it addresses one of the 3 stated churn drivers
- Rough implementation effort (Low/Medium/High)

Notes: The Brainstormer's diversity requirements are its most important parameters. Without them, you get 12 variations of the same idea type. The specific diversity constraints ("at least 2 unconventional ideas," "both quick wins and long-term moves") force the model to range across the solution space rather than cluster around the most obvious answers.


Pattern 13: The Responder

Use case: Drafting responses to messages, inquiries, complaints, or communications.

Template:

Draft a response to the following [MESSAGE TYPE — e.g., "customer complaint" or "media inquiry" or "job application"].

Response goals:
- [GOAL 1 — e.g., "Acknowledge the issue genuinely"]
- [GOAL 2 — e.g., "Explain our position clearly without being defensive"]
- [GOAL 3 — e.g., "Propose a specific next step"]

Tone: [TONE DESCRIPTION — e.g., "professional and empathetic" or "direct and confident"]
Length: [LENGTH — e.g., "3-4 short paragraphs" or "under 150 words"]

[OPTIONAL: Do not: [PROHIBITIONS — e.g., "admit fault" or "make promises about timeline"]]
[OPTIONAL: Do include: [REQUIRED ELEMENTS — e.g., "a specific contact name and direct line"]]

Original message:
[INCOMING MESSAGE]

Context:
[ANY RELEVANT BACKGROUND THE RESPONDER WOULD KNOW]

Worked Example:

Draft a response to the following negative review on G2 (our B2B software review platform).

Response goals:
- Acknowledge the reviewer's experience genuinely (not with a generic "we're sorry")
- Explain the specific improvement we've made to the feature they mentioned
- Invite them to return or reconnect with our support team

Tone: Professional, direct, and genuine — not corporate or scripted
Length: 3 short paragraphs, under 120 words total

Do not: make overly broad promises about the future or dispute any factual claims
Do include: a specific invitation to contact support directly

Original review: "The reporting module is nearly unusable — I've been trying to
export custom date ranges for 3 months and every time it either crashes or exports
empty files. The support team was friendly but couldn't fix it. Moving to a competitor."

Context: We released a patch for the export bug 6 weeks ago. This reviewer last
logged in 4 months ago and has not seen the fix.

Notes: The Responder pattern is one of the highest-stakes patterns for customer-facing professionals. The most important parameter is the "do not" list — what you cannot promise, admit, or say. Getting this wrong in a draft you publish can create legal liability. Always review Responder output more carefully than internal content before sending.


Pattern 14: The Checker

Use case: Systematic review of content for specific types of issues.

Template:

Check the following [CONTENT TYPE] for [ISSUE TYPES].

For each issue found:
- Quote the specific text with the issue
- Identify the issue type: [ISSUE CATEGORY 1 / ISSUE CATEGORY 2 / ISSUE CATEGORY 3]
- Explain why it is an issue
- [OPTIONAL: Suggest a correction]

Also provide:
- A count of issues by category
- An overall assessment: [PASS / NEEDS MINOR REVISION / NEEDS MAJOR REVISION]

[OPTIONAL: Do not flag [EXCLUSION — e.g., "Oxford comma usage" or "passive voice
unless it causes ambiguity"]]

[CONTENT TO CHECK]

Worked Example:

Check the following press release for these issue types:

1. Factual claims without attribution (quote the claim)
2. Legal or compliance risk language (any promises, guarantees, or forward-looking
   statements without proper hedging)
3. Jargon or acronyms that a general business audience would not know
4. Inconsistencies (figures, names, or dates that contradict each other)

For each issue: quote the text, identify the type, explain why it's a problem,
suggest a correction.

Provide a count by category and an overall assessment (Ready to Send / Needs Review).

Do not flag subjective style preferences — only flag objective issues in the
four categories above.

[PRESS RELEASE]

Notes: The Checker is most valuable when you specify the exact issue types — because each issue type requires different expertise and has different stakes. Bundling "grammar," "legal risk," and "factual accuracy" into a single check produces an inconsistent result. For high-stakes content, run separate Checker passes for different issue categories.


Pattern 15: The Scaffolder

Use case: Creating frameworks, templates, and structural outlines for projects.

Template:

Create a [STRUCTURE TYPE — e.g., "document template" or "project framework" or "meeting agenda structure"] for [PROJECT/PURPOSE].

The structure should cover:
- [SCOPE ELEMENT 1]
- [SCOPE ELEMENT 2]
- [SCOPE ELEMENT 3]

For each section/component, include:
- [META ELEMENT 1 — e.g., "Section title"]
- [META ELEMENT 2 — e.g., "Purpose of this section (1 sentence)"]
- [META ELEMENT 3 — e.g., "Typical length or content count"]
- [META ELEMENT 4 — e.g., "Example questions to answer here"]

Context: [WHO WILL USE THIS, FOR WHAT PURPOSE, WHAT CONSTRAINTS]

Worked Example:

Create a document template for a client-ready competitive analysis report.

The structure should cover:
- Executive summary and key conclusions
- Market landscape overview
- Competitor profiles (4-6 competitors)
- Competitive positioning analysis
- Strategic implications and recommendations

For each section, include:
- Section title and subtitle format
- Purpose of this section (1 sentence)
- Typical content (what goes here)
- Key questions this section should answer
- Approximate length guideline for a 10-page report

Context: This template will be used by junior consultants at a strategy firm.
Reports are delivered to VP/C-suite clients in professional services companies.
The firm's brand is "rigorous and direct" — no fluff, every slide earns its place.

Notes: The Scaffolder creates starting structures, not finished content. Its output is a template or framework that then gets populated (ideally using other patterns — the Planner, the Analyzer, the Generator). Think of the Scaffolder as the architect; the other patterns fill in the content.


4. Parameterizing Patterns with [Brackets]

The bracket variable convention is a practical choice, not a technical requirement. Brackets make variables visually distinctive — you can scan a template quickly and see exactly where you need to fill in specifics.

Best practices for bracket variables:

Name variables descriptively. [AUDIENCE] tells you less than [AUDIENCE — role, knowledge level, what they'll do with this output]. The extra detail in the variable name serves as a reminder of what information the variable needs.

Group related variables. If a pattern has 6 variables, organize them: context variables (who, what, why) at the top; task variables (what to do, how to format it) in the middle; constraint variables (what to avoid, length limits) at the end.

Note which variables are optional. Mark optional variables with [OPTIONAL: ...] so you can distinguish between required customization and optional enhancement.

Include an example fill-in. For new patterns, include a comment showing what a filled-in variable would look like: [AUDIENCE — e.g., "VP of Sales at a 200-person B2B SaaS company"].
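Because the convention is purely textual, the fill-in step can even be mechanized if you store templates as plain text. A minimal Python sketch (the `fill_pattern` helper and its regex are illustrative, not part of any library) that substitutes the [BRACKET VARIABLES] convention used throughout this chapter:

```python
import re

def fill_pattern(template: str, values: dict[str, str]) -> str:
    """Replace [VARIABLE] placeholders with supplied values.

    Unfilled variables are left in place so you can spot them
    before sending the prompt.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return values.get(name, match.group(0))

    # Matches uppercase bracket variables such as [DOCUMENT] or [FOCUS AREA].
    return re.sub(r"\[([A-Z ]+?)\]", substitute, template)

summarizer = "Summarize [DOCUMENT] for [AUDIENCE] in [FORMAT]."
prompt = fill_pattern(summarizer, {
    "DOCUMENT": "the Q3 board report",
    "AUDIENCE": "our VP of Sales",
    "FORMAT": "a bulleted list",
})
print(prompt)
# Summarize the Q3 board report for our VP of Sales in a bulleted list.
```

Leaving unfilled variables visible (rather than silently dropping them) doubles as a completeness check before you paste the prompt into a tool.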


5. Building Your Personal Pattern Library

Storage Options

The best storage format is the one you will actually use. The key requirements: accessible from wherever you work, searchable, and easy to paste from. Common options:

Notion or Confluence: Excellent for teams, version-controllable, easy to share. Can organize patterns by category, tag by use case, and maintain a changelog.

A dedicated notes app (Obsidian, Bear, Apple Notes): Good for individuals. Fast to access, easy to search, available on mobile for reference.

A shared document (Google Docs, Word): The lowest-friction option for teams that aren't using a dedicated tool. Add a table of contents so patterns are findable.

A custom GPT or AI assistant with patterns pre-loaded: Advanced option. Many AI platforms allow you to create custom versions with system prompts that contain your standard patterns, so you don't have to paste them each time.

Documentation Template

For each pattern in your library, capture:

PATTERN NAME: [Brief, searchable name]
USE CASE: [When to use this — 1-2 sentences]
LAST UPDATED: [Date]

TEMPLATE:
[Full prompt template with [BRACKET VARIABLES]]

EXAMPLE:
Input variables: [show what you filled in]
Output: [show what you got, or link to an example output]

NOTES:
- [Any known limitations or failure modes]
- [Variations for different situations]
- [What makes this pattern work — the key elements]
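One way to make this documentation searchable is to store each entry as structured data. A minimal sketch in Python (the field names mirror the documentation template above; the `find_patterns` helper is illustrative, not a prescribed schema):

```python
# A tiny in-memory pattern library. Each entry mirrors the documentation
# template: name, use case, template text, last-updated date.
library = [
    {
        "name": "Summarizer",
        "use_case": "Condense long content for a specific audience and purpose",
        "template": "Summarize [DOCUMENT] for [AUDIENCE] in [FORMAT].",
        "last_updated": "2024-06-01",
    },
    {
        "name": "Extractor",
        "use_case": "Pull structured information from unstructured sources",
        "template": "Extract [INFORMATION TYPE] from the following [SOURCE TYPE].",
        "last_updated": "2024-05-15",
    },
]

def find_patterns(query: str) -> list[dict]:
    """Case-insensitive search across pattern names and use cases."""
    q = query.lower()
    return [p for p in library
            if q in p["name"].lower() or q in p["use_case"].lower()]

print([p["name"] for p in find_patterns("structured")])  # ['Extractor']
```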

Maintenance

A pattern library without maintenance drifts out of date and loses usefulness. Build a simple maintenance habit:

  • After each use: Note any modifications you made to make it work. If the modification improves the base pattern, update the template.
  • Monthly: Review patterns you haven't used in 30+ days. Archive or delete patterns that have become obsolete. Update examples to reflect current output quality.
  • When the AI platform changes: Major model updates sometimes require adjusting patterns that previously worked. Re-test your most important patterns after significant model updates.

6. Pattern Composition: Combining Two or More Patterns

Many complex tasks require multiple patterns in sequence. You can compose patterns explicitly:

Scaffold → Generate → Critique:
  1. Use the Scaffolder to create a framework for a deliverable
  2. Use the Generator to produce content options for each section
  3. Use the Critic to review the assembled result

Analyze → Brainstorm → Plan:
  1. Use the Analyzer to assess the current state
  2. Use the Brainstormer to generate improvement options
  3. Use the Planner to structure the selected approach

Extract → Classify → Summarize:
  1. Use the Extractor to pull key data from source documents
  2. Use the Classifier to organize extracted data by category
  3. Use the Summarizer to produce an audience-appropriate summary

Pattern composition works best when the output of one pattern is explicitly passed as input to the next. This is easier in multi-turn conversation (one pattern per exchange) than in a single prompt (though both work).
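Passing each pattern's output as the next pattern's input can be sketched as plain function composition. This is a hypothetical illustration, assuming Python; `run_prompt` is a stub standing in for a call to whatever AI platform you use:

```python
def run_prompt(prompt: str) -> str:
    """Stub for a call to your AI platform; replace with a real API call."""
    return f"<model output for: {prompt[:40]}...>"

def scaffold(deliverable: str) -> str:
    # The Scaffolder: produce a framework for the deliverable.
    return run_prompt(f"Create a section-by-section framework for: {deliverable}")

def generate(framework: str) -> str:
    # The Generator: draft content for each section of the framework.
    return run_prompt(f"Draft content for each section of this framework:\n{framework}")

def critique(draft: str) -> str:
    # The Critic: review the assembled result.
    return run_prompt(f"Review this draft as a demanding editor. Draft:\n{draft}")

# Scaffold → Generate → Critique: each output becomes the next input.
framework = scaffold("client onboarding guide")
draft = generate(framework)
review = critique(draft)
```

The explicit hand-off is the point: each step sees only the previous step's output, which mirrors the one-pattern-per-exchange approach in a multi-turn conversation.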


7. Pattern Failure Modes and Adaptation

Even well-designed patterns fail sometimes. Common failure modes:

Generic fill-in. When a variable is too broad, the model fills it in generically. "Analyze [CONTENT] for [CRITERIA]" with criteria = "quality" produces useless output. Fix: make criteria specific and evaluable.

Context override. When you paste very long content, the content can overwhelm the instructions. Fix: put instructions at the end as well as the beginning (repetition helps for long-context prompts).
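The instructions-at-both-ends fix is mechanical enough to build into a pattern itself. A minimal sketch, assuming Python; the `sandwich` helper name and delimiters are illustrative:

```python
def sandwich(instructions: str, content: str) -> str:
    """Place instructions before AND after long content so neither copy gets lost."""
    return (
        f"{instructions}\n\n"
        f"--- CONTENT START ---\n{content}\n--- CONTENT END ---\n\n"
        f"Reminder: follow these instructions exactly:\n{instructions}"
    )

prompt = sandwich(
    "Summarize the report below in five bullet points for an executive audience.",
    "…very long report text…",
)
```

The delimiters also help the model distinguish your instructions from instruction-like text that happens to appear inside the pasted content.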

Role drift. In a long exchange, the model may drift from the assigned role. Fix: restate the role at the start of each new prompt in the exchange.

Format abandonment. For complex outputs, the model may abandon the specified format mid-response. Fix: add "follow this format exactly throughout your response" and consider reducing the length of each section.

Instruction conflict. When a pattern has conflicting requirements, the model will try to satisfy both and often satisfies neither well. Fix: audit your patterns for internal consistency before adding them to your library.


8. The Personas' Pattern Libraries

Alex's Weekly Pattern Library (6 Core Patterns)

Alex runs marketing at Brightleaf Consumer Goods. Her six most-used patterns:

  1. Brand Copy Writer — The Generator adapted for product descriptions, using her five-example few-shot reference library (see Chapter 10 Case Study)
  2. Campaign Analyzer — The Analyzer adapted for evaluating campaign performance data against brand KPIs
  3. Competitor Watcher — The Extractor adapted for pulling key claims and positioning statements from competitor content
  4. Email Responder — The Responder adapted for customer service escalations, with brand-voice parameters
  5. Content Repurposer — The Transformer adapted for converting long-form content into social posts, email headers, and ad copy
  6. Brief Checker — The Checker adapted for reviewing creative briefs before they go to external agencies, checking for missing information rather than errors

Alex estimates these six patterns save her 8-12 hours per week. More importantly, they ensure consistency: the same campaign analyzed with the same criteria every quarter, the same brand voice reference applied to every copy task.

Raj's Engineering Pattern Library (5 Core Patterns)

Raj is a senior engineer at Clearpath Financial. His five most-used patterns:

  1. Code Reviewer — The Critic adapted for code review: "Review this code as a security-focused senior engineer at a fintech company. Identify issues in: security vulnerabilities, error handling, test coverage gaps, and performance risks."
  2. Function Documenter — The Transformer adapted for converting code into documentation: "Take this function and produce: (1) a docstring, (2) an inline comment for each non-obvious line, (3) a plain-English description for the README."
  3. Debug Tracer — The Analyzer adapted with the five-step CoT debugging structure from Chapter 10 (his own extension of the CoT technique into a reusable pattern)
  4. Architecture Explainer — The Explainer adapted for converting technical architecture decisions into non-technical summaries for product and business stakeholders
  5. Test Case Generator — The Generator adapted for generating test cases: "Generate 10 test cases for this function covering: happy path, boundary conditions, error states, and security edge cases."

Elena's Consulting Pattern Library (7 Core Patterns)

Elena is a strategy consultant. Her patterns are the most sophisticated — they embed her consulting methodology directly into the templates.

  1. Research Synthesizer — The Summarizer adapted for multi-source synthesis: multiple documents → structured insight summary with source attribution
  2. Framework Applicator — The Analyzer adapted for applying standard consulting frameworks (SWOT, Porter's Five Forces, McKinsey 7-S) to client situations
  3. Deliverable Scaffolder — The Scaffolder adapted for creating document templates for common consulting deliverable types
  4. Interview Analyzer — The Extractor adapted for structured extraction from stakeholder interview transcripts: themes, quotes, tensions, and implications
  5. Recommendation Builder — Custom pattern combining Analyzer + Generator: analyzes situation, generates options, evaluates options against criteria, recommends
  6. Slide Checker — The Checker adapted for slide decks: checks for unsupported assertions, logic gaps, missing context, and audience-appropriateness
  7. Factual Auditor — The Checker adapted specifically for factual accuracy in client deliverables (the self-critique variant she developed, described in Chapter 10)

Elena's key insight about her library: "Consulting is fundamentally pattern-matching — recognizing that this client's problem is structurally similar to one you've seen before. My prompt patterns are the same idea applied to AI: recognizing that this deliverable type is structurally similar to others, and having a tested template ready."


9. The Pattern Discovery Process

How do you find new patterns worth building? The process:

Step 1 — Identify recurring tasks. For one week, note every time you use AI for a task. After the week, group similar tasks. Any task you do more than twice is a pattern candidate.

Step 2 — Reconstruct the prompt you used. For the instances that produced your best results, reconstruct the prompt: what did you specify about role, context, format, and constraints? That reconstruction is your draft pattern.

Step 3 — Generalize the specific. Replace the task-specific details with [BRACKET VARIABLES]. The goal is to capture the structure while making all the specifics swappable.

Step 4 — Test the generalization. Run the generalized pattern on a different specific instance of the same task type. Does it produce equally good results? If not, what needs adjusting?

Step 5 — Document and store. Add the working pattern to your library with a descriptive name, use case, and at least one filled-in example.


10. Research Breakdown: Template-Based Workflow Productivity

Research on workflow automation and template use (primarily from the operations management and knowledge work productivity literature, with emerging AI-specific studies) supports several findings relevant to prompt pattern libraries:

Expertise capture: Templates are a mechanism for capturing and transferring tacit expertise. The same principle that makes legal document templates and engineering checklists valuable applies to prompt patterns — they encode the knowledge of what works in a form that others can use without re-deriving it.

Consistency improvements: Studies on templated vs. ad hoc professional communication show that templated approaches produce more consistent quality, with lower variance — fewer excellent results but also fewer poor results. For AI prompting, this translates directly: pattern libraries reduce the variance in AI output quality for recurring tasks.

Cognitive load reduction: Templates reduce the cognitive load of task initiation — the "blank page" problem. This is particularly relevant to AI prompting, where constructing a good prompt from scratch requires significant mental effort. Pattern libraries reduce initiation time and allow more mental energy to go to the actual task.

Compounding returns: Time savings compound over time. A pattern that saves 5 minutes per use saves 4 hours per year if used weekly — and the value per use stays constant or increases as your output quality standard rises. Unlike one-time optimizations, pattern improvements compound across every future use.


Content Blocks

💡 Intuition: Why Patterns Work Better Than Memory

You cannot reliably reconstruct a prompt that worked perfectly six months ago from memory. You remember that it worked, maybe the general structure, but not the specific phrasing that made it work. A pattern library externalizes that memory reliably. The 20 minutes you spend documenting a new pattern will pay back within the first two or three reuses.


⚠️ Common Pitfall: The One-Instance Pattern

Building a "pattern" that is really just one specific prompt documented verbatim. You copy the prompt for "summarizing the Q3 board report for the CFO," but the template has no variables — it's specific to Q3, that specific board report, and that CFO. Next quarter, you're back to square one. The test: can you use this template for a different specific instance of the same task type without significant rewriting? If not, it's a prompt, not a pattern. Generalize it before you store it.


✅ Best Practice: Name Patterns by Function, Not by First Use

Name patterns by what they do ("Customer Complaint Responder"), not by when you first built them ("The Email from August"). Functional names are searchable, memorable, and communicate the use case instantly. When your library has 30 patterns, you need to be able to find the right one in 10 seconds.


🎭 Scenario Walkthrough: Elena Builds a Pattern in Real Time

Elena is working late, trying to extract structured competitive intelligence from 8 competitor websites for a client. She has never done exactly this task before. She builds a one-time prompt, runs it, gets good results — then spends 5 minutes before closing her laptop to convert that prompt into a reusable pattern by replacing the client-specific details with [BRACKET VARIABLES]. Three weeks later, a new client needs the same thing. She opens her library, fills in four variables, and launches. Total setup time for the second client: 90 seconds. Without the pattern: another 30-minute prompt-building session.


📋 Action Checklist: Building Your First Pattern Library

- [ ] Track your AI tasks for one week — note every task type, not every instance
- [ ] Identify your top 5 recurring task types (by frequency)
- [ ] For each, find your best recent prompt and reconstruct it
- [ ] Generalize each prompt by replacing specifics with [BRACKET VARIABLES]
- [ ] Test each pattern on a new instance of the same task type
- [ ] Document with name, use case, template, and example
- [ ] Choose your storage location and add all 5 patterns
- [ ] Set a 30-day reminder to review and update


📊 Research Breakdown: The Value of Template-Based Work

Knowledge workers using templated approaches for recurring tasks report 25-40% time savings on those tasks vs. ad hoc approaches (various operations management studies). For AI-augmented work specifically, early survey data from enterprise AI deployments suggests that teams with maintained prompt libraries produce more consistent output quality and higher task completion rates than teams using purely ad hoc prompting. The consistency benefit is often cited as more valuable than the speed benefit, particularly in regulated industries where output variance is a compliance risk.


Summary

Prompt patterns transform ad hoc AI use into systematic professional practice. The 15 patterns in this chapter cover the most common recurring professional tasks — summarizing, transforming, analyzing, generating, critiquing, explaining, comparing, planning, extracting, classifying, rewriting, brainstorming, responding, checking, and scaffolding.

Each pattern is a template that encodes accumulated knowledge about what works for that task type. Building a personal library of the patterns most relevant to your work is one of the highest-leverage investments any regular AI user can make — the initial investment pays back within days, and the value compounds with every future use.

Chapter 12 extends the prompting skill set in a different direction: beyond text-only prompting to multimodal inputs — images, documents, data, and code.