In This Chapter
- 7.1 What a Prompt Actually Is — and What the AI Does With It
- 7.2 The Anatomy of an Effective Prompt: Five Core Components
- 7.3 The Specificity Ladder: From Vague to Precise
- 7.4 Clarity Principles: The Four Rules of Readable Prompts
- 7.5 The Context Loading Principle: How Much Is Enough?
- 7.6 Format Specification: Telling the AI How to Deliver
- 7.7 The Constraint Layer: Defining Boundaries
- 7.8 The Power of Examples: A Few-Shot Preview
- 7.9 Common Structural Mistakes: The Five Failure Modes
- 7.10 Platform-Specific Formatting Tips
- 7.11 The CRAFT Framework
- 7.12 Scenario Walkthrough: Alex Transforms a Vague Brief
- 7.13 Scenario Walkthrough: Raj Structures a Technical Explanation
- 7.14 Scenario Walkthrough: Elena Builds a Client Communication Prompt
- 7.15 Eight Side-by-Side Prompt Comparisons
- 7.16 Research Breakdown: Prompt Specificity and Output Quality
- 7.17 The CRAFT Framework in Action: A Complete Reference
- 7.18 Chapter Summary
- Chapter Navigation
Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
When someone describes a frustrating experience with an AI tool, the complaint usually sounds like one of these: "It gave me something totally generic," "It missed the point entirely," or "The output was way too long and off-topic." What almost never gets said — but is almost always true — is: "My prompt wasn't clear enough."
This chapter is about closing that gap. Before any technique, shortcut, or advanced workflow, there is the prompt itself — the message you type, the instructions you give, the context you provide. Everything downstream of that first interaction is shaped by what you put into it.
The good news: prompting is learnable. It is not a talent you either have or lack. It is a craft, and like all crafts, it improves with knowledge, practice, and deliberate attention to what works.
7.1 What a Prompt Actually Is — and What the AI Does With It
A prompt is not a search query. It is not a keyword string. It is an instruction, a request, a context-loading mechanism, and often a negotiation — all at once.
When you type a message to an AI language model, the model does not "look up" an answer. It generates a response by predicting the most statistically appropriate continuation of your input, given everything it learned during training. That is a profound difference from search.
What this means practically:
The model takes your prompt literally and inferentially. If you ask for "a list of marketing ideas," it will produce a list of marketing ideas — probably generic ones, because that is what the training data suggests "a list of marketing ideas" looks like when you give no further context. It will infer that you probably want something it considers professional, probably want bullet points, probably want around 8–12 items. None of this was stated. All of it is inference.
The model fills gaps with assumptions. Every piece of information you leave out, the model fills in from its training data. If you do not specify the audience, it assumes a general audience. If you do not specify tone, it defaults to something professional-neutral. If you do not specify length, it estimates based on the type of task. These defaults are rarely what you actually want.
The model responds to the structure and tone of your prompt. If you write a casual, brief prompt, you often get a casual, brief response. If you write a structured, detailed prompt, you more often get structured, detailed output. The form of your input signals the form you expect back.
The model does not know what you know. It cannot see your previous conversations (unless they are in the current session). It does not know your company, your audience, your writing style, your project context, or your preferences — unless you tell it.
Understanding these mechanics transforms how you approach prompting. You stop treating it like a search box and start treating it like a brief to a skilled contractor: the more clear and complete your brief, the better the work you get back.
7.2 The Anatomy of an Effective Prompt: Five Core Components
Study thousands of prompts across dozens of domains and a consistent pattern emerges. The most effective prompts contain some combination of five components. Not every prompt needs all five, but knowing all five means you always know what might be missing.
Component 1: Task
The task is the core instruction — the verb-driven heart of what you are asking the AI to do. Every prompt has a task, even if everything else is absent.
Weak task: "Blog post about remote work"
Strong task: "Write a 900-word blog post arguing that remote work increases productivity for knowledge workers when paired with intentional communication norms"
The difference is not complexity — it is precision. A strong task specifies the action (write), the scope (900-word blog post), the angle (arguing), the topic (remote work increases productivity), the qualifier (for knowledge workers), and the condition (when paired with intentional communication norms).
Component 2: Context
Context is the background information the AI needs to do the task well. It answers the implicit questions: Who is asking? For whom? In what situation? With what constraints?
Without context: "Summarize this article for my team"
With context: "Summarize this article for my team of non-technical marketing managers who are evaluating whether to add an AI writing tool to our workflow. They are skeptical about AI reliability. Keep the summary focused on practical benefits and credibility indicators."
The context does not make the task longer — it makes the output more useful.
Component 3: Format
Format specifies how you want the output structured. Length, headings, lists, tables, tone, reading level — all of these are format decisions.
Without format: "Give me feedback on this proposal"
With format: "Give me feedback on this proposal structured as: (1) three specific strengths with examples from the text, (2) three specific weaknesses with suggestions for improvement, (3) one overall assessment of whether you would fund this proposal and why"
Specifying format eliminates the guesswork and prevents the model from defaulting to a generic paragraph-based response that buries the insights you care about.
Component 4: Constraints
Constraints are boundaries — what the output should not do, what it must avoid, what limits apply.
Implicit constraint (weak): "Write an email declining the meeting"
Explicit constraint (strong): "Write an email declining the meeting. Do not mention that we are understaffed. Do not leave the door open for rescheduling. Keep it under 100 words and professional in tone."
Constraints are particularly important when you know your context has landmines — things that a reasonable AI response might include that you absolutely cannot use.
Component 5: Examples
Examples — sometimes called few-shot prompting — show the AI what good output looks like rather than just describing it. Examples are among the most powerful tools in prompting, and they are covered in depth in Chapter 10. For now, the key insight is this: a single well-chosen example often communicates more than three paragraphs of description.
Without example: "Write product descriptions in our brand voice — friendly but professional"
With example: "Write product descriptions in our brand voice. Here is an example of our current style: 'The Aria Desk Lamp doesn't try to be clever. It just lights your space beautifully, dims to your preference, and stays out of the way. Because good design should feel invisible.' Match that voice — warm, direct, confident without arrogance."
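If you build prompts in code, the five components map naturally onto a small helper. The sketch below is illustrative, not a library API; every name in it is an assumption, and only the task is required, mirroring the point that not every prompt needs all five components:

```python
def build_prompt(task, context=None, format_spec=None, constraints=None, example=None):
    """Assemble a prompt from the five core components.

    Only `task` is required; the other components are appended when given.
    All names here are illustrative, not an established API.
    """
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if format_spec:
        parts.append(f"Format: {format_spec}")
    if constraints:
        # Constraints render as a bulleted list so none get buried in prose.
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if example:
        parts.append(f"Here is an example of the style to match:\n{example}")
    return "\n\n".join(parts)
```

A prompt built this way keeps the task first and the supporting components clearly labeled, which is exactly the ordering the rest of this chapter recommends.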
7.3 The Specificity Ladder: From Vague to Precise
Specificity is the single variable that most reliably improves prompt quality. The specificity ladder shows how the same request can exist at multiple levels of precision.
Rung 1: Pure Vague (Almost Useless)
"Write something about marketing."
This prompt tells the AI: a topic exists. Nothing else. The output will be generic to the point of uselessness — probably a definition of marketing, or a list of marketing channels, or a brief overview of digital marketing trends. All of it technically correct. None of it useful to you.
Rung 2: Topic + Type (Marginally Better)
"Write a blog post about email marketing."
Now the AI knows the format (blog post) and the sub-topic (email marketing). The output will be better — probably an introduction, a few tips, a conclusion. Still generic. Still usable by anyone writing anything about email marketing.
Rung 3: Topic + Type + Audience (Getting Useful)
"Write a blog post about email marketing for small e-commerce businesses."
Now the output has a target. The AI will likely include specific platform recommendations, budget-conscious tips, and examples relevant to retail. It is starting to feel like it was written for someone.
Rung 4: Topic + Type + Audience + Angle (Genuinely Useful)
"Write a blog post for small e-commerce businesses arguing that email marketing still outperforms social media for repeat purchases, and showing how to build a simple post-purchase sequence."
Now the AI has a position to take, a claim to support, and a specific tactic to explain. The output will have a perspective and a practical application. It will be distinguishable from other blog posts on the same topic.
Rung 5: Full Specification (High-Performance Output)
"Write an 800-word blog post for small e-commerce businesses who are currently prioritizing Instagram and TikTok over email. The post argues that email still generates higher repeat purchase rates and ROI than social media, citing the fact that email lists are owned audiences while social platforms are rented. Include one specific example of a post-purchase 3-email sequence (subject lines and brief content descriptions). Tone: confident, data-curious, not preachy. Assume the reader is skeptical but open-minded."
The output from this prompt will feel like it was written specifically for that audience with that argument in mind. The difference between Rung 2 and Rung 5 is not that Rung 5 is harder to write — it is that Rung 5 requires you to know what you actually want.
💡 Intuition Builder: The Newspaper Test
Before submitting a prompt, ask: "If a journalist received this as a brief, would they know exactly what story to write?" If the answer is no, your prompt needs more specificity.
7.4 Clarity Principles: The Four Rules of Readable Prompts
Specificity gets you the right content. Clarity ensures the AI understands your instructions accurately. These are different dimensions. A highly specific prompt can still be unclear if it is poorly written.
Rule 1: Use Active Voice and Direct Verbs
Passive construction creates ambiguity about who is doing what.
Passive and unclear: "A summary of the report should be created that focuses on financial highlights"
Active and clear: "Summarize the report, focusing on the three most significant financial highlights"
The active version tells the AI exactly what to do (summarize), what to focus on (financial highlights), and how many (three). The passive version requires the AI to infer all of that.
Strong verbs to use: write, draft, outline, analyze, critique, compare, explain, summarize, list, generate, revise, evaluate, recommend, translate, rewrite
Weak verbs to avoid: do, make, create something about, help me with, think about, look at
Rule 2: Avoid Ambiguous References
Pronouns and vague references — "it," "this," "that," "the thing" — cause AI to guess what you mean.
Ambiguous: "Take the proposal and rewrite it so it sounds better with the tone we discussed"
Clear: "Rewrite the attached proposal so it uses a collaborative rather than directive tone — specifically, replace phrases like 'you must' and 'you are required to' with phrases like 'we recommend' and 'the team would benefit from'"
In long prompts, be explicit about what "this document," "the above example," or "our previous discussion" refers to.
Rule 3: One Primary Task Per Prompt
Trying to accomplish too many things in one prompt often results in the AI addressing each task superficially or ignoring some entirely.
Overloaded prompt: "Write a blog post about our new product, then draft 5 social media posts for it, create an email subject line, and also give me feedback on whether our messaging is clear"
These are four distinct tasks. Each deserves its own prompt. When you combine them, you usually get a mediocre blog post, weak social posts, a generic subject line, and feedback that is too surface-level to be useful.
If your tasks are related, run them sequentially — each in its own message, where the output of one feeds the input of the next.
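The sequential pattern can be sketched as a small pipeline. In this sketch, `send` is a hypothetical stand-in for whatever chat interface or API call you actually use; the point is the shape of the loop, not any particular platform:

```python
def run_pipeline(send, steps, initial_input):
    """Run related tasks sequentially, feeding each output into the next prompt.

    `send` is any callable that takes a prompt string and returns the model's
    reply. It is a placeholder here; substitute your platform's API call.
    """
    result = initial_input
    for step in steps:
        # Each step gets its own focused prompt, built from the instruction
        # plus the previous step's output.
        result = send(f"{step}\n\n{result}")
    return result
```

Running "summarize, then draft social posts from the summary" as two focused calls like this usually beats cramming both tasks into one prompt.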
Rule 4: State What Success Looks Like
Many prompts describe the task but not the success criteria. What does a good output look like to you? What would make you immediately say "yes, that is exactly what I needed"?
Without success criteria: "Write a value proposition for our project management software"
With success criteria: "Write a value proposition for our project management software. Success looks like: a two-sentence statement that a non-technical manager would understand immediately, that differentiates us from generic to-do list tools, and that focuses on team coordination rather than individual productivity"
7.5 The Context Loading Principle: How Much Is Enough?
One of the most common prompting mistakes is providing either too little context or too much. Finding the right amount requires understanding what context actually does.
The Under-Context Problem
When you provide too little context, the AI generates something generic. The output is technically correct but not useful for your specific situation. You get a blog post that sounds like every other blog post. You get code that solves a theoretical version of your problem, not the actual one.
Signs of under-contextualized prompts:
- The output could have been written by anyone, for any company, in any industry
- The output does not mention your specific constraints, audience, or goals
- The output solves a slightly different problem than the one you have
The Over-Context Problem
More is not always better. When you dump every piece of background information into a prompt, several things can happen:
- The AI loses focus and addresses everything equally rather than prioritizing
- Critical instructions get buried and overlooked
- The prompt becomes hard to write and maintain
- You hit token limits on some platforms
The Goldilocks Principle
The right amount of context is the minimum necessary to produce a useful, specific, non-generic output. A useful test: read your prompt and ask, "Is there any piece of context here that would not change the output if I removed it?" If yes, remove it.
Context that almost always belongs in a prompt:
- Who the output is for (audience)
- What it will be used for (purpose)
- Any constraints that are non-obvious
- Tone or style requirements
- One example if style is hard to describe
Context you often do not need:
- Background history that does not affect the task
- Explanations of why you are doing something
- Apologies or qualifications ("I know this is complicated, but...")
- Repetition of what the AI already knows from training
⚠️ Common Pitfall: The Context Dump
Pasting a 2,000-word document as context and then asking a one-sentence question often produces a response that addresses the document broadly rather than your specific question. Front-load your question, then provide the reference material. Position shapes attention.
7.6 Format Specification: Telling the AI How to Deliver
Format specification is one of the highest-leverage actions in prompting. The same content delivered in different formats has radically different utility.
Length Specification
The most common format mistake is not specifying length. AI models have a tendency to produce responses that are "appropriately long" by training data standards — which is often longer than what you actually need.
Be specific: "Summarize in three sentences," "Keep this under 150 words," "Aim for 800-1,000 words," "Give me a single paragraph."
Vague length cues (avoid): "brief," "detailed," "comprehensive," "short" — these mean different things to different people and to the AI.
Structure Specification
Tell the AI what organizational structure you want:
- "Use headers and subheaders"
- "Present this as a numbered list"
- "Use a table with three columns: Feature, Benefit, Example"
- "Write this as three paragraphs: problem, solution, call to action"
- "Give me a pros/cons list"
Tone and Register
Tone specification goes beyond "professional" or "casual" — those are vague categories. The more specific you are, the better the match.
Instead of "professional," try: "Direct and confident without jargon. The tone of a senior manager writing to peers, not subordinates."
Instead of "casual," try: "Conversational and warm — the way you'd explain something to a smart friend over coffee, not a formal colleague."
Reading Level
If your audience has a specific reading level, name it: "Explain this for someone with no prior technical knowledge," "Assume the reader has a PhD in biology," "Write at a tenth-grade reading level."
✅ Best Practice: The Format Mirror
If you want a specific format, model it in your prompt structure. A prompt that itself uses headers and numbered lists signals that you want structured, organized output. A flowing paragraph prompt tends to produce flowing paragraph output.
7.7 The Constraint Layer: Defining Boundaries
Constraints tell the AI what not to do — and they matter as much as what you ask it to do. Effective use of constraints prevents the most common output failures.
Types of Constraints
Content constraints: "Do not include pricing information," "Do not mention competitor names," "Avoid statistics older than 2020"
Tone constraints: "Do not use exclamation points," "Avoid jargon and acronyms," "Do not be condescending"
Length constraints: "Keep this under 200 words," "Use no more than five bullet points"
Structural constraints: "Do not use headers — write in prose," "Present this as a single cohesive paragraph, not a list"
Perspective constraints: "Do not recommend any specific software products," "Do not take a political position," "Focus only on the implementation, not the strategy"
Negative Instructions and How to Use Them
Research and experience suggest that negative instructions (do not do X) are less reliable than positive instructions (do Y instead of X). This does not mean you should avoid them — it means you should pair them with the positive alternative where possible.
Less effective: "Don't make this sound too formal"
More effective: "Write in a conversational tone — imagine you're explaining this to a friend, not writing a policy document"
Over-Constraining: The Other Failure Mode
A prompt with too many constraints can produce output that satisfies all the constraints but is otherwise dull and stilted. If you find yourself writing five or more constraint clauses, ask whether some of them can be bundled into a style description or example instead.
⚠️ Common Pitfall: The Contradiction Trap
Giving contradictory constraints forces the AI to choose, and it will choose unpredictably. "Keep this concise but include detailed explanations of each point" is contradictory. "Keep this under 400 words and use bullet points to stay concise while covering the key points" is consistent.
7.8 The Power of Examples: A Few-Shot Preview
Examples transform abstract instructions into concrete guidance. This principle — providing examples of desired output within the prompt — is called few-shot prompting and receives full treatment in Chapter 10. But even at the fundamentals level, examples are worth understanding.
When you include an example in your prompt, you accomplish several things at once:
- You demonstrate tone without having to describe it
- You show structure without having to specify every element
- You establish vocabulary and register implicitly
- You reduce the gap between what you mean and what the AI produces
A single well-chosen example often does more work than three paragraphs of style description.
Zero-shot (no example): "Write a product headline in our brand voice"
Output risk: Generic headline with no match to your actual brand voice

One-shot (one example): "Write a product headline in our brand voice. Here's an example of a headline we love: 'Meetings that end on time. Finally.' Match that style."
Output: Much higher likelihood of matching the directness, wit, and brevity of that example
The practical rule: if you have a style or format that is hard to describe in words, provide an example instead of (or in addition to) the description.
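That rule is easy to mechanize. Below is a minimal, illustrative sketch of assembling a few-shot prompt from (input, output) example pairs; the layout is one common convention, not the only valid one:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt: instruction, worked examples, then the new input.

    `examples` is a list of (input, output) pairs demonstrating the desired
    style. The function and layout are illustrative, not a standard API.
    """
    lines = [instruction, ""]
    for i, (inp, out) in enumerate(examples, 1):
        lines.append(f"Example {i}:")
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # End with the new input and an open "Output:" for the model to complete.
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)
```

Ending the prompt with an open "Output:" label invites the model to continue the established pattern rather than restate the instructions.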
7.9 Common Structural Mistakes: The Five Failure Modes
Even experienced users fall into these patterns. Recognizing them is the first step to avoiding them.
Failure Mode 1: The Buried Lede
The most important instruction is placed at the end of a long prompt, buried after paragraphs of background. AI models read from top to bottom and give disproportionate weight to what comes first.
Buried lede: "We are a B2B software company. We sell project management tools. Our customers are mostly mid-sized businesses. We have been in business for 8 years. We recently launched a new feature called FlowSync. Our main competitors are Asana and Monday.com. Write a press release."
Fixed: "Write a press release announcing our new FlowSync feature. Background for context: [B2B project management software, 8 years in business, competitors are Asana and Monday.com, targeting mid-sized businesses]"
Failure Mode 2: The Assumption Gap
You assume the AI knows something it does not. Usually this is domain-specific knowledge, company-specific context, or jargon that means something particular in your field.
Assumption-laden: "Rewrite this using our standard TDP framework"
Fixed: "Rewrite this using the TDP framework: Task (what needs to be done), Data (what evidence supports it), Path (what the recommended action is)"
Failure Mode 3: The Vague Imperative
Using high-level action words without specifying what that action looks like.
Vague: "Make this better"
Fixed: "Revise this to be more concise (cut by 30%), use active verbs instead of passive construction, and eliminate the three instances where you use 'utilize' instead of 'use'"
Failure Mode 4: The Wall of Text
A prompt with no line breaks, headers, or structure. Long unbroken text is hard for both humans and AI to parse efficiently.
Instead of a single paragraph with five instructions embedded in it, use numbered instructions:
1. Summarize the attached report in three bullets
2. Identify the two most significant risks mentioned
3. Suggest one follow-up question for each risk
Failure Mode 5: The Over-Constrained Prompt
So many constraints that the AI is trapped, producing output that satisfies the rules but has no life or usefulness.
Over-constrained: "Write a 200-word blog intro that is engaging but not sensational, professional but not formal, enthusiastic but not hyperbolic, detailed but accessible, and avoids all clichés while still being immediately understandable"
Some of these constraints are in tension with each other. When this happens, prioritize: decide which two or three constraints matter most and drop the rest.
7.10 Platform-Specific Formatting Tips
The same prompt principles apply across all major AI platforms, but each has quirks worth knowing.
ChatGPT (OpenAI)
- Responds well to clear system-style framing even in user messages: "Your role in this conversation is to act as an expert technical editor."
- Custom Instructions feature allows you to set persistent context that applies to every conversation
- Tends to produce more structured output when you use numbered lists in your prompt
- GPT-4 handles long, complex prompts well; GPT-3.5 works better with simpler, more focused prompts
Claude (Anthropic)
- Responds well to nuanced tone descriptions and style examples
- Handles long documents and multi-document contexts gracefully
- Explicit XML-style tagging (e.g., <document>...</document>, <instructions>...</instructions>) can help organize complex prompts
- More responsive to "think step by step" and reasoning-first instructions
- Appreciates being told the reasoning behind a request, not just the request itself
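As an illustration of the XML-style tagging mentioned above, here is a minimal sketch in Python. The tag names are conventional examples rather than a required schema, and the document text is a placeholder:

```python
def tag(name, content):
    """Wrap content in XML-style tags, a convention that helps separate
    instructions from reference material in complex prompts."""
    return f"<{name}>\n{content}\n</{name}>"

# Instructions come first, then the reference material, each clearly delimited.
prompt = "\n\n".join([
    tag("instructions", "Summarize the document in three bullets."),
    tag("document", "[long report text pasted here]"),
])
```

Because each section is explicitly delimited, the model does not have to guess where your instructions end and your pasted material begins.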
Gemini (Google)
- Strong integration with Google Workspace means it benefits from specific references to document types and formats
- Well-suited for tasks that combine research retrieval with generation
- Responds well to structured prompts with clear section breaks
- Multimodal prompts (combining text with image or document references) are a particular strength
General Principles That Apply Everywhere
- Numbered lists of instructions tend to be followed more reliably than prose descriptions
- The first sentence of your prompt receives disproportionate attention — put your most important instruction there
- Asking the AI to "think step by step" or "reason through this before writing" improves quality on complex tasks across all platforms
7.11 The CRAFT Framework
The CRAFT framework is a mnemonic for structuring prompts that need to cover all five dimensions efficiently. It is most useful when you are building prompts for important tasks and want a checklist to ensure nothing is missing.
C — Context: What background does the AI need? Who are you, who is the audience, what is the situation?
R — Role: What role should the AI play? (Covered in depth in Chapter 9, but the principle applies here: "You are a senior copywriter reviewing this draft for a consumer goods brand.")
A — Action: What specific action do you want performed? Use strong verbs.
F — Format: How should the output be structured? Length, headers, lists, tables, tone.
T — Tone: What is the desired emotional register? Formal/casual? Direct/nuanced? Confident/exploratory?
CRAFT in Practice
Here is the same request constructed without CRAFT, then with it:
Without CRAFT: "Write a LinkedIn post about our new product launch"
With CRAFT:
- Context: We are a sustainable packaging startup. Our new product is a compostable shipping envelope that replaces plastic poly mailers. Our audience is e-commerce founders and sustainability-minded operations managers.
- Role: You are a LinkedIn content strategist specializing in purpose-driven brands.
- Action: Draft a LinkedIn post announcing the product launch.
- Format: 150-200 words, no hashtags, end with a single clear call to action (link in bio).
- Tone: Confident and mission-driven without being preachy. We want to inspire, not lecture.
CRAFT prompt: "You are a LinkedIn content strategist specializing in purpose-driven brands. We are a sustainable packaging startup launching a compostable shipping envelope that replaces plastic poly mailers. Our audience is e-commerce founders and sustainability-minded operations managers. Draft a LinkedIn post announcing this launch. 150-200 words, no hashtags, end with a single clear call to action (link in bio). Tone: confident and mission-driven without being preachy — inspire, don't lecture."
The CRAFT version takes about 60 seconds longer to write. The output quality gap is substantial.
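For teams that template their important prompts, CRAFT can double as a programmatic checklist. This is an illustrative sketch under that assumption, not an established tool; the names are invented for the example:

```python
CRAFT_FIELDS = ("context", "role", "action", "format", "tone")

def craft_prompt(**fields):
    """Assemble a prompt from CRAFT components.

    Returns (prompt, missing): `missing` lists any CRAFT components left out,
    so it doubles as the checklist the framework is meant to provide.
    """
    missing = [f for f in CRAFT_FIELDS if not fields.get(f)]
    pieces = []
    # Role first, then context, then action, matching the chapter's example.
    if fields.get("role"):
        pieces.append(fields["role"])
    if fields.get("context"):
        pieces.append(fields["context"])
    if fields.get("action"):
        pieces.append(fields["action"])
    if fields.get("format"):
        pieces.append(f"Format: {fields['format']}")
    if fields.get("tone"):
        pieces.append(f"Tone: {fields['tone']}")
    return " ".join(pieces), missing
```

The returned `missing` list is the point: before sending an important prompt, an empty list confirms all five CRAFT dimensions were considered, even if you deliberately omitted some.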
7.12 Scenario Walkthrough: Alex Transforms a Vague Brief
🎭 Scenario: Alex's Blog Post Transformation
Alex manages content marketing for a DTC lifestyle brand. Her manager asks her to "get AI to write a blog post about our new candle line for the website."
Alex's first attempt (Rung 1): "Write a blog post about our new candle line."
The output is exactly what you would expect: a generic introduction about the appeal of candles, a few descriptive paragraphs about scents and ambiance, and a closing sentence about ordering online. Nothing wrong with it. Nothing right with it, either. It could be for any candle brand on earth.
Alex's revised prompt (Rung 3): "Write a 600-word blog post for our lifestyle brand's website about our new candle line, aimed at women aged 25-40 who care about home aesthetics and natural ingredients."
Better. The output now mentions natural ingredients, references home decor aesthetics, and uses a tone that skews slightly toward that demographic. Still generic — it does not know what makes their candles different, what the brand voice sounds like, or what specific products exist.
Alex's high-performance prompt (Rung 5): "Write a 600-word blog post for Lumier Home's website announcing our new Terroir Collection — five candles named after French wine regions (Bordeaux, Bourgogne, Alsace, Loire, Champagne), each scented to evoke the landscape of that region. Our brand voice is sophisticated but approachable — think 'a well-traveled friend who makes you feel cultured without making you feel ignorant.' Our audience is women 28-42 who follow home décor accounts, read Apartment Therapy, and buy gifts that feel intentional. The post should position these candles as an experience, not just a product — buying a piece of a place you've visited or always wanted to. Include one concrete sensory description for each candle (3-4 sentences each). No generic candle clichés ('warm your space,' 'perfect for any occasion'). End with a gentle call to action linking to the collection."
The output from this third prompt requires minimal editing. It names the products, evokes the specific brand tone, speaks to the right audience, and delivers the sensory descriptions requested. Alex did not work harder — she worked clearer.
See Case Study 01 at the end of this chapter for the full versions of all three prompts and analysis of exactly what changed between each iteration.
7.13 Scenario Walkthrough: Raj Structures a Technical Explanation
🎭 Scenario: Raj's Technical Documentation Request
Raj is a senior software engineer who needs to explain the company's new microservices authentication flow to two very different audiences: the engineering team (who need implementation detail) and the product leadership team (who need to understand business implications and risks).
Raj's first attempt: "Explain our OAuth 2.0 authentication flow for microservices."
The output is a textbook explanation of OAuth 2.0 — academically accurate, completely divorced from Raj's actual system, and pitched at a technical level that is too dense for leadership and not specific enough for engineers.
Raj's structured prompt: "I need two versions of an explanation of our microservices authentication flow. Here is the technical summary of how it works: [Raj pastes a 200-word technical description of the actual flow, including token types, services involved, and known failure modes].
Version 1 — For the engineering team: Technical depth, including implementation notes and known failure modes. Format: headers, numbered process steps, a section on edge cases. Assume the reader understands OAuth, REST APIs, and JWT tokens. ~400 words.
Version 2 — For product leadership: Business-level explanation with no code or protocol references. Focus on: what data is being protected, what happens when authentication fails (user impact), and why this architecture was chosen over a simpler approach. Format: three short paragraphs. Assume the reader has no technical background. ~200 words."
The output delivers two distinct documents, each calibrated to its audience. Raj did not need two separate conversations — he bundled both tasks in one well-structured prompt because they share the same source context.
7.14 Scenario Walkthrough: Elena Builds a Client Communication Prompt
🎭 Scenario: Elena's Constrained Communication
Elena is a management consultant who regularly needs to communicate sensitive information to clients — project delays, budget overruns, scope changes — in ways that are honest without being alarming, and professional without being defensive.
Elena's constraint-heavy prompt: "Draft an email to a client informing them that the current phase of our consulting engagement will run two weeks over the original timeline.
Context: The delay is due to slower-than-expected data access from their IT department, not a failure on our team's part. The client is a risk-sensitive CFO who has previously expressed frustration with delays in other vendor relationships.
Constraints:
- Do not use the phrase 'unfortunately' or begin with an apology
- Do not assign blame to the client's IT team directly — frame it as a shared data access challenge
- Do not use passive voice to obscure responsibility
- Do not exceed 180 words
- Do include a revised timeline and a specific next step with a date
- Tone: direct, confident, solutions-oriented — not defensive or apologetic
Format: Standard email with Subject, greeting, three short paragraphs, and a clear next step."
Elena's constraints are specific because she knows the failure modes in this type of communication. The AI is constrained away from all of them. What comes back requires almost no revision.
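Elena's "do not" constraints are mechanical enough to verify automatically before anything reaches a human reviewer. As an illustrative sketch (the limits come from her prompt above, but the function name and the apology-detection heuristic are assumptions, not from the chapter):

```python
def check_elena_constraints(draft: str) -> list[str]:
    """Flag violations of the hard constraints in Elena's prompt.

    Hypothetical helper: the constraint list mirrors the prompt above,
    but the heuristics (e.g. apology detection) are rough assumptions.
    """
    violations = []
    if "unfortunately" in draft.lower():
        violations.append("uses the forbidden phrase 'unfortunately'")
    # "Do not begin with an apology" is approximated with common openers,
    # skipping the subject line if one is present.
    body = draft.split("\n\n", 1)[-1].strip().lower()
    if body.startswith(("sorry", "i apologize", "we apologize")):
        violations.append("begins with an apology")
    if len(draft.split()) > 180:
        violations.append(f"exceeds 180 words ({len(draft.split())} words)")
    return violations

# A compliant draft produces no violations:
assert check_elena_constraints("Subject: Timeline update\n\nOur revised plan is attached.") == []
```

Checks like these cannot judge tone, but they catch the violations that are cheapest to verify mechanically.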
7.15 Eight Side-by-Side Prompt Comparisons
The following comparisons illustrate the before/after impact of applying the principles in this chapter.
1. Meeting summary
   - Weak: "Summarize the meeting"
   - Strong: "Summarize this meeting transcript in three sections: Key decisions made (numbered list), Action items with owner and due date (table), and Open questions that still need resolution (bullets). Max 250 words total."

2. Performance review feedback
   - Weak: "Help me write feedback for my employee"
   - Strong: "Draft constructive feedback for a junior analyst who consistently meets deadlines and produces accurate work but struggles to communicate proactively when blockers arise. The feedback should acknowledge the strength, describe the specific pattern (rather than a single incident), and offer a concrete behavior change. 150 words, formal tone."

3. Code explanation
   - Weak: "Explain this code"
   - Strong: "Explain this Python function to a junior developer who understands basic syntax but has not worked with decorators before. Use a real-world analogy for what the decorator does, then walk through the code line by line. Max 200 words."

4. Cover letter
   - Weak: "Write a cover letter for this job"
   - Strong: "Write a cover letter for a senior product manager applying to [Company] for [Role]. The applicant has 7 years of B2B SaaS experience, led the 0-to-1 launch of a data analytics feature, and has a background in user research. Tone: confident and specific — avoid generic 'I am passionate about' language. Two paragraphs: one on relevant experience, one on why this specific company. Max 200 words."

5. Sales email
   - Weak: "Write an email to a potential customer"
   - Strong: "Write a cold outreach email to a VP of Operations at a mid-sized logistics company. We sell fleet management software. Pain point: their current system requires manual mileage reporting. Our differentiator: automated real-time GPS logging with one-click payroll integration. Keep it under 100 words, no attachments, single clear CTA (15-minute call)."

6. Slide content
   - Weak: "Give me content for my presentation slide"
   - Strong: "Write the content for one PowerPoint slide on the business case for investing in employee mental health programs. Format: a header, three bullet points of no more than 12 words each, and one supporting statistic. Audience: executive leadership. Tone: evidence-based and direct."

7. Policy document
   - Weak: "Write a remote work policy"
   - Strong: "Draft a remote work policy for a 50-person technology company. Sections: eligibility criteria, required availability hours (core hours 10am-3pm local time), equipment stipend ($500/year, receipts required), in-person requirements (one team week per quarter), and performance expectations. Formal tone, plain language, no jargon. 400-500 words."

8. Social media post
   - Weak: "Write a post for LinkedIn"
   - Strong: "Write a LinkedIn post from the CEO's perspective announcing that we just reached 100 enterprise customers. Tone: genuinely grateful, not performatively humble. Mention the team, not just the milestone. 150 words, no hashtags, end with one forward-looking sentence about year two."
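Strong prompts like these are worth reusing verbatim rather than retyping from memory. A minimal sketch of turning the meeting-summary prompt into a fill-in template, assuming a Python workflow (the template variable, placeholder names, and function name are illustrative, not from the chapter):

```python
# Illustrative template based on the "meeting summary" strong prompt above;
# the {max_words} and {transcript} placeholders are assumptions.
MEETING_SUMMARY_TEMPLATE = (
    "Summarize this meeting transcript in three sections: "
    "Key decisions made (numbered list), "
    "Action items with owner and due date (table), and "
    "Open questions that still need resolution (bullets). "
    "Max {max_words} words total.\n\n"
    "Transcript:\n{transcript}"
)

def build_meeting_summary_prompt(transcript: str, max_words: int = 250) -> str:
    """Fill the template so every run reuses the same high-specification prompt."""
    return MEETING_SUMMARY_TEMPLATE.format(max_words=max_words, transcript=transcript)

prompt = build_meeting_summary_prompt("Alice: Ship Friday. Bob: Agreed.")
```

Teams that maintain prompt libraries typically keep one such template per recurring task, varying only the pasted content and a few parameters.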
7.16 Research Breakdown: Prompt Specificity and Output Quality
The relationship between prompt specificity and output quality has been studied across both controlled research settings and large-scale industry observations.
A 2023 study from Stanford's Human-Computer Interaction Group examined how variations in prompt structure affected the quality of AI-generated code, summaries, and creative writing as rated by domain experts. Prompts with explicit task specifications, audience definitions, and format requirements consistently produced outputs that experts rated 15-30% higher in quality than outputs from minimal prompts for the same underlying task.
Research from Anthropic on Claude's response patterns shows that providing examples of desired output (few-shot prompting) reduces the need for iteration by approximately 40% compared to zero-shot prompts for style-sensitive tasks.
Industry data from organizations that have formalized prompt libraries — standardized, high-specification prompts for recurring tasks — shows consistent quality improvements and reduction in revision cycles. A content marketing agency that moved from ad-hoc to structured prompts across a team of 12 writers reported a 35% reduction in editing time per piece over six months.
The mechanism is straightforward: the AI is not producing lower-quality output from vague prompts due to lack of capability. It is producing output that represents a reasonable interpretation of an underspecified request. More specification narrows the interpretation space and directs the model's capabilities toward your actual need.
⚖️ Myth vs. Reality: "Better AI models make prompting less important"
Myth: As AI models get more capable, the quality of your prompt matters less because the AI can figure out what you mean.
Reality: More capable models respond more dramatically to the quality of your prompt — both in positive and negative directions. A vague prompt given to a highly capable model often produces a highly capable generic response, which is still useless. The same model given a specific, well-structured prompt produces output that is astonishingly good. The gap between weak and strong prompting grows as the underlying model improves.
7.17 The CRAFT Framework in Action: A Complete Reference
Below is a reference table for using CRAFT across different task types.
| Task Type | Context | Role | Action | Format | Tone |
|---|---|---|---|---|---|
| Blog post | Industry, audience, brand, unique angle | Content strategist for [brand type] | Write, argue, explore | Word count, headers, CTA | Brand voice description |
| Code review | Language, framework, standards, expertise level | Senior [language] engineer | Review, identify, suggest | Numbered issues, severity levels | Direct, specific |
| Client email | Relationship, sensitivity, business context | Senior client manager | Draft, communicate | Word limit, subject line, sign-off | Professional, solutions-oriented |
| Report summary | Audience expertise level, purpose of summary | Analyst briefing executive | Summarize, highlight | Bullets, word limit, sections | Clear, evidence-based |
| Training content | Learner level, learning objective, format constraints | Instructional designer | Develop, explain, create | Module structure, length per section | Accessible, progressive |
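The rows above can also be operationalized in code. A minimal sketch, assuming a Python workflow (the class name, field order, and rendered labels are assumptions; the chapter does not prescribe an implementation):

```python
from dataclasses import dataclass

@dataclass
class CraftPrompt:
    """The five CRAFT components; field names mirror the framework."""
    context: str
    role: str
    action: str
    format: str
    tone: str

    def render(self) -> str:
        # Emit components in C-R-A-F-T order: situation first,
        # then perspective, task, delivery, and voice.
        return (
            f"Context: {self.context}\n"
            f"Act as: {self.role}\n"
            f"Task: {self.action}\n"
            f"Format: {self.format}\n"
            f"Tone: {self.tone}"
        )

# Example values loosely based on the "Report summary" row above.
prompt = CraftPrompt(
    context="Quarterly risk report; audience is non-technical executives",
    role="Analyst briefing executives",
    action="Summarize the report and highlight the top three risks",
    format="Bullets, three sections, max 200 words",
    tone="Clear, evidence-based",
).render()
```

Encoding the checklist as a structure makes an omitted component visible at a glance: every field must be filled before the prompt renders.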
7.18 Chapter Summary
The quality of your AI output is, to a larger degree than most people realize, a direct function of the quality of your prompt. This chapter established the foundation:
A prompt has five components: Task, Context, Format, Constraints, and Examples. Each component reduces ambiguity and directs the AI's considerable capabilities toward your actual need.
The specificity ladder shows that moving from "write something about X" to a fully specified prompt is not about adding complexity — it is about knowing what you want and expressing it clearly.
Clarity principles — active voice, direct verbs, unambiguous references, and one task per prompt — ensure that your instructions are understood, not merely received.
Format specification is one of the highest-leverage prompting decisions you can make, transforming how output is structured and delivered.
The CRAFT framework (Context, Role, Action, Format, Tone) provides a reliable checklist for prompts that need to cover all dimensions.
The five failure modes — buried lede, assumption gap, vague imperative, wall of text, over-constraint — each have specific fixes.
In Chapter 8, we turn to context specifically: what it is, how much to include, and how to build reusable context packets that make every AI session more productive from the first message.
Chapter Navigation
- Previous: Chapter 6 — AI Tools in the Modern Workplace: Capabilities and Limitations
- Next: Chapter 8 — Context Is Everything: Loading Your AI's Working Memory
- Exercises: Chapter 7 Exercises
- Quiz: Chapter 7 Quiz
- Case Studies: Alex's Blog Post Transformation | Raj's Technical Documentation Prompt
- Key Takeaways: Chapter 7 Key Takeaways
- Further Reading: Chapter 7 Further Reading