Chapter 7 Quiz: Prompting Fundamentals
Test your understanding of prompting structure, clarity, and specificity. For each question, attempt your answer before expanding the solution.
Question 1
Which of the five prompt components is most consistently missing from first-draft prompts written by people new to AI tools?
A) Task
B) Context
C) Format
D) Constraints
Answer
**B) Context** — and often **C) Format** as well. New users almost always include a task (they know what they want done), but rarely specify the audience, purpose, or situation (context), and even more rarely specify how they want the output structured (format). The AI fills both gaps with reasonable but generic defaults, which is why first-draft AI output so often feels like it could be for anyone.

Question 2
You submit this prompt: "Write a report on employee satisfaction." The output you receive is technically accurate but generic — it could apply to any company. Which principle most directly explains the problem?
A) The prompt used passive voice
B) The prompt contained contradictory constraints
C) The prompt did not provide context or format specification
D) The prompt had too many tasks
Answer
**C) The prompt did not provide context or format specification.** The task (write a report) is present, but there is no context about whose employees, what industry, what the report is for, or who will read it. There is also no format specification — what sections, what length, what structure. Without these, the AI defaults to the most statistically typical version of an "employee satisfaction report," which is generic. Passive voice (A) is not the issue here; contradictory constraints (B) are absent; there is only one task (D).

Question 3
In the specificity ladder, what distinguishes Rung 4 from Rung 3?
A) The addition of a format specification
B) The addition of a specific angle, position, or argument to take
C) The addition of an example of desired output
D) The specification of a word count
Answer
**B) The addition of a specific angle, position, or argument to take.** Rung 3 adds audience to topic and type. Rung 4 adds a specific perspective or angle — it tells the AI not just who to write for, but what to argue, explore, or demonstrate. This is what transforms output from "useful for that audience" to "genuinely specific and distinguishable from other writing on the same topic." Format (A) and examples (C) are important but are components that can appear at any rung.

Question 4
Which of the following is the most reliable way to improve tone matching in AI output?
A) Describing the tone in adjective pairs ("professional but warm")
B) Providing a word count and structure specification
C) Including an example of the desired tone from existing content
D) Using the word "important" before the tone instruction
Answer
**C) Including an example of the desired tone from existing content.** Examples communicate tone with a precision that adjective descriptions cannot match. "Professional but warm" means different things to different people (and to different AI systems). A sentence or paragraph in the actual desired tone shows rather than tells. Adjective pairs (A) are a starting point but rarely sufficient for style-sensitive tasks. Word count and structure (B) are format, not tone. Adding "important" (D) has no meaningful effect.

Question 5
You are writing a prompt and realize that two of your constraints contradict each other. What should you do?
A) Keep both constraints — the AI will figure out the right balance
B) Remove both constraints and rely on the AI's judgment
C) Decide which constraint matters more and revise or remove the other
D) Add a third constraint to resolve the conflict
Answer
**C) Decide which constraint matters more and revise or remove the other.** Contradictory constraints force the AI to choose which one to honor, and it will do so unpredictably. The right approach is to resolve the contradiction before submitting the prompt. Keeping both (A) produces inconsistent output. Removing both (B) gives up necessary control. Adding a third constraint (D) does not resolve the underlying conflict and often makes it worse.

Question 6
The "buried lede" failure mode refers to:
A) Using passive voice that obscures who is performing the action
B) Placing the most important instruction at the end of a long prompt
C) Providing too much background context before stating the task
D) Both B and C
Answer
**D) Both B and C.** The buried lede failure mode occurs when the actual task or most critical instruction is preceded by so much background information that it receives less weight in the AI's processing. AI models process from the beginning of the prompt and give proportionally more attention to early content. The fix is to front-load your key instruction and move background context to a secondary position, often labeled explicitly as context.

Question 7
Which of the following active verbs is the weakest task instruction?
A) "Analyze"
B) "Critique"
C) "Help me with"
D) "Outline"
Answer
**C) "Help me with."** This is not a real instruction — it is a request for assistance without specifying what kind of assistance. The AI is forced to guess whether you want writing, analysis, summarization, brainstorming, or something else. "Analyze," "Critique," and "Outline" are all specific action verbs that tell the AI exactly what cognitive and output operation you want performed.

Question 8
What does the CRAFT mnemonic stand for?
A) Clarity, Relevance, Accuracy, Format, Tone
B) Context, Role, Action, Format, Tone
C) Content, Request, Audience, Framework, Task
D) Context, Request, Angle, Format, Type
Answer
**B) Context, Role, Action, Format, Tone.** The CRAFT framework provides a checklist for ensuring prompts cover all critical dimensions: the background information the AI needs (Context), the perspective or persona to adopt (Role), the specific task to perform (Action), how the output should be structured (Format), and the emotional register to use (Tone).

Question 9
You write a prompt that asks the AI to "briefly summarize, provide detailed analysis, create an executive dashboard, draft a follow-up action plan, and flag any legal risks." What failure mode does this represent?
A) Buried lede
B) Assumption gap
C) Over-constrained
D) Wall of text / multiple tasks
Answer
**D) Wall of text / multiple tasks.** This prompt contains five distinct tasks that each deserve their own focused prompt. Bundling them results in the AI addressing each task superficially, and some tasks may receive minimal attention. The solution is sequential prompting: run each task in its own message, using the output of the previous task as context for the next.

Question 10
According to the clarity principles in Section 7.4, which of the following is the clearest version of an instruction?
A) "The report should be made more readable by someone"
B) "Make this more readable"
C) "Revise the report's executive summary: shorten it to 150 words, replace all passive voice with active, and define any acronyms on first use"
D) "Clean this up and make it better for the audience"
Answer
**C) "Revise the report's executive summary: shorten it to 150 words, replace all passive voice with active, and define any acronyms on first use."** This version uses active voice (revise), names the specific section (executive summary), gives measurable criteria (150 words, passive to active, acronym definition), and leaves nothing to interpretation. Options A, B, and D all use vague imperatives that require the AI to guess what "readable," "better," and "clean" mean in this specific context.

Question 11
When is it appropriate to use negative instructions (e.g., "do not include X") in a prompt?
A) Never — negative instructions are unreliable and should always be replaced with positive ones
B) When you know a specific output failure you want to prevent, ideally paired with the positive alternative
C) Only when writing prompts for system-level role assignment
D) Always — negative constraints are more precise than positive ones
Answer
**B) When you know a specific output failure you want to prevent, ideally paired with the positive alternative.** Negative instructions are appropriate and useful when you have a specific output problem to prevent. However, research and practice suggest they are less reliably followed than positive instructions, so pairing them with the positive alternative increases reliability. "Do not use jargon" is less effective than "Do not use jargon — write as if explaining to someone with no technical background." The answer is not never (A) or always (D), and negative instructions are relevant at all levels of prompting, not just system prompts (C).

Question 12
The "context loading principle" describes the right amount of context as:
A) As much as possible — more context always improves output
B) Only the task itself — context biases the AI toward your assumptions
C) The minimum necessary to produce useful, specific, non-generic output
D) Exactly three to five sentences of background
Answer
**C) The minimum necessary to produce useful, specific, non-generic output.** There is a Goldilocks principle at work: too little context produces generic output, but too much context can dilute focus, bury important instructions, and cause the AI to treat all information as equally relevant. The right amount is the minimum that makes the output genuinely specific and useful for your situation, not a general answer to a general question.

Question 13
Which platform-specific prompting tip is associated with Claude (Anthropic)?
A) Custom Instructions feature for persistent context across conversations
B) XML-style tagging to organize complex prompts
C) Core hours scheduling for team workflows
D) Integration with Google Workspace documents
Answer
**B) XML-style tagging to organize complex prompts.** Claude is particularly responsive to structured formatting using XML-style tags (e.g., `<context>`, `<instructions>`, `<example>`) to separate the sections of a complex prompt, making it unambiguous which text is background, which is the task, and which is reference material. Custom Instructions (A) is a ChatGPT feature, and Google Workspace integration (D) is associated with Gemini; core hours scheduling (C) is a workflow practice, not a prompting tip.

Question 14
True or False: As AI models become more capable, the quality of your prompt matters less.
Answer
**False.** This is one of the most common myths in AI prompting. More capable models respond more dramatically to prompt quality — in both directions. A vague prompt given to a highly capable model produces a highly capable generic response, which is still not useful to you. The same model given a specific, well-structured prompt produces astonishingly good output. The gap between weak and strong prompting tends to grow as the underlying model improves, because more capable models can do more with clearer instructions.

Question 15
You need to prompt an AI to write a technical blog post that a non-technical audience can understand. You have tried several times and the output is always too jargon-heavy. Which combination of prompt elements is most likely to solve this problem?
A) A longer constraint list specifying every technical term to avoid
B) A reading level specification plus a single example paragraph in the desired style
C) A shorter prompt — less instruction produces simpler output
D) Asking the AI to "try harder" to be clear

Answer

**B) A reading level specification plus a single example paragraph in the desired style.** A reading level gives the AI a measurable target, and an example paragraph shows the desired register more precisely than any list of forbidden terms can. An exhaustive ban list (A) is brittle, because the AI will find jargon you did not anticipate. A shorter prompt (C) removes the guidance that prevents jargon, and asking the AI to "try harder" (D) adds emphasis without adding information.
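As an illustration of option B, the two elements can be assembled into a single prompt string. The `build_prompt` helper, its parameter names, and the sample text below are hypothetical, shown only to make the structure concrete:

```python
def build_prompt(task: str, reading_level: str, style_example: str) -> str:
    """Combine a task, a reading-level target, and a style example into one prompt."""
    return (
        f"{task}\n\n"
        f"Target reading level: {reading_level}.\n\n"
        "Match the tone and simplicity of this example paragraph:\n"
        f"{style_example}"
    )

prompt = build_prompt(
    task="Write a 600-word blog post explaining how password managers work.",
    reading_level="8th grade",
    style_example=(
        "Think of a password manager as a locked notebook. You only have to "
        "remember one key, and the notebook remembers everything else."
    ),
)
print(prompt)
```

The example paragraph does the heavy lifting here: it shows the register rather than describing it, which is the same principle behind the answer to Question 4.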