Chapter 8 Quiz: Prompt Engineering Fundamentals
Test your understanding of the five pillars of effective prompts, anti-patterns, templates, and prompt evaluation. 25 questions total.
Question 1
What are the five pillars of an effective code prompt?
Show Answer
The five pillars are:

1. **Clarity** — Saying exactly what you mean without ambiguity
2. **Specificity** — Including the right level of detail
3. **Context** — Providing background information the AI needs
4. **Constraints** — Defining boundaries and requirements
5. **Output Formatting** — Controlling the structure of the response

Question 2
Which anti-pattern is characterized by a single run-on sentence that buries important requirements in a stream of consciousness?
Show Answer
**The Wall of Text** anti-pattern. It fails because important requirements get buried in unstructured prose, making it difficult for both humans and AI to parse. The solution is to break the prompt into structured sections with clear headings and bullet points.

Question 3
What is the "Goldilocks Rule of Specificity"?
Show Answer
Include enough detail that the AI cannot reasonably produce the wrong thing, but not so much that you are effectively writing pseudocode. If your prompt is more detailed than the code it would produce, you have gone too far.

Question 4
A developer writes the prompt: "Make the API faster." Which pillar is most lacking, and how would you improve it?
Show Answer
**Clarity** is the most lacking pillar. "Faster" is vague — it could mean reducing response latency, improving throughput, optimizing database queries, adding caching, or many other things. An improved version might be: "Reduce the response time of the GET /api/products endpoint from ~800ms to under 200ms by adding Redis caching for the product catalog query, which rarely changes (updated hourly)."

Question 5
What are the four types of context discussed in Section 8.4?
Show Answer
1. **Technology Stack Context** — Frameworks, libraries, language versions, and architectural patterns
2. **Codebase Context** — Existing models, functions, and data structures the new code must integrate with
3. **Domain Context** — Business rules, terminology, and domain-specific conventions
4. **Problem Context** — The specific issue being solved, including symptoms and root causes

Question 6
Which prompt anti-pattern asks the AI to generate an entire complex system in a single prompt?
Show Answer
**The Kitchen Sink** anti-pattern. It fails because AI coding assistants work best on focused, well-defined tasks. Attempting to generate an entire system in one prompt produces superficial, incomplete code for each component. The solution is to break the work into incremental, focused tasks.

Question 7
True or False: Longer prompts are always more effective than shorter prompts.
Show Answer
**False.** Prompt quality comes from structure and precision, not word count. A concise, well-structured prompt with clear sections and bullet points often outperforms a long prose paragraph. The key is having the right information, organized clearly, not maximizing length.

Question 8
What is a "negative constraint" and why is it particularly powerful in prompts?
Show Answer
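To illustrate the answer below, here is a hypothetical prompt fragment built around negative constraints (the function and class names are invented for this example, not taken from the chapter):

```
Refactor process_order() to use the new OrderValidator class.

Constraints:
- Do NOT change the public function signature.
- Do NOT use global variables.
- Do NOT modify the input order dict in place; return a new dict.
- Do NOT add new third-party dependencies.
```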
A negative constraint specifies what the code should NOT do (e.g., "Do NOT use global variables," "Do NOT modify the input data in place"). Negative constraints are powerful because they explicitly prevent common AI default behaviors that may not match your requirements. They help eliminate a large class of possible (but unwanted) implementations.

Question 9
Given the prompt "Write good code for this," which anti-pattern does it represent?
Show Answer
**The Implicit Standard** anti-pattern. "Good" is subjective and means different things in different contexts — good for performance, readability, maintainability, or security? According to which style guide? The prompt should explicitly define what "good" means in the specific context.

Question 10
What are the four measures of prompt effectiveness described in Section 8.10?
Show Answer
1. **First-Attempt Success Rate** — Whether the AI produces usable code on the first try
2. **Iteration Count** — How many follow-up prompts are needed to reach working code
3. **Code Quality Score** — Whether the generated code meets style, documentation, edge-case, and efficiency standards
4. **Reusability** — Whether the prompt can be reused (with minor modifications) for similar future tasks

Question 11
A prompt asks the AI to "write a simple function with comprehensive error handling that is concise but thoroughly documented." What anti-pattern is this?
Show Answer
**The Contradictory Prompt** anti-pattern. "Simple" conflicts with "comprehensive error handling." "Concise" conflicts with "thoroughly documented." The AI must make trade-off decisions without knowing the developer's priorities. The solution is to state priorities explicitly and in order.

Question 12
What prompt quality level (on the five-level spectrum) does the following represent?
    Create a responsive login page with email and password fields,
    a remember me checkbox, client-side validation, and accessible labels.
    Use HTML5, CSS Flexbox, and vanilla JavaScript.
Show Answer
**Level 3: The Specification (Good).** It has clear intent, specific features (email/password, checkbox, validation, accessibility), some technical context (HTML5, CSS Flexbox, JS), and implied constraints. It is missing detailed design specifications, exact styling, responsive breakpoints, and detailed accessibility requirements that would elevate it to Level 4.

Question 13
Why is the "Curse of Knowledge" a common pitfall when writing prompts?
Show Answer
The Curse of Knowledge means that because you already know what you mean, your prompt *seems* clear to you — but the AI does not share your mental context. You unconsciously assume the AI knows things it does not (your project structure, your conventions, the bug you are looking at). A useful test: could a junior developer who just joined your team understand exactly what you want from the prompt alone?

Question 14
Which prompt template from Section 8.9 would you use if you wanted to ensure a payment processing function handles edge cases like network timeouts and invalid card numbers?
Show Answer
**Template 4: Test Generation.** To verify edge case handling, you would use the test generation template, which includes sections for happy path tests, edge case tests, and error case tests. You would specify the function to test, the specific edge cases to cover (network timeouts, invalid cards, expired cards, etc.), and the testing framework conventions.

Question 15
What is the difference between "Technology Stack Context" and "Codebase Context"?
Show Answer
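To illustrate the distinction drawn in the answer below, here is a hypothetical prompt skeleton that supplies both kinds of context; the model fields and the task are invented for this example:

```
## Technology stack context
Flask 3.0 with SQLAlchemy 2.0, Python 3.12, application factory pattern.

## Codebase context
The new endpoint must work with this existing model:

    class User(db.Model):
        id: Mapped[int] = mapped_column(primary_key=True)
        email: Mapped[str] = mapped_column(unique=True)

## Task
Add a GET /users/<id> endpoint that returns the user as JSON.
```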
**Technology Stack Context** describes the tools, frameworks, libraries, language versions, and architectural patterns of the project (e.g., "Flask 3.0 with SQLAlchemy 2.0, Python 3.12, application factory pattern"). **Codebase Context** provides actual code — existing models, functions, interfaces, and data structures that the new code must work with (e.g., showing your existing User model class definition). Stack context tells the AI *what tools are available*; codebase context shows *what already exists*.

Question 16
You give an AI a prompt and it generates code that is syntactically correct and runs without errors, but it uses a deprecated API that your project has moved away from. Which pillar most likely needed strengthening?
Show Answer
**Context.** Specifically, technology stack context. If the prompt had specified the library versions being used or noted "use the SQLAlchemy 2.0 query syntax, not the legacy 1.x style," the AI would have generated code using the current API. The AI defaulted to a common (but outdated) pattern because it lacked information about your project's current technology choices.

Question 17
Rank these verbs from most vague to most precise for code prompts: "fix," "replace the null check on line 42 with a guard clause," "handle," "improve," "add input validation for email format."
Show Answer
From most vague to most precise:

1. **"handle"** — Most vague; could mean catch, log, retry, suppress, re-raise, etc.
2. **"improve"** — Vague; improve how? Performance, readability, correctness?
3. **"fix"** — Somewhat vague; at least implies something is broken, but what?
4. **"add input validation for email format"** — Specific action with a defined scope
5. **"replace the null check on line 42 with a guard clause"** — Most precise; exact location, exact action

Question 18
What does the "Show, Don't Just Tell" best practice mean in the context of output formatting?
Show Answer
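As an illustration of the practice described in the answer below, here is a hypothetical output-format section for a prompt; the field names and values are invented for this example:

```json
{
  "user_id": 1042,
  "email": "jane@example.com",
  "roles": ["admin", "editor"],
  "last_login": "2025-01-15T09:30:00Z"
}
```

Pasting a sample like this into the prompt pins down the nesting, naming convention, and data types far more reliably than the instruction "return it as JSON."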
When specifying the desired output format, include a concrete example of what you want rather than just describing it in words. For instance, instead of saying "return it as JSON," show a sample JSON structure with realistic placeholder data. An example is worth a hundred words of description because it removes ambiguity about structure, naming conventions, nesting, and data types.

Question 19
A developer routinely needs 6-8 follow-up prompts to get working code from an initial prompt. Which measure of effectiveness does this directly indicate is low?
Show Answer
**First-Attempt Success Rate** is directly indicated as low. The **Iteration Count** (6-8) is the metric that quantifies this problem. Both measures suggest the initial prompts need improvement — likely lacking in one or more of the five pillars (most commonly clarity, specificity, or context). The developer should analyze which pillar fails most often by asking: "Did the AI understand what I wanted? Did it include the right details? Did the code fit my project?"

Question 20
What is the recommended approach when you have multiple constraints that might conflict, such as "optimize for performance" and "maximize readability"?
Show Answer
State the priority order explicitly. For example: "Security constraints take precedence over performance. If there is a conflict between readability and clever optimization, choose readability." This helps the AI make trade-off decisions that align with your values rather than guessing which constraint matters more.

Question 21
Which of the following is NOT one of the six prompt anti-patterns discussed in the chapter?

a) The "Just Do It" Prompt
b) The Wall of Text
c) The Contradictory Prompt
d) The Premature Optimization Prompt
e) The Kitchen Sink Prompt
Show Answer
**d) The Premature Optimization Prompt** is NOT one of the six anti-patterns discussed. The six anti-patterns covered are:

1. The "Just Do It" Prompt
2. The Wall of Text
3. The Contradictory Prompt
4. The Assumption Dump
5. The Kitchen Sink Prompt
6. The Implicit Standard

Question 22
You are writing a prompt to generate a database migration script that will alter a production table with 50 million rows. Should you aim for Level 2 or Level 4 on the prompt quality spectrum? Why?
Show Answer
**Level 4 (The Blueprint)** at minimum. Database migration scripts for production tables with 50 million rows are high-risk tasks where the consequences of an error are severe (data loss, extended downtime, corrupted tables). High-risk tasks warrant highly detailed prompts that specify the exact schema changes, migration strategy (online vs. offline), rollback plan, data validation steps, expected execution time, and constraints about locking behavior. A Level 2 prompt could result in a migration that locks the table for hours or loses data.

Question 23
What is "Prompt Journaling" and why is it recommended?
Show Answer
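To illustrate the practice described in the answer below, here is a minimal Python sketch of a journal; the entry schema (field names) is an assumption, since the chapter recommends the habit but does not prescribe a format. It also computes two of the Section 8.10 effectiveness measures from the log:

```python
from statistics import mean

# Hypothetical journal entry schema -- the field names are assumptions,
# not prescribed by the chapter.
journal = [
    {"task": "login page",    "template": "feature",         "iterations": 2, "first_try": False},
    {"task": "Redis caching", "template": "refactor",        "iterations": 1, "first_try": True},
    {"task": "payment tests", "template": "test generation", "iterations": 4, "first_try": False},
]

# Two of the chapter's effectiveness measures, derived from the log:
first_attempt_rate = mean(1 if e["first_try"] else 0 for e in journal)  # fraction of first-try successes
avg_iterations = mean(e["iterations"] for e in journal)                 # average follow-up prompts needed
```

Reviewing which templates and task types correlate with low iteration counts is one way to mine the journal for reusable patterns.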
Prompt Journaling is the practice of keeping a log of your best prompts and the results they produced. Over time, this becomes a personal reference library that accelerates learning. You can review what worked, identify patterns in successful prompts, and reuse effective approaches for similar tasks. Many experienced vibe coders maintain a document or repository of their most effective prompts organized by task type.

Question 24
Using the five-pillar scoring rubric (1-5 for each pillar), what minimum score across all pillars is described as generally producing "good results"?
Show Answer
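To illustrate the rubric described in the answer below, here is a minimal Python sketch; the function name and return labels are my own, and only the thresholds come from the chapter:

```python
PILLARS = ["clarity", "specificity", "context", "constraints", "output_formatting"]

def rate_prompt(scores):
    """Classify a prompt from per-pillar scores (1-5 each).

    Per the rubric: 3+ on every pillar generally yields good results;
    4+ on every pillar typically yields excellent first-attempt results.
    """
    values = [scores[p] for p in PILLARS]
    if all(v >= 4 for v in values):
        return "excellent"
    if all(v >= 3 for v in values):
        return "good"
    return "needs work"
```

Note that one weak pillar is enough to drop the rating, since the thresholds apply to all five pillars, not to their average.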
A prompt scoring **3 or above** on all five pillars will generally produce good results. A prompt scoring **4 or above** on all five pillars will typically produce excellent results on the first attempt. A score of 3 means: clear intent with minor ambiguities, adequate detail for the task, key context provided, important constraints stated, and general format specified.

Question 25
A team member writes this prompt: "Write the API endpoint." When asked to improve it, they say, "But the AI can see the rest of my code in the context window." Is this a valid defense? Explain your reasoning using concepts from the chapter.