Chapter 20 Quiz: Advanced Prompt Engineering
Multiple Choice
Question 1. What is the primary purpose of chain-of-thought (CoT) prompting?
- (a) To reduce the number of tokens in the model's response.
- (b) To encourage the model to produce intermediate reasoning steps before arriving at a final answer.
- (c) To connect multiple LLM calls in a sequence.
- (d) To enforce a specific output format like JSON.
Question 2. Tom's single-prompt approach to Athena's Q3 analysis failed primarily because:
- (a) The LLM did not have access to current sales data.
- (b) The prompt was too short to generate a useful response.
- (c) Asking a single prompt to perform multiple distinct cognitive tasks simultaneously produces generic, shallow outputs.
- (d) The model's context window was too small to process the data.
Question 3. Which technique involves generating multiple responses to the same prompt and selecting the answer that appears most frequently?
- (a) Chain-of-thought prompting
- (b) Tree-of-thought prompting
- (c) Self-consistency
- (d) Meta-prompting
Question 4. In tree-of-thought prompting, the model is asked to:
- (a) Generate a single optimal reasoning path from start to finish.
- (b) Generate multiple alternative reasoning paths, evaluate each, and select the best.
- (c) Follow a predefined decision tree with fixed branches.
- (d) Generate a visualization of the prompt chain structure.
Question 5. A company processes 10,000 customer emails per day using a classification prompt at $0.002 per call. Implementing self-consistency with 3 samples would cost approximately:
- (a) $20 per day — the same as the current approach.
- (b) $40 per day — double the current cost.
- (c) $60 per day — triple the current cost.
- (d) $30 per day — 1.5 times the current cost.
Question 6. In prompt chaining, the output of Step 2 is used as:
- (a) The system message for Step 3.
- (b) Part of the input context for Step 3.
- (c) The validation criteria for Step 1.
- (d) The temperature setting for Step 3.
Question 7. Which of the following is NOT a benefit of prompt chaining compared to a single complex prompt?
- (a) Each step can be tested and debugged independently.
- (b) Errors can be traced to specific steps.
- (c) The total API cost is always lower than a single prompt.
- (d) Individual steps can be reused across different chains.
Question 8. What is the primary purpose of JSON mode (or structured output mode) in LLM APIs?
- (a) To make the model's responses shorter.
- (b) To guarantee that the model's output is valid JSON that can be parsed by code.
- (c) To improve the model's reasoning accuracy.
- (d) To reduce API latency.
Question 9. Function calling (tool use) in LLMs allows the model to:
- (a) Directly execute functions on the server.
- (b) Return structured arguments for predefined functions, which the application code then executes.
- (c) Modify its own training data.
- (d) Access the internet to retrieve current information.
Question 10. Meta-prompting refers to:
- (a) Using an LLM to generate, evaluate, or optimize prompts for itself or another LLM.
- (b) Adding metadata tags to prompts for version tracking.
- (c) Prompts that analyze the model's own internal architecture.
- (d) Using multiple models simultaneously on the same prompt.
Question 11. The constitutional AI self-critique pattern follows which sequence?
- (a) Critique → Generate → Revise
- (b) Generate → Revise → Critique
- (c) Generate → Critique → Revise
- (d) Revise → Critique → Generate
Question 12. Which of the following is an example of a "business constitution" rule as described in the chapter?
- (a) "Always use GPT-4o for customer-facing prompts."
- (b) "Never promise specific delivery dates unless confirmed by the logistics system."
- (c) "Set temperature to 0.3 for all production prompts."
- (d) "All prompts must be fewer than 200 words."
Question 13. In the PromptChain class, what is the purpose of the validator function?
- (a) It authenticates the API key before making a call.
- (b) It checks whether a step's output meets quality criteria before proceeding to the next step.
- (c) It validates the Python syntax of the prompt template.
- (d) It ensures the step's temperature is within the allowed range.
Question 14. Prompt injection is a security threat in which:
- (a) An attacker gains access to the API key and makes unauthorized calls.
- (b) A user crafts input that overrides or manipulates the system prompt's instructions.
- (c) The model generates output that exceeds the context window.
- (d) An engineer accidentally deploys an unfinished prompt to production.
Question 15. Which technique would be LEAST appropriate for the following task: "Write a creative marketing tagline for Athena's holiday campaign"?
- (a) Setting a higher temperature (0.7-0.9) for variety.
- (b) Using chain-of-thought prompting to produce explicit intermediate reasoning steps.
- (c) Generating multiple options and selecting the best.
- (d) Providing examples of successful taglines from prior campaigns.
Question 16. The chapter describes prompt regression testing as:
- (a) Testing whether a prompt's performance improves over time without changes.
- (b) Re-running a test suite when the LLM model version changes to detect performance degradation.
- (c) Using statistical regression models to predict prompt performance.
- (d) Gradually simplifying a prompt to find the minimal effective version.
Question 17. In the multi-agent "generator-critic" pattern, the critic's role is to:
- (a) Generate an alternative output that competes with the generator's.
- (b) Evaluate the generator's output against specific criteria and identify deficiencies.
- (c) Rewrite the generator's output from scratch.
- (d) Determine which LLM model to use for the generator.
Question 18. Athena's customer service team saw satisfaction scores rise from 3.2 to 4.1 after implementing the generate-critique-revise pattern. According to the chapter, the improvement came primarily from:
- (a) The AI generating better initial drafts than human agents.
- (b) The systematic critique step catching issues that humans under time pressure frequently missed.
- (c) Eliminating human involvement in complaint resolution entirely.
- (d) Using a more expensive LLM model for customer communications.
Question 19. Which of the following is an anti-pattern described in the chapter?
- (a) Using lower temperature for data extraction steps.
- (b) Building test cases before deploying prompts to production.
- (c) Cramming every requirement into a single prompt longer than 500 words with 10+ instructions.
- (d) Using meta-prompting to generate category-specific prompts.
Question 20. The chapter's key principle — that "the most powerful prompt engineering technique is decomposition" — connects to which broader business skill?
- (a) Financial modeling
- (b) Stakeholder management
- (c) Project management and work breakdown structures
- (d) Competitive analysis
Short Answer
Question 21. Explain why the PromptChain class uses different temperature settings for different steps. Give an example of a step that should use low temperature and one that should use higher temperature, with your reasoning.
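For reference while answering, here is a minimal sketch of what such a chain might look like. The names (`Step`, `PromptChain`, `fake_llm`) are illustrative assumptions, not the chapter's exact implementation; the point to notice is that each step carries its own temperature and optional validator.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    name: str
    prompt_template: str  # "{input}" is filled with the previous step's output
    temperature: float    # per-step sampling temperature
    validator: Optional[Callable[[str], bool]] = None  # quality gate

class PromptChain:
    """Runs steps in sequence, feeding each output into the next prompt."""

    def __init__(self, llm: Callable[[str, float], str], steps: list[Step]):
        self.llm = llm  # any callable (prompt, temperature) -> text
        self.steps = steps

    def run(self, initial_input: str) -> str:
        context = initial_input
        for step in self.steps:
            prompt = step.prompt_template.format(input=context)
            output = self.llm(prompt, step.temperature)
            # Stop early if a step's output fails its quality check
            if step.validator and not step.validator(output):
                raise ValueError(f"Step '{step.name}' failed validation")
            context = output  # this step's output becomes the next step's input
        return context
```

Your answer should explain why the `temperature` field varies per `Step` rather than being fixed for the whole chain.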
Question 22. A startup has 3 engineers and limited budget. They need to deploy a customer support chatbot quickly. Argue whether they should implement full enterprise prompt governance or a simplified version. What is the minimum viable governance process you would recommend?
Question 23. Describe the relationship between prompt chaining (Chapter 20) and retrieval-augmented generation (introduced briefly here and covered in depth in Chapter 21). How might you combine both techniques in a single business workflow?
Question 24. The chapter mentions that Athena's QBR chain reduced preparation time from 3 days to 4 hours, but the data team still spent time reviewing and refining outputs. Why is this human-in-the-loop step essential, and under what conditions (if any) could it be safely removed?
Question 25. Compare the cost-effectiveness of self-consistency versus the generate-critique-revise pattern for improving output quality. Under what circumstances would you choose each approach?
Answer key: Multiple choice answers and rubrics for short-answer questions are available in Appendix B.