Chapter 19 Quiz: Prompt Engineering Fundamentals
Multiple Choice
Question 1. Which of the following best defines prompt engineering?
- (a) The process of fine-tuning a large language model on custom data.
- (b) The discipline of designing and refining text inputs to LLMs to produce outputs aligned with user intent.
- (c) A programming technique for building chatbot applications.
- (d) The process of selecting the best LLM for a given business task.
Question 2. In the six-component prompt framework, which component specifies the persona or expertise the model should adopt?
- (a) Instruction
- (b) Context
- (c) Role
- (d) Constraints
Question 3. A marketing manager writes the following prompt: "Write something about our spring sale." The resulting output is generic and unhelpful. Which prompt engineering principle has been most clearly violated?
- (a) Role assignment
- (b) Specificity of instruction
- (c) Temperature selection
- (d) Prompt injection prevention
Question 4. Which prompting strategy provides the model with examples of desired input-output pairs to guide its behavior?
- (a) Zero-shot prompting
- (b) Few-shot prompting
- (c) Role-based prompting
- (d) Chain-of-thought prompting
Question 5. A data analyst needs the LLM to extract product names, prices, and categories from unstructured customer reviews and return the results as valid JSON. Which temperature setting is most appropriate?
- (a) 0.0 - 0.2
- (b) 0.5 - 0.6
- (c) 0.7 - 0.8
- (d) 0.9 - 1.0
Question 6. Which of the following is NOT one of the six components of the prompt anatomy framework presented in this chapter?
- (a) Output format
- (b) Training data
- (c) Constraints
- (d) Context
Question 7. In few-shot prompting, why is consistent formatting across examples important?
- (a) It reduces the token count, saving money on API calls.
- (b) It allows the model to identify the pattern it should replicate in its response.
- (c) It prevents the model from generating outputs longer than the examples.
- (d) It is required by the OpenAI API specification.
Question 8. A prompt reads: "Explain why our customer retention rate has dropped and recommend increasing the loyalty program budget." This prompt exhibits which common pitfall?
- (a) Overly complex prompting
- (b) Ambiguity
- (c) Leading the witness
- (d) Ignoring model limitations
Question 9. What does the top_p (nucleus sampling) parameter control?
- (a) The maximum number of tokens the model can generate.
- (b) The percentage of training data used for the response.
- (c) The cumulative probability threshold for token selection, filtering out low-probability tokens.
- (d) The number of response candidates the model generates internally before selecting the best one.
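For readers who want to see the mechanism behind this question, here is a minimal sketch of nucleus-sampling token filtering. The function name and the toy probability table are illustrative, not taken from any particular API: real implementations sample from the surviving tokens after renormalizing, which this sketch omits.

```python
def nucleus_filter(token_probs, top_p):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches top_p; discard the rest."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append(token)
        cumulative += prob
        if cumulative >= top_p:
            break
    return kept

# With top_p = 0.8, the low-probability tail is filtered out
# before any sampling happens:
probs = {"the": 0.5, "a": 0.3, "zebra": 0.15, "xylophone": 0.05}
print(nucleus_filter(probs, 0.8))  # ['the', 'a']
```

Lowering top_p shrinks the candidate pool toward only the most likely tokens, which is why it is often described as a companion knob to temperature.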
Question 10. Which of the following is an example of prompt injection?
- (a) A user writes a very long prompt that exceeds the model's context window.
- (b) A user includes malicious instructions in input data that override the system prompt's intent.
- (c) A user sets the temperature parameter to a very high value.
- (d) A user provides contradictory instructions in two separate messages.
Question 11. Athena Retail Group's prompt library reduced editorial revision cycles by 55%. Which of the following best explains why standardized prompts reduce revision cycles?
- (a) Standardized prompts use fewer tokens, so the model generates shorter outputs that need less editing.
- (b) Standardized prompts encode quality criteria, brand voice, and format requirements, so initial outputs are closer to the desired final product.
- (c) Standardized prompts bypass the model's safety filters, allowing more direct responses.
- (d) Standardized prompts are always written in JSON format, which eliminates formatting errors.
Question 12. The PromptBuilder class supports template variables using the {{variable}} syntax. What business advantage does template variable interpolation provide?
- (a) It automatically selects the optimal temperature for each prompt.
- (b) It enables a single prompt template to be reused across different inputs — such as different competitors, time periods, or product categories — without rewriting the entire prompt.
- (c) It connects the prompt to a database for real-time data retrieval.
- (d) It ensures the model always produces valid JSON output.
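As a point of reference, the interpolation this question describes can be sketched in a few lines. This is a standalone illustration of the `{{variable}}` idea, not the chapter's actual PromptBuilder implementation; the function name and template text are hypothetical.

```python
import re

def render_template(template, variables):
    """Replace each {{name}} placeholder with its value from `variables`,
    raising an error if a placeholder has no corresponding value."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"Missing template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

template = "Compare {{competitor}}'s pricing against ours for {{period}}."
print(render_template(template, {"competitor": "Acme Corp", "period": "Q3 2024"}))
# Compare Acme Corp's pricing against ours for Q3 2024.
```

The same template can then be rerun with a different competitor or period, which is the reuse advantage the correct answer points to.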
Question 13. A product manager needs to generate a detailed quarterly business review. She writes a single prompt that asks the model to: (1) analyze sales data, (2) compare against targets, (3) identify competitive threats, (4) draft a strategic response, (5) create a budget proposal, and (6) prepare presentation slides. What is the most likely problem with this approach?
- (a) The model will refuse to answer because the prompt is too long.
- (b) The prompt attempts too many tasks simultaneously, likely producing shallow coverage of each rather than high-quality output on any single task.
- (c) Few-shot examples are required for multi-task prompts.
- (d) The model's temperature must be set to exactly 0.5 for multi-task prompts to work.
Question 14. Which of the following best describes the iterative prompt refinement process?
- (a) Write one prompt, run it once, and accept the output.
- (b) Write the prompt, test with representative inputs, evaluate against defined criteria, and refine based on specific failures — repeating until quality targets are met.
- (c) Increase the temperature gradually from 0 to 1 until the output improves.
- (d) Use a different LLM for each iteration until one produces satisfactory results.
Question 15. In the context of organizational prompt management, what is the primary purpose of version control for prompts?
- (a) To prevent unauthorized users from accessing the prompts.
- (b) To track changes over time, document why modifications were made, and measure the performance impact of each version.
- (c) To automatically update prompts when the underlying LLM is updated.
- (d) To reduce the token count of prompts by removing older versions.
True or False
Question 16. Zero-shot prompting is always inferior to few-shot prompting regardless of the task.
- True
- False
Question 17. Setting the temperature parameter to 0 guarantees that the model will produce a factually correct response.
- True
- False
Question 18. Role-based prompting changes the model's underlying knowledge — assigning the role "You are a doctor" gives the model access to medical databases it otherwise could not reach.
- True
- False
Short Answer
Question 19. NK Adeyemi observes that prompt engineering is "like writing a creative brief, but for a machine." In three to four sentences, explain what she means by this analogy and why her marketing background gives her an advantage in prompt engineering.
Question 20. Describe two specific scenarios in which output format specification transforms an LLM from a "conversational tool" into a "business systems component." For each scenario, explain what format you would request and how the output would be integrated into a downstream business process.
Question 21. Tom Kowalski initially dismisses prompt engineering as "just typing words into a box." Explain, using at least two concepts from the chapter, why this characterization is inaccurate. What does Tom eventually recognize that changes his view?
Question 22. A company is processing customer support tickets using an LLM. Explain why prompt injection is a relevant security concern in this context and describe two specific mitigation strategies the company should implement.
Question 23. Explain the relationship between prompt engineering and the concept of "AI as a strategic asset" discussed earlier in this textbook. How does an organization's prompt library function as intellectual property?
Answer key for Questions 1-18 is provided in Appendix B. Short-answer questions (19-23) are graded by the instructor using the rubric in the instructor supplement.