Chapter 19 Exercises: Prompt Engineering Fundamentals
Section A: Recall and Comprehension
Exercise 19.1 Define prompt engineering in your own words, using no more than three sentences. How does it differ from simply asking a chatbot a question?
Exercise 19.2 List the six components of a well-structured prompt as described in this chapter. For each component, write one sentence explaining its function and provide a brief example different from those in the chapter text.
Exercise 19.3 Explain the difference between zero-shot and few-shot prompting. Under what circumstances would you choose each approach? Give one business scenario where zero-shot is sufficient and one where few-shot is preferable.
Exercise 19.4 What is temperature in the context of LLM parameters? A marketing manager wants to use an LLM for two tasks: (a) extracting structured data from sales reports and (b) brainstorming taglines for a new product launch. What temperature setting would you recommend for each, and why?
Exercise 19.5 Describe three of the six common prompt engineering pitfalls discussed in Section 19.9. For each, explain why the pitfall occurs and provide one strategy for avoiding it.
Exercise 19.6 What is a prompt library? Explain three categories of value that organizational prompt libraries provide, drawing on the Athena Retail Group example.
Exercise 19.7 Explain what prompt injection is and why it poses a security risk for business applications. Describe two mitigation strategies.
Section B: Application
Exercise 19.8: Prompt Anatomy Decomposition
Find a prompt you or a colleague has recently used with an LLM (or write one typical of your work). Decompose it using the six-component framework:
- (a) Identify which of the six components are present and which are missing.
- (b) Rewrite the prompt to include all six components.
- (c) Run both versions (original and revised) with the same LLM and compare the outputs. Document the differences in quality, specificity, and relevance.
- (d) In 200 words or fewer, explain which components made the most difference and why.
Exercise 19.9: Zero-Shot vs. Few-Shot Experiment
Choose one of the following tasks:
- Classifying customer emails into categories (Billing, Shipping, Product, Account, Other)
- Summarizing product reviews into one-sentence highlights
- Converting unstructured meeting notes into a formatted action-item list

For your chosen task:
- (a) Write a zero-shot prompt and test it with five representative inputs.
- (b) Write a few-shot prompt with three examples and test it with the same five inputs.
- (c) Create a comparison table rating each output on Accuracy, Format Compliance, and Usefulness (scale 1-5).
- (d) Write a one-paragraph conclusion about when few-shot examples provide meaningful improvement over zero-shot for this task type.
Exercise 19.10: Role-Based Prompting Comparison
Write the same business analysis prompt three times, each with a different role:
- (a) "You are a CFO."
- (b) "You are a VP of Marketing."
- (c) "You are an operations manager."
Use the following task: "Evaluate the proposal to open 15 new Athena retail stores in the southeastern United States over the next two years."
Run all three prompts and analyze:
- How does each role shift the focus of the analysis?
- Which role produced the most relevant perspective for a CEO making the final decision?
- What does this exercise reveal about how roles function in prompt engineering?
Exercise 19.11: Output Format Engineering
Take the following raw prompt and rewrite it five times, each requesting a different output format:
- Original: "Analyze customer satisfaction trends for Q4."
- (a) Request output as a markdown table
- (b) Request output as valid JSON
- (c) Request output as an executive summary (3 sentences maximum)
- (d) Request output as a bulleted list of exactly 5 findings
- (e) Request output as a slide outline (title, 3 bullet points, one chart recommendation)
Test each version with sample data. Which format is most useful for (i) a board presentation, (ii) integration into a data pipeline, (iii) a quick Slack update to your team?
Exercise 19.12: Iterative Refinement Practice
Choose a business writing task you perform regularly (drafting emails, creating reports, writing product copy, summarizing documents). Complete the following four-iteration refinement cycle:
- Iteration 1: Write a minimal prompt (instruction only, no other components). Run it and evaluate the output against three quality criteria you define.
- Iteration 2: Add role, context, and output format. Run and re-evaluate.
- Iteration 3: Add constraints and/or few-shot examples to address specific weaknesses in the Iteration 2 output. Run and re-evaluate.
- Iteration 4: Adjust parameters (temperature, max tokens) and fine-tune language based on remaining issues. Run and re-evaluate.
Document each iteration in a table with columns: Version, What Changed, Why, Output Quality Score (1-10), Remaining Issues.
Exercise 19.13: Prompt Debugging Challenge
The following prompts produce poor results. For each, diagnose the problem and rewrite the prompt to fix it.
(a) "Tell me about our company's performance."
- Problem: __
- Rewritten prompt: __

(b) "You are a world-class data scientist and a brilliant marketing strategist and also a seasoned CFO. Analyze our customer data, create a marketing campaign, estimate the ROI, build a predictive model, and design a dashboard — all in one response."
- Problem: __
- Rewritten prompt: __

(c) "Explain why our Q4 sales were disappointing and suggest that we need to increase our digital advertising budget."
- Problem: __
- Rewritten prompt: __

(d) "What is the exact revenue of Company X for 2025?"
- Problem: __
- Rewritten prompt: __

(e) "Summarize this." [with no input data provided]
- Problem: __
- Rewritten prompt: __
Section C: Prompt Engineering with Python
Exercise 19.14: Building a PromptBuilder
Using the PromptBuilder class from the chapter, create a prompt for one of the following business tasks:
- (a) A job posting generator that produces consistent, brand-aligned job listings
- (b) A meeting agenda generator that creates structured agendas from a list of discussion topics
- (c) A risk assessment summarizer that converts detailed risk reports into executive-level briefings
Your implementation should:
- Set all six prompt components (role, instruction, context, examples, output format, constraints)
- Include at least two few-shot examples
- Set appropriate parameters (temperature, max_tokens)
- Add at least two validation rules
- Save at least one version
- Print the preview
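If you are working through this exercise without the Section 19.11 code at hand, a minimal stand-in builder is enough to practice the workflow. The class and method names below are illustrative assumptions, not the chapter's actual API; substitute the real PromptBuilder when you have it.

```python
from dataclasses import dataclass, field

@dataclass
class MiniPromptBuilder:
    """Illustrative stand-in for the chapter's PromptBuilder (API assumed)."""
    role: str = ""
    instruction: str = ""
    context: str = ""
    examples: list = field(default_factory=list)
    output_format: str = ""
    constraints: list = field(default_factory=list)
    temperature: float = 0.7
    max_tokens: int = 512
    versions: dict = field(default_factory=dict)

    def add_example(self, inp: str, out: str) -> None:
        self.examples.append((inp, out))

    def save_version(self, label: str) -> None:
        # Store a snapshot of the assembled prompt under a version label.
        self.versions[label] = self.build()

    def build(self) -> str:
        # Assemble the six components into one prompt string.
        parts = [f"Role: {self.role}",
                 f"Task: {self.instruction}",
                 f"Context: {self.context}"]
        for i, (inp, out) in enumerate(self.examples, 1):
            parts.append(f"Example {i}:\nInput: {inp}\nOutput: {out}")
        parts.append(f"Output format: {self.output_format}")
        parts.extend(f"Constraint: {c}" for c in self.constraints)
        return "\n\n".join(parts)

# Task (a): job posting generator (all content below is placeholder text)
builder = MiniPromptBuilder(
    role="You are an HR content specialist at Athena Retail Group.",
    instruction="Write a job posting from the hiring manager's notes.",
    context="Athena's brand voice is warm, direct, and jargon-free.",
    output_format="Title, one-paragraph summary, bulleted responsibilities and qualifications.",
    constraints=["Maximum 250 words", "Do not mention salary unless provided"],
    temperature=0.4,
)
builder.add_example("Notes: part-time cashier, weekends", "Title: Weekend Cashier ...")
builder.add_example("Notes: store manager, 5 yrs experience", "Title: Store Manager ...")
builder.save_version("1.0")
print(builder.build())  # the preview
```

This covers every checklist item except validation rules, which Exercise 19.16 treats separately.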
Exercise 19.15: Template Variable System
Using the PromptBuilder with template variables, create a reusable competitive analysis template. Your template should include at least three variables (e.g., {{company_name}}, {{competitor_name}}, {{time_period}}). Demonstrate the template by:
- (a) Building the prompt for Athena vs. Nordstrom in Q1 2026
- (b) Building the same prompt for Athena vs. Amazon in Q4 2025
- (c) Building the same prompt for Athena vs. Target in Q1 2026
- (d) Printing all three assembled prompts and highlighting what changed
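The chapter's template mechanism is part of PromptBuilder, but the core idea, substituting `{{variable}}` placeholders, can be sketched with the standard library alone. The helper name below is an assumption for illustration:

```python
import re

def fill_template(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders; raise KeyError if one is left unfilled."""
    def replace(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

template = ("Compare {{company_name}} against {{competitor_name}} "
            "for {{time_period}}, covering pricing, assortment, and promotions.")

# Parts (a)-(c): same template, three variable sets.
for competitor, period in [("Nordstrom", "Q1 2026"),
                           ("Amazon", "Q4 2025"),
                           ("Target", "Q1 2026")]:
    print(fill_template(template, {"company_name": "Athena",
                                   "competitor_name": competitor,
                                   "time_period": period}))
```

Failing loudly on a missing variable is a deliberate design choice: a silently unfilled `{{competitor_name}}` would otherwise reach the LLM as literal text.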
Exercise 19.16: Validation Rules
Create a PromptBuilder for generating product descriptions. Add the following validation rules:
- Maximum 100 words
- Minimum 40 words
- Must contain the word "Athena"
- Must match a regex pattern for a call to action (e.g., contains "shop", "find", "discover", or "explore")
Write a sample output that passes all four rules and one that fails at least two. Run validate_output() on both and interpret the results.
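The exact rule-registration API belongs to the chapter's PromptBuilder; as a standalone sketch (rule names and the sample outputs below are made up for illustration), the four rules can be modeled as named predicates:

```python
import re

# Each rule is (name, predicate over the output text).
rules = [
    ("max_100_words", lambda t: len(t.split()) <= 100),
    ("min_40_words", lambda t: len(t.split()) >= 40),
    ("mentions_athena", lambda t: "Athena" in t),
    ("has_call_to_action",
     lambda t: re.search(r"\b(shop|find|discover|explore)\b", t, re.IGNORECASE) is not None),
]

def validate_output(text: str) -> dict:
    """Return {rule_name: passed} for every rule."""
    return {name: check(text) for name, check in rules}

passing = ("Discover the new Athena everyday tote, crafted from recycled canvas "
           "with a water-resistant lining and room for a 15-inch laptop. "
           "Adjustable straps and six interior pockets keep your essentials organized "
           "from the morning commute to weekend errands. Shop the full collection online today.")
failing = "A nice bag. Buy it."  # too short, no brand mention, no listed CTA verb

print(validate_output(passing))  # all True
print(validate_output(failing))
```

Returning a per-rule result dictionary, rather than a single pass/fail flag, makes it easy to report exactly which rules an output violated.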
Exercise 19.17: Prompt Library as Code
Create a small prompt library (at least three prompts) for a business function of your choice (marketing, customer service, HR, finance). For each prompt:
- Build it using PromptBuilder
- Save it as version 1.0
- Serialize it to JSON using to_dict()
- Save all three prompts to a single JSON file as a list
- Load the library from the JSON file and verify each prompt builds correctly
Write a function load_prompt_library(filepath) that reads the JSON file and returns a dictionary mapping prompt names to PromptBuilder instances.
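Assuming to_dict() produces a plain dictionary of component fields (the real schema comes from Section 19.11; the field names and prompt contents below are placeholders), the serialize/load round trip looks like this:

```python
import json
import os
import tempfile

# Illustrative library of three serialized prompts (schema assumed).
library = [
    {"name": "email_reply", "role": "customer service agent",
     "instruction": "Draft a reply to the customer email.", "version": "1.0"},
    {"name": "ticket_triage", "role": "support lead",
     "instruction": "Classify the ticket by urgency and team.", "version": "1.0"},
    {"name": "faq_writer", "role": "knowledge-base editor",
     "instruction": "Turn the resolved ticket into an FAQ entry.", "version": "1.0"},
]

path = os.path.join(tempfile.gettempdir(), "prompt_library.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(library, f, indent=2)

def load_prompt_library(filepath: str) -> dict:
    """Read the JSON file and map prompt names to their definitions.

    In the full exercise each entry would be rebuilt into a PromptBuilder
    instance; here we return the raw dictionaries."""
    with open(filepath, encoding="utf-8") as f:
        entries = json.load(f)
    return {entry["name"]: entry for entry in entries}

prompts = load_prompt_library(path)
print(sorted(prompts))  # ['email_reply', 'faq_writer', 'ticket_triage']
```

Storing the library as a list of dictionaries keeps the file diff-friendly, which matters once prompts live under version control.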
Exercise 19.18: Before-and-After Measurement
Design a quantitative experiment to measure the impact of prompt engineering on output quality. Choose a task and:
- (a) Create a "baseline" prompt (minimal, unengineered)
- (b) Create an "optimized" prompt (using all six components)
- (c) Define five measurable quality criteria (e.g., "Contains a specific recommendation: yes/no", "Word count within target range: yes/no")
- (d) Run each prompt 10 times with the same input (at temperature 0.5 to introduce variation)
- (e) Score each output against your criteria
- (f) Calculate the pass rate for each criterion for both prompt versions
- (g) Present results in a comparison table and write a one-paragraph interpretation
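Step (f) is simple arithmetic, but the bookkeeping is worth automating. A sketch with made-up placeholder scores (your real data will have 10 runs and five criteria per prompt version):

```python
# Each run's scores: {criterion: passed}. All values are placeholders.
baseline_runs = [
    {"specific_recommendation": False, "word_count_in_range": True},
    {"specific_recommendation": True,  "word_count_in_range": False},
    {"specific_recommendation": False, "word_count_in_range": True},
]
optimized_runs = [
    {"specific_recommendation": True, "word_count_in_range": True},
    {"specific_recommendation": True, "word_count_in_range": True},
    {"specific_recommendation": True, "word_count_in_range": False},
]

def pass_rates(runs: list) -> dict:
    """Fraction of runs that pass each criterion."""
    criteria = runs[0].keys()
    return {c: sum(r[c] for r in runs) / len(runs) for c in criteria}

print("baseline: ", pass_rates(baseline_runs))
print("optimized:", pass_rates(optimized_runs))
```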
Section D: Case Analysis
Exercise 19.19: The Ad Hoc Prompting Problem
Read the Athena Update in Section 19.10, where Ravi Mehta discovers that 17 marketing team members are using LLMs with completely different approaches.
- (a) List three specific business risks that ad hoc prompting creates for an organization.
- (b) Propose a three-phase plan for transitioning from ad hoc prompting to a managed prompt library. Include timeline, key activities, and success metrics for each phase.
- (c) Identify two potential sources of organizational resistance to prompt standardization and propose strategies for addressing each.
Exercise 19.20: Prompt Engineering ROI
Athena's prompt library reduced revision cycles by 55% and editorial review time by approximately 73%. Assume the marketing team consists of 17 people, each producing an average of 8 pieces of content per week, with an average loaded labor cost of $55/hour.
- (a) Estimate the old time spent per piece of content (including drafting and revision) and the new time, based on the chapter's data.
- (b) Calculate the annual labor cost savings.
- (c) Estimate the cost of building and maintaining the prompt library (consider NK's time, testing, documentation, and training).
- (d) Calculate the ROI of the prompt library initiative.
- (e) What non-financial benefits would you include in a business case presented to Athena's CEO?
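To sanity-check your arithmetic for parts (a)-(d), the structure of the calculation can be sketched as follows. The team size and hourly cost are given in the exercise; the per-piece hours, working weeks, and library cost are placeholder assumptions you must replace with your own estimates.

```python
TEAM_SIZE = 17           # given
PIECES_PER_WEEK = 8      # given, per person
HOURLY_COST = 55.0       # given: loaded labor cost, $/hour
WEEKS_PER_YEAR = 50      # assumption

OLD_HOURS_PER_PIECE = 3.0   # placeholder for part (a)
NEW_HOURS_PER_PIECE = 1.5   # placeholder for part (a)
LIBRARY_COST = 40_000.0     # placeholder for part (c): build + maintain

pieces_per_year = TEAM_SIZE * PIECES_PER_WEEK * WEEKS_PER_YEAR
hours_saved = (OLD_HOURS_PER_PIECE - NEW_HOURS_PER_PIECE) * pieces_per_year
annual_savings = hours_saved * HOURLY_COST                   # part (b)
roi = (annual_savings - LIBRARY_COST) / LIBRARY_COST         # part (d)

print(f"pieces/year: {pieces_per_year:,}")
print(f"annual labor savings: ${annual_savings:,.0f}")
print(f"first-year ROI: {roi:.0%}")
```

Whatever numbers you choose, state them explicitly in your answer; the sensitivity of the ROI to the per-piece time estimate is itself worth a sentence in part (e)'s business case.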
Exercises that require Python code refer to the PromptBuilder class defined in Section 19.11. Solutions to selected exercises appear in Appendix B.