Chapter 12: Exercises — Advanced Prompting Techniques
These exercises progress from basic recall through creative application to challenging integration problems. Complete them in order within each tier, as later exercises often build on concepts practiced in earlier ones.
Tier 1: Recall (Exercises 1–7)
These exercises test your understanding of the core concepts and terminology from the chapter.
Exercise 1: Technique Matching
Match each scenario to the most appropriate prompting technique:
| Scenario | Technique |
|---|---|
| A. You need to implement a binary search with three edge cases | 1. Few-shot prompting |
| B. You want code that matches your team's existing validation pattern | 2. Chain-of-thought prompting |
| C. You need a security-focused code review | 3. Meta-prompting |
| D. You are unsure what features your notification system needs | 4. Role-based prompting |
| E. Your prompts keep producing incomplete results | 5. Socratic prompting |
| F. You need to build a microservice with 6 components | 6. Decomposition prompting |
| G. You cannot decide between SQL and NoSQL for your project | 7. Comparative prompting |
Exercise 2: Chain-of-Thought Identification
Read the following prompt and identify which elements make it a chain-of-thought prompt versus a basic prompt:
Write a function to find the shortest path in a weighted graph.
First, identify the algorithm best suited for graphs with
non-negative weights. Then explain why that algorithm works
(the key invariant it maintains). Next, outline the data
structures you will need. Finally, implement the solution
with those data structures.
List each chain-of-thought element and explain what it contributes to the output quality.
Exercise 3: Few-Shot Example Count
A colleague has written a few-shot prompt with eight examples, each showing how to convert a REST endpoint to GraphQL. The examples consume 70% of the context window, leaving only 30% for the AI's response. What advice would you give, and why?
Exercise 4: Role-Based Prompt Evaluation
Evaluate these two role-based prompts. Which is more effective and why?
Prompt A:
Act as the best programmer in the world. Write me a login system.
Prompt B:
Act as a security engineer specializing in authentication systems.
Review and implement a login system that:
- Uses bcrypt for password hashing with a cost factor of 12
- Implements rate limiting (5 attempts per 15-minute window)
- Logs all authentication events for audit purposes
- Returns generic error messages to prevent user enumeration
Exercise 5: Constraint Classification
Classify each of the following constraints as "Hard" (must be met) or "Soft" (preferred but negotiable) for a typical production web application:
- All user passwords must be hashed before storage
- API responses should complete within 200ms
- Code should follow PEP 8 formatting
- All SQL queries must use parameterized statements
- Functions should have type hints
- The application must not expose stack traces to users
- Code coverage should exceed 80%
- The application must support Python 3.10+
Exercise 6: Prompt Chain Dependencies
Given a prompt chain for building a REST API, draw the dependency graph:
- Step A: Design the database schema
- Step B: Generate Pydantic models from the schema
- Step C: Write API endpoint handlers
- Step D: Generate test fixtures from the schema
- Step E: Write integration tests using fixtures and endpoints
- Step F: Generate API documentation from endpoints
Which steps can run in parallel? Which must be sequential?
Exercise 7: Terminology Recall
Define each of the following terms in one to two sentences:
- Meta-prompting
- Decomposition prompting
- Constraint satisfaction prompting
- Few-shot prompting
- Socratic prompting
- Prompt chaining
- Role stacking
Tier 2: Apply (Exercises 8–14)
These exercises ask you to apply the techniques from the chapter to specific coding scenarios.
Exercise 8: Chain-of-Thought for Sorting
Write a chain-of-thought prompt that asks the AI to implement a merge sort algorithm. Your prompt should explicitly request:
- Analysis of the divide-and-conquer strategy
- Identification of base cases
- Pseudocode before implementation
- Time and space complexity analysis
- At least three test cases to verify correctness
Run your prompt against an AI assistant and evaluate whether the chain-of-thought reasoning improved the output compared to simply asking "implement merge sort."
Exercise 9: Few-Shot for Error Handling
Create a few-shot prompt with three examples that teaches the AI your project's error handling pattern. Your pattern should include:
- A custom exception class for each error type
- Structured logging with context (using Python's logging module)
- A consistent return type for error responses
- An error code enum
Provide three examples of increasing complexity, then ask the AI to generate error handling for two new scenarios.
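As a starting shape for the pattern described above, here is a minimal self-contained sketch. All names (`ErrorCode`, `ValidationError`, `error_response`) are placeholders for whatever your project actually uses, not a prescribed design:

```python
import enum
import logging

class ErrorCode(enum.Enum):
    """Error code enum shared across the project (placeholder values)."""
    VALIDATION = "E001"
    NOT_FOUND = "E002"

class ValidationError(Exception):
    """One custom exception class per error type."""
    code = ErrorCode.VALIDATION

    def __init__(self, field: str, detail: str):
        self.field, self.detail = field, detail
        super().__init__(f"{field}: {detail}")

logger = logging.getLogger("app")

def error_response(exc: ValidationError) -> dict:
    """Consistent return type for error responses, with structured logging."""
    logger.error("validation failed", extra={"field": exc.field})
    return {"ok": False, "code": exc.code.value, "message": str(exc)}

resp = error_response(ValidationError("email", "missing @"))
# resp == {"ok": False, "code": "E001", "message": "email: missing @"}
```

Your three few-shot examples would each instantiate this same shape at increasing complexity, so the AI can infer the pattern rather than the specifics.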
Exercise 10: Role-Based Code Review
Write a role-based prompt that asks the AI to review the following code from the perspective of a database performance specialist:
def get_user_orders(user_id: int, db: Session) -> list[Order]:
    user = db.query(User).filter(User.id == user_id).first()
    if not user:
        raise UserNotFoundError(user_id)
    orders = db.query(Order).filter(Order.user_id == user_id).all()
    for order in orders:
        order.items = db.query(OrderItem).filter(
            OrderItem.order_id == order.id
        ).all()
        for item in order.items:
            item.product = db.query(Product).filter(
                Product.id == item.product_id
            ).first()
    return orders
Your prompt should guide the AI to identify the N+1 query problem and suggest specific fixes.
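To see the cost your reviewer role should surface, here is a self-contained sketch that counts round trips using a hypothetical in-memory `CountingDB` (a stand-in invented for illustration, not the SQLAlchemy API; in real SQLAlchemy code the batched version corresponds to eager loading, e.g. `joinedload` or `selectinload`):

```python
class CountingDB:
    """Toy 'database' that counts round trips to make N+1 cost visible."""
    def __init__(self, orders_by_user, items_by_order):
        self.orders_by_user = orders_by_user
        self.items_by_order = items_by_order
        self.queries = 0

    def orders_for(self, user_id):
        self.queries += 1
        return list(self.orders_by_user.get(user_id, []))

    def items_for(self, order_id):
        self.queries += 1
        return list(self.items_by_order.get(order_id, []))

    def items_for_orders(self, order_ids):
        # One batched query replaces len(order_ids) individual ones.
        self.queries += 1
        return {oid: self.items_by_order.get(oid, []) for oid in order_ids}

db = CountingDB({1: [10, 11, 12]}, {10: ["a"], 11: ["b"], 12: ["c"]})

# N+1 style: one query for the orders, then one per order for its items.
orders = db.orders_for(1)
for oid in orders:
    db.items_for(oid)
naive = db.queries  # 1 + 3 = 4 round trips

db.queries = 0
orders = db.orders_for(1)
db.items_for_orders(orders)
batched = db.queries  # 2 round trips, regardless of order count
```

The naive version's query count grows linearly with the number of orders (and again with items per order), which is exactly the scaling argument your prompt should lead the AI to make.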
Exercise 11: Constraint Satisfaction for API Design
Write a constraint satisfaction prompt for generating a file upload API endpoint with the following requirements:
- Maximum file size: 50MB
- Allowed file types: PDF, PNG, JPG, DOCX
- Files must be scanned for malware (mock the scanner interface)
- Upload progress must be trackable
- Files must be stored with unique names to prevent collisions
- Metadata (original name, upload date, uploader) must be stored
- The endpoint must be idempotent (re-uploading the same file should not create duplicates)
Organize your constraints into categories (functional, security, performance, code quality).
Exercise 12: Comparative Prompting for Data Storage
Write a comparative prompt asking the AI to compare two approaches for storing user session data:
- Approach A: Server-side sessions stored in Redis
- Approach B: JWT tokens with no server-side storage
Request implementations of both, a comparison table, and a recommendation for a specific use case: a SaaS application with 50,000 daily active users that prioritizes security over scalability.
Exercise 13: Meta-Prompt Improvement
Take this poorly performing prompt:
Write a Python script that processes log files.
Use meta-prompting to transform it into a detailed, effective prompt. Write the meta-prompt you would use, then show the improved prompt that you expect the AI to generate.
Exercise 14: Decomposition for a Chat Application
Apply decomposition prompting to break down a real-time chat application into modules. Your decomposition should:
- Identify at least 6 independent modules
- Define the public interface for each module
- Specify dependencies between modules
- Recommend a build order
- Write the focused prompt for the first module in the build order
Tier 3: Analyze (Exercises 15–21)
These exercises require you to analyze prompts, identify issues, and evaluate effectiveness.
Exercise 15: Chain-of-Thought Failure Analysis
The following chain-of-thought prompt produced incorrect code. Analyze why the chain-of-thought structure failed to prevent the error:
Prompt: "Think step by step about implementing a thread-safe
singleton pattern in Python. Consider the race conditions,
then implement it."
AI Response (abbreviated):
"Step 1: A singleton should only have one instance.
Step 2: We need to check if an instance exists.
Step 3: Race condition: two threads might both see no instance
and create two.
Step 4: Solution: use a lock."
class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:
            with cls._lock:
                cls._instance = cls()  # BUG: recursive call
        return cls._instance
- What is the bug in the generated code?
- Why did the chain-of-thought reasoning fail to catch it?
- How would you modify the prompt to prevent this specific failure?
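Since the buggy line is already flagged in the response above, here is one conventional repair to compare your analysis against: double-checked locking with direct allocation via `super().__new__`. This is a standard fix, not the only one, and it does not answer the prompt-modification question:

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:            # fast path, no lock
            with cls._lock:
                if cls._instance is None:    # re-check under the lock
                    # super().__new__ allocates the instance directly;
                    # calling cls() here would re-enter __new__ forever.
                    cls._instance = super().__new__(cls)
        return cls._instance
```

Note that the AI's reasoning steps were all correct; the failure happened in the translation from step 4 to code, which is worth keeping in mind when you redesign the prompt.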
Exercise 16: Few-Shot Overfitting Analysis
A developer created a few-shot prompt with three examples, all of which validate string fields using regex. When they asked the AI to generate a validator for a numeric range (age must be between 18 and 120), the AI produced a regex-based solution that converted the number to a string and matched against a pattern.
- Explain why this happened (in terms of the AI's pattern generalization).
- How would you restructure the few-shot examples to prevent this?
- Write an improved set of three examples that would lead to the correct numeric validation approach.
Exercise 17: Role Conflict Resolution
You need code that is both highly performant and extremely secure. You write:
Act as both a performance engineer and a security engineer.
Implement a user data caching system.
The AI's response includes caching plaintext passwords for performance and using HTTP (not HTTPS) for internal cache communication because it is faster.
- Why did the role combination produce these problematic recommendations?
- How would you restructure the prompt to handle the tension between performance and security?
- Write an improved prompt that prioritizes security while still requesting performance optimization where it does not conflict.
Exercise 18: Decomposition Granularity Analysis
Evaluate these two decompositions of the same task (building a user authentication system):
Decomposition A (3 modules):
1. User management (registration, profile, roles)
2. Authentication (login, logout, token management)
3. Authorization (permissions, access control)
Decomposition B (12 modules):
1. Password hashing utility
2. Email validation utility
3. JWT token generator
4. JWT token validator
5. Registration endpoint
6. Login endpoint
7. Logout endpoint
8. Token refresh endpoint
9. User profile endpoint
10. Role model
11. Permission checker middleware
12. Session storage adapter
For each decomposition, analyze: What are the advantages? What are the risks? Which would you recommend for (a) a solo developer on a startup MVP, and (b) a team of six on an enterprise system?
Exercise 19: Constraint Conflict Detection
The following constraint set contains conflicts. Identify all conflicting pairs and explain why they conflict:
- All database queries must use an ORM (no raw SQL)
- Complex analytical queries must be optimized for performance
- The application must support SQLite for local development
- The application must use PostgreSQL-specific features (JSONB, array columns)
- All functions must be pure (no side effects)
- All database operations must be logged
- Response time must not exceed 100ms for any endpoint
- All requests must be validated against a JSON schema before processing
Exercise 20: Prompt Chain Evaluation
A developer used this three-step prompt chain to build a data pipeline:
Step 1: "Design a data pipeline architecture for processing CSV files."
Step 2: "Implement the pipeline you designed." (No previous output included)
Step 3: "Write tests for the pipeline." (No previous output included)
Evaluate this chain:
1. What specific problems will occur at each step?
2. How much information is lost between steps?
3. Rewrite the chain with proper context passing between steps.
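As a reference shape for the context-passing question, here is a minimal sketch in which each step's prompt embeds the previous step's output. The `ask` function is a stand-in for whatever AI client you use; all names here are invented for the sketch:

```python
def run_chain(ask, steps):
    """Run prompts in order, feeding each step the previous output."""
    context, outputs = "", []
    for prompt in steps:
        if context:
            prompt = f"{prompt}\n\nOutput of the previous step:\n{context}"
        context = ask(prompt)
        outputs.append(context)
    return outputs

# Recording stub so the wiring can be checked without a real model.
sent = []
def fake_ask(prompt):
    sent.append(prompt)
    return f"artifact-{len(sent)}"

results = run_chain(fake_ask, [
    "Design a data pipeline architecture for processing CSV files.",
    "Implement the pipeline you designed.",
    "Write tests for the pipeline.",
])
# sent[1] now embeds "artifact-1", so step 2 actually sees the design.
```

Your rewritten chain should achieve the same effect in prose, even if you run the steps by hand rather than in code.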
Exercise 21: Technique Selection Analysis
For each of the following scenarios, recommend which combination of techniques to use and explain why:
- You are joining a new team and need to understand their codebase conventions before contributing.
- You need to implement a rate limiter but are unsure whether to use the token bucket or sliding window algorithm.
- Your team is building a microservices system with 8 services that need to communicate.
- You need to convert 50 functions from callback style to async/await style.
- A junior developer on your team keeps writing prompts that produce incomplete code.
Tier 4: Create (Exercises 22–28)
These exercises ask you to create original prompts, systems, and solutions using the techniques from the chapter.
Exercise 22: Build a Chain-of-Thought Template Library
Create chain-of-thought prompt templates for five common algorithm categories:
1. Sorting and searching
2. Graph algorithms
3. Dynamic programming
4. Tree traversal and manipulation
5. String processing
Each template should include: problem analysis steps, complexity analysis requirements, edge case enumeration, pseudocode phase, and implementation phase. Test at least one template against an AI assistant.
Exercise 23: Create a Few-Shot Prompt for Your Project
Choose a real project you are working on (or a project from earlier in this textbook). Create a few-shot prompt that teaches the AI your project's:
- Naming conventions (variables, functions, classes, files)
- Error handling pattern
- Documentation style
- Testing approach
Provide 2-3 examples from your actual codebase, then use the prompt to generate a new component. Evaluate whether the generated code matches your project's style.
Exercise 24: Design a Role-Based Review Pipeline
Create a three-stage code review pipeline using different roles:
1. Stage 1: Correctness Review — Define the role and prompt
2. Stage 2: Security Review — Define the role and prompt
3. Stage 3: Maintainability Review — Define the role and prompt
Each stage should produce a structured report. Create a final "synthesis" prompt that combines the three reports into a unified review with prioritized action items.
Exercise 25: Meta-Prompt for Your Domain
Create a meta-prompt specifically designed to generate prompts for your primary development domain (web development, data science, mobile development, DevOps, etc.). The meta-prompt should:
- Ask about the specific task
- Ask about constraints and requirements
- Ask about the target audience for the generated code
- Produce a detailed, well-structured prompt
Test your meta-prompt by using it to generate prompts for three different tasks in your domain.
Exercise 26: Build a Decomposition Framework
Create a reusable decomposition framework — a prompt template that takes any complex system description and produces:
1. A module breakdown with responsibilities
2. Interface definitions between modules
3. A dependency graph
4. A recommended build order
5. Focused prompts for each module
Test your framework on at least two different systems (e.g., an e-commerce backend and a real-time analytics dashboard).
Exercise 27: Create a Prompt Effectiveness Scorecard
Design a scorecard for evaluating prompt effectiveness. Your scorecard should include:
- At least 8 evaluation criteria (e.g., output completeness, code quality, adherence to constraints)
- A rating scale for each criterion
- Weighted scoring (some criteria matter more than others)
- A process for using the scorecard
Use your scorecard to evaluate three different prompts for the same task and rank them.
Exercise 28: Build a Prompt Library Starter Kit
Create a personal prompt library with at least 10 prompts covering:
- 2 chain-of-thought prompts for algorithms
- 2 few-shot prompts for code generation
- 2 role-based prompts for code review
- 1 meta-prompt for prompt improvement
- 1 decomposition template
- 1 constraint satisfaction template
- 1 comparative analysis template
Store them in a structured format (YAML or JSON) with metadata including: name, technique, tags, version, template, and usage notes.
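One reasonable shape for a library entry, sketched in Python (field names here are a suggestion, not a fixed schema from the chapter):

```python
import json

# Hypothetical library entry with the metadata fields listed above.
entry = {
    "name": "merge-sort-cot",
    "technique": "chain-of-thought",
    "tags": ["algorithms", "sorting"],
    "version": "1.0",
    "template": ("Think step by step about implementing {algorithm}. "
                 "State the base cases, give pseudocode, then code."),
    "usage_notes": "Fill {algorithm}; works best for divide-and-conquer.",
}

def render(entry: dict, **params) -> str:
    """Instantiate a library template with concrete parameters."""
    return entry["template"].format(**params)

prompt = render(entry, algorithm="merge sort")
serialized = json.dumps(entry, indent=2)  # ready to write to disk
```

The same structure maps directly onto YAML if you prefer it; the important part is that `template` is parameterized so one entry serves many tasks.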
Tier 5: Challenge (Exercises 29–35)
These exercises integrate concepts across multiple chapters and push the boundaries of what you have learned.
Exercise 29: Cross-Chapter Integration — Prompt Engineering Pipeline
Combine techniques from this chapter with concepts from Chapter 8 (Fundamentals), Chapter 9 (Context Management), and Chapter 11 (Iterative Refinement) to design a complete prompt engineering pipeline for building a new feature from scratch.
Your pipeline should:
1. Start with Socratic prompting to discover requirements
2. Use decomposition to break the feature into modules
3. Apply context management strategies from Chapter 9 to maintain coherence across modules
4. Use chain-of-thought for complex algorithmic modules
5. Apply iterative refinement from Chapter 11 when output is not satisfactory
6. End with a role-based review
Document the pipeline as a flowchart or structured process, then execute it on a real feature: a recommendation engine that suggests related articles based on reading history.
Exercise 30: Adversarial Prompt Testing
Write a set of "adversarial tests" for prompts — prompts that are designed to reveal weaknesses in other prompts. Create adversarial tests for:
1. A chain-of-thought prompt (test: does the AI actually use the reasoning, or does it just decorate its normal output with reasoning text?)
2. A few-shot prompt (test: does the AI generalize correctly, or does it overfit to the examples?)
3. A role-based prompt (test: does the role actually change the output, or is it ignored?)
For each test, design a methodology for determining whether the technique is genuinely improving output quality.
Exercise 31: Prompt Optimization Experiment
Choose a coding task that you perform regularly. Write five different prompts for the same task, each using a different technique:
1. A basic prompt (no advanced technique)
2. A chain-of-thought prompt
3. A few-shot prompt
4. A role-based prompt
5. A constraint satisfaction prompt
Run each prompt against the same AI assistant three times. Evaluate the outputs on: correctness, completeness, code quality, and adherence to best practices. Which technique produced the best results? Was the best technique the same across all three runs?
Exercise 32: Multi-Agent Prompt Design
Design a prompt chain that simulates a development team:
1. Product Manager prompt: Generates user stories and acceptance criteria
2. Architect prompt: Designs the system based on user stories
3. Developer prompt: Implements each component
4. QA Engineer prompt: Generates test cases
5. Code Reviewer prompt: Reviews the implementation
Each prompt should take the output of the previous step as input. Include explicit instructions for how each "agent" should interact with the artifacts from previous steps.
Execute the chain for a small feature: a todo list API with priorities and due dates.
Exercise 33: Domain-Specific Prompt Language
Design a lightweight "prompt DSL" (domain-specific language) for your most common coding tasks. The DSL should:
- Have a simple, readable syntax
- Support variables and templates
- Include technique selectors (e.g., @chain-of-thought, @few-shot(3))
- Support constraint blocks
- Be translatable to full prompts by a simple script
Implement a Python script that parses your DSL and produces full prompts. Test it on at least five tasks.
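To make the translation step concrete, here is a toy sketch of one possible DSL expansion. The syntax (`@technique` markers, `{var}` placeholders) is invented for illustration, and technique parameters such as `@few-shot(3)` are omitted for brevity:

```python
import re

# Maps a technique selector to boilerplate the full prompt should start with.
TECHNIQUE_HEADERS = {
    "chain-of-thought": "Think step by step.",
    "few-shot": "Here are examples of the expected style:",
}

def expand(dsl_line: str, variables: dict) -> str:
    """Translate one DSL line like '@chain-of-thought sort {items}'."""
    m = re.match(r"@([\w-]+)\s+(.*)", dsl_line)
    if not m:
        # No selector: just substitute variables.
        return dsl_line.format(**variables)
    technique, body = m.groups()
    header = TECHNIQUE_HEADERS.get(technique, "")
    return f"{header}\n{body.format(**variables)}".strip()

out = expand("@chain-of-thought Implement {algo} in Python.",
             {"algo": "quicksort"})
# out == "Think step by step.\nImplement quicksort in Python."
```

A real implementation would add constraint blocks and multi-line templates, but the selector-plus-substitution core stays the same.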
Exercise 34: Prompt Library with Analytics
Extend the prompt library system from Section 12.10 with analytics:
- Track how often each prompt is used
- Record the quality rating for each use (user-provided score 1-5)
- Calculate average effectiveness per prompt
- Identify your most and least effective prompts
- Suggest prompts that need improvement based on low effectiveness scores
Implement this as a Python application with a CLI interface. Use SQLite for storage.
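A possible starting schema, sketched with an in-memory database (table and column names are suggestions, not requirements of the exercise):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prompts (id INTEGER PRIMARY KEY,
                          name TEXT UNIQUE,
                          technique TEXT);
    CREATE TABLE usages  (id INTEGER PRIMARY KEY,
                          prompt_id INTEGER REFERENCES prompts(id),
                          rating INTEGER CHECK (rating BETWEEN 1 AND 5),
                          used_at TEXT DEFAULT CURRENT_TIMESTAMP);
""")
conn.execute("INSERT INTO prompts (name, technique) VALUES (?, ?)",
             ("merge-sort-cot", "chain-of-thought"))
for rating in (4, 5, 3):
    conn.execute("INSERT INTO usages (prompt_id, rating) VALUES (1, ?)",
                 (rating,))

# Usage count and average effectiveness per prompt, in one query.
row = conn.execute("""
    SELECT p.name, COUNT(u.id), AVG(u.rating)
    FROM prompts p JOIN usages u ON u.prompt_id = p.id
    GROUP BY p.id
""").fetchone()
# row == ("merge-sort-cot", 3, 4.0)
```

The CLI layer then reduces to commands that insert into `usages` and report on queries like the one above.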
Exercise 35: Teaching Workshop Design
Design a 90-minute workshop that teaches the advanced prompting techniques from this chapter to intermediate developers. Your workshop plan should include:
- Learning objectives for the workshop
- A sequence of hands-on exercises (at least 5)
- Live demonstration scripts for 3 techniques
- Assessment criteria for evaluating participant skill
- A take-home reference card summarizing all 10 techniques
Create all workshop materials, including the exercise prompts, demonstration scripts, and reference card. The workshop should be immediately deliverable.
Submission Guidelines
For exercises that produce prompts, submit both the prompt and the AI-generated output. For exercises that require analysis, provide structured written responses with specific evidence from the chapter or from your experimentation. For exercises that produce code, ensure all code is syntactically correct and includes appropriate tests.