Chapter 10 Exercises: Advanced Prompting Techniques

These exercises are designed to be completed in sequence. Each builds on skills from the previous one. Exercises marked Core are essential; exercises marked Extension go deeper and are recommended if you have additional time.


Section A: Chain-of-Thought Prompting

Exercise 1 — Zero-Shot CoT: The Trigger Phrase Test (Core)

Goal: Experience the concrete difference CoT makes on a reasoning problem.

Take the following prompt and run it twice: once as written, once with "Let's think step by step" appended.

Prompt:

"A company has 3 sales regions. Region A generates 45% of revenue. Region B generates 30%. Region C generates 25%. If overall revenue grows by 20% next year, but Region A is expected to underperform at only 10% growth while Regions B and C grow at 25%, what will Region A's new percentage of total revenue be?"

Compare the two responses:
  - Does the standard version show any calculation?
  - Does the CoT version trace through the math explicitly?
  - Which answer is correct? (You should verify this independently.)
  - What errors, if any, appear in the non-CoT version?
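For the independent verification, the arithmetic can be checked directly. A sketch using the figures from the prompt (assuming a starting revenue of 100 units); note that the per-region growth rates imply 18.25% overall growth, not the stated 20%, so part of the verification is seeing whether the model flags that inconsistency:

```python
# Independent check of the Region A question, assuming total revenue of 100 units.
a, b, c = 45.0, 30.0, 25.0       # current split per the prompt: 45% / 30% / 25%
a_new = a * 1.10                 # Region A grows 10%
b_new = b * 1.25                 # Region B grows 25%
c_new = c * 1.25                 # Region C grows 25%
total_new = a_new + b_new + c_new

print(round(total_new, 2))                 # 118.25 -> implied overall growth is 18.25%
print(round(100 * a_new / total_new, 2))   # Region A's new share: 41.86%
```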

Write a one-paragraph reflection on what you observed.


Exercise 2 — Manual CoT: Building the Reasoning Example (Core)

Goal: Practice constructing a few-shot CoT prompt with a worked reasoning example.

Choose one of the following problem types that is relevant to your work:
  - Financial calculation with multiple variables
  - Strategic decision with 3+ competing factors
  - Root cause analysis with multiple possible causes
  - Project timeline estimation with dependencies

Write a worked example that shows:
  1. The problem setup
  2. The step-by-step reasoning (at least 4 steps)
  3. The conclusion

Then write a new, similar problem and construct the full prompt using your worked example as the reasoning template. Test it and assess whether the model followed your reasoning structure.
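The assembly step can be sketched as a simple template. The worked example and the new problem below are invented placeholders; substitute your own:

```python
# Sketch of a few-shot CoT prompt: one worked reasoning example, then the new problem.
WORKED_EXAMPLE = """\
Problem: A project has tasks X (3 days), Y (5 days, after X), and Z (2 days, parallel to Y).
Step 1: X finishes on day 3.
Step 2: Y starts on day 3 and finishes on day 8.
Step 3: Z runs alongside Y and finishes by day 5.
Step 4: The critical path is X -> Y, so the timeline is 8 days.
Conclusion: Minimum duration is 8 days."""

def build_cot_prompt(worked_example: str, new_problem: str) -> str:
    """Combine a worked reasoning example with a new problem, requesting the same step structure."""
    return (
        f"{worked_example}\n\n"
        f"Problem: {new_problem}\n"
        "Solve this the same way: number each reasoning step, then state a conclusion."
    )

prompt = build_cot_prompt(
    WORKED_EXAMPLE,
    "Tasks P (4 days), Q (6 days, after P), and R (3 days, parallel to Q).",
)
print(prompt)
```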


Exercise 3 — CoT for a Real Work Task (Core)

Goal: Apply CoT to an actual analytical task from your professional context.

Identify a recurring task in your work that involves multi-step analysis — budget review, feasibility assessment, strategic option comparison, or any decision that has multiple variables.

  1. Write a standard prompt for this task (no CoT)
  2. Run it and save the output
  3. Add explicit CoT instruction: list the specific reasoning steps you want the model to take
  4. Run the CoT version and compare outputs

Assess: What did CoT add? Were there reasoning steps you hadn't anticipated that the model surfaced? Were there steps the model got wrong that you would not have caught without the visible reasoning trace?


Exercise 4 — CoT for Debugging (Extension)

Goal: Use CoT for technical or logical debugging.

Find a piece of code, a logical argument, a financial model, or a process description that has an error (real or deliberately introduced).

Write a CoT debugging prompt that requires the model to:
  1. Describe what the code/argument/model is supposed to do
  2. Trace through what it actually does
  3. Identify every point where the actual behavior diverges from the intended behavior
  4. Rank the most likely error sources
  5. Suggest specific fixes
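One way to package those five requirements is a reusable prompt builder. The buggy artifact below is a deliberately planted placeholder, as the exercise suggests:

```python
# Sketch of the five-part CoT debugging prompt; the artifact text is a planted example.
def build_debug_prompt(artifact: str, kind: str = "code") -> str:
    """Wrap an artifact in a CoT debugging request covering all five steps."""
    steps = [
        f"1. Describe what the {kind} is supposed to do.",
        "2. Trace through what it actually does, line by line or claim by claim.",
        "3. Identify every point where actual behavior diverges from intended behavior.",
        "4. Rank the most likely error sources.",
        "5. Suggest specific fixes for each.",
    ]
    return f"Review the following {kind}:\n\n{artifact}\n\n" + "\n".join(steps)

buggy = "def mean(xs): return sum(xs) / len(xs) + 1   # deliberately introduced error"
prompt = build_debug_prompt(buggy, "code")
print(prompt)
```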

Evaluate whether the model found the error. If it did not, where did the reasoning go wrong?


Exercise 5 — CoT Verification: Spotting Fake Reasoning (Extension)

Goal: Develop the ability to identify when CoT output is genuine reasoning vs. pattern-matched reasoning.

Run a moderately complex prompt with CoT instruction. Then evaluate the reasoning trace:

  1. Does each step actually build on the previous step, or are they independent claims?
  2. Does the model commit to specific intermediate conclusions (e.g., specific numbers, specific decisions) or remain vague?
  3. Does the reasoning pass the "red pen test" — could you mark specific lines as wrong without the whole argument falling apart?
  4. What would you need to add to the prompt to get genuinely step-dependent reasoning?

Write a revised prompt that forces more rigorous chain-dependency between steps.


Section B: Few-Shot Prompting

Exercise 6 — First Few-Shot: Style Transfer (Core)

Goal: Build your first few-shot prompt for style consistency.

Choose a type of professional writing you produce regularly (email updates, status reports, client summaries, technical documentation, social media posts, etc.).

  1. Find 3 examples of that writing at its best — pieces you're proud of or that received positive feedback
  2. Strip them down to the core input context and output text (the pattern)
  3. Build a few-shot prompt using those 3 examples
  4. Write a new piece using your few-shot template
  5. Compare: how closely does the output match your style vs. a generic version without examples?

Document the template for future reuse.
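Step 3's assembly can be sketched as input/output pairs followed by the new request. The status-update examples here are invented stand-ins for your own three pieces:

```python
# Minimal few-shot style template: (context, finished text) pairs, then a new request.
def build_style_prompt(examples: list[tuple[str, str]], new_context: str) -> str:
    """Join example pairs in Input/Output form and leave the final Output open."""
    parts = [f"Input: {ctx}\nOutput: {text}" for ctx, text in examples]
    parts.append(f"Input: {new_context}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("weekly status, sprint 12", "Sprint 12 closed on schedule; two items rolled to sprint 13."),
    ("weekly status, sprint 13", "Sprint 13 is at risk: the API migration slipped three days."),
    ("weekly status, sprint 14", "Sprint 14 finished early; we pulled one stretch item forward."),
]
prompt = build_style_prompt(examples, "weekly status, sprint 15")
print(prompt)
```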


Exercise 7 — Few-Shot Classification System (Core)

Goal: Build a few-shot classifier for a real categorization task.

Choose a classification task you face regularly:
  - Triaging customer inquiries by type or urgency
  - Categorizing expense receipts or budget items
  - Sorting research sources by relevance tier
  - Classifying support tickets, leads, or tasks

  1. Define your categories (3-6 categories work best)
  2. Write 2 examples per category (6-12 total examples)
  3. Build the few-shot prompt with all examples
  4. Test it on 10 real items from your work
  5. Rate accuracy: how many classifications match what you would have assigned manually?
  6. Identify which categories the model confuses and add a clarifying example for each confused category
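Steps 1-3 can be sketched as follows. The categories and the two examples per category are illustrative placeholders for a support-inbox triage task:

```python
# Sketch of a few-shot classifier prompt: labeled examples, then the item to classify.
CATEGORIES = ["billing", "bug", "feature-request", "other"]

EXAMPLES = [
    ("I was charged twice this month.", "billing"),
    ("Can you refund my last invoice?", "billing"),
    ("The export button crashes the app.", "bug"),
    ("Search returns no results since the update.", "bug"),
    ("Please add dark mode.", "feature-request"),
    ("It would be great to export to CSV.", "feature-request"),
    ("Just wanted to say thanks!", "other"),
    ("Is your office open on Fridays?", "other"),
]

def build_classifier_prompt(item: str) -> str:
    """Prefix the labeled examples and leave the final Category open."""
    header = f"Classify each message into exactly one of: {', '.join(CATEGORIES)}.\n\n"
    shots = "\n".join(f"Message: {m}\nCategory: {c}" for m, c in EXAMPLES)
    return header + shots + f"\n\nMessage: {item}\nCategory:"

print(build_classifier_prompt("The app logs me out every five minutes."))
```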

Exercise 8 — Few-Shot Structured Extraction (Core)

Goal: Use few-shot prompting to extract structured data from unstructured text.

Choose a type of document you regularly need to extract information from:
  - Meeting notes → action items + owners + deadlines
  - Job postings → requirements + compensation + logistics
  - Sales call notes → pain points + objections + next steps
  - Research abstracts → findings + methodology + limitations

  1. Build 2-3 worked examples with the input document and your desired output structure
  2. Construct the few-shot extraction prompt
  3. Test on 3 new documents
  4. Assess: does the structure stay consistent? Does the model invent information not in the source? How does it handle missing fields?
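A sketch of the extraction prompt for the meeting-notes variant, with one worked example (the field names, note text, and missing-field rule are illustrative choices, not prescribed by the chapter):

```python
# Sketch of a few-shot extraction prompt producing a fixed JSON structure.
import json  # used only to format the worked example's output consistently

SCHEMA_FIELDS = ["action_item", "owner", "deadline"]

EXAMPLE_NOTE = "Maria will send the revised budget to finance by Friday."
EXAMPLE_OUTPUT = [{"action_item": "send revised budget to finance",
                   "owner": "Maria", "deadline": "Friday"}]

def build_extraction_prompt(document: str) -> str:
    """One worked Notes/Output pair, then the new document with Output left open."""
    return (
        "Extract action items as a JSON list of objects with keys "
        f"{SCHEMA_FIELDS}. Use null for any missing field; never invent values.\n\n"
        f"Notes: {EXAMPLE_NOTE}\n"
        f"Output: {json.dumps(EXAMPLE_OUTPUT)}\n\n"
        f"Notes: {document}\n"
        "Output:"
    )

print(build_extraction_prompt("Someone should update the roadmap before the offsite."))
```

The explicit "use null / never invent" instruction targets the exercise's third assessment question: it gives the model a sanctioned way to handle missing fields instead of fabricating them.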

Exercise 9 — Example Quality Test (Extension)

Goal: Understand how example quality affects output quality.

Take a few-shot prompt you built in exercises 6-8. Create two versions:
  - Version A: Your best examples (the ones you'd actually choose)
  - Version B: Mediocre examples — similar in form but lower quality, less representative, or slightly off-style

Run both on the same test inputs. Document the difference in output quality.

Then try a third version: Version C — replace one good example with a deliberately atypical or edge-case example.

What does this tell you about example selection strategy?


Exercise 10 — Optimal Example Count Test (Extension)

Goal: Find the optimal number of examples for one of your recurring tasks.

Using a style or classification task, run the same test input with:
  - 1 example
  - 2 examples
  - 3 examples
  - 5 examples
  - 7 examples

Rate each output on your target criteria. Plot (even informally) quality vs. number of examples. Where does the quality plateau? Where do more examples stop helping or start hurting (due to context window pressure or format noise)?


Section C: Self-Critique Prompting

Exercise 11 — First Self-Critique: Email or Report (Core)

Goal: Experience the generate → critique → improve cycle.

Choose a professional communication task: an important email, a report section, a presentation slide, or a client summary.

  1. Generate the first version with a well-crafted standard prompt
  2. Write an explicit critique prompt with 3-4 specific criteria relevant to your use case
  3. Get the critique (without revision yet)
  4. Read the critique: Do you agree? Did it miss any real problems? Did it invent false ones?
  5. Now ask for the revision
  6. Compare first version, critique, and final version

Rate the improvement. Were the critique's identified weaknesses actually the most important ones?
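The cycle above can be sketched as two prompt builders, keeping critique and revision as separate exchanges (as step 3 requires). The criteria shown are illustrative examples of the 3-4 checks you would write in step 2:

```python
# Sketch of the generate -> critique -> revise cycle as separate prompt builders.
CRITERIA = [
    "Is the main request stated in the first two sentences?",
    "Is the tone appropriate for an external client?",
    "Are all dates and figures consistent with the source notes?",
]

def critique_prompt(draft: str, criteria: list[str]) -> str:
    """Ask for critique only, against numbered criteria, with no revision yet."""
    checks = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, 1))
    return ("Critique the draft below against these criteria. "
            f"Do not revise it yet; list concrete problems only.\n\n{checks}\n\nDraft:\n{draft}")

def revise_prompt(draft: str, critique: str) -> str:
    """The separate follow-up exchange: revise the draft against the critique."""
    return f"Revise the draft to address this critique.\n\nCritique:\n{critique}\n\nDraft:\n{draft}"

print(critique_prompt("Dear client, following up on our call...", CRITERIA))
```

Keeping step 3 and step 5 as separate calls matters: it gives you the chance (step 4) to reject or amend the critique before it shapes the revision.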


Exercise 12 — Constitutional Self-Critique (Core)

Goal: Build a reusable quality standard for self-critique of your most important output type.

Choose the single type of AI output you produce most often for work (weekly report, client email, code review, data summary, etc.).

  1. Write a "constitution" — a list of 5-7 specific quality criteria this output must meet
  2. Format it as a numbered list that can be dropped into any self-critique prompt
  3. Run a test: generate an output, then apply your constitution to critique it
  4. Revise based on the critique
  5. Assess: does this 5-7 point standard catch the errors that matter most for this output type?

Store this constitution as a reusable component.
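One way to store it as a reusable component is a named list plus a formatter that drops it into any self-critique prompt. The five criteria below are an invented sample constitution for a weekly report:

```python
# A "constitution" as a reusable component: named criteria plus a critique formatter.
WEEKLY_REPORT_CONSTITUTION = [
    "Leads with the single most important development of the week.",
    "Every claim about progress is tied to a concrete artifact or metric.",
    "Risks are stated with likelihood and a proposed mitigation.",
    "No paragraph exceeds four sentences.",
    "Next week's priorities are explicit and at most three items.",
]

def apply_constitution(output_text: str, constitution: list[str]) -> str:
    """Drop a numbered constitution into a per-rule pass/fail critique prompt."""
    rules = "\n".join(f"{i}. {r}" for i, r in enumerate(constitution, 1))
    return ("Check the text below against each numbered rule. Quote the rule, "
            f"say pass or fail, and explain any failure.\n\n{rules}\n\nText:\n{output_text}")

print(apply_constitution("This week we shipped the billing migration...", WEEKLY_REPORT_CONSTITUTION))
```

Forcing an explicit pass/fail per rule makes vague, non-committal critiques harder for the model to produce.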


Exercise 13 — Multi-Round Self-Critique (Extension)

Goal: Explore whether multiple self-critique rounds produce compounding improvement.

Take a complex output (long document section, detailed analysis, multi-step recommendation).

Run three rounds of critique + revision:
  - Round 1: Critique and improve for content completeness
  - Round 2: Critique and improve for logical structure and coherence
  - Round 3: Critique and improve for clarity and concision

Compare the first draft to the third revision. What changed most? Where did the model make real improvements vs. just shuffling language? At what point (if any) did additional critique rounds stop producing meaningful improvements?


Section D: Structured Decomposition

Exercise 14 — Plan Then Execute (Core)

Goal: Practice the plan-then-execute approach for a complex task.

Choose a work project that requires a multi-part deliverable — a report, a presentation, a strategic plan, a technical design document — that would normally take you several hours.

  1. Write a planning prompt that asks only for the structure/outline (not the content)
  2. Review the plan and revise it (add, remove, reorder sections as appropriate)
  3. Execute each section in a separate exchange, referencing the approved plan
  4. Compare this output to how you would have done it with a single prompt

Reflect: What decisions did you make at the planning stage that improved the final output? Where did the AI's plan surprise you?


Exercise 15 — Decomposition for Analysis (Core)

Goal: Break a complex analytical task into defined reasoning steps.

Choose an analytical question relevant to your work — a make-vs-buy decision, a market assessment, a risk evaluation, a technology comparison.

Write a prompt that explicitly decomposes the analysis into:
  - Step 1: Gather / state the key facts and unknowns
  - Step 2: Identify the evaluation criteria
  - Step 3: Analyze each option against each criterion
  - Step 4: Weigh the criteria and summarize
  - Step 5: State a recommendation with confidence level
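The five steps above can be rendered as one explicit decomposition prompt. The question text is a placeholder:

```python
# The five analysis steps rendered as an explicit, labeled decomposition prompt.
ANALYSIS_STEPS = [
    "State the key facts and list the unknowns.",
    "Identify the evaluation criteria.",
    "Analyze each option against each criterion.",
    "Weigh the criteria and summarize the trade-offs.",
    "State a recommendation with a confidence level (low / medium / high).",
]

def build_decomposition_prompt(question: str) -> str:
    """Number the steps and require each one's output to be labeled."""
    steps = "\n".join(f"Step {i}: {s}" for i, s in enumerate(ANALYSIS_STEPS, 1))
    return (f"Question: {question}\n\nWork through the following steps in order, "
            f"labeling each step's output.\n\n{steps}")

print(build_decomposition_prompt("Should we build or buy the reporting module?"))
```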

Evaluate the output of each step. Where did the decomposition help? Were there steps that still needed further breaking down?


Section E: Combining Techniques

Exercise 16 — Few-Shot + CoT Combined (Core)

Goal: Build a prompt that shows both how to think AND what the output should look like.

Take a task that requires both style consistency (use few-shot for format) and reasoning accuracy (use CoT for logic). Good candidates: financial analysis formatted in your company's style, technical recommendations following your team's decision framework, strategic options assessed against your criteria.

Build a prompt where your few-shot examples include the reasoning steps, not just the input/output pairs. Run the prompt and assess: does the model follow both the reasoning format AND the output style?


Exercise 17 — The Full Master Prompt (Extension)

Goal: Construct a prompt that combines role + context + few-shot + CoT + self-critique.

Choose your most important and complex recurring AI task. Build a "master prompt" that:
  1. Assigns a specific, appropriate role
  2. Front-loads all necessary context
  3. Includes 2-3 few-shot examples
  4. Requests chain-of-thought reasoning for the main analysis
  5. Ends with a self-critique instruction for the 2-3 most critical quality criteria
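The five layers can be assembled with a function like the following. Every component text here is a placeholder to be replaced with your own role, context, and examples:

```python
# Sketch assembling the five master-prompt layers into one string.
def build_master_prompt(role: str, context: str, examples: list[str],
                        task: str, critique_criteria: list[str]) -> str:
    """Role + context + few-shot examples + CoT instruction + self-critique, in that order."""
    shots = "\n\n".join(examples)
    checks = "; ".join(critique_criteria)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Examples:\n{shots}\n\n"
        f"Task: {task} Reason step by step before giving your answer.\n\n"
        f"Finally, critique your own answer against: {checks}. Revise if any check fails."
    )

prompt = build_master_prompt(
    role="a senior financial analyst",
    context="Q3 revenue data and last year's board memo (pasted here).",
    examples=["Input: Q1 data. Output: Q1 variance summary in house style.",
              "Input: Q2 data. Output: Q2 variance summary in house style."],
    task="Draft the Q3 variance summary.",
    critique_criteria=["figures match the source", "one page maximum"],
)
print(prompt)
```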

Test the master prompt. Document it. This is the highest-value prompt in your library.


Exercise 18 — Technique Selection Practice (Extension)

Goal: Build fluency in selecting the right technique for the right task.

Review the 10 tasks below. For each, identify which technique(s) from this chapter are most appropriate and briefly explain why. There is no single right answer — the goal is to practice reasoning through the selection.

  1. Write a product description matching a competitor's style
  2. Debug a financial model that's returning errors
  3. Draft a performance review summary that is fair and specific
  4. Classify 50 customer reviews into 4 sentiment categories
  5. Develop a go-to-market strategy for a new product launch
  6. Explain a complex technical architecture to a non-technical audience
  7. Extract contact information from 20 business cards photographed as images
  8. Write a board memo that must pass legal review
  9. Estimate the ROI of a proposed initiative with many variables
  10. Generate 20 creative campaign concepts for a brand

After completing the exercise, compare your selections to the technique selection guide in the chapter. Where did you differ? What reasoning led to different choices?


Exercise 19 — Build Your Technique Reference Card (Core)

Goal: Create a personal quick-reference guide for technique selection.

Based on your work in this chapter, create a one-page personal reference card that includes:
  - The 4 main techniques with a 1-sentence description of when to use each
  - Your top 2 task types for each technique from your own professional context
  - The key prompt addition for each technique (the specific language that triggers it)
  - One warning per technique (the most common mistake you want to avoid)

This reference card becomes part of your prompting toolkit.


Exercise 20 — Retrospective: Before and After (Extension)

Goal: Measure your improvement over this chapter's exercises.

Return to the first complex AI task you remember producing poor results on — before you learned these techniques. Reconstruct the original prompt as best you can. Then rebuild it using everything from this chapter:
  - Add CoT if reasoning is involved
  - Add few-shot examples if style or format matters
  - Add self-critique if quality verification is important
  - Decompose if the task is multi-part

Run both versions. Document the improvement. This comparison is the clearest measure of what you've gained from these techniques.