Chapter 15 Exercises
These exercises build Claude-specific skills, from understanding its design philosophy through advanced XML-based prompting. Work through them progressively. Exercises marked [Pro] work best with a Claude.ai Pro subscription for access to Opus and Projects.
Section A: Understanding Claude's Design Philosophy
Exercise 15.1 — The Sycophancy Comparison
Pick a decision, plan, or piece of work you feel reasonably confident about. Send it to both Claude and ChatGPT with the same open-ended prompt: "What do you think of this?"
Compare the two responses on:
- How much did each model validate your position versus challenge it?
- Which model offered more substantive critique without being asked?
- Which response was more useful for improving the work?
Write a short note on which model you found more valuable as a thinking partner for this specific task, and why.
Exercise 15.2 — Testing Uncertainty Calibration
Ask both Claude and ChatGPT the same question about a specific fact in your domain — something where you know the correct answer. Pick a question that has a definitive answer but is not common knowledge.
Observe:
- Did either model give a wrong answer? With what level of apparent confidence?
- Did Claude express any uncertainty where ChatGPT did not?
- Was Claude's uncertainty calibration accurate (did it flag uncertainty when wrong and express confidence when right)?
This exercise gives you a personal calibration of how much to trust each model's confidence signals in your domain.
Exercise 15.3 — The Refusal Rephrase
Find a professional topic where Claude is likely to add heavy caveats or decline in its default response. (Good candidates: medical questions, security topics, legal matters, negotiation tactics, sensitive HR scenarios.) Send your original request without context.
Note the response. Then rephrase with explicit professional context and purpose. Note how the response changes.
Document the rephrasing strategy that worked. You will use variations of this pattern regularly.
Section B: XML Tagging Practice
Exercise 15.4 — Convert a Basic Prompt to XML Structure
Take a prompt you have previously used with Claude or ChatGPT that had multiple components — some document content, some instructions, some context. Rewrite it using XML tags.
Compare the response to your previous (non-XML) prompt. What changed? Was anything different about how completely Claude addressed each component?
Exercise 15.5 — The Four-Part XML Template
Write a complete XML-structured prompt using all four standard tags:
- <document> — some text you want analyzed
- <background> — context about your situation
- <instructions> — what you want done with the document
- <output_format> — how you want the response formatted
Use this for a real task you have coming up. After the response, evaluate: did the structure produce a better-organized output than your usual approach? Were any of your instructions missed?
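As a skeleton (the bracketed placeholders are illustrative, to be filled in with your own material), the four-part prompt looks like this:

```xml
<document>
[the text you want analyzed]
</document>

<background>
[who you are, what situation this task sits in, why it matters now]
</background>

<instructions>
[what you want done with the document, stated as specific steps or questions]
</instructions>

<output_format>
[e.g., a one-paragraph summary followed by a numbered list of findings]
</output_format>
```

The tag names themselves are not magic; what matters is that each component of the prompt is cleanly delimited so instructions cannot be confused with content.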
Exercise 15.6 — System Prompt in XML
Write an XML-structured system prompt for a hypothetical Claude API application relevant to your work. Include:
- <role> — what kind of expert or assistant this Claude instance is
- <task_context> — what it will be used for
- <behavior_rules> — specific behaviors you want
- <output_standards> — format and quality expectations
You do not need to actually deploy this — the exercise is in the drafting. Compare your draft to the example in Section 15.8 and revise.
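A minimal sketch of what such a draft might look like (the role, rules, and standards here are invented for illustration, not a recommended configuration):

```xml
<role>
You are a senior grant-writing assistant for a small environmental nonprofit.
</role>

<task_context>
You will draft and revise sections of foundation grant applications based on
program notes the user pastes into the conversation.
</task_context>

<behavior_rules>
- Ask for missing budget or timeline details instead of inventing them.
- Flag any factual claim that lacks support in the pasted notes.
- Match the formality level of the funder's application guidelines.
</behavior_rules>

<output_standards>
Formal but plain prose; each section under 300 words unless the user asks
for more; no bullet points in final application text.
</output_standards>
```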
Exercise 15.7 — Complex Multi-Constraint Instructions
Write a prompt that specifies at least six distinct constraints on a piece of writing (tone, length, format, audience, included elements, excluded elements). Send this to both Claude and ChatGPT.
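For example, a six-constraint prompt might look like the following sketch (the constraints are invented for illustration; substitute your own):

```xml
<instructions>
Write a product update announcement under these constraints:
1. Tone: confident but not promotional
2. Length: 150-200 words
3. Format: one short paragraph followed by three bullet points
4. Audience: existing customers, non-technical
5. Include: the rollout date and a sign-up link placeholder
6. Exclude: pricing details and any competitor comparisons
</instructions>
```

Numbering the constraints makes the counting step of this exercise straightforward.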
Count how many of the six constraints each model honored in its response. Repeat with two variations of the same prompt to check for consistency.
This exercise quantifies the instruction-following gap you will likely observe between the models.
Section C: Long Document Analysis
Exercise 15.8 — First Long Document Upload
Find a document from your work that is at least 10-15 pages long (a report, a policy document, a long email chain, a research article). Upload it to Claude.
Send this opening message: "Please confirm you have received this complete document by telling me: (1) how many sections or major headings it contains, and (2) the topic of the last paragraph."
Then proceed with an analysis question specific to your needs. Evaluate: did Claude's analysis reflect understanding of the full document, or did it seem to focus on certain sections?
Exercise 15.9 — Structured Document Analysis
Using a long document from your work, write a complete XML-structured analysis prompt that specifies:
- Background context on why you are analyzing this document
- A list of five to seven specific questions or analysis areas
- The intended audience for your analysis output
- The format you want the output in
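One possible shape for such a prompt (the scenario, questions, and the extra `<audience>` tag are all illustrative placeholders):

```xml
<background>
I am reviewing this vendor contract before renewal; my goal is to identify
cost and liability risks.
</background>

<document>
[paste the full contract text]
</document>

<instructions>
Answer each of the following:
1. What are the total committed costs, including any automatic escalations?
2. What are the termination terms and notice periods?
3. Which liability or indemnity clauses favor the vendor?
4. What obligations does our side carry that we might overlook?
5. What is missing that a contract of this type usually includes?
</instructions>

<audience>
Our CFO, who has not read the contract.
</audience>

<output_format>
A one-page memo: brief summary first, then numbered findings keyed to the
questions above.
</output_format>
```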
Compare this structured approach to your usual document analysis workflow. What is different about the output?
Exercise 15.10 — The Completeness Check
After Claude analyzes a long document, send this follow-up: "What is in the document that is significant but that you did not include in your analysis? What am I missing?"
This prompt leverages Claude's careful reading to surface material that did not fit the initial analysis framework. Record what it returns; it often includes genuinely important points.
Section D: Using Claude as a Critic
Exercise 15.11 — The Critic Prompt
Take a piece of work you have recently completed: a report, a proposal, a plan, a piece of code, a presentation outline. Send it to Claude with this prompt:
"Read this carefully. Your job is not to be supportive — it's to be useful. Tell me: (1) the three most significant weaknesses or problems, (2) any assumptions that seem questionable, (3) anything important that is missing. Be direct."
Use the feedback to identify whether there are genuine issues you had not seen, or whether Claude's critique reflects misunderstanding of your context. Respond with clarifications where needed and see how Claude adjusts.
Exercise 15.12 — The Steelman Exercise
Describe a position or decision you hold with high confidence to Claude. Then ask:
"What is the strongest possible argument against this position? I want the steelman — the most charitable, most rigorous version of the opposing view. Do not soften it."
Evaluate: did Claude surface any counter-arguments you had not fully considered? If so, do any of them change your confidence level?
Exercise 15.13 — Edit for Concision Only
Take a piece of writing you are satisfied with and ask Claude: "Edit this for concision only. Do not change the meaning, the structure, or the substance. Cut only what is unnecessary — filler words, redundant phrases, padding. Return the edited version with no explanation."
Compare word counts before and after. What percentage was cut? Read both versions — is anything meaningful lost? This exercise teaches you two things: what good editing looks like, and how much filler most first drafts contain.
Section E: Long-Form Writing Workflows
Exercise 15.14 — Outline-First Discipline
For your next long writing project (report, proposal, presentation script), use Claude exclusively for the outline phase before writing any content.
Share: the topic, the intended audience, the purpose, and any constraints. Ask Claude to develop a detailed structural outline. Review it, revise it through discussion, and finalize it before writing any section.
Note: how much did the outline discussion change your planned structure? Did Claude raise any structural questions you had not considered?
Exercise 15.15 — Section-by-Section Writing
Using the outline from Exercise 15.14 (or another project), draft one section at a time with Claude. For each section:
1. Paste the overall structure so Claude has context
2. Paste any data, quotes, or sources for this section
3. Ask for a draft
4. Review, revise, finalize that section before moving to the next
After completing three or more sections: evaluate the consistency of tone, terminology, and argument across the sections. What needed adjustment at the integration phase?
Section F: Claude Projects [Pro]
Exercise 15.16 — Set Up Your First Project
Create a Claude Project for a current work engagement or ongoing project. Upload:
- Any background documents (briefs, research, policies)
- A brief description of the project purpose and status
- Your preferences for outputs in this project
Run three different tasks within the Project and observe: how well does Claude use the project context? Does it reference your uploaded documents when relevant?
Exercise 15.17 — Project Context vs. New Conversation
Run the same analysis task twice: once in your Project, once in a fresh Claude conversation with no context. Compare the outputs.
What specific differences does the Project context produce? Is the improvement sufficient to justify the setup time? (For most users with ongoing projects, the answer is yes — but verifying this for your specific use case is worthwhile.)
Section G: Coding with Claude
Exercise 15.18 — Code Explanation Request
Take any piece of code you have written or are working with (10-50 lines is a good size). Ask Claude:
"Explain this code as if I am a competent programmer in a different language who has never seen this codebase before. What does it do? Why is it structured this way? Are there any aspects that might be non-obvious to a reviewer?"
Evaluate: is Claude's explanation accurate? Is it at the right level of detail? Does it surface anything about the code's design that you had not explicitly thought about?
Exercise 15.19 — The Architecture Critique
Describe the architecture of a system you are working on — the major components, how they connect, the design decisions you have made. Ask Claude:
"What are the potential weaknesses in this architecture? What problems might emerge as the system scales or as requirements change? What would you have designed differently and why?"
Note where Claude's critique is incisive (reveals real issues) versus where it reflects a lack of context about your specific constraints. Respond with constraints and see how Claude adjusts its critique.
Reflection Questions
After completing these exercises, consider:
- Which Claude-specific technique produced the most significant improvement in output quality for your work? Why do you think it worked?
- How has working with Claude's genuine pushback affected your thinking on any task? Were there times the critique was wrong (and you knew it)? Were there times it surfaced a real issue?
- The XML tagging technique feels like extra work. Based on the exercises you completed: is the improvement in output quality worth the extra effort for your specific use cases? Under what conditions would you use it?
- If you are currently using ChatGPT as your primary tool: which specific tasks in your work would most benefit from switching to Claude? What is your barrier to doing so?
- How does Claude's approach to uncertainty change how you work? Does having explicit uncertainty signals change how you verify outputs?