Chapter 37: Exercises
Tier 1 — Remember and Understand (Exercises 1-6)
Exercise 1: MCP Vocabulary Match
Match each MCP concept with its correct definition:
| Concept | Definition |
|---|---|
| 1. Tool | A. A data source the AI can read for context |
| 2. Resource | B. A pre-defined prompt structure for guiding AI behavior |
| 3. Prompt | C. A function the AI can invoke to perform actions |
| 4. MCP Client | D. The message format used for MCP communication |
| 5. MCP Server | E. The component that discovers and talks to servers |
| 6. JSON-RPC 2.0 | F. A process that exposes capabilities to AI clients |
Exercise 2: Transport Layer Comparison
Create a table comparing the three MCP transport mechanisms (stdio, SSE, and Streamable HTTP). For each transport, list:
- How it works
- When to use it
- Advantages
- Limitations
- A real-world scenario where it is the best choice
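As a starting point for the comparison, here is a sketch of how a client configuration might choose between transports: a local server spawned over stdio versus a remote server reached over Streamable HTTP. The exact configuration schema varies by MCP client, and the server names and URL below are hypothetical.

```python
import json

# Hypothetical client configuration contrasting the two common transport
# choices. Keys follow the shape used by Claude Desktop-style configs,
# but check your client's documentation for the authoritative schema.
config = {
    "mcpServers": {
        "local-notes": {             # stdio: the client spawns the process
            "command": "python",
            "args": ["notes_server.py"],
        },
        "shared-analytics": {        # Streamable HTTP: the client connects over the network
            "url": "https://mcp.example.com/mcp",
        },
    }
}

print(json.dumps(config, indent=2))
```

The presence of `command` versus `url` is the practical signal of which transport a given server entry uses.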
Exercise 3: Tool Categories Classification
Given the following list of custom tools, classify each into the correct category (Knowledge Access, System Interaction, Data Queries, Code Operations, Workflow Automation, or Domain Tools):
- `search_confluence` — Searches a Confluence wiki
- `deploy_staging` — Deploys code to a staging environment
- `query_analytics` — Queries a Google Analytics dataset
- `lint_terraform` — Runs custom Terraform linting rules
- `release_workflow` — Manages the complete release process
- `calculate_mortgage` — Computes mortgage payment schedules
- `read_runbook` — Reads operational runbooks
- `trigger_ci_pipeline` — Starts a CI/CD build
- `fetch_user_metrics` — Retrieves user behavior data from a database
- `generate_migration` — Creates database migration files
Exercise 4: MCP Message Flow Ordering
Put the following MCP message flow steps in the correct order:
- A. Client sends `tools/call` to invoke a specific tool
- B. Server returns tool schemas in `tools/list` result
- C. Client sends `initialize` request
- D. Server returns initialization result with capabilities
- E. Client sends `initialized` notification
- F. Server returns tool execution result
- G. Client sends `tools/list` to discover available tools
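For reference while ordering the steps, the flow opens with a JSON-RPC 2.0 `initialize` request. The sketch below shows its general shape; the protocol version string and client name are illustrative assumptions, not normative values.

```python
import json

# Sketch of the opening `initialize` request (JSON-RPC 2.0 envelope).
# The protocolVersion and clientInfo values are placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

Every subsequent request in the flow reuses this envelope, changing only `method`, `id`, and `params`.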
Exercise 5: Tool Schema Interpretation
Given the following tool schema, answer the questions below:
{
"name": "create_github_issue",
"description": "Create a new issue in a GitHub repository",
"inputSchema": {
"type": "object",
"properties": {
"repo": {
"type": "string",
"description": "Repository in owner/name format"
},
"title": {
"type": "string",
"description": "Issue title",
"maxLength": 256
},
"body": {
"type": "string",
"description": "Issue body in Markdown format"
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "Labels to apply",
"default": []
},
"assignees": {
"type": "array",
"items": {"type": "string"},
"description": "GitHub usernames to assign"
}
},
"required": ["repo", "title", "body"]
}
}
- Which parameters are required?
- What happens if `labels` is not provided?
- What is the maximum length for `title`?
- Can the tool accept additional parameters not listed in the schema?
- What format should `repo` be in?
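To check your answers, it can help to validate a candidate argument set by hand. The stdlib-only sketch below covers just the schema's `required` list and the `maxLength` constraint on `title`; a real server would delegate to a full JSON Schema validator instead.

```python
# Minimal, stdlib-only check of candidate arguments against the
# create_github_issue schema above. Only `required` and `maxLength`
# are enforced here; a real validator would cover the full schema.
REQUIRED = ["repo", "title", "body"]
TITLE_MAX = 256

def check_args(args: dict) -> list[str]:
    errors = [f"missing required parameter: {name}"
              for name in REQUIRED if name not in args]
    if len(args.get("title", "")) > TITLE_MAX:
        errors.append(f"title exceeds {TITLE_MAX} characters")
    return errors

# A call missing `body` should produce exactly one error.
print(check_args({"repo": "octo/hello", "title": "Bug report"}))
```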
Exercise 6: Slash Command Anatomy
Examine the following slash command and answer the questions:
<!-- .claude/commands/debug.md -->
Help me debug the issue described below.
First, read these files for context:
- src/config.py
- src/logging_config.py
Then:
1. Identify the likely root cause
2. Suggest a fix with code changes
3. Recommend tests to prevent regression
Issue: $ARGUMENTS
- Where should this file be stored in the project?
- How would a user invoke this command?
- What does `$ARGUMENTS` represent?
- What would happen if a user types `/debug TypeError in the payment processor`?
- Is this a project-level or user-level command?
Tier 2 — Apply (Exercises 7-12)
Exercise 7: Basic Tool Schema Design
Design complete JSON Schema definitions for the following three tools. Include name, description, inputSchema with all properties, types, descriptions, required fields, and defaults:
- `search_logs` — Searches application logs by date range, severity, and keyword
- `get_deployment_status` — Returns the current deployment status for a given service and environment
- `calculate_sprint_velocity` — Calculates sprint velocity for a team over a given number of past sprints
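Each answer should follow the same overall shape. The template below is a generic scaffold, not a solution: every name, property, and default in it is a placeholder to be replaced per tool.

```python
# Generic scaffold for each schema in this exercise. All names and
# values are placeholders; fill in real properties, types, constraints,
# required fields, and defaults for each of the three tools.
tool_schema_template = {
    "name": "tool_name",
    "description": "One sentence stating what the tool does and when to use it",
    "inputSchema": {
        "type": "object",
        "properties": {
            "param_name": {
                "type": "string",
                "description": "What this parameter means and its expected format",
                "default": "",  # only for non-required parameters
            },
        },
        "required": ["param_name"],
    },
}
```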
Exercise 8: Simple MCP Server
Write a complete MCP server in Python that exposes two tools:
- `convert_units` — Converts between common units (temperature, distance, weight). Supports Celsius/Fahrenheit/Kelvin, meters/feet/miles, and kilograms/pounds.
- `format_json` — Takes a JSON string and returns it formatted with proper indentation, sorted keys, and syntax validation.
The server should use the stdio transport, include proper error handling, and return structured JSON responses.
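One way to start is with the conversion logic itself, separate from the MCP plumbing. The sketch below handles temperature only, to suggest the shape of a solution; distance and weight follow the same normalize-then-convert pattern, and the function names are suggestions.

```python
# A possible core for `convert_units`, temperature only. The pattern is
# to normalize to a canonical unit (Celsius) and then convert out;
# distance (meters) and weight (kilograms) can reuse the same structure.
def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    to_celsius = {
        "celsius": lambda v: v,
        "fahrenheit": lambda v: (v - 32) * 5 / 9,
        "kelvin": lambda v: v - 273.15,
    }
    from_celsius = {
        "celsius": lambda c: c,
        "fahrenheit": lambda c: c * 9 / 5 + 32,
        "kelvin": lambda c: c + 273.15,
    }
    try:
        celsius = to_celsius[from_unit.lower()](value)
        return from_celsius[to_unit.lower()](celsius)
    except KeyError as exc:
        raise ValueError(f"unsupported unit: {exc.args[0]}") from exc

print(convert_temperature(100, "celsius", "fahrenheit"))  # 212.0
```

Raising `ValueError` with the offending unit name keeps the eventual MCP error message specific, which matters when the caller is an AI rather than a human reading a stack trace.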
Exercise 9: Resource Provider
Extend the server from Exercise 8 to also expose resources. Add three resources:
- `docs://units/temperature` — Documentation about temperature unit conversions
- `docs://units/distance` — Documentation about distance unit conversions
- `docs://units/weight` — Documentation about weight unit conversions
Each resource should return Markdown-formatted documentation including formulas, examples, and common use cases.
Exercise 10: Slash Command Suite
Create a suite of five slash commands for a Python web application project. Write the complete Markdown content for each:
- `/review:security` — Security-focused code review
- `/generate:test` — Generate tests for a given module
- `/analyze:performance` — Performance analysis of a code section
- `/document:api` — Generate API documentation
- `/migrate:database` — Guide a database migration process
Each command should reference specific project conventions and include structured output requirements.
Exercise 11: Input Validation Layer
Write a complete input validation module for MCP tools that:
- Validates arguments against JSON Schema definitions
- Provides clear, AI-friendly error messages
- Handles type coercion (e.g., string "5" to integer 5 when the schema expects integer)
- Supports custom validation functions beyond JSON Schema (e.g., checking that a file exists, validating URL format)
- Returns a standardized validation result object
Include at least five unit tests demonstrating different validation scenarios.
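One possible shape for the standardized result object is sketched below. The field and method names are suggestions to get the design started, not a prescribed API.

```python
from dataclasses import dataclass, field

# One possible standardized validation result for this exercise.
# Names (valid, errors, coerced_args) are suggestions, not a fixed API.
@dataclass
class ValidationResult:
    valid: bool
    errors: list[str] = field(default_factory=list)
    coerced_args: dict = field(default_factory=dict)

    @classmethod
    def failure(cls, *errors: str) -> "ValidationResult":
        return cls(valid=False, errors=list(errors))

def coerce_integer(value):
    """Coerce '5' -> 5; return None when coercion is impossible."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

result = ValidationResult(valid=True, coerced_args={"limit": coerce_integer("5")})
print(result)
```

Returning the coerced arguments alongside the verdict lets the tool handler use the cleaned values directly instead of re-parsing the raw input.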
Exercise 12: Logging Middleware
Implement a comprehensive logging middleware for MCP tools that:
- Logs all tool invocations with timestamps, tool names, and arguments
- Logs all tool results with duration and success/failure status
- Redacts sensitive fields (passwords, API keys, tokens) from logs
- Supports configurable log levels per tool
- Writes logs to both a file and stderr
- Includes a daily log rotation mechanism
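The redaction requirement is the piece most worth prototyping first, since a leak there is irreversible. The sketch below masks values whose key name suggests a secret, recursing into nested structures; the key list is illustrative and should be tuned per project.

```python
# Sketch of the redaction step: recursively mask values whose key name
# suggests a secret. The SENSITIVE_KEYS set is illustrative only.
SENSITIVE_KEYS = {"password", "api_key", "token", "secret"}

def redact(obj):
    if isinstance(obj, dict):
        return {
            k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else redact(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(item) for item in obj]
    return obj

print(redact({"user": "ada", "api_key": "sk-123", "opts": {"token": "t"}}))
```

Redacting before the arguments reach any formatter guarantees that file and stderr outputs stay consistent with each other.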
Tier 3 — Analyze (Exercises 13-18)
Exercise 13: Tool Design Review
Analyze the following tool definitions and identify design problems. For each tool, explain what is wrong and how you would fix it:
Tool A:
{
"name": "do_stuff",
"description": "Does things with the database",
"inputSchema": {
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": ["query", "insert", "update", "delete", "backup", "restore"]
},
"data": {
"type": "string"
}
},
"required": ["action", "data"]
}
}
Tool B:
{
"name": "search_and_replace_and_commit_and_push",
"description": "Search for text in files, replace it, commit changes, and push to remote",
"inputSchema": {
"type": "object",
"properties": {
"search": {"type": "string"},
"replace": {"type": "string"},
"files": {"type": "string"},
"commit_message": {"type": "string"},
"branch": {"type": "string"}
},
"required": ["search", "replace", "files", "commit_message", "branch"]
}
}
Tool C:
{
"name": "q",
"description": "Query",
"inputSchema": {
"type": "object",
"properties": {
"s": {"type": "string"},
"n": {"type": "integer"},
"f": {"type": "string"}
},
"required": ["s"]
}
}
Exercise 14: Security Audit
You are reviewing an MCP server that provides database access. Analyze the following code and identify all security vulnerabilities. For each vulnerability, explain the risk and provide a secure alternative:
@server.call_tool()
async def call_tool(name: str, arguments: dict):
if name == "query_database":
sql = arguments["sql"]
db = await aiosqlite.connect(arguments.get("database", "production.db"))
cursor = await db.execute(sql)
rows = await cursor.fetchall()
await db.close()
return [TextContent(type="text", text=str(rows))]
elif name == "read_file":
path = arguments["path"]
with open(path, "r") as f:
content = f.read()
return [TextContent(type="text", text=content)]
elif name == "run_command":
import subprocess
result = subprocess.run(
arguments["command"], shell=True, capture_output=True, text=True
)
return [TextContent(type="text", text=result.stdout)]
Exercise 15: Performance Analysis
An MCP server that searches a large codebase is running slowly. The search tool takes 15-30 seconds for each invocation. Analyze the following implementation and identify at least five performance bottlenecks. For each bottleneck, explain why it is slow and propose a specific optimization:
async def handle_search(arguments: dict) -> list[TextContent]:
query = arguments["query"]
root = arguments.get("root", "/home/user/project")
results = []
for root_dir, dirs, files in os.walk(root):
for filename in files:
filepath = os.path.join(root_dir, filename)
try:
with open(filepath, "r") as f:
content = f.read()
lines = content.split("\n")
for i, line in enumerate(lines):
if query.lower() in line.lower():
results.append({
"file": filepath,
"line": i + 1,
"content": line,
"context": lines[max(0,i-2):i+3],
})
except:
pass
return [TextContent(
type="text",
text=json.dumps({"results": results})
)]
Exercise 16: Middleware Pipeline Analysis
Given the following middleware pipeline configuration, analyze the execution order and identify potential issues:
pipeline = MiddlewarePipeline()
cache = CacheMiddleware(ttl_seconds=600)
rate_limiter = RateLimitMiddleware(max_calls=10, window_seconds=60)
auth = AuthMiddleware(required_roles=["developer"])
logger_mw = LoggingMiddleware(level="DEBUG")
validator = ValidationMiddleware(schemas=TOOL_SCHEMAS)
pipeline.add_pre_handler(cache.check_cache)
pipeline.add_pre_handler(logger_mw.log_request)
pipeline.add_pre_handler(auth.check_permissions)
pipeline.add_pre_handler(rate_limiter.check_rate_limit)
pipeline.add_pre_handler(validator.validate_input)
pipeline.add_post_handler(logger_mw.log_response)
pipeline.add_post_handler(cache.store_cache)
- What is the execution order of pre-handlers?
- What problems does this ordering cause?
- What would the optimal ordering be and why?
- What happens if the cache returns a hit? Which subsequent handlers still run?
- Should validation run before or after rate limiting? Why?
Exercise 17: Tool Ecosystem Evaluation
You need to choose between three approaches for giving your AI assistant access to your company's Jira instance:
Option A: Use a community-built open-source MCP server for Jira with 200 GitHub stars, last updated 3 months ago, written in TypeScript.
Option B: Build a custom MCP server in Python that wraps the Jira REST API, exposing only the specific operations your team needs.
Option C: Use Jira's official REST API directly through a generic HTTP tool that lets the AI make arbitrary HTTP requests.
For each option, analyze:
1. Setup complexity
2. Security implications
3. Maintenance burden
4. Flexibility and customizability
5. Team adoption difficulty
Provide a recommendation with justification.
Exercise 18: Architecture Comparison
Compare and contrast two approaches to building a suite of development tools:
Approach A: Monolithic Server — One MCP server that exposes all tools (database queries, API access, file operations, deployment commands) through a single process.
Approach B: Microservice Servers — Multiple MCP servers, each responsible for one category of tools (one for databases, one for APIs, one for files, one for deployment).
Analyze each approach across these dimensions:
1. Development complexity
2. Deployment and operations
3. Fault isolation
4. Performance
5. Security
6. Configuration management
7. Developer experience
Tier 4 — Evaluate and Create (Exercises 19-24)
Exercise 19: Full MCP Server — Project Analytics
Build a complete MCP server that provides project analytics tools:
- `count_lines_of_code` — Count lines of code by language in a directory
- `find_large_files` — Find files larger than a specified size
- `dependency_tree` — Parse and display the dependency tree from a requirements.txt or package.json
- `code_age_report` — Using git, report when each file was last modified
- `todo_finder` — Find all TODO, FIXME, and HACK comments in a codebase
Include proper schemas, error handling, structured output, and a resource that provides documentation for all tools.
Exercise 20: Data Integration Server
Build an MCP server that integrates multiple data sources:
- A SQLite database containing project metadata (teams, services, deployments)
- A JSON file containing configuration defaults
- A mock REST API (implement a simple in-memory API) for incident data
The server should expose:
- Tools for querying each data source individually
- A unified search tool that searches across all sources
- Resources for database schema and API documentation
- Error handling for each data source independently (one source failing should not break others)
Exercise 21: Middleware Framework
Design and implement a complete middleware framework that supports:
- Ordered middleware registration with priority levels
- Conditional middleware that only runs for specific tools
- Middleware chaining where one middleware's output feeds into the next
- Error recovery middleware that catches errors and returns graceful responses
- Async middleware that can perform I/O operations
- Middleware metrics that track execution time for each middleware layer
Write comprehensive tests and demonstrate the framework with at least four different middleware implementations.
Exercise 22: Testing Framework
Build a testing framework specifically designed for MCP servers. The framework should provide:
- A mock MCP client that can be used in tests
- Assertion helpers for verifying tool outputs (e.g., `assert_tool_returns_json`, `assert_tool_error_contains`)
- Schema validation test generators that automatically create tests from tool schemas
- Scenario runners that execute multi-step AI interaction scenarios
- Performance benchmarking utilities for measuring tool execution time
- A test report generator that summarizes test results
Exercise 23: Slash Command Generator
Build a Python tool that generates slash commands from a configuration file. The tool should:
- Read a YAML configuration file that defines commands, their parameters, context files, and output format requirements
- Generate the Markdown files for each command in the correct directory structure
- Support command categories (subdirectories)
- Include template variables like `$ARGUMENTS`, `$CURRENT_FILE`, and custom variables
- Validate that referenced context files actually exist
- Generate a README documenting all available commands
Example configuration:
commands:
review:
security:
description: "Security-focused code review"
context_files:
- "SECURITY.md"
- "src/middleware/auth.py"
focus_areas:
- "injection vulnerabilities"
- "authentication bypass"
- "data exposure"
Exercise 24: Tool Composition Engine
Design and implement a tool composition engine that allows simple tools to be chained together into complex workflows. The engine should support:
- Sequential composition — Output of tool A becomes input for tool B
- Parallel composition — Tools A and B run simultaneously, results are merged
- Conditional composition — Tool B only runs if tool A's output meets a condition
- Loop composition — Tool A runs repeatedly until a condition is met
- Error handling — Configurable behavior when a step fails (retry, skip, abort)
Define workflows using a simple DSL (dictionary-based) and implement an executor.
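One possible shape for the dictionary-based DSL is sketched below. The step types, keys, and tool names are suggestions to seed the design, not a prescribed format.

```python
# One possible dictionary-based workflow DSL for this exercise.
# Step types ("tool", "condition") and key names are suggestions only.
workflow = {
    "name": "lint_then_fix",
    "on_error": "abort",  # or "retry" / "skip"
    "steps": [
        {"type": "tool", "tool": "run_linter", "args": {"path": "src/"}},
        {
            "type": "condition",  # run the fix step only if issues were found
            "if": lambda result: result.get("issue_count", 0) > 0,
            "then": {"type": "tool", "tool": "apply_fixes", "args": {}},
        },
    ],
}

print(workflow["name"], len(workflow["steps"]))
```

Representing conditions as callables keeps the executor simple; a production DSL might instead use a small expression syntax so workflows can be serialized to JSON or YAML.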
Tier 5 — Synthesis and Real-World Application (Exercises 25-30)
Exercise 25: Team Onboarding System
Design and build a complete MCP server suite for onboarding new developers to a team. The system should include:
- Knowledge base tools — Search and read team documentation, ADRs, coding standards
- Codebase navigation tools — Find relevant code examples, understand project structure, trace dependencies
- Environment setup tools — Verify development environment configuration, run setup scripts, check dependencies
- Mentor tools — Answer common questions using a FAQ database, explain team conventions
- Progress tracking tools — Track onboarding checklist completion, record completed steps
Include slash commands for common onboarding tasks, prompt templates for learning activities, and comprehensive documentation.
Exercise 26: CI/CD Integration Server
Build an MCP server that integrates with a CI/CD pipeline (simulated). The server should provide:
- Pipeline tools — Trigger builds, check build status, view build logs, cancel running builds
- Deployment tools — Deploy to staging/production, rollback deployments, check deployment health
- Environment tools — List environments, compare configurations, promote between environments
- Notification tools — Send deployment notifications to Slack (simulated), update status dashboards
- Safety checks — Pre-deployment validation, dependency scanning, change risk assessment
Implement with full middleware (logging, rate limiting, approval gates for production deployments).
Exercise 27: Code Quality Dashboard Server
Build an MCP server that provides a code quality dashboard:
- Metrics tools — Calculate cyclomatic complexity, code duplication, test coverage (simulated)
- Trend tools — Track metrics over time, identify improving/degrading areas
- Comparison tools — Compare quality metrics between branches, between releases
- Alert tools — Check metrics against thresholds, generate quality reports
- Recommendation tools — Suggest refactoring targets based on quality metrics
Include resources for quality standards documentation and prompt templates for code quality improvement workflows.
Exercise 28: Multi-Protocol Integration
Build an MCP server that demonstrates integration with multiple external protocols:
- GraphQL integration — Query a GraphQL API (simulated) for project data
- WebSocket integration — Connect to a WebSocket service (simulated) for real-time data
- gRPC integration — Call a gRPC service (simulated) for internal microservice communication
- Message queue integration — Publish/consume messages from a queue (simulated)
For each protocol, implement proper connection management, error handling, and result formatting. Compare the integration patterns and document when each protocol is most appropriate.
Exercise 29: Enterprise Tool Governance
Design a governance framework for managing MCP servers in an enterprise environment. Create:
- A tool registry service — MCP server that manages a registry of all available tools across the organization
- Access control system — Role-based access control for tools (who can use which tools)
- Audit system — Comprehensive logging of all tool usage with queryable audit log
- Health monitoring — Health checks for all registered MCP servers
- Configuration management — Centralized configuration for all MCP servers
Write the implementation, documentation, and operational runbooks.
Exercise 30: Capstone — Custom Development Platform
Build a comprehensive custom development platform as an MCP server suite that combines everything from this chapter:
- At least 10 custom tools spanning multiple categories
- At least 5 resources providing documentation and context
- At least 3 prompt templates for common workflows
- A middleware pipeline with logging, caching, rate limiting, and validation
- A slash command suite with at least 8 commands
- Comprehensive tests including unit, integration, and scenario tests
- Deployment configuration for both local and remote deployment
- Full documentation including setup guide, tool reference, and architecture diagram
The platform should be designed for a specific domain (e.g., e-commerce development, data science, mobile app development) and demonstrate deep integration with domain-specific tools and knowledge.