Chapter 37: Exercises

Tier 1 — Remember and Understand (Exercises 1-6)

Exercise 1: MCP Vocabulary Match

Match each MCP concept with its correct definition:

Concepts:

  1. Tool
  2. Resource
  3. Prompt
  4. MCP Client
  5. MCP Server
  6. JSON-RPC 2.0

Definitions:

  A. A data source the AI can read for context
  B. A pre-defined prompt structure for guiding AI behavior
  C. A function the AI can invoke to perform actions
  D. The message format used for MCP communication
  E. The component that discovers and talks to servers
  F. A process that exposes capabilities to AI clients

Exercise 2: Transport Layer Comparison

Create a table comparing the three MCP transport mechanisms (stdio, SSE, and Streamable HTTP). For each transport, list:

  • How it works
  • When to use it
  • Advantages
  • Limitations
  • A real-world scenario where it is the best choice

Exercise 3: Tool Categories Classification

Given the following list of custom tools, classify each into the correct category (Knowledge Access, System Interaction, Data Queries, Code Operations, Workflow Automation, or Domain Tools):

  1. search_confluence — Searches a Confluence wiki
  2. deploy_staging — Deploys code to a staging environment
  3. query_analytics — Queries a Google Analytics dataset
  4. lint_terraform — Runs custom Terraform linting rules
  5. release_workflow — Manages the complete release process
  6. calculate_mortgage — Computes mortgage payment schedules
  7. read_runbook — Reads operational runbooks
  8. trigger_ci_pipeline — Starts a CI/CD build
  9. fetch_user_metrics — Retrieves user behavior data from a database
  10. generate_migration — Creates database migration files

Exercise 4: MCP Message Flow Ordering

Put the following MCP message flow steps in the correct order:

  • A. Client sends tools/call to invoke a specific tool
  • B. Server returns tool schemas in tools/list result
  • C. Client sends initialize request
  • D. Server returns initialization result with capabilities
  • E. Client sends initialized notification
  • F. Server returns tool execution result
  • G. Client sends tools/list to discover available tools
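
As a reference point while ordering the steps, remember that every message above is a plain JSON-RPC 2.0 envelope. A minimal sketch of how a client might build the first request and the follow-up notification (the protocol version string and params shown are illustrative):

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope (expects a response)."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def make_notification(method, params=None):
    """Build a notification: no 'id' field, so no response is expected."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# The client opens the session with an initialize request...
init = make_request(1, "initialize", {"protocolVersion": "2025-03-26",
                                      "capabilities": {}})
# ...and, after the server's result arrives, confirms with a notification.
done = make_notification("notifications/initialized")

print(json.dumps(init))
print(json.dumps(done))
```

The presence or absence of an `id` is what distinguishes a request (steps that get a response) from a notification (steps that do not), which is a useful clue for the ordering.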

Exercise 5: Tool Schema Interpretation

Given the following tool schema, answer the questions below:

{
  "name": "create_github_issue",
  "description": "Create a new issue in a GitHub repository",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": {
        "type": "string",
        "description": "Repository in owner/name format"
      },
      "title": {
        "type": "string",
        "description": "Issue title",
        "maxLength": 256
      },
      "body": {
        "type": "string",
        "description": "Issue body in Markdown format"
      },
      "labels": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Labels to apply",
        "default": []
      },
      "assignees": {
        "type": "array",
        "items": {"type": "string"},
        "description": "GitHub usernames to assign"
      }
    },
    "required": ["repo", "title", "body"]
  }
}
  1. Which parameters are required?
  2. What happens if labels is not provided?
  3. What is the maximum length for title?
  4. Can the tool accept additional parameters not listed in the schema?
  5. What format should repo be in?
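
To check your answers to questions 1 and 2, you can evaluate a candidate argument dict against the schema by hand. A minimal sketch using only the standard library (no JSON Schema package), with the schema abbreviated to the fields that matter here:

```python
def check_args(schema, args):
    """Report missing required fields and fill in declared defaults."""
    props = schema["inputSchema"]["properties"]
    required = schema["inputSchema"].get("required", [])
    missing = [name for name in required if name not in args]
    filled = dict(args)
    for name, spec in props.items():
        if name not in filled and "default" in spec:
            filled[name] = spec["default"]  # e.g. labels -> []
    return missing, filled

schema = {
    "name": "create_github_issue",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string"},
            "title": {"type": "string", "maxLength": 256},
            "body": {"type": "string"},
            "labels": {"type": "array", "default": []},
            "assignees": {"type": "array"},
        },
        "required": ["repo", "title", "body"],
    },
}

missing, filled = check_args(schema, {"repo": "octo/hello", "title": "Bug"})
print(missing)           # ['body']
print(filled["labels"])  # [] -- the declared default is applied
```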

Exercise 6: Slash Command Anatomy

Examine the following slash command and answer the questions:

<!-- .claude/commands/debug.md -->
Help me debug the issue described below.

First, read these files for context:
- src/config.py
- src/logging_config.py

Then:
1. Identify the likely root cause
2. Suggest a fix with code changes
3. Recommend tests to prevent regression

Issue: $ARGUMENTS
  1. Where should this file be stored in the project?
  2. How would a user invoke this command?
  3. What does $ARGUMENTS represent?
  4. What would happen if a user types /debug TypeError in the payment processor?
  5. Is this a project-level or user-level command?

Tier 2 — Apply (Exercises 7-12)

Exercise 7: Basic Tool Schema Design

Design complete JSON Schema definitions for the following three tools. Include name, description, inputSchema with all properties, types, descriptions, required fields, and defaults:

  1. search_logs — Searches application logs by date range, severity, and keyword
  2. get_deployment_status — Returns the current deployment status for a given service and environment
  3. calculate_sprint_velocity — Calculates sprint velocity for a team over a given number of past sprints

Exercise 8: Simple MCP Server

Write a complete MCP server in Python that exposes two tools:

  1. convert_units — Converts between common units (temperature, distance, weight). Supports Celsius/Fahrenheit/Kelvin, meters/feet/miles, and kilograms/pounds.
  2. format_json — Takes a JSON string and returns it formatted with proper indentation, sorted keys, and syntax validation.

The server should use the stdio transport, include proper error handling, and return structured JSON responses.
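
As a starting point, keep the conversion logic separate from the MCP plumbing so it can be unit-tested on its own. A minimal sketch of the temperature portion (function name and unit spellings are suggestions, not part of any spec):

```python
def convert_temperature(value: float, src: str, dst: str) -> float:
    """Convert between celsius, fahrenheit, and kelvin via celsius."""
    to_c = {
        "celsius": lambda v: v,
        "fahrenheit": lambda v: (v - 32) * 5 / 9,
        "kelvin": lambda v: v - 273.15,
    }
    from_c = {
        "celsius": lambda c: c,
        "fahrenheit": lambda c: c * 9 / 5 + 32,
        "kelvin": lambda c: c + 273.15,
    }
    if src not in to_c or dst not in from_c:
        raise ValueError(f"unsupported unit: {src!r} or {dst!r}")
    return from_c[dst](to_c[src](value))

print(convert_temperature(100, "celsius", "fahrenheit"))  # 212.0
print(convert_temperature(32, "fahrenheit", "kelvin"))    # 273.15
```

Routing every conversion through a single pivot unit (celsius here) keeps the table linear in the number of units instead of quadratic; the same pattern applies to distance and weight.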

Exercise 9: Resource Provider

Extend the server from Exercise 8 to also expose resources. Add three resources:

  1. docs://units/temperature — Documentation about temperature unit conversions
  2. docs://units/distance — Documentation about distance unit conversions
  3. docs://units/weight — Documentation about weight unit conversions

Each resource should return Markdown-formatted documentation including formulas, examples, and common use cases.

Exercise 10: Slash Command Suite

Create a suite of five slash commands for a Python web application project. Write the complete Markdown content for each:

  1. /review:security — Security-focused code review
  2. /generate:test — Generate tests for a given module
  3. /analyze:performance — Performance analysis of a code section
  4. /document:api — Generate API documentation
  5. /migrate:database — Guide a database migration process

Each command should reference specific project conventions and include structured output requirements.

Exercise 11: Input Validation Layer

Write a complete input validation module for MCP tools that:

  1. Validates arguments against JSON Schema definitions
  2. Provides clear, AI-friendly error messages
  3. Handles type coercion (e.g., string "5" to integer 5 when the schema expects integer)
  4. Supports custom validation functions beyond JSON Schema (e.g., checking that a file exists, validating URL format)
  5. Returns a standardized validation result object

Include at least five unit tests demonstrating different validation scenarios.
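
Requirement 3 (type coercion) is the subtle part: coerce only when it is lossless, and make the error message tell the model what to send instead. A minimal sketch, assuming a simple `(ok, value_or_message)` result shape rather than any particular validation library:

```python
def coerce(value, expected_type):
    """Try to losslessly coerce value to the named JSON Schema type.
    Returns (ok, coerced_value) on success or (False, error_message)."""
    if expected_type == "integer":
        if isinstance(value, bool):  # bool is an int subclass; reject it first
            return False, "expected integer, got boolean"
        if isinstance(value, int):
            return True, value
        if isinstance(value, str) and value.lstrip("-").isdigit():
            return True, int(value)  # e.g. "5" -> 5
        return False, f"expected integer, got {value!r}; send a number like 5"
    if expected_type == "number":
        try:
            return True, float(value)
        except (TypeError, ValueError):
            return False, f"expected number, got {value!r}"
    if expected_type == "string":
        return (True, value) if isinstance(value, str) else (
            False, f"expected string, got {type(value).__name__}")
    return True, value  # other types pass through in this sketch

print(coerce("5", "integer"))    # (True, 5)
print(coerce("abc", "integer"))  # (False, ...) with a corrective message
```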

Exercise 12: Logging Middleware

Implement a comprehensive logging middleware for MCP tools that:

  1. Logs all tool invocations with timestamps, tool names, and arguments
  2. Logs all tool results with duration and success/failure status
  3. Redacts sensitive fields (passwords, API keys, tokens) from logs
  4. Supports configurable log levels per tool
  5. Writes logs to both a file and stderr
  6. Includes a daily log rotation mechanism
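
Requirement 3 (redaction) is easy to get wrong if you match exact key names only; nested structures and naming variants like API_Token slip through. A minimal sketch that redacts recursively by case-insensitive substring match (the key list is illustrative, not exhaustive):

```python
SENSITIVE = ("password", "api_key", "apikey", "token", "secret")

def redact(obj):
    """Return a copy of obj with values under sensitive keys replaced."""
    if isinstance(obj, dict):
        return {
            k: "***" if any(s in k.lower() for s in SENSITIVE) else redact(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(item) for item in obj]
    return obj

args = {"user": "ana", "API_Token": "abc123",
        "nested": {"db_password": "hunter2", "host": "localhost"}}
print(redact(args))  # both API_Token and db_password become "***"
```

Redacting before the log formatter ever sees the arguments, rather than filtering log lines afterwards, is the safer design: nothing sensitive is ever serialized.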

Tier 3 — Analyze (Exercises 13-18)

Exercise 13: Tool Design Review

Analyze the following tool definitions and identify design problems. For each tool, explain what is wrong and how you would fix it:

Tool A:

{
  "name": "do_stuff",
  "description": "Does things with the database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "action": {
        "type": "string",
        "enum": ["query", "insert", "update", "delete", "backup", "restore"]
      },
      "data": {
        "type": "string"
      }
    },
    "required": ["action", "data"]
  }
}

Tool B:

{
  "name": "search_and_replace_and_commit_and_push",
  "description": "Search for text in files, replace it, commit changes, and push to remote",
  "inputSchema": {
    "type": "object",
    "properties": {
      "search": {"type": "string"},
      "replace": {"type": "string"},
      "files": {"type": "string"},
      "commit_message": {"type": "string"},
      "branch": {"type": "string"}
    },
    "required": ["search", "replace", "files", "commit_message", "branch"]
  }
}

Tool C:

{
  "name": "q",
  "description": "Query",
  "inputSchema": {
    "type": "object",
    "properties": {
      "s": {"type": "string"},
      "n": {"type": "integer"},
      "f": {"type": "string"}
    },
    "required": ["s"]
  }
}

Exercise 14: Security Audit

You are reviewing an MCP server that provides database access. Analyze the following code and identify all security vulnerabilities. For each vulnerability, explain the risk and provide a secure alternative:

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "query_database":
        sql = arguments["sql"]
        db = await aiosqlite.connect(arguments.get("database", "production.db"))
        cursor = await db.execute(sql)
        rows = await cursor.fetchall()
        await db.close()
        return [TextContent(type="text", text=str(rows))]

    elif name == "read_file":
        path = arguments["path"]
        with open(path, "r") as f:
            content = f.read()
        return [TextContent(type="text", text=content)]

    elif name == "run_command":
        import subprocess
        result = subprocess.run(
            arguments["command"], shell=True, capture_output=True, text=True
        )
        return [TextContent(type="text", text=result.stdout)]

Exercise 15: Performance Analysis

An MCP server that searches a large codebase is running slowly. The search tool takes 15-30 seconds for each invocation. Analyze the following implementation and identify at least five performance bottlenecks. For each bottleneck, explain why it is slow and propose a specific optimization:

async def handle_search(arguments: dict) -> list[TextContent]:
    query = arguments["query"]
    root = arguments.get("root", "/home/user/project")
    results = []

    for root_dir, dirs, files in os.walk(root):
        for filename in files:
            filepath = os.path.join(root_dir, filename)
            try:
                with open(filepath, "r") as f:
                    content = f.read()
                    lines = content.split("\n")
                    for i, line in enumerate(lines):
                        if query.lower() in line.lower():
                            results.append({
                                "file": filepath,
                                "line": i + 1,
                                "content": line,
                                "context": lines[max(0,i-2):i+3],
                            })
            except:
                pass

    return [TextContent(
        type="text",
        text=json.dumps({"results": results})
    )]

Exercise 16: Middleware Pipeline Analysis

Given the following middleware pipeline configuration, analyze the execution order and identify potential issues:

pipeline = MiddlewarePipeline()

cache = CacheMiddleware(ttl_seconds=600)
rate_limiter = RateLimitMiddleware(max_calls=10, window_seconds=60)
auth = AuthMiddleware(required_roles=["developer"])
logger_mw = LoggingMiddleware(level="DEBUG")
validator = ValidationMiddleware(schemas=TOOL_SCHEMAS)

pipeline.add_pre_handler(cache.check_cache)
pipeline.add_pre_handler(logger_mw.log_request)
pipeline.add_pre_handler(auth.check_permissions)
pipeline.add_pre_handler(rate_limiter.check_rate_limit)
pipeline.add_pre_handler(validator.validate_input)

pipeline.add_post_handler(logger_mw.log_response)
pipeline.add_post_handler(cache.store_cache)
  1. What is the execution order of pre-handlers?
  2. What problems does this ordering cause?
  3. What would the optimal ordering be and why?
  4. What happens if the cache returns a hit? Which subsequent handlers still run?
  5. Should validation run before or after rate limiting? Why?

Exercise 17: Tool Ecosystem Evaluation

You need to choose between three approaches for giving your AI assistant access to your company's Jira instance:

Option A: Use a community-built open-source MCP server for Jira with 200 GitHub stars, last updated 3 months ago, written in TypeScript.

Option B: Build a custom MCP server in Python that wraps the Jira REST API, exposing only the specific operations your team needs.

Option C: Use Jira's official REST API directly through a generic HTTP tool that lets the AI make arbitrary HTTP requests.

For each option, analyze:

  1. Setup complexity
  2. Security implications
  3. Maintenance burden
  4. Flexibility and customizability
  5. Team adoption difficulty

Provide a recommendation with justification.

Exercise 18: Architecture Comparison

Compare and contrast two approaches to building a suite of development tools:

Approach A: Monolithic Server — One MCP server that exposes all tools (database queries, API access, file operations, deployment commands) through a single process.

Approach B: Microservice Servers — Multiple MCP servers, each responsible for one category of tools (one for databases, one for APIs, one for files, one for deployment).

Analyze each approach across these dimensions:

  1. Development complexity
  2. Deployment and operations
  3. Fault isolation
  4. Performance
  5. Security
  6. Configuration management
  7. Developer experience


Tier 4 — Evaluate and Create (Exercises 19-24)

Exercise 19: Full MCP Server — Project Analytics

Build a complete MCP server that provides project analytics tools:

  1. count_lines_of_code — Count lines of code by language in a directory
  2. find_large_files — Find files larger than a specified size
  3. dependency_tree — Parse and display the dependency tree from a requirements.txt or package.json
  4. code_age_report — Using git, report when each file was last modified
  5. todo_finder — Find all TODO, FIXME, and HACK comments in a codebase

Include proper schemas, error handling, structured output, and a resource that provides documentation for all tools.

Exercise 20: Data Integration Server

Build an MCP server that integrates multiple data sources:

  1. A SQLite database containing project metadata (teams, services, deployments)
  2. A JSON file containing configuration defaults
  3. A mock REST API (implement a simple in-memory API) for incident data

The server should expose:

  • Tools for querying each data source individually
  • A unified search tool that searches across all sources
  • Resources for the database schema and API documentation
  • Independent error handling per data source (one source failing should not break the others)
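
The fault-isolation requirement usually comes down to wrapping each source behind a uniform result shape. A minimal sketch (the source names and result dict layout here are illustrative):

```python
def query_all(sources: dict, term: str) -> dict:
    """Run term against every source; isolate failures per source."""
    results = {}
    for name, fn in sources.items():
        try:
            results[name] = {"ok": True, "data": fn(term)}
        except Exception as exc:  # one bad source must not abort the rest
            results[name] = {"ok": False, "error": str(exc)}
    return results

# Stand-in sources: one works, one simulates an outage.
def sqlite_search(term):
    return [f"row matching {term}"]

def broken_api(term):
    raise ConnectionError("incident API unreachable")

out = query_all({"sqlite": sqlite_search, "incidents": broken_api}, "outage")
print(out["sqlite"]["ok"], out["incidents"]["ok"])  # True False
```

The unified search tool can then report partial results along with which sources failed, which is far more useful to the AI than a single opaque exception.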

Exercise 21: Middleware Framework

Design and implement a complete middleware framework that supports:

  1. Ordered middleware registration with priority levels
  2. Conditional middleware that only runs for specific tools
  3. Middleware chaining where one middleware's output feeds into the next
  4. Error recovery middleware that catches errors and returns graceful responses
  5. Async middleware that can perform I/O operations
  6. Middleware metrics that track execution time for each middleware layer

Write comprehensive tests and demonstrate the framework with at least four different middleware implementations.
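
Requirement 1 (priority ordering) and requirement 2 (conditional middleware) can be sketched with a sorted registration list; the other requirements layer on top of this core. A minimal sketch, assuming middleware are plain callables that may rewrite the arguments (the API shape below is one possible design, not a standard):

```python
class Pipeline:
    def __init__(self):
        self._layers = []  # (priority, middleware, only_tools) triples

    def add(self, middleware, priority=100, only_tools=None):
        """Lower priority runs first; only_tools limits which tools it wraps."""
        self._layers.append((priority, middleware, only_tools))
        self._layers.sort(key=lambda t: t[0])

    def run(self, tool_name, args, handler):
        for _, mw, only in self._layers:
            if only is None or tool_name in only:
                args = mw(tool_name, args)  # each layer may rewrite args
        return handler(args)

p = Pipeline()
# Unconditional layer: stamps every call.
p.add(lambda name, a: {**a, "audited": True}, priority=50)
# Conditional layer: only wraps the "inc" tool, and runs first (priority 10).
p.add(lambda name, a: {**a, "n": a["n"] + 1}, priority=10, only_tools={"inc"})
print(p.run("inc", {"n": 1}, lambda a: a))
```

Chaining (requirement 3) falls out naturally because each layer receives the previous layer's output; async support and per-layer metrics would wrap the same loop.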

Exercise 22: Testing Framework

Build a testing framework specifically designed for MCP servers. The framework should provide:

  1. A mock MCP client that can be used in tests
  2. Assertion helpers for verifying tool outputs (e.g., assert_tool_returns_json, assert_tool_error_contains)
  3. Schema validation test generators that automatically create tests from tool schemas
  4. Scenario runners that execute multi-step AI interaction scenarios
  5. Performance benchmarking utilities for measuring tool execution time
  6. A test report generator that summarizes test results

Exercise 23: Slash Command Generator

Build a Python tool that generates slash commands from a configuration file. The tool should:

  1. Read a YAML configuration file that defines commands, their parameters, context files, and output format requirements
  2. Generate the Markdown files for each command in the correct directory structure
  3. Support command categories (subdirectories)
  4. Include template variables like $ARGUMENTS, $CURRENT_FILE, and custom variables
  5. Validate that referenced context files actually exist
  6. Generate a README documenting all available commands

Example configuration:

commands:
  review:
    security:
      description: "Security-focused code review"
      context_files:
        - "SECURITY.md"
        - "src/middleware/auth.py"
      focus_areas:
        - "injection vulnerabilities"
        - "authentication bypass"
        - "data exposure"

Exercise 24: Tool Composition Engine

Design and implement a tool composition engine that allows simple tools to be chained together into complex workflows. The engine should support:

  1. Sequential composition — Output of tool A becomes input for tool B
  2. Parallel composition — Tools A and B run simultaneously, results are merged
  3. Conditional composition — Tool B only runs if tool A's output meets a condition
  4. Loop composition — Tool A runs repeatedly until a condition is met
  5. Error handling — Configurable behavior when a step fails (retry, skip, abort)

Define workflows using a simple DSL (dictionary-based) and implement an executor.
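
As a starting point for the sequential case, a dictionary-based workflow can be as simple as an ordered list of steps, each naming a tool and optional extra arguments (the DSL keys below are one possible design, not a standard):

```python
def run_sequential(workflow: list[dict], tools: dict, initial):
    """Execute steps in order; each step's output feeds the next step."""
    value = initial
    for step in workflow:
        tool = tools[step["tool"]]
        value = tool(value, **step.get("args", {}))
    return value

# Toy tool registry: each tool is a callable taking the piped value first.
tools = {
    "strip": lambda s: s.strip(),
    "upper": lambda s: s.upper(),
    "truncate": lambda s, length=10: s[:length],
}

workflow = [
    {"tool": "strip"},
    {"tool": "upper"},
    {"tool": "truncate", "args": {"length": 5}},
]

print(run_sequential(workflow, tools, "  hello world  "))  # HELLO
```

Parallel, conditional, and loop composition can reuse the same step dictionaries with an added discriminator key, and error handling becomes a per-step policy field consulted in the executor loop.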


Tier 5 — Synthesis and Real-World Application (Exercises 25-30)

Exercise 25: Team Onboarding System

Design and build a complete MCP server suite for onboarding new developers to a team. The system should include:

  1. Knowledge base tools — Search and read team documentation, ADRs, coding standards
  2. Codebase navigation tools — Find relevant code examples, understand project structure, trace dependencies
  3. Environment setup tools — Verify development environment configuration, run setup scripts, check dependencies
  4. Mentor tools — Answer common questions using a FAQ database, explain team conventions
  5. Progress tracking tools — Track onboarding checklist completion, record completed steps

Include slash commands for common onboarding tasks, prompt templates for learning activities, and comprehensive documentation.

Exercise 26: CI/CD Integration Server

Build an MCP server that integrates with a CI/CD pipeline (simulated). The server should provide:

  1. Pipeline tools — Trigger builds, check build status, view build logs, cancel running builds
  2. Deployment tools — Deploy to staging/production, rollback deployments, check deployment health
  3. Environment tools — List environments, compare configurations, promote between environments
  4. Notification tools — Send deployment notifications to Slack (simulated), update status dashboards
  5. Safety checks — Pre-deployment validation, dependency scanning, change risk assessment

Implement with full middleware (logging, rate limiting, approval gates for production deployments).

Exercise 27: Code Quality Dashboard Server

Build an MCP server that provides a code quality dashboard:

  1. Metrics tools — Calculate cyclomatic complexity, code duplication, test coverage (simulated)
  2. Trend tools — Track metrics over time, identify improving/degrading areas
  3. Comparison tools — Compare quality metrics between branches, between releases
  4. Alert tools — Check metrics against thresholds, generate quality reports
  5. Recommendation tools — Suggest refactoring targets based on quality metrics

Include resources for quality standards documentation and prompt templates for code quality improvement workflows.

Exercise 28: Multi-Protocol Integration

Build an MCP server that demonstrates integration with multiple external protocols:

  1. GraphQL integration — Query a GraphQL API (simulated) for project data
  2. WebSocket integration — Connect to a WebSocket service (simulated) for real-time data
  3. gRPC integration — Call a gRPC service (simulated) for internal microservice communication
  4. Message queue integration — Publish/consume messages from a queue (simulated)

For each protocol, implement proper connection management, error handling, and result formatting. Compare the integration patterns and document when each protocol is most appropriate.

Exercise 29: Enterprise Tool Governance

Design a governance framework for managing MCP servers in an enterprise environment. Create:

  1. A tool registry service — MCP server that manages a registry of all available tools across the organization
  2. Access control system — Role-based access control for tools (who can use which tools)
  3. Audit system — Comprehensive logging of all tool usage with queryable audit log
  4. Health monitoring — Health checks for all registered MCP servers
  5. Configuration management — Centralized configuration for all MCP servers

Write the implementation, documentation, and operational runbooks.

Exercise 30: Capstone — Custom Development Platform

Build a comprehensive custom development platform as an MCP server suite that combines everything from this chapter:

  1. At least 10 custom tools spanning multiple categories
  2. At least 5 resources providing documentation and context
  3. At least 3 prompt templates for common workflows
  4. A middleware pipeline with logging, caching, rate limiting, and validation
  5. A slash command suite with at least 8 commands
  6. Comprehensive tests including unit, integration, and scenario tests
  7. Deployment configuration for both local and remote deployment
  8. Full documentation including setup guide, tool reference, and architecture diagram

The platform should be designed for a specific domain (e.g., e-commerce development, data science, mobile app development) and demonstrate deep integration with domain-specific tools and knowledge.