Chapter 36 Exercises: Programmatic AI — APIs, Python, and Automations

These exercises build your Python API skills progressively. Part A covers foundational API usage, Part B covers batch processing and automation, and Part C covers advanced integration patterns.

Prerequisites: Python 3.9+, the anthropic and openai packages installed, and API keys configured in a .env file.


Part A: Foundation — API Basics

Exercise 1: Your First API Call

Set up your environment and make your first successful API call.

  1. Create a new directory for your exercises.
  2. Create a .env file with your API key.
  3. Create a .gitignore file that excludes .env.
  4. Write a Python script that:
     - Loads the environment variable
     - Makes a call to the Anthropic API asking "What are three practical use cases for AI APIs in professional settings?"
     - Prints the response text, the input token count, and the output token count
  5. Confirm the script runs without errors.

Exercise 2: Parameter Exploration

Write a script that calls the API three times with the same prompt but different temperature settings (0.0, 0.5, 1.0):

Prompt: "Suggest a creative project name for an AI automation initiative at a mid-sized company."

For each call, print the temperature, the response, and the output token count. Observe and write a one-paragraph commentary on how temperature affects the outputs.


Exercise 3: System Prompt Design

Design three different system prompts for the same task and compare results.

Task: ask the AI "Explain what an API is."

System prompt A: (none — no system prompt)
System prompt B: "You are a technical expert. Be precise and concise."
System prompt C: "You are explaining technical concepts to a business executive who has no technical background. Use analogies and avoid jargon."

Make the call with each system prompt. Write a paragraph comparing the three outputs: how did the system prompt shape the response?


Exercise 4: Stop Reason Handling

Write a script that deliberately triggers the max_tokens stop reason by setting max_tokens=50 on a request for a long response.

Then modify the script to detect when the stop reason is max_tokens and automatically retry with max_tokens=512.

Demonstrate that the retry produces a complete response.


Exercise 5: Both SDKs

Implement the same task using both the Anthropic and OpenAI SDKs.

Task: given a job title and industry, generate a three-sentence professional bio.

Test with: "Senior Data Analyst" at "healthcare technology".

Compare: the response quality, the code structure differences, and the token usage. Which did you find more useful for this task?


Exercise 6: Multi-Turn Conversation Manager

Build a working command-line conversation manager (based on the chat_session function in the chapter) with the following additions:
- Print token count after each exchange
- Add a /history command that prints the full conversation history
- Add a /clear command that resets the conversation history
- Save the conversation to a file named with the current timestamp when the user exits

Test your manager with at least a five-turn conversation.


Part B: Batch Processing and Automation

Exercise 7: Batch Summarizer

Build a batch summarization script that processes a list of texts from a JSON input file.

Input format (texts_to_summarize.json):

[
  {"id": "doc001", "title": "Article Title", "content": "Full article text..."},
  {"id": "doc002", "title": "Article Title", "content": "Full article text..."}
]

Your script should:
- Read the input file
- Summarize each text using claude-haiku (two to three sentences)
- Write results to an output JSON file with id, title, original_word_count, summary, and tokens_used
- Print a summary report: total items, total tokens used, estimated cost

Create at least five sample texts to process (you can write them yourself or use real articles).


Exercise 8: Classification Pipeline

Build a customer feedback classifier. Create a CSV file with at least ten feedback items (write realistic ones — mix of positive, negative, feature requests, and bug reports).

Your pipeline should:
- Read the CSV
- Classify each item as: sentiment (positive/neutral/negative), category (bug/feature_request/praise/complaint/question), and priority (high/medium/low)
- Use structured JSON output from the API
- Write results back to a new CSV with added classification columns
- Generate a summary report: distribution of sentiments, categories, and priorities
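The fragile step is usually parsing the model's JSON reply. One defensive approach, sketched here with hypothetical helper names, strips an optional markdown fence and verifies the required keys before the row is written back:

```python
# Sketch of the structured-output parsing step for the classifier.
import json

REQUIRED_KEYS = {"sentiment", "category", "priority"}

CLASSIFY_PROMPT = """Classify this customer feedback. Respond with ONLY a JSON
object with keys "sentiment" (positive/neutral/negative), "category"
(bug/feature_request/praise/complaint/question), and "priority"
(high/medium/low).

Feedback: {text}"""

def parse_classification(raw):
    """Parse the model's reply; raise ValueError when keys are missing."""
    # Models sometimes wrap JSON in a markdown fence; strip it first.
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    data = json.loads(cleaned)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"classification missing keys: {sorted(missing)}")
    return data
```

Flagging a ValueError per row (rather than crashing the whole batch) keeps one malformed reply from costing you the run.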


Exercise 9: Resumable Batch Processor

Implement the process_batch_with_recovery pattern from the chapter on your own dataset.

  1. Create a list of at least 20 items to process (classification, summarization, or extraction — your choice)
  2. Implement checkpointing that saves progress every five items
  3. Test the recovery by interrupting the script partway through (Ctrl+C), then rerunning it and confirming it resumes from the checkpoint rather than restarting
  4. Add a final report showing: total items processed, successful vs. failed, total tokens used

Exercise 10: Build a Working Batch Processor

Build a "bulk content transformer" that takes a list of text snippets in one style and transforms them to another.

Scenario: your company is rebranding and needs to update 25 product descriptions from formal/technical language to conversational/friendly language.

Create 25 sample product descriptions (at least two sentences each), then build a batch processor that:
- Transforms each description to the new tone
- Validates output length is within 20% of the original
- Retries if validation fails (maximum two retries per item)
- Reports total cost and processing time
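The validation rule is simple enough to keep as a pure function, and the transform itself can be passed in so the retry loop is testable offline. Helper names here are illustrative, not from the chapter:

```python
# Sketch of the validate-and-retry loop for the tone transformer.
def within_tolerance(original, transformed, tol=0.20):
    """True when the transformed word count is within tol of the original."""
    orig_len = len(original.split())
    new_len = len(transformed.split())
    return abs(new_len - orig_len) <= tol * orig_len

def transform_with_validation(text, transform, max_retries=2):
    """transform(text) -> rewritten text; retries when length drifts too far."""
    for attempt in range(max_retries + 1):
        result = transform(text)
        if within_tolerance(text, result):
            return result
    raise RuntimeError(f"validation failed after {max_retries} retries")
```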


Exercise 11: Rate Limit Testing

Build a script that intentionally sends requests at a high rate to trigger a rate limit error, then demonstrates exponential backoff recovery.

Note: Use the smallest possible max_tokens value (e.g., 5) to minimize cost during this exercise.

Your script should:
- Attempt to send 10 requests with a 0.1-second delay between them
- Catch RateLimitError and implement backoff
- Log each attempt, the wait time, and the outcome
- Complete all 10 requests successfully despite hitting rate limits


Part C: Advanced Integration

Exercise 12: Multi-Turn Chatbot with Persona

Build a multi-turn chatbot that maintains a consistent persona across the conversation.

Choose a professional persona (e.g., "a Socratic philosophy tutor who answers questions with questions," or "a straightforward financial advisor who always asks about risk tolerance before giving advice," or your own).

Requirements:
- The persona should be defined entirely in the system prompt
- The chatbot must stay in character across at least ten turns
- Implement the ManagedConversation class from the chapter with context summarization
- Save the full conversation with timestamps to a JSON file at the end

Evaluate: at what point (if any) did the persona start to drift? What causes drift and how would you prevent it?


Exercise 13: Document Analyzer

Build the document analysis pipeline from the chapter and test it on at least three real documents from your own work (redacted if necessary).

Extend the pipeline with a fourth analysis type of your own design. Your analysis type should:
- Be specific to a domain you work in
- Extract at least five specific categories of information
- Use structured output that could be stored in a database


Exercise 14: Email Triage System

Extend the email triage system from the chapter:
- Add a fifth category of your own design to the classification scheme
- Add an "urgency score" from 1-10 as a numeric field (not just categorical)
- Add a "language quality" assessment: is the email professionally written, casually written, or written by a non-native speaker?
- Test with at least ten email samples covering all categories
- Generate a summary report of your test batch


Exercise 15: Data Extraction Pipeline

Build a data extraction pipeline for unstructured text. Choose a domain (job postings, news articles, product reviews, research abstracts — something relevant to your work).

Your pipeline should:
- Extract at least six specific fields from each document
- Return structured JSON for each extraction
- Validate that required fields are present (flag missing fields rather than failing)
- Handle at least 20 documents from your chosen domain
- Output a clean CSV with all extracted fields


Exercise 16: Cost Tracking Dashboard

Implement the CostTracker class from the chapter and extend it to:
- Track costs across multiple sessions (persist to a JSON file between script runs)
- Generate a daily cost report showing usage by model and by hour of day
- Alert when cumulative daily cost exceeds a threshold you define
- Produce a monthly projection based on the current day's usage rate

Run your tracker across all your exercises in Part B and generate a cost report for the full exercise set.


Exercise 17: Streaming Application

Build an interactive question-answering application that uses streaming to display responses as they are generated.

Requirements:
- Uses streaming for all responses
- Maintains conversation history (multi-turn)
- Displays a "thinking..." indicator before streaming starts
- Shows tokens per second as a performance metric at the end of each response
- Gracefully handles interruption (Ctrl+C mid-stream) without crashing


Exercise 18: CSV Processing Challenge

Use the process_csv_with_ai function from the chapter on a real CSV dataset from your own work.

If you do not have a CSV available, download any public dataset from Kaggle or government data portals.

Tasks:
1. Describe the dataset to the AI and ask it to identify the three most analytically interesting columns
2. Ask the AI to classify every row in one column into categories it defines itself (inductive categorization)
3. Ask the AI to generate a hypothesis about what the data shows and explain how you would test it

Document what worked, what failed, and what surprised you.


Exercise 19: Integration Architecture Design

You are building an AI feature for an existing application. Design (but do not necessarily implement) the full integration architecture for one of the following scenarios:

a. A legal firm wants to add AI-powered first-pass document review to their document management system. Documents arrive as PDFs, reviews need to be stored and attributed, and lawyers need to be able to accept or override AI assessments.

b. An e-commerce company wants to automatically generate SEO-optimized product descriptions from raw product data (dimensions, materials, category) for new SKUs.

c. A recruiter wants to screen resumes against job descriptions and produce a structured fit assessment for each applicant.

Your design document should cover: data flow, prompt design, error handling, cost estimation, human review points, and storage/integration requirements.


Exercise 20: End-to-End Automation Build

Build a complete end-to-end automation for a real task you do regularly.

Requirements:
- The automation must involve at least three API calls (chained or parallel)
- It must read input from a file or database and write output to a file or database
- It must include proper error handling and retry logic
- It must track and report costs
- It must have at least one human review point (the automation pauses and waits for approval before proceeding)

Document your automation with: a description of the task it automates, the chain specification, instructions for running it, and a before/after comparison of the time this task took with and without the automation.