Chapter 5 Exercises: Building Your AI Environment
These exercises are designed to be completed sequentially as you build your AI environment. Most produce tangible outputs — saved files, configured settings, written prompts — that become part of your ongoing AI environment. Do not just read them; complete them.
Part A: Foundation Setup (Exercises 1-5)
Exercise 1: Primary Tool Selection Audit
Before committing to a primary AI tool, do a structured comparison of your two most likely candidates.
Step 1: Choose two tools from ChatGPT, Claude, and Gemini that you want to compare.
Step 2: Run the same five tasks on both tools:
1. Ask each tool to summarize a long document you paste in (use something from your actual work)
2. Ask each to write a professional email in your industry
3. Ask each to brainstorm 10 ideas for a project you are currently working on
4. Ask each to explain a technical concept in your field
5. Ask each to reformat a piece of content you provide (e.g., bullet points to prose)
Step 3: Rate each output on a 1-5 scale for: accuracy, relevance, format, and tone fit.
Step 4: Calculate scores and compare. Note specific cases where one tool was clearly stronger.
Step 5: Based on the comparison, choose your primary tool. Write a one-paragraph justification.
The point is not to find the "objectively best" tool but to find the best tool for your specific use cases and preferences.
Exercise 2: Privacy Configuration Sprint
This exercise ensures you have made deliberate choices about your data privacy settings before entering any professional content into AI tools.
For each AI tool you use:
- Go to the privacy/data settings for your account
- Answer the following questions:
  - Does this account use my conversations for training by default?
  - How do I turn off training if I want to?
  - What is the data retention policy?
  - Is there an enterprise/team tier that provides stronger data protection?
- Based on your answers and your professional context, decide: is your current account type appropriate for the professional content you intend to enter?
- Make any settings changes needed based on your decision.
- Document the settings you have configured and why.
Also complete: Review your organization's policy on AI tools, if one exists. If one does not exist, consider whether you should flag this to your IT or legal team.
Exercise 3: Custom Instructions Workshop
Write your custom instructions for your primary AI tool. This is not a quick exercise — take 20-30 minutes to do it properly.
Custom instruction template to complete:
ROLE AND CONTEXT:
I am a [job title] at [type of organization]. I primarily work on [main work areas].
My expertise level in my field is [beginner/intermediate/expert].
TARGET AUDIENCES:
The content I produce is primarily for [audience descriptions].
OUTPUT PREFERENCES:
- When providing lists, use [bulleted/numbered] lists for [type] of items.
- Default response length: [brief/moderate/detailed] unless I specify otherwise.
- Include [headers/no headers] in responses longer than [X] paragraphs.
EXPERTISE ASSUMPTIONS:
You can assume I am familiar with [concepts/tools/frameworks]. Do not explain [X] from scratch.
VERIFICATION FLAGS:
When you include specific statistics, citations, or factual claims that I should verify
independently, note them with [your preferred marker, e.g., "[VERIFY]"].
COMMUNICATION STYLE:
Be [direct/conversational/formal]. [Additional style preferences.]
CONTEXT I WILL OFTEN PROVIDE:
I frequently work with [types of content, e.g., client proposals, technical documentation,
marketing copy]. Format your responses to fit this context.
Fill this template in completely for your specific situation. Then enter it into your primary AI tool's custom instructions field.
Test your custom instructions by starting a new conversation and asking a question typical of your work. Do the default responses feel better calibrated to your context? Iterate on your instructions until they produce a noticeably better default baseline.
Exercise 4: File Organization System Setup
Set up the file organization structure described in the chapter, adapted to your preferred tools.
Step 1: Choose your storage medium (Notion, Obsidian, Google Docs, local files, or another system).
Step 2: Create the following folder/page structure:
- AI Environment (top level)
  - Prompts (with subcategories based on your work tasks)
  - Templates (with subcategories)
  - Outputs (with subcategories)
  - Calibration Log (from Chapter 4)
  - Red Flag List (from Chapter 4)
  - Custom Instructions (current version)
Step 3: Create a template for prompt library entries with fields for:
- Prompt name
- Use case description
- The prompt text (with [VARIABLES] marked)
- Example output (paste in a good example)
- Notes and reliability considerations
- Date added / date last updated
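To make the template concrete, here is one hypothetical entry (the prompt wording and use case are illustrative, not from the chapter):

```text
Prompt name: Meeting Recap Email
Use case: Turn raw meeting notes into a client-ready recap email.
Prompt text: "Summarize the following meeting notes as a recap email to
[CLIENT NAME]. Keep it under 200 words and end with next steps: [NOTES]"
Example output: (paste a good example here)
Notes: Dates and figures from the notes are sometimes dropped — verify them.
Date added: [DATE] / Last updated: [DATE]
```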
Step 4: Verify the organization feels natural for your workflow. If not, adjust before you fill it.
Exercise 5: Prompt Documentation Sprint
Take the prompts you have been using informally and document them properly in your new system.
Step 1: Think back to the last two weeks of AI interactions. What prompts or prompt patterns did you use? List them from memory.
Step 2: For each prompt you can recall:
- Write out the full prompt text, replacing specific content with [VARIABLE] placeholders
- Note what it was good for
- Note any reliability issues or things to watch for
- Rate the prompt quality: does it reliably get you what you want, or does it need refinement?
Step 3: For any prompts you rated as needing refinement, rewrite them. Use what you know about your work context, the specific output you want, and the format that is most useful.
Step 4: Add all prompts to your library, properly formatted.
Target: Have at least 10 documented prompts in your library by the end of this exercise. If you cannot identify 10 from memory, use the next two weeks to actively build the library by documenting each effective prompt as you create it.
Part B: Tool Integration (Exercises 6-10)
Exercise 6: Browser Extension Test Drive
Install and test one browser extension for your primary AI tool.
Step 1: Find the official extension for your primary AI tool (or a well-reviewed third-party tool) from the browser extension store.
Step 2: Review the permissions it requests before installing. Are you comfortable with those permissions?
Step 3: Install the extension and complete the following test tasks:
1. Visit a news article or blog post in your industry. Use the extension to summarize it.
2. Highlight a specific paragraph and ask the extension to explain or expand on it.
3. Use the extension while writing an email to draft a suggested reply.
Step 4: Evaluate: Does this extension reduce friction for tasks you do regularly? Is the access it requires proportionate to the value it provides?
Step 5: If yes, keep it and identify two specific recurring situations where you will use it. If no, uninstall it and note what you would need from an extension for it to be worth keeping.
Exercise 7: The Workflow Integration Map
Map AI assistance into your actual workflow for one typical work week.
Step 1: List your five most time-consuming recurring work tasks.
Step 2: For each task, identify:
- What specific parts of this task could AI assistance accelerate or improve?
- What zone (1-5 from Chapter 4) do those parts fall into?
- What is currently preventing you from using AI for this task? (If anything.)
- What prompt or prompt template would be needed?
Step 3: For the top two tasks where AI integration seems most valuable, design the integration:
- When in the task flow does AI assistance fit in?
- What is the trigger for starting AI assistance?
- What prompt template are you starting from?
- How do you review and use the output?
Step 4: Implement the integration for one week and track whether it actually saves time and improves output quality.
Exercise 8: Note-Taking Tool Integration
If you use a note-taking or knowledge management tool (Notion, Obsidian, Roam Research, Evernote, OneNote, etc.), set up an AI integration for it.
Step 1: Research what AI integration options are available for your specific tool. Options might include:
- Built-in AI features (Notion AI, Obsidian AI plugins)
- A browser extension that works in the tool's web interface
- An API connection via automation tools like Zapier or Make
- A workflow that involves copying between your note tool and AI chat
Step 2: Set up the integration. If your tool has a native AI feature, try it. If not, establish the workflow for moving content between your note tool and your AI chat interface.
Step 3: Test the integration with a real piece of work:
- Use AI to help draft a note or summarize source material
- Save the output to your note tool in a way that is organized and searchable
Step 4: Evaluate friction: Is this integration smooth enough that you will actually use it? If not, what adjustment would make it workable?
Exercise 9: The Prompt Library Expansion Plan
Now that you have a working prompt library structure and initial entries, create a three-month expansion plan.
Step 1: Identify your 10 most frequent AI use cases — the things you ask AI to help with most often.
Step 2: For each use case, assess whether you have a documented, tested prompt:
- Yes, high quality: no action needed
- Yes, but needs refinement: schedule a 15-minute refinement session
- No: schedule a 30-minute prompt development session
Step 3: Create a simple plan: by what date will you have a tested prompt for each of your top 10 use cases?
Step 4: For the two use cases where a good prompt would provide the most value, write and test those prompts now.
The goal is a prompt library that covers all your high-frequency use cases within 90 days.
Exercise 10: AI Tool Comparison for Your Specific Tasks
This builds on Exercise 1 by comparing tools on your actual highest-value tasks, not just generic tasks.
Step 1: Identify the three tasks where AI has the highest value-to-risk ratio in your work (high productivity gain, manageable verification requirements).
Step 2: For each task, write the specific prompt you use and run it on two different AI tools.
Step 3: Compare the outputs. Rate them on: quality of output, reliability, format appropriateness, need for iteration.
Step 4: For each task, identify whether there is a clearly superior tool, and update your stack accordingly.
The outcome of this exercise may be that you end up using different tools for different task types. That is perfectly appropriate: it is the nature of building a stack rather than relying on a single tool.
Part C: Habit Development (Exercises 11-15)
Exercise 11: The 30-Day AI Habit Tracker
Design and launch a 30-day habit tracking system for your daily AI touchpoints.
Step 1: Based on your Workflow Audit, identify two or three daily AI touchpoints — specific recurring tasks where you will use AI assistance consistently.
Step 2: Create a simple tracking system (a calendar, a spreadsheet, a habit app) with a checkbox for each touchpoint for each day of the month.
Step 3: At the end of each day for 30 days, mark which touchpoints you completed.
Step 4: At the end of 30 days, review your completion rate:
- Which touchpoints became habitual (90%+ completion rate)?
- Which did not stick? Why not?
- What adjustment would make the non-sticky habits more durable?
Step 5: Redesign the non-sticky habits based on what you learned and run a second 30-day cycle.
Exercise 12: The Daily Planning Prompt
Develop and test a daily planning prompt that integrates AI into your morning routine.
Step 1: Design a daily planning prompt template. Consider including:
- Input: your calendar for the day (summarized), your to-do list, any outstanding items from yesterday
- Output requested: a prioritized task list, a draft of the most important email or communication of the day, any talking points for meetings
A sample template:
Today's date: [DATE]
My calendar today: [CALENDAR SUMMARY]
Outstanding tasks: [TASK LIST]
Most important outcome for today: [KEY GOAL]
Please help me:
1. Create a prioritized task list for today with time estimates
2. Draft the opening paragraph for the most important email I need to send
3. Create three talking points for my [MEETING NAME] meeting
Step 2: Use this prompt for five consecutive mornings.
Step 3: Evaluate: Does it save time? Does it produce useful output? What needs to be changed?
Step 4: Refine and save the final version to your prompt library.
Exercise 13: The Weekly Review Prompt
Develop a weekly review prompt that uses AI to synthesize your week and plan the next one.
Step 1: Outline what you want from a weekly review:
- A summary of what was accomplished
- Identification of what did not get done and why
- Priority setting for the coming week
- Any patterns to note
Step 2: Design a prompt that takes inputs you can realistically provide (your task list, your calendar summary, any notes) and produces the outputs you want.
Step 3: Run a trial weekly review using the prompt.
Step 4: Refine and save to your prompt library.
Exercise 14: Role-Specific Stack Documentation
Document your personal AI stack in a single reference page.
Your stack documentation should include:
- Primary chat tool and why you chose it
- Secondary tool(s) you use for specific task types
- Browser extensions installed
- File organization system and location
- Custom instructions (current version)
- Top 5 prompts with use cases
- Privacy configuration decisions
- (Developers) API environment and packages
This document serves as:
1. A reference for yourself when onboarding to a new machine or browser
2. A starting point if you want to share your setup with a colleague
3. A baseline for your next stack review (conduct quarterly)
Exercise 15: Team AI Environment Audit (Group Exercise)
If you work in a team, audit your team's collective AI environment and identify gaps and opportunities.
Step 1: Survey your team (2-5 people) on their current AI tool usage:
- What tools do they use?
- How often do they use them?
- What do they use them for?
- What privacy settings do they have configured?
- Do they have custom instructions?
- Do they save prompts?
Step 2: Compile the results. Identify:
- Where is there duplicated effort that could be shared (e.g., multiple people developing similar prompts independently)?
- Where are there privacy risks (professional content in consumer-tier accounts)?
- Where is there significant variance in usage that might indicate skill gaps?
Step 3: Based on the audit, identify one or two team-level improvements:
- A shared prompt library
- A shared custom instructions template
- A team privacy policy for AI tools
- A short knowledge-sharing session
Step 4: If you have the organizational authority, implement one improvement. If not, document the finding and bring it to a team discussion.
Part D: Technical Setup (Exercises 16-20 — For Developers)
Exercise 16: Python Environment Verification
Verify that your Python development environment is correctly set up for AI API work.
Step 1: Confirm Python 3.8+ is installed:
python --version
Step 2: Install required packages in a virtual environment:
python -m venv ai-env
source ai-env/bin/activate # On Windows: ai-env\Scripts\activate
pip install anthropic openai python-dotenv
Step 3: Create a .env file and add your API keys.
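For reference, a minimal .env file has one line per key. The values below are placeholders (never commit real keys); the variable names match those read by the chapter's helper code:

```text
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-your-key-here
```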
Step 4: Run the basic test scripts from the chapter to confirm both API connections work.
Step 5: Verify your .gitignore includes .env.
Deliverable: A working Python environment with both API connections tested.
Exercise 17: Build a Reusable AI Utility Module
Take the helper functions from the chapter and build them into a proper utility module you will reuse across projects.
# ai_utils.py
import os

import anthropic
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()


def ask_claude(
    prompt: str,
    system: str = "",
    model: str = "claude-opus-4-6",
    max_tokens: int = 1024,
    temperature: float = 1.0
) -> str:
    """
    Send a prompt to Claude and return the response text.

    Args:
        prompt: The user message.
        system: Optional system prompt.
        model: The Claude model to use.
        max_tokens: Maximum tokens in the response.
        temperature: Sampling temperature (0.0 to 1.0).

    Returns:
        Response text as a string.
    """
    client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
    kwargs = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}]
    }
    if system:
        kwargs["system"] = system
    message = client.messages.create(**kwargs)
    return message.content[0].text


def ask_gpt(
    prompt: str,
    system: str = "",
    model: str = "gpt-4o",
    max_tokens: int = 1024,
    temperature: float = 1.0
) -> str:
    """
    Send a prompt to GPT and return the response text.

    Args:
        prompt: The user message.
        system: Optional system prompt.
        model: The OpenAI model to use.
        max_tokens: Maximum tokens in the response.
        temperature: Sampling temperature (0.0 to 1.0).

    Returns:
        Response text as a string.
    """
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        max_tokens=max_tokens,
        temperature=temperature
    )
    return response.choices[0].message.content
Add at least one additional utility function relevant to your work. For example:
- summarize_document(text: str) -> str
- generate_variations(text: str, n: int) -> list[str]
- review_code(code: str, language: str) -> str
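For instance, summarize_document might be sketched as below. This is meant to be appended to ai_utils.py, so ask_claude is already defined above it; the prompt wording and the 150-word default are arbitrary illustrative choices, not from the chapter:

```python
def build_summary_prompt(text: str, max_words: int = 150) -> str:
    """Construct the summarization instruction sent to the model."""
    return (
        f"Summarize the following document in at most {max_words} words. "
        "Preserve key names, numbers, and decisions.\n\n"
        f"Document:\n{text}"
    )


def summarize_document(text: str, max_words: int = 150) -> str:
    """Summarize a document using the ask_claude helper defined above."""
    return ask_claude(build_summary_prompt(text, max_words))
```

Separating the prompt construction from the API call keeps the prompt itself easy to test and refine without spending tokens.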
Test your module with real work tasks.
Exercise 18: Build a Simple Prompt Runner Script
Create a command-line script that runs a prompt from your prompt library against an AI model.
#!/usr/bin/env python3
"""
prompt_runner.py — Run saved prompts against AI models from the command line.
Usage: python prompt_runner.py --prompt "path/to/prompt.txt" --model claude
"""
import argparse
import sys
from pathlib import Path
from ai_utils import ask_claude, ask_gpt
def load_prompt(path: str) -> str:
    """Load a prompt from a text file."""
    prompt_path = Path(path)
    if not prompt_path.exists():
        print(f"Error: Prompt file not found: {path}", file=sys.stderr)
        sys.exit(1)
    return prompt_path.read_text(encoding="utf-8")


def main() -> None:
    parser = argparse.ArgumentParser(description="Run an AI prompt from a file.")
    parser.add_argument("--prompt", required=True, help="Path to the prompt text file")
    parser.add_argument(
        "--model",
        choices=["claude", "gpt"],
        default="claude",
        help="Which AI model to use (default: claude)"
    )
    parser.add_argument("--system", default="", help="Optional system prompt text")
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=1024,
        help="Maximum response tokens (default: 1024)"
    )
    args = parser.parse_args()

    prompt_text = load_prompt(args.prompt)
    print(f"Running prompt with {args.model}...\n", file=sys.stderr)

    if args.model == "claude":
        result = ask_claude(prompt_text, system=args.system, max_tokens=args.max_tokens)
    else:
        result = ask_gpt(prompt_text, system=args.system, max_tokens=args.max_tokens)

    print(result)


if __name__ == "__main__":
    main()
Test this script with a real prompt file from your prompt library.
Exercise 19: Multi-Model Comparison Script
Build a script that runs the same prompt on multiple models and saves the outputs for comparison.
#!/usr/bin/env python3
"""
model_compare.py — Run the same prompt on multiple AI models and save outputs.
"""
import argparse
import json
from datetime import datetime
from pathlib import Path
from ai_utils import ask_claude, ask_gpt
def compare_models(
    prompt: str,
    system: str = "",
    max_tokens: int = 1024,
    output_dir: str = "comparison_outputs"
) -> dict:
    """
    Run the same prompt on Claude and GPT-4o, return and save results.

    Args:
        prompt: The prompt to run on both models.
        system: Optional system prompt.
        max_tokens: Maximum response tokens.
        output_dir: Directory to save comparison outputs.

    Returns:
        Dictionary with model names as keys and response text as values.
    """
    results = {}
    models = {
        "claude-opus-4-6": lambda p, s, m: ask_claude(p, system=s, max_tokens=m),
        "gpt-4o": lambda p, s, m: ask_gpt(p, system=s, max_tokens=m)
    }

    for model_name, runner in models.items():
        print(f"Running {model_name}...")
        results[model_name] = runner(prompt, system, max_tokens)

    output_path = Path(output_dir)
    output_path.mkdir(exist_ok=True)
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    output_file = output_path / f"comparison_{timestamp}.json"

    output_data = {
        "timestamp": timestamp,
        "prompt": prompt,
        "system": system,
        "results": results
    }
    output_file.write_text(json.dumps(output_data, indent=2), encoding="utf-8")
    print(f"\nResults saved to: {output_file}")

    return results


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Compare AI model outputs.")
    parser.add_argument("prompt", help="The prompt to compare")
    parser.add_argument("--system", default="", help="Optional system prompt")
    args = parser.parse_args()

    results = compare_models(args.prompt, system=args.system)
    for model, response in results.items():
        print(f"\n{'=' * 50}")
        print(f"MODEL: {model}")
        print(f"{'=' * 50}")
        print(response)
Use this script to compare outputs for a real work task. Document what you observe about the differences.
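When reviewing saved comparison files, a small helper like this can quantify one easy-to-miss difference, response length. It is a hypothetical addition, not part of the chapter's scripts; it only assumes the JSON layout that model_compare.py writes:

```python
import json
from pathlib import Path


def comparison_word_counts(path: str) -> dict:
    """Load a saved comparison JSON and return each model's output word count."""
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    return {model: len(text.split()) for model, text in data["results"].items()}
```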
Exercise 20: API Environment Documentation
Create a README file for your AI development environment that documents everything a colleague (or future you) would need to reproduce your setup.
The README should include:
- Required packages and versions
- Environment variable setup (a .env template with placeholder values)
- How to run the basic test scripts
- Directory structure and what each file does
- Any custom configuration or utility functions
- Notes on cost management and rate limiting
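As a starting point, a skeleton covering those sections might look like this. The file names match the scripts from this chapter; the prompts/example.txt path is a hypothetical placeholder:

```markdown
# AI Development Environment

## Requirements
- Python 3.8+
- anthropic, openai, python-dotenv (pin exact versions in requirements.txt)

## Environment variables
Copy .env.example to .env and fill in ANTHROPIC_API_KEY and OPENAI_API_KEY.
Never commit .env (it is in .gitignore).

## Running the test scripts
python prompt_runner.py --prompt prompts/example.txt --model claude

## Directory structure
ai_utils.py        - shared helper functions
prompt_runner.py   - run a saved prompt from the command line
model_compare.py   - run one prompt on multiple models and save outputs

## Cost notes
Default max_tokens is 1024; raise it deliberately for long outputs.
```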
This document becomes part of your AI environment reference material (from Exercise 14) and is particularly valuable when setting up on a new machine or onboarding a colleague to your workflow.