Case Study 2: Optimizing an Existing Developer Setup for AI
An Experienced Developer Adding AI Tools to Their Workflow
Background
James is a 28-year-old backend developer with four years of experience. He works at a mid-size software company building Python web services. His daily tools include VS Code, Git, Python 3.11, and Docker. He is comfortable with the command line and has a well-configured macOS development environment with Homebrew, Zsh with Oh My Zsh, and multiple virtual environments managed across various projects.
James has been curious about AI coding tools but has not used them beyond occasionally pasting code into ChatGPT. After reading the first three chapters of this book, he wants to integrate Claude Code, GitHub Copilot, and Cursor into his existing workflow without disrupting what already works.
This case study follows James as he audits his current setup, adds AI tools, resolves conflicts, and establishes a workflow that combines traditional development practices with AI assistance.
Step 1: Auditing the Current Environment
James starts by taking stock of what he already has. He opens his terminal (iTerm2 with Zsh) and runs a series of diagnostic commands:
python3 --version
# Python 3.11.6
node --version
# v18.17.1
git --version
# git version 2.42.0
code --version
# 1.85.2
brew --version
# Homebrew 4.2.3
Everything looks good, but he notices his Python version is 3.11, not the latest 3.12. He decides to upgrade by installing the newer version with Homebrew (brew upgrade only works on formulas that are already installed):
brew install python@3.12
After the upgrade, he checks that his existing projects still work by running their test suites. They do — Homebrew manages Python versions cleanly on macOS.
He also checks his VS Code extensions:
code --list-extensions
He sees his existing extensions: Python, Pylance, GitLens, Docker, REST Client, and a few theme extensions. No AI extensions are installed yet.
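The manual checks above can be wrapped in a small script so the audit is repeatable. A sketch in Python; the tool list and report format are illustrative, not a standard convention:

```python
import shutil
import subprocess

# The tools James checks by hand; adjust this list for your own setup.
TOOLS = ["python3", "node", "git", "code", "brew"]

def audit(tools=TOOLS):
    """Return {tool: first line of --version output}, or None for tools not on PATH."""
    report = {}
    for tool in tools:
        if shutil.which(tool) is None:
            report[tool] = None
            continue
        result = subprocess.run([tool, "--version"], capture_output=True, text=True)
        out = (result.stdout or result.stderr).strip()
        report[tool] = out.splitlines()[0] if out else ""
    return report
```

Running `audit()` before installing anything new gives a snapshot you can diff against later if an AI tool's installer changes something underneath you.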
Lesson learned: Before adding new tools, audit what you already have. James's systematic check confirmed that his foundation was solid and identified one version to update. This prevents the common mistake of installing new tools on top of an unstable base.
Step 2: Installing Claude Code
James already has Node.js 18, so he can install Claude Code directly:
npm install -g @anthropic-ai/claude-code
He verifies the installation:
claude --version
The version number appears. Now he needs his API key.
James goes to console.anthropic.com, creates an account, and generates an API key. Rather than setting it as a global environment variable in his shell profile (which would make it available to every project), he decides on a more nuanced approach.
Strategy: Per-project API keys using direnv
James uses direnv, a tool that automatically loads and unloads environment variables when you enter and leave a directory. This is more sophisticated than a global shell variable and allows different API keys for different projects.
brew install direnv
He adds the direnv hook to his ~/.zshrc:
eval "$(direnv hook zsh)"
Now, for each project where he wants Claude Code, he creates an .envrc file:
cd ~/projects/my-web-service
echo 'export ANTHROPIC_API_KEY="sk-ant-his-key-here"' > .envrc
direnv allow
When he cds into that directory, the API key is automatically loaded. When he leaves, it is unloaded. This is cleaner than a global variable and allows him to use different keys for work and personal projects if needed.
He adds .envrc to his global .gitignore:
echo ".envrc" >> ~/.gitignore_global
git config --global core.excludesfile ~/.gitignore_global
Lesson learned: Experienced developers benefit from per-project environment management rather than global variables. Tools like direnv provide automatic loading and unloading of environment variables based on your current directory, which is both more secure and more flexible.
Step 3: Integrating Claude Code into an Existing Project
James opens one of his existing Python projects — a REST API built with FastAPI. He wants to see how Claude Code handles a real, complex codebase.
cd ~/projects/inventory-api
claude
Claude Code starts up and indexes his project. James notices it reads his directory structure and understands the project layout. He starts with a practical task:
Add request rate limiting middleware to the FastAPI application.
Limit to 100 requests per minute per IP address.
Claude Code analyzes the existing codebase, identifies the FastAPI app instance in main.py, and proposes adding a middleware using the slowapi library. It shows the changes it would make to main.py and the addition of slowapi to requirements.txt.
James reviews the proposed changes carefully. He notices that Claude Code correctly identified the existing middleware chain and inserted the new middleware in the right position. He accepts the changes and runs his test suite:
pytest tests/ -v
All existing tests pass. He writes a few additional tests for the rate limiting functionality, guided by Claude Code, and commits the changes.
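Conceptually, what the slowapi middleware adds is a per-IP sliding-window counter. A minimal standalone sketch of that logic (this is not slowapi's actual API; the class and method names are illustrative):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds for each key (e.g. an IP)."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        """Record a request for `key` and report whether it is within the limit."""
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

In a real FastAPI app this check would live in middleware that reads the client IP from the request and returns HTTP 429 when `allow` is False.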
Lesson learned: When introducing Claude Code to an existing project, start with a self-contained task that does not require understanding the entire codebase. Rate limiting, adding logging, or writing documentation are good first tasks. This lets you evaluate the AI's understanding of your code before trusting it with more complex changes.
Step 4: Installing and Configuring GitHub Copilot
James has a GitHub Pro account through his company. He installs Copilot in VS Code:
code --install-extension GitHub.copilot
He opens VS Code, signs in to GitHub when prompted, and verifies Copilot is active by checking the status bar icon.
He immediately notices Copilot suggestions appearing as he types in his existing project. Some suggestions are helpful — it correctly suggests the next line of a function he is writing based on the patterns in his codebase. Other suggestions are less useful — it suggests code patterns that do not match his team's conventions.
Tuning Copilot for his workflow:
James configures Copilot to be less aggressive in certain contexts. He opens VS Code settings and adds:
{
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "yaml": true,
    "dockerfile": true,
    "markdown": false
  }
}
He disables Copilot for Markdown files because he writes technical documentation and does not want AI-generated suggestions there. He keeps it enabled for YAML and Dockerfiles because those formats are highly repetitive and benefit from auto-completion.
Establishing keyboard muscle memory:
James spends 30 minutes deliberately practicing the Copilot workflow:
- Tab to accept a suggestion
- Esc to dismiss a suggestion
- Alt+] / Option+] to see the next alternative suggestion
- Alt+[ / Option+[ to see the previous alternative suggestion
- Ctrl+Enter to open the Copilot suggestions panel with multiple options
He finds that cycling through alternatives (Alt+]) is particularly useful — the first suggestion is not always the best one.
Lesson learned: Copilot works best when you learn its keyboard shortcuts and develop a rhythm of accepting, rejecting, and cycling through suggestions. Spending 30 minutes deliberately practicing this workflow pays dividends in daily productivity.
Step 5: Setting Up Cursor Alongside VS Code
James is curious about Cursor but does not want to abandon VS Code. He decides to install Cursor and use it for specific types of work.
He downloads Cursor from cursor.com and installs it. On first launch, he imports his VS Code settings and extensions. The transition is seamless — Cursor looks and feels exactly like his VS Code setup.
He experiments with Cursor's AI features on a refactoring task. He has a 200-line function in his inventory API that needs to be broken into smaller functions. In Cursor, he selects the entire function, presses Ctrl+K, and types:
Refactor this into three smaller functions: one for validation, one for database operations, and one for response formatting. Keep the same behavior.
Cursor proposes the refactoring inline, showing exactly what will change. James reviews the diff, makes one minor adjustment (renaming a function), and accepts the changes.
He is impressed. This task would have taken him 15 minutes manually. Cursor did it in under a minute, and the result is clean.
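The shape of such a validation/database/response split can be illustrated with a toy version. All names here are hypothetical, and a plain dict stands in for the real database layer:

```python
def validate_item(payload: dict) -> dict:
    """Validation step: reject malformed input before touching the database."""
    if not payload.get("name"):
        raise ValueError("item name is required")
    if payload.get("quantity", 0) < 0:
        raise ValueError("quantity must be non-negative")
    return payload

def save_item(payload: dict, db: dict) -> int:
    """Database step: persist the item and return its new id."""
    item_id = len(db) + 1
    db[item_id] = payload
    return item_id

def format_response(item_id: int, payload: dict) -> dict:
    """Response step: shape the API response body."""
    return {"id": item_id, "name": payload["name"], "status": "created"}

def create_item(payload: dict, db: dict) -> dict:
    """What was one long handler reduces to three named steps plus orchestration."""
    payload = validate_item(payload)
    item_id = save_item(payload, db)
    return format_response(item_id, payload)
```

The payoff is that each step can now be tested in isolation, which is exactly what makes this kind of refactor safe to accept from an AI tool.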
James's two-editor strategy:
After a week of experimentation, James settles on a workflow:
- VS Code with Copilot for day-to-day coding: writing new code line by line, fixing small bugs, working with Docker and YAML files
- Cursor for refactoring and structural changes: breaking up large functions, reorganizing code, applying patterns across multiple files
- Claude Code (CLI) for complex reasoning tasks: debugging tricky issues, understanding unfamiliar code, planning architectural changes
This three-tool approach gives him the right tool for each type of task.
Step 6: Resolving Configuration Conflicts
James encounters a few conflicts when running multiple AI tools:
Conflict 1: Copilot and Cursor completions overlapping
When he opens a Python file in Cursor, both Copilot and Cursor's built-in completions offer suggestions. The suggestions sometimes conflict or appear on top of each other.
Resolution: James disables the Copilot extension when working in Cursor, since Cursor has its own completion engine. He keeps Copilot active in VS Code. He creates two separate VS Code settings profiles — one for "VS Code + Copilot" and one that he can import into Cursor without Copilot.
Conflict 2: Multiple .env management approaches
James's existing projects use python-dotenv with .env files. His new direnv setup also manages environment variables. When both are active, variables can be loaded twice or conflict.
Resolution: He standardizes on direnv for environment variables that are specific to the development environment (API keys, debug flags) and python-dotenv for application configuration that needs to work in production (database URLs, feature flags). He documents this convention in his project README files.
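The precedence convention can be expressed in code. A sketch, using a minimal stand-in parser instead of python-dotenv, in which direnv-managed process variables win over file-based application config:

```python
import os

def load_env_file(text: str) -> dict:
    """Minimal .env-style parser (stand-in for python-dotenv): KEY=VALUE lines only."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values

def get_config(name: str, file_values: dict, default=None):
    """Convention: the process environment (set by direnv) overrides the .env file."""
    return os.environ.get(name, file_values.get(name, default))
```

Making the lookup order explicit in one helper keeps the two mechanisms from silently shadowing each other.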
Conflict 3: Git hooks with AI-modified files
James has pre-commit hooks that run black (code formatting), mypy (type checking), and pytest (tests) before every commit. When Claude Code modifies files, the pre-commit hooks sometimes fail because Claude's generated code does not exactly match his formatting conventions or type annotations.
Resolution: He adds a step to his Claude Code workflow: after accepting changes, he runs black and mypy manually before committing. He also adds instructions to his Claude Code prompts: "Follow PEP 8, use type hints consistent with the rest of the codebase, and ensure compatibility with mypy strict mode."
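The post-acceptance step can be automated with a small wrapper that runs the same gates as the pre-commit hooks. A sketch; the commands mirror James's hooks and are assumed to be on PATH:

```python
import subprocess

# The same quality gates as the pre-commit hooks (assumed installed and on PATH).
CHECKS = [
    ["black", "--check", "."],
    ["mypy", "--strict", "."],
    ["pytest", "tests/", "-q"],
]

def run_checks(checks=CHECKS):
    """Run each quality gate and return the names of the commands that failed."""
    failed = []
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failed.append(cmd[0])
    return failed
```

Running this after every batch of accepted AI changes, rather than discovering failures at commit time, keeps the hook failures from interrupting the commit flow.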
Lesson learned: When combining multiple AI tools with an existing development setup, configuration conflicts are inevitable. The key is to identify them early, resolve them systematically, and document the conventions so they become habits.
Step 7: Establishing an AI-Enhanced Workflow
After two weeks of experimentation, James documents his optimized workflow:
Morning routine:
1. Open the terminal and navigate to the current project (direnv loads the environment automatically)
2. Run git pull to get the latest changes
3. Open VS Code for day-to-day coding with Copilot
For new features:
1. Start in Claude Code to plan the approach: "I need to add user authentication to the inventory API. What's the best approach given the current codebase?"
2. Review Claude's plan and refine it through conversation
3. Switch to VS Code to implement, using Copilot for line-by-line coding
4. Use Claude Code for any complex logic that requires reasoning
For refactoring:
1. Open the project in Cursor
2. Use Ctrl+K for inline refactoring of individual functions
3. Use Composer (Ctrl+I) for changes that span multiple files
4. Run the test suite after each refactoring step
For debugging:
1. Start with Claude Code: paste the error message and ask for analysis
2. Claude Code can read the relevant source files and suggest fixes
3. Apply the fix in VS Code or Cursor
4. Run tests to verify
For code review:
1. Use Claude Code to review a diff: claude "Review the changes in the last commit for potential issues"
2. Address any concerns Claude raises
3. Use VS Code's GitLens to annotate and understand the change history
Step 8: Measuring the Impact
After one month of AI-enhanced development, James tracks some informal metrics:
| Metric | Before AI Tools | After AI Tools | Change |
|---|---|---|---|
| Lines of code per day | ~150 | ~300 | +100% |
| Time spent on boilerplate | ~30% of day | ~10% of day | -67% |
| Bugs found in code review | ~5 per PR | ~2 per PR | -60% |
| Time to write tests | ~45 min per feature | ~15 min per feature | -67% |
| Time debugging unfamiliar code | ~2 hours | ~30 minutes | -75% |
James is cautious about these numbers — they are self-reported estimates over a short period. But the trend is clear: AI tools are making him significantly more productive, especially for tasks he previously found tedious (writing tests, boilerplate) or frustrating (debugging unfamiliar code).
The biggest surprise is code quality. He expected AI tools to introduce more bugs, but the opposite happened. Claude Code catches issues he might have missed, and Copilot's suggestions often include edge case handling he would have forgotten.
James's Recommendations for Experienced Developers
- Do not replace your entire workflow; augment it. Your existing setup works for a reason. Add AI tools as additional capabilities, not replacements.
- Match the tool to the task. Copilot for completions, Claude Code for reasoning, Cursor for refactoring. Using the wrong tool for a task creates friction.
- Invest time in learning keyboard shortcuts. The speed of AI-assisted development depends heavily on how quickly you can accept, reject, and modify AI suggestions.
- Add instructions to your prompts about code conventions. AI tools do not know your team's style guide unless you tell them. Include formatting, naming, and architectural conventions in your prompts.
- Run your existing quality tools (linters, formatters, tests) after every AI modification. AI-generated code should meet the same quality bar as hand-written code.
- Use per-project environment management. Tools like direnv provide cleaner isolation than global environment variables, especially when managing multiple API keys.
- Document your AI workflow. Write down which tool you use for which task. This helps you be intentional rather than randomly switching between tools.
- Give it two weeks. The first few days feel slower because you are learning new tools. By the end of the second week, the productivity gains become unmistakable.
Key Takeaways from This Case Study
- Experienced developers should audit their existing setup before adding AI tools. Ensure the foundation (Python version, Git config, terminal setup) is solid first.
- Per-project API key management with tools like direnv is more secure and flexible than global environment variables for developers working across multiple projects.
- Multiple AI tools can coexist but require intentional configuration to avoid conflicts with overlapping completions, duplicate environment variables, and formatting inconsistencies.
- The three-tool approach (Copilot for completions, Claude Code for reasoning, Cursor for refactoring) provides comprehensive AI coverage without any single tool being a bottleneck.
- Existing quality tools (formatters, linters, type checkers, tests) remain essential. AI-generated code must pass the same quality bar as hand-written code.
- The productivity gains from AI tools are most dramatic for tedious tasks like writing tests, handling boilerplate, and debugging unfamiliar code, which is exactly the work experienced developers find least engaging.
- Two weeks of intentional practice is the tipping point where AI tools start to feel natural rather than disruptive, and productivity gains become measurable.