Case Study 2: Building a Multi-Module Application
Creating a Project with 15+ Files While Maintaining Consistency Throughout
Background
Marcus Chen is a data analyst transitioning into software development. He has completed the first twelve chapters of this textbook and can work effectively with AI on single-file projects. Now he wants to build his first real multi-file application: an inventory management system for the small electronics repair shop he runs on weekends.
The application needs to track parts inventory, customer repairs, supplier orders, and generate reports. Marcus estimates this will require 15-20 files across multiple packages. He has never structured a multi-file Python project from scratch, so he plans to lean heavily on AI assistance while applying the techniques from this chapter.
Phase 1: Architecture and Planning (Session 1)
Marcus begins with a planning prompt:
I want to build an inventory management system for an electronics repair
shop. It needs to:
- Track parts (name, SKU, quantity, cost, supplier, minimum stock level)
- Track customer repairs (customer info, device, issue, parts used, status, cost)
- Track supplier orders (supplier, parts ordered, quantities, status, dates)
- Generate reports (low stock alerts, revenue summary, repair history)
Please suggest a project structure with separate modules for models,
services, CLI interface, and storage. Use Python with SQLite for storage.
Target approximately 15-20 files.
The AI proposes the following structure:
repair-shop/
├── pyproject.toml
├── src/
│ └── repair_shop/
│ ├── __init__.py
│ ├── main.py
│ ├── models/
│ │ ├── __init__.py
│ │ ├── part.py
│ │ ├── repair.py
│ │ ├── supplier.py
│ │ └── order.py
│ ├── services/
│ │ ├── __init__.py
│ │ ├── inventory_service.py
│ │ ├── repair_service.py
│ │ ├── order_service.py
│ │ └── report_service.py
│ ├── storage/
│ │ ├── __init__.py
│ │ ├── database.py
│ │ └── repositories.py
│ ├── cli/
│ │ ├── __init__.py
│ │ ├── commands.py
│ │ └── formatters.py
│ └── errors.py
└── tests/
├── __init__.py
├── test_models.py
├── test_services.py
└── test_storage.py
That is 19 Python files under src/, plus the test files and pyproject.toml, hitting his target. Marcus saves the directory tree for use in all future prompts.
Phase 2: Establishing Conventions (Session 1, continued)
Before generating any code, Marcus asks the AI to produce a style guide:
For this project, please create a concise style guide covering:
- Naming conventions for classes, functions, variables, constants, files
- Docstring format
- Type hint requirements
- Error handling approach
- Import conventions
- Data model patterns
The project uses Python 3.11+, SQLite via the sqlite3 standard library
module, and dataclasses for models.
The AI produces a style guide that Marcus saves as a reusable preamble:
## repair-shop Style Guide
Naming: PascalCase classes, snake_case functions/methods/variables,
UPPER_SNAKE_CASE constants, snake_case file names.
Docstrings: Google-style on all public classes and functions.
Include Args, Returns, Raises sections where applicable.
Type hints: Required on all function signatures including return types.
Use Optional[X] for nullable parameters. Use list[X], dict[X, Y] lowercase.
Error handling: Custom exceptions in errors.py. Services raise domain
exceptions; CLI layer catches and displays user-friendly messages.
Imports: Absolute imports only (from repair_shop.models.part import Part).
Standard library first, then third-party, then project imports.
Models: Frozen dataclasses with __post_init__ validation.
IDs are integers. Timestamps are datetime objects.
Phase 3: Holistic Model Generation (Session 2)
Marcus generates all four model files plus the errors module in a single holistic prompt:
Using this style guide and project structure, please generate all model
files and the errors.py module:
[paste style guide]
[paste directory tree]
Models needed:
1. Part: id, sku, name, description, quantity, unit_cost, supplier_id,
min_stock_level, created_at, updated_at
2. Repair: id, customer_name, customer_phone, device_type, device_model,
issue_description, status (pending/in_progress/completed/cancelled),
parts_used (list of tuples: part_id, quantity), labor_cost, created_at,
completed_at
3. Supplier: id, name, contact_name, email, phone, address
4. Order: id, supplier_id, items (list of tuples: part_id, quantity,
unit_cost), status (pending/shipped/received/cancelled), ordered_at,
received_at
Also generate errors.py with: NotFoundError, ValidationError,
InsufficientStockError, DuplicateError.
Also generate all __init__.py files for the models package that
re-export the main classes.
The AI generates all files in one response. Marcus reviews them and is satisfied with the consistency: all models use frozen dataclasses, all have __post_init__ validation, all follow the naming conventions, and all use the same timestamp patterns.
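A model written to these conventions might look like the following sketch. It is illustrative rather than Marcus's actual output, and the `ValidationError` class is inlined here (in the real project it lives in `errors.py`); some fields are omitted for brevity:

```python
from dataclasses import dataclass, field
from datetime import datetime


class ValidationError(Exception):
    """Stand-in for repair_shop.errors.ValidationError."""


@dataclass(frozen=True)
class Part:
    """A stocked part.

    Raises:
        ValidationError: If sku is empty or quantity is negative.
    """
    id: int
    sku: str
    name: str
    quantity: int
    unit_cost: float
    supplier_id: int
    min_stock_level: int
    created_at: datetime = field(default_factory=datetime.now)
    updated_at: datetime = field(default_factory=datetime.now)

    def __post_init__(self) -> None:
        # Frozen dataclasses still run __post_init__, so validation
        # happens exactly once, at construction time.
        if not self.sku:
            raise ValidationError("sku must not be empty")
        if self.quantity < 0:
            raise ValidationError("quantity must be non-negative")
```

Because the dataclass is frozen, any attempt to mutate a field after construction raises `dataclasses.FrozenInstanceError`, which is what makes the "services return new model instances" pattern safe.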
Phase 4: Storage Layer (Session 3)
For the storage layer, Marcus provides the model interfaces as context:
[paste style guide]
[paste directory tree]
Here are the model definitions (interface only):
[paste class signatures and fields from all four model files]
Please generate:
1. database.py - SQLite connection management with context manager,
table creation, and migration support
2. repositories.py - Repository classes for each model with CRUD operations
Follow the same conventions as the models. Each repository should have:
get_by_id, get_all, create, update, delete methods.
The PartRepository also needs: get_low_stock(threshold) and
get_by_supplier(supplier_id).
Marcus notes that he provides model interfaces rather than full files, saving context window space while giving the AI everything it needs to write correct SQL and serialization code.
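A minimal sketch of the two storage pieces, under the assumption that connections commit on success and roll back on error; only the `get_low_stock` query is shown, and the names here are illustrative rather than copied from Marcus's files:

```python
import sqlite3
from contextlib import contextmanager
from typing import Iterator


@contextmanager
def open_db(db_path: str) -> Iterator[sqlite3.Connection]:
    """Yield a connection that commits on success and rolls back on error."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become addressable by column name
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()


class PartRepository:
    """One slice of the repository: the low-stock query from the prompt."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self._conn = conn

    def get_low_stock(self, threshold: int) -> list[sqlite3.Row]:
        # Parameterized query: never interpolate values into SQL strings.
        return self._conn.execute(
            "SELECT * FROM parts WHERE quantity <= ?", (threshold,)
        ).fetchall()
```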
Phase 5: File-by-File Service Generation (Sessions 4-7)
The services are the most complex layer. Marcus switches to file-by-file generation, generating the first service (inventory_service) with full attention, then using it as a consistency reference for the remaining services.
Session 4: Inventory Service
[paste style guide]
[paste directory tree]
[paste import map]
Here are the interfaces the InventoryService needs:
- Part model: [paste Part class definition]
- PartRepository: [paste PartRepository class with method signatures]
- Errors: NotFoundError, ValidationError, InsufficientStockError
Please generate inventory_service.py with methods:
- add_part(sku, name, ...) -> Part
- update_stock(part_id, quantity_change) -> Part
- check_stock(part_id) -> int
- get_low_stock_alerts() -> list[Part]
- use_parts_for_repair(parts: list[tuple[int, int]]) -> None
(deducts stock, raises InsufficientStockError if not enough)
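The all-or-nothing semantics of use_parts_for_repair can be sketched as below. The dict-backed stock and the inlined exception classes are stand-ins for the real PartRepository and errors.py, so the example runs on its own:

```python
class NotFoundError(Exception):
    """Stand-in for repair_shop.errors.NotFoundError."""


class InsufficientStockError(Exception):
    """Stand-in for repair_shop.errors.InsufficientStockError."""


class InventoryService:
    def __init__(self, stock: dict[int, int]) -> None:
        # Illustrative: the real service takes a PartRepository, not a dict.
        self._stock = stock

    def use_parts_for_repair(self, parts: list[tuple[int, int]]) -> None:
        """Deduct stock for (part_id, quantity) pairs, all-or-nothing."""
        # First pass: verify every line before touching anything, so a
        # failure partway through never leaves stock partially deducted.
        for part_id, qty in parts:
            if part_id not in self._stock:
                raise NotFoundError(f"part {part_id} not found")
            if self._stock[part_id] < qty:
                raise InsufficientStockError(
                    f"part {part_id}: need {qty}, have {self._stock[part_id]}"
                )
        # Second pass: deduct.
        for part_id, qty in parts:
            self._stock[part_id] -= qty
```

The two-pass check-then-deduct structure is the design decision worth carrying into the consistency reference: every later service that touches stock inherits it.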
Session 5: Repair Service (using consistency reference)
[paste style guide]
[paste import map]
Consistency reference - follow this file's patterns exactly:
[paste inventory_service.py in full]
Interfaces needed:
- Repair model: [paste Repair class definition]
- RepairRepository: [paste method signatures]
- InventoryService.use_parts_for_repair: (parts: list[tuple[int, int]]) -> None
Please generate repair_service.py with methods:
- create_repair(customer_name, ...) -> Repair
- update_repair_status(repair_id, status) -> Repair
- add_parts_to_repair(repair_id, parts) -> Repair
- complete_repair(repair_id, labor_cost) -> Repair
- get_repairs_by_status(status) -> list[Repair]
Marcus continues this pattern for the order service and report service, always providing the inventory service as the consistency reference.
Phase 6: CLI Layer (Session 8)
The CLI layer is relatively straightforward. Marcus uses holistic generation since the commands and formatters are tightly coupled:
[paste style guide]
[paste directory tree]
The CLI uses Python's argparse module. Here are all the service interfaces:
[paste method signatures from all four services]
Please generate:
1. commands.py - CLI command handlers using argparse subcommands
(inventory, repair, order, report subcommands, each with sub-subcommands)
2. formatters.py - Functions to format model objects as readable
terminal output (tables for lists, detailed views for single items)
3. main.py - Entry point that wires everything together
The CLI should catch all domain exceptions and display user-friendly
error messages.
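The sub-subcommand wiring plus the catch-and-translate rule might look like this sketch. Only one subcommand is shown, the dispatch just echoes the parse, and the exception class is a stand-in for the project's errors.py:

```python
import argparse
import sys


class NotFoundError(Exception):
    """Stand-in for repair_shop.errors.NotFoundError."""


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="repair_shop")
    top = parser.add_subparsers(dest="command", required=True)

    # "inventory" subcommand with its own sub-subcommands.
    inventory = top.add_parser("inventory")
    inv_sub = inventory.add_subparsers(dest="subcommand", required=True)
    add = inv_sub.add_parser("add")
    add.add_argument("--sku", required=True)
    add.add_argument("--name", required=True)
    add.add_argument("--quantity", type=int, required=True)
    return parser


def main(argv: list[str]) -> int:
    args = build_parser().parse_args(argv)
    try:
        # Real code dispatches to the service layer here.
        print(f"{args.command} {args.subcommand}: {args.sku}")
        return 0
    except NotFoundError as exc:
        # The CLI layer owns the translation from domain exception
        # to user-friendly message; services never print.
        print(f"Error: {exc}", file=sys.stderr)
        return 1
```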
Phase 7: Tests (Session 9)
Marcus generates all test files holistically, since they follow similar patterns:
[paste style guide]
Here are the files to test:
[paste interfaces of all service classes]
[paste interfaces of all repository classes]
[paste model definitions]
Please generate comprehensive tests:
1. test_models.py - Test model creation, validation, and edge cases
2. test_services.py - Test service methods with mock repositories
3. test_storage.py - Test repository CRUD with an in-memory SQLite database
Use pytest. Use unittest.mock for mocking dependencies in service tests.
Each test function should have a descriptive name starting with test_.
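The mock-repository pattern for the service tests might look like the sketch below. The tiny `InventoryService` and `NotFoundError` here are stand-ins so the example runs on its own, not Marcus's generated code:

```python
from unittest.mock import Mock


class NotFoundError(Exception):
    """Stand-in for repair_shop.errors.NotFoundError."""


class InventoryService:
    """Stand-in service: repositories are injected, so tests can mock them."""

    def __init__(self, repo) -> None:
        self._repo = repo

    def check_stock(self, part_id: int) -> int:
        part = self._repo.get_by_id(part_id)
        if part is None:
            raise NotFoundError(f"part {part_id} not found")
        return part.quantity


def test_check_stock_returns_quantity_from_repository():
    repo = Mock()
    repo.get_by_id.return_value = Mock(quantity=7)
    assert InventoryService(repo).check_stock(1) == 7
    repo.get_by_id.assert_called_once_with(1)


def test_check_stock_raises_not_found_for_missing_part():
    repo = Mock()
    repo.get_by_id.return_value = None
    try:
        InventoryService(repo).check_stock(99)
        assert False, "expected NotFoundError"
    except NotFoundError:
        pass
```

Because the services take repositories through their constructors, the mocks slot in without any patching, which keeps every test in test_services.py shaped the same way.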
Phase 8: Consistency Verification (Session 10)
With all 19 files generated, Marcus runs a comprehensive consistency check:
I have generated all files for my project. Please review these files
for consistency:
[paste all service files]
[paste errors.py]
Check for:
1. Consistent parameter naming across similar methods
2. Consistent error handling (all using custom exceptions from errors.py)
3. Consistent docstring format (Google-style with Args/Returns/Raises)
4. Consistent return types for similar operations
5. No missing type hints
The AI finds three issues:
- **Naming inconsistency:** `repair_service.py` uses `repair_id` as a parameter name, but `order_service.py` uses `oid` in one method. Marcus fixes it to `order_id`.
- **Missing error handling:** `order_service.py` does not raise `NotFoundError` when an order is not found, unlike the other services. Marcus adds the check.
- **Docstring gap:** Two private helper methods in `report_service.py` lack docstrings. While the convention requires docstrings only for public methods, Marcus decides to add them for completeness.
Phase 9: Integration and Final Testing (Session 11)
Marcus asks the AI to trace the complete flow for a key use case:
Please trace the complete flow when a user runs:
python -m repair_shop inventory add --sku "CAP-100UF" --name "100uF Capacitor"
--quantity 50 --cost 0.25 --supplier-id 1 --min-stock 10
Starting from main.py through the CLI, service, repository, and database
layers. Verify that the data types are consistent at each boundary.
[paste main.py, commands.py, inventory_service.py, repositories.py, database.py]
The AI traces the flow and confirms that the types line up at each boundary. It also recommends adding input validation in the CLI layer for cost (which must be a positive decimal) and quantity (which must be a positive integer); Marcus adds both checks.
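One way to wire in that validation is through argparse's `type` parameter, which runs a callable on the raw string and reports failures as clean usage errors. The helper names below are illustrative, not from Marcus's project:

```python
import argparse
from decimal import Decimal, InvalidOperation


def positive_int(value: str) -> int:
    """argparse type for --quantity: a strictly positive integer."""
    try:
        n = int(value)
    except ValueError:
        raise argparse.ArgumentTypeError(f"{value!r} is not an integer")
    if n <= 0:
        raise argparse.ArgumentTypeError("must be a positive integer")
    return n


def positive_decimal(value: str) -> Decimal:
    """argparse type for --cost: a strictly positive decimal amount."""
    try:
        d = Decimal(value)
    except InvalidOperation:
        raise argparse.ArgumentTypeError(f"{value!r} is not a number")
    if d <= 0:
        raise argparse.ArgumentTypeError("must be a positive number")
    return d
```

Passed as `add_argument("--cost", type=positive_decimal)`, a bad value produces a normal argparse usage message instead of a traceback, keeping validation out of the service layer.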
Results
Marcus's final project has 19 Python files totaling approximately 2,800 lines of code. The entire development process took 11 AI sessions over 5 days (working evenings and weekends). Here is what he tracked:
| Metric | Value |
|---|---|
| Total files | 19 |
| Total lines of code | ~2,800 |
| AI sessions used | 11 |
| Consistency issues found | 3 (all caught in verification) |
| Import errors | 0 (thanks to import map) |
| Circular dependencies | 0 (thanks to dependency rules) |
| Generation approach | Hybrid (holistic for models/tests, file-by-file for services) |
Lessons Learned
What worked well:
- **The style guide preamble was essential.** Including it in every session prevented most consistency issues. The three issues found in verification were all in sessions where Marcus had abbreviated the preamble to save time.
- **The consistency reference pattern was highly effective.** Using `inventory_service.py` as a reference for the other services resulted in remarkably consistent code. The services look like they were written by one developer in one sitting.
- **Holistic generation was right for models and tests.** The model files are tightly coupled (they reference each other's types) and benefit from being generated together. Tests benefit from consistent mocking and assertion patterns.
- **File-by-file generation was right for services.** Each service has complex internal logic that benefits from focused prompts. The AI produced higher-quality method implementations when it only had to focus on one service at a time.
- **The import map prevented all import errors.** Marcus did not encounter a single incorrect import path because every session included the exact import statements to use.
What Marcus would do differently:
- **Generate `__init__.py` files explicitly with each package.** He initially forgot some `__init__.py` files and had to go back and generate them, which required a separate session.
- **Include error handling patterns in the style guide from the start.** The initial style guide mentioned custom exceptions but did not specify the exact pattern for checking and raising them. This led to the inconsistency in `order_service.py`.
- **Test after each layer, not all at the end.** Running the model tests immediately after generating models would have caught a minor validation issue earlier, saving time in later sessions.
- **Keep a running changelog of decisions.** Marcus made several design decisions during the process (like using tuples for parts lists instead of a separate junction model) that he had to re-explain in later sessions. A running log of decisions would have made this easier.
Marcus's Workflow Template
Based on his experience, Marcus created a reusable workflow for multi-file projects:
- Plan: Architecture and structure (1 session)
- Conventions: Style guide and dependency rules (same session as planning)
- Models: Generate holistically with error types (1 session)
- Storage/Data layer: Generate with model interfaces as context (1 session)
- Services: Generate file-by-file with consistency reference (1 session per service)
- Interface layer: Generate holistically (CLI, API, or UI) (1 session)
- Tests: Generate holistically with all interfaces as context (1 session)
- Verify: Consistency check across all files (1 session)
- Integration: End-to-end flow tracing and testing (1 session)
This template serves as Marcus's go-to approach for any multi-file project he builds with AI assistance going forward.