Case Study 1: Building a Task Management API
Overview
In this case study, we walk through the complete process of building a production-quality Task Management REST API using FastAPI. We use AI prompting at every stage -- from initial design through implementation, testing, and documentation. The resulting API supports full CRUD operations, JWT authentication, role-based authorization, request validation, and auto-generated OpenAPI documentation.
This case study demonstrates how iterative prompting (covered in Chapter 11) applies specifically to backend development. Rather than trying to generate the entire API in a single prompt, we build it layer by layer, reviewing and refining at each step.
The Requirements
Our fictional client needs an API for a task management application with the following requirements:
- Users can register and log in
- Tasks have a title, description, status (todo, in_progress, done), priority (1-5), due date, and assignee
- Users can only see and modify their own tasks (unless they are admins)
- Admins can see and modify all tasks
- The API must include filtering, sorting, and pagination for task lists
- All endpoints must be documented with OpenAPI
- Proper error handling with consistent error response format
Phase 1: Data Models and Schemas
We start by defining the data structures. This is where Pydantic shines, and where AI can save significant time.
Prompt 1: Core Schemas
"Create Pydantic models for a task management API. I need: - UserCreate (registration), UserResponse (public profile), UserLogin - TaskCreate, TaskUpdate (partial), TaskResponse - Tasks have: title (required, 1-200 chars), description (optional, max 2000 chars), status (enum: todo, in_progress, done), priority (1-5, default 3), due_date (optional, must be future), assignee_id (optional) - Include proper Field descriptions for OpenAPI docs - Use Python 3.12 type hints"
The AI generates comprehensive Pydantic models. Here is what we get for the task schemas:
from datetime import date, datetime
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field, field_validator


class TaskStatus(str, Enum):
    """Valid task statuses."""

    TODO = "todo"
    IN_PROGRESS = "in_progress"
    DONE = "done"


class TaskCreate(BaseModel):
    """Schema for creating a new task."""

    title: str = Field(
        ..., min_length=1, max_length=200,
        description="The title of the task",
    )
    description: Optional[str] = Field(
        None, max_length=2000,
        description="Detailed description of the task",
    )
    status: TaskStatus = Field(
        default=TaskStatus.TODO,
        description="Current status of the task",
    )
    priority: int = Field(
        default=3, ge=1, le=5,
        description="Priority from 1 (highest) to 5 (lowest)",
    )
    due_date: Optional[date] = Field(
        None, description="Due date for the task",
    )
    assignee_id: Optional[int] = Field(
        None, description="User ID of the assignee",
    )

    @field_validator("due_date")
    @classmethod
    def due_date_must_be_future(cls, v: Optional[date]) -> Optional[date]:
        if v is not None and v < date.today():
            raise ValueError("Due date must be in the future")
        return v


class TaskUpdate(BaseModel):
    """Schema for partially updating a task."""

    title: Optional[str] = Field(None, min_length=1, max_length=200)
    description: Optional[str] = Field(None, max_length=2000)
    status: Optional[TaskStatus] = None
    priority: Optional[int] = Field(None, ge=1, le=5)
    due_date: Optional[date] = None
    assignee_id: Optional[int] = None


class TaskResponse(BaseModel):
    """Schema for task in API responses."""

    id: int
    title: str
    description: Optional[str]
    status: TaskStatus
    priority: int
    due_date: Optional[date]
    assignee_id: Optional[int]
    owner_id: int
    created_at: datetime
    updated_at: datetime
Review Notes
The AI-generated schemas are solid, but we notice two things to improve:
- The TaskUpdate model allows all fields to be None, which means a client could send an empty body. We add a model validator to require at least one field.
- The due_date validator does not account for updates -- when updating a task, the existing due date might already be in the past. We adjust the validator to only enforce the future constraint on creation.
These are exactly the kinds of edge cases that AI often misses and that human review catches.
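The "at least one field" fix mentioned above can be sketched as follows, assuming Pydantic v2's model validators (field list trimmed for brevity):

```python
from typing import Optional

from pydantic import BaseModel, Field, model_validator


class TaskUpdate(BaseModel):
    """Partial-update schema; only a subset of fields shown."""

    title: Optional[str] = Field(None, min_length=1, max_length=200)
    priority: Optional[int] = Field(None, ge=1, le=5)

    @model_validator(mode="after")
    def require_at_least_one_field(self) -> "TaskUpdate":
        # model_fields_set holds only the fields the client actually sent,
        # so an empty request body is rejected here.
        if not self.model_fields_set:
            raise ValueError("At least one field must be provided")
        return self
```

Using model_fields_set rather than checking for None values also lets a client explicitly set a field to null without tripping the validator.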
Phase 2: Authentication
Prompt 2: JWT Authentication
"Add JWT authentication to the task management API. Include: - Password hashing with bcrypt via passlib - Registration endpoint that checks for duplicate email - Login endpoint returning access token (30 min) and refresh token (7 days) - A get_current_user dependency that extracts user from the JWT - A require_admin dependency that checks user role - Use python-jose for JWT operations"
The AI generates the authentication layer. Key pieces:
from datetime import datetime, timedelta, timezone

from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")


def create_access_token(data: dict, expires_delta: timedelta) -> str:
    """Create a JWT access token with an expiration time."""
    to_encode = data.copy()
    expire = datetime.now(timezone.utc) + expires_delta
    to_encode.update({"exp": expire, "type": "access"})
    return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)


async def get_current_user(token: str = Depends(oauth2_scheme)) -> dict:
    """Extract and validate the current user from a JWT token."""
    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": "Bearer"},
    )
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        if payload.get("type") != "access":
            raise credentials_exception
        user_id = payload.get("sub")
        if user_id is None:
            raise credentials_exception
    except JWTError:
        raise credentials_exception
    user = users_db.get(user_id)
    if user is None:
        raise credentials_exception
    return user


async def require_admin(
    current_user: dict = Depends(get_current_user),
) -> dict:
    """Dependency that requires the current user to be an admin."""
    if current_user.get("role") != "admin":
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail="Admin access required",
        )
    return current_user
Review Notes
The AI correctly separates access and refresh tokens with a type field in the JWT payload -- this prevents refresh tokens from being used as access tokens. We verify that the SECRET_KEY is loaded from environment variables rather than hardcoded, and add token revocation support (a simple in-memory set of revoked token IDs for now).
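The revocation layer we added can be sketched as below (function and variable names are ours, not from the generated code); get_current_user would call is_revoked after decoding the JWT and reject the request if it returns True:

```python
import uuid

# Naive in-memory revocation list. Fine for a single process; a shared
# store such as Redis is needed once the API runs on multiple instances.
revoked_jtis: set[str] = set()


def new_jti() -> str:
    """Generate a unique token ID to embed in the JWT's 'jti' claim."""
    return uuid.uuid4().hex


def revoke_token(jti: str) -> None:
    """Mark a token ID as revoked (e.g. on logout)."""
    revoked_jtis.add(jti)


def is_revoked(jti: str) -> bool:
    """Checked by the auth dependency after the JWT is decoded."""
    return jti in revoked_jtis
```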
Phase 3: CRUD Endpoints
Prompt 3: Task CRUD with Authorization
"Create FastAPI router for task CRUD operations. Requirements: - GET /api/tasks - List tasks with filtering (status, priority, assignee), sorting (created_at, priority, due_date), and pagination (page, per_page) - GET /api/tasks/{task_id} - Get single task - POST /api/tasks - Create task (owner is automatically the current user) - PUT /api/tasks/{task_id} - Full update - PATCH /api/tasks/{task_id} - Partial update - DELETE /api/tasks/{task_id} - Delete task - Regular users can only access their own tasks - Admins can access all tasks - Use the Pydantic schemas and auth dependencies from earlier - Return proper status codes: 201 for create, 204 for delete, 404 for not found"
from datetime import datetime, timezone
from typing import Optional

from fastapi import APIRouter, Depends, HTTPException, Query, status

router = APIRouter(prefix="/api/tasks", tags=["tasks"])


@router.get("/", response_model=PaginatedResponse[TaskResponse])
async def list_tasks(
    status_filter: Optional[TaskStatus] = Query(None, alias="status"),
    priority: Optional[int] = Query(None, ge=1, le=5),
    assignee_id: Optional[int] = None,
    sort_by: str = Query("created_at", pattern="^(created_at|priority|due_date)$"),
    sort_order: str = Query("desc", pattern="^(asc|desc)$"),
    page: int = Query(1, ge=1),
    per_page: int = Query(20, ge=1, le=100),
    current_user: dict = Depends(get_current_user),
):
    """List tasks with filtering, sorting, and pagination."""
    # Start with all tasks or the user's tasks based on role
    if current_user["role"] == "admin":
        filtered = list(tasks_db.values())
    else:
        filtered = [t for t in tasks_db.values() if t["owner_id"] == current_user["id"]]

    # Apply filters
    if status_filter is not None:
        filtered = [t for t in filtered if t["status"] == status_filter]
    if priority is not None:
        filtered = [t for t in filtered if t["priority"] == priority]
    if assignee_id is not None:
        filtered = [t for t in filtered if t["assignee_id"] == assignee_id]

    # Sort; tasks missing the sort key (e.g. no due_date) compare as
    # largest rather than raising a TypeError against None
    reverse = sort_order == "desc"
    filtered.sort(
        key=lambda t: (t.get(sort_by) is None, t.get(sort_by)),
        reverse=reverse,
    )

    # Paginate
    total = len(filtered)
    start = (page - 1) * per_page
    end = start + per_page
    items = filtered[start:end]
    return {
        "items": items,
        "total": total,
        "page": page,
        "per_page": per_page,
        "total_pages": (total + per_page - 1) // per_page,
    }


@router.post("/", response_model=TaskResponse, status_code=status.HTTP_201_CREATED)
async def create_task(
    task: TaskCreate,
    current_user: dict = Depends(get_current_user),
):
    """Create a new task. The current user is automatically set as the owner."""
    task_id = next_id()
    now = datetime.now(timezone.utc)
    task_data = {
        "id": task_id,
        **task.model_dump(),
        "owner_id": current_user["id"],
        "created_at": now,
        "updated_at": now,
    }
    tasks_db[task_id] = task_data
    return task_data
Review Notes
The AI correctly handles the authorization logic -- regular users only see their own tasks, admins see all tasks. The pagination implementation is clean and follows standard conventions. We note that the in-memory storage (dictionaries) will be replaced with database queries in Chapter 18, but the endpoint logic and authorization patterns will remain the same.
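The list endpoint's response_model references a PaginatedResponse generic that is not shown in the excerpt above; a minimal Pydantic v2 sketch of what it would look like, matching the dict the endpoint returns:

```python
from typing import Generic, TypeVar

from pydantic import BaseModel

T = TypeVar("T")


class PaginatedResponse(BaseModel, Generic[T]):
    """Generic pagination envelope; parametrize with the item schema."""

    items: list[T]
    total: int
    page: int
    per_page: int
    total_pages: int
```

FastAPI resolves PaginatedResponse[TaskResponse] to a concrete model, so the OpenAPI docs show the full item schema inside the envelope.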
Phase 4: Error Handling
Prompt 4: Consistent Error Responses
"Add a consistent error handling system to the task management API. Create a custom APIError exception class and global exception handlers. Error responses should always have the format: {error: {code: 'ERROR_CODE', message: 'Human-readable message', details: {}}}. Handle: validation errors (422), not found (404), auth errors (401, 403), and unexpected errors (500). In production mode, do not expose stack traces."
The AI generates a comprehensive error handling system. This is one of the most valuable uses of AI for backend development -- the error handling boilerplate is tedious to write manually but critical for a production API.
class APIError(Exception):
    """Base API error with structured error response."""

    def __init__(self, status_code: int, code: str, message: str,
                 details: dict | None = None):
        self.status_code = status_code
        self.code = code
        self.message = message
        self.details = details or {}


class TaskNotFoundError(APIError):
    """Raised when a task is not found."""

    def __init__(self, task_id: int):
        super().__init__(
            status_code=404,
            code="TASK_NOT_FOUND",
            message=f"Task with ID {task_id} not found",
            details={"task_id": task_id},
        )
Phase 5: Testing
We prompt the AI to generate a comprehensive test suite. See Case Study Code (code/case-study-code.py) for the full implementation.
Prompt 5: Test Suite
"Create a pytest test suite for the task management API using httpx AsyncClient. Test: - User registration and login - CRUD operations on tasks - Authorization (users cannot access others' tasks, admins can) - Validation errors (invalid data returns 422) - Error handling (not found returns 404) - Pagination and filtering Use fixtures for authenticated clients and test data."
Lessons Learned
What AI Did Well
- Pydantic model generation. The AI consistently produced well-structured models with appropriate Field constraints and validators. This is one of AI's strongest areas for backend development.
- Standard CRUD patterns. The generated CRUD endpoints followed established conventions for status codes, response formats, and error handling.
- JWT authentication flow. The authentication implementation was security-aware, with separate token types, proper expiration, and bcrypt hashing.
- OpenAPI documentation. Every endpoint automatically received accurate documentation through FastAPI's integration with Pydantic models.
What Required Human Intervention
- Edge cases in validation. The due_date validator did not account for the update scenario. This required manual review and adjustment.
- Authorization boundary conditions. The initial implementation did not handle the case where a user tries to assign a task to someone else. We added a check: regular users can only assign tasks to themselves.
- Rate limiting configuration. The AI suggested in-memory rate limiting, which we noted would not work in a multi-instance deployment.
- Secret management. We had to explicitly verify that no secrets were hardcoded and that all sensitive values came from environment variables.
Prompting Strategy
The iterative approach -- building one layer at a time -- was essential. Each prompt was focused on a specific concern (schemas, auth, CRUD, errors, tests), which produced higher quality output than a single massive prompt. After each generation step, we reviewed the output, identified issues, and addressed them either through follow-up prompts or manual edits.
Final Architecture
The completed API has:
- 13 endpoints across auth and task routers
- 8 Pydantic models for requests and responses
- JWT authentication with access and refresh tokens
- Role-based authorization (user and admin)
- Comprehensive validation with custom error messages
- Consistent error responses with structured error codes
- Auto-generated OpenAPI docs accessible at /docs
- 45 test cases covering happy paths and edge cases
The complete code for this case study is available in code/case-study-code.py.