In This Chapter
- Learning Objectives
- Introduction
- 19.1 The Full-Stack Integration Challenge
- 19.2 Architecture Planning with AI
- 19.3 Connecting Frontend to Backend
- 19.4 State Synchronization
- 19.5 User Authentication End-to-End
- 19.6 File Uploads and Media Handling
- 19.7 Real-Time Features with WebSockets
- 19.8 Deployment Considerations
- 19.9 Environment Configuration
- 19.10 Building a Complete Full-Stack App
- Summary
- Key Terms
Chapter 19: Full-Stack Application Development
Learning Objectives
After completing this chapter, you will be able to:
- Identify the key integration challenges that arise when combining frontend, backend, and database layers into a single application (Remember)
- Explain how AI assistants help bridge the knowledge gap between frontend and backend development (Understand)
- Design a full-stack architecture with clearly defined API contracts between layers (Apply)
- Implement a React frontend that communicates with a FastAPI backend through RESTful API calls (Apply)
- Configure CORS policies, environment variables, and deployment settings for a full-stack application (Apply)
- Build an end-to-end authentication flow with JWT tokens, protected routes, and session management (Apply)
- Integrate file upload functionality spanning frontend forms, backend processing, and cloud storage (Apply)
- Implement real-time features using WebSocket connections between client and server (Apply)
- Analyze state synchronization strategies to keep client and server data consistent (Analyze)
- Evaluate deployment architectures and environment configuration strategies for production readiness (Evaluate)
- Create a complete full-stack application from scratch using AI-assisted development workflows (Create)
Introduction
In Chapter 16, you built interactive frontends with React. In Chapter 17, you designed RESTful APIs with FastAPI. In Chapter 18, you modeled databases and wrote queries with SQLAlchemy. Each of those chapters focused on a single layer in isolation. The real challenge — and the real power — emerges when you stitch those layers together into a cohesive application.
Full-stack development is where the complexity multiplies. Your frontend needs to know exactly what shape the backend's responses will take. Your backend needs to validate and transform data before it reaches the database. Authentication tokens must flow seamlessly from login forms through HTTP headers to protected endpoints and back. File uploads must travel from browser file pickers through multipart form requests to server-side processing and cloud storage. Real-time features require persistent connections that behave differently from the request-response pattern you have used so far.
This is also where AI assistants shine brightest. Most developers are stronger on one side of the stack than the other. A frontend specialist might struggle with database migrations. A backend developer might find React's state management confusing. AI coding assistants bridge this gap by bringing deep knowledge of every layer simultaneously. When you describe an end-to-end feature — "I need a login page that authenticates against my FastAPI backend and stores the JWT token for subsequent requests" — the AI can generate code for every layer at once, ensuring they fit together correctly.
This chapter walks you through the integration challenges one by one. By the end, you will have built a complete full-stack application and developed the mental models needed to tackle any integration problem with AI assistance.
Note
This chapter assumes you are comfortable with the material from Chapters 16 through 18. If React components, FastAPI endpoints, or SQLAlchemy models feel unfamiliar, review those chapters before continuing. We will reference specific patterns from each.
19.1 The Full-Stack Integration Challenge
Building a full-stack application is fundamentally different from building its individual layers. The difficulty is not in the frontend code or the backend code or the database schema — it is in the seams between them. Let us examine what makes these seams so tricky.
The Impedance Mismatch Problem
Each layer of a full-stack application speaks a different language. The frontend thinks in components, props, and state updates. The backend thinks in endpoints, request bodies, and response schemas. The database thinks in tables, rows, and foreign keys. Data must be translated at every boundary:
Browser (JavaScript objects)
↕ JSON serialization/deserialization
API Layer (Python dictionaries / Pydantic models)
↕ ORM mapping
Database (SQL rows and columns)
A User on the frontend might be { firstName: "Ada", lastName: "Lovelace", email: "ada@example.com" }. The same user on the backend is a Pydantic model with first_name, last_name, and email fields — note the naming convention change from camelCase to snake_case. In the database, it is a row in a users table with a created_at timestamp and a hashed password that must never be sent to the frontend.
These translations are where bugs hide. A missing field, a wrong data type, a naming mismatch — any of these can cause errors that are difficult to trace because the symptom appears in one layer but the cause lives in another.
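These boundary translations can be made mechanical rather than ad hoc. The sketch below (plain Python, standard library only; the function names are illustrative, not from any framework) shows how camelCase keys from a frontend payload can be rewritten to snake_case before crossing into the backend:

```python
import re

def camel_to_snake(key: str) -> str:
    """Convert a camelCase key like 'firstName' to 'first_name'."""
    return re.sub(r'(?<!^)(?=[A-Z])', '_', key).lower()

def translate_payload(payload: dict) -> dict:
    """Recursively rewrite dict keys from camelCase to snake_case."""
    return {
        camel_to_snake(k): translate_payload(v) if isinstance(v, dict) else v
        for k, v in payload.items()
    }

frontend_user = {"firstName": "Ada", "lastName": "Lovelace", "email": "ada@example.com"}
backend_user = translate_payload(frontend_user)
# backend_user == {"first_name": "Ada", "last_name": "Lovelace", "email": "ada@example.com"}
```

Centralizing the translation in one function means a naming mismatch can only hide in one place instead of in every component and endpoint.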
The Coordination Problem
When you change one layer, you often need to change the others to match. Adding a new field to a database table means updating the SQLAlchemy model, the Pydantic schema, the API endpoint, and the frontend component that displays it. Miss any one of these, and the feature breaks.
In traditional development, this coordination is manual and error-prone. Developers maintain API documentation, write integration tests, and hold cross-team meetings to stay synchronized. With AI assistance, you can describe the change once and have the AI generate updates across all layers simultaneously.
The Environment Problem
Each layer has its own runtime environment with its own configuration needs. The frontend needs to know the API's URL. The backend needs database connection strings, secret keys, and third-party API credentials. The database needs connection pool settings and migration configurations. These settings differ between development, staging, and production environments.
Managing these configurations correctly — without accidentally committing secrets to version control or hardcoding URLs that change between environments — requires discipline and tooling that we will cover in Section 19.9.
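A lightweight way to centralize these settings is a single configuration object that reads from environment variables with safe development defaults. This is a minimal standard-library sketch; the variable names (DATABASE_URL, SECRET_KEY, FRONTEND_ORIGIN) are conventions we are assuming, not requirements:

```python
import os
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Settings:
    """Central configuration, read once from the environment."""
    database_url: str = field(
        default_factory=lambda: os.environ.get(
            "DATABASE_URL", "sqlite:///./dev.db"  # development fallback
        )
    )
    secret_key: str = field(
        default_factory=lambda: os.environ.get("SECRET_KEY", "dev-only-secret")
    )
    frontend_origin: str = field(
        default_factory=lambda: os.environ.get(
            "FRONTEND_ORIGIN", "http://localhost:5173"
        )
    )

settings = Settings()
# In production, each value comes from the deployment environment,
# never from code -- and secrets never land in version control.
```

In practice you might use a library such as pydantic-settings for this; the principle is the same either way.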
Intuition: Think of a full-stack application as three musicians playing together. Each is talented individually, but the performance only works if they are playing the same song, in the same key, at the same tempo. The "integration challenge" is the rehearsal process — getting them synchronized so the audience hears a single, coherent piece of music rather than three separate performances.
Where AI Assistants Help Most
AI coding assistants are particularly effective at full-stack integration for several reasons:
- Cross-layer knowledge. The AI understands React, FastAPI, and SQLAlchemy simultaneously. It can generate a frontend component, its corresponding API endpoint, and the database query in a single response.
- Convention awareness. The AI knows that JavaScript uses camelCase while Python uses snake_case, and it can handle the translation automatically when generating API client code.
- Boilerplate generation. Much of full-stack integration is boilerplate — CORS configuration, authentication middleware, API client setup. The AI generates this correctly and consistently.
- Error pattern recognition. When you paste a CORS error or a 422 Unprocessable Entity response, the AI immediately recognizes the pattern and suggests the fix across the relevant layers.
The prompt strategy for full-stack work differs from single-layer development. Instead of describing a component or an endpoint, you describe a feature that spans the stack:
Prompt: "I need a user registration feature. The React form should collect
username, email, and password. The FastAPI endpoint should validate the input,
hash the password, and store the user in a PostgreSQL database using SQLAlchemy.
Return appropriate error messages for duplicate usernames or emails."
This kind of prompt gives the AI enough context to generate coordinated code across all layers.
19.2 Architecture Planning with AI
Before writing code, a full-stack application needs an architecture — a high-level plan for how the pieces fit together. AI assistants are excellent architecture planning partners, but you need to ask the right questions.
Choosing a Project Structure
The first decision is how to organize your code. There are two main approaches for a React + FastAPI application:
Monorepo structure — frontend and backend in the same repository:
my-app/
├── frontend/
│ ├── src/
│ │ ├── components/
│ │ ├── pages/
│ │ ├── services/
│ │ └── App.tsx
│ ├── package.json
│ └── vite.config.ts
├── backend/
│ ├── app/
│ │ ├── models/
│ │ ├── routes/
│ │ ├── schemas/
│ │ └── main.py
│ ├── requirements.txt
│ └── alembic/
├── docker-compose.yml
└── README.md
Separate repositories — frontend and backend in different repositories, deployed independently.
For learning and small-to-medium projects, the monorepo structure is simpler. You can run both frontend and backend with a single docker-compose up command, and your AI assistant can see both codebases in a single context window. We will use the monorepo approach throughout this chapter.
Best Practice: When asking AI to scaffold a project, include the structure in your prompt. Say "Create a monorepo with a React frontend in /frontend and a FastAPI backend in /backend" rather than just "Create a full-stack app." The more specific your structure request, the more consistent the generated code will be.
Defining the API Contract
The API contract is the agreement between frontend and backend about what requests look like and what responses contain. Defining this contract before writing code prevents the most common integration bugs.
Ask your AI assistant to help define the contract:
Prompt: "I'm building a task management app. Define the REST API contract
for tasks. Each task has a title, description, status (todo/in-progress/done),
priority (low/medium/high), and belongs to a user. Include endpoints for CRUD
operations, filtering by status, and bulk status updates. Show the request and
response schemas for each endpoint."
The AI will produce something like:
POST /api/tasks → Create a task
GET /api/tasks → List tasks (with ?status=todo&priority=high filters)
GET /api/tasks/{id} → Get a single task
PUT /api/tasks/{id} → Update a task
DELETE /api/tasks/{id} → Delete a task
PATCH /api/tasks/bulk → Bulk update task statuses
Request body (POST /api/tasks):
{
"title": "string (required, 1-200 chars)",
"description": "string (optional, max 5000 chars)",
"status": "todo | in-progress | done (default: todo)",
"priority": "low | medium | high (default: medium)"
}
Response body (GET /api/tasks):
{
"tasks": [...],
"total": 42,
"page": 1,
"per_page": 20
}
This contract becomes the specification that both frontend and backend code must conform to. When you generate code for either layer, include the contract in your prompt context.
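Once the contract is pinned down, either side can encode it as executable checks. The sketch below expresses the task-creation rules from the contract in plain Python (standard library only, for illustration; in the actual backend these rules live in a Pydantic schema, as shown later in this chapter):

```python
VALID_STATUSES = {"todo", "in-progress", "done"}
VALID_PRIORITIES = {"low", "medium", "high"}

def validate_task_create(body: dict) -> dict:
    """Check a POST /api/tasks body against the contract; return the
    normalized task with defaults applied, or raise ValueError."""
    title = body.get("title", "")
    if not (1 <= len(title) <= 200):
        raise ValueError("title is required and must be 1-200 chars")
    description = body.get("description", "")
    if len(description) > 5000:
        raise ValueError("description must be at most 5000 chars")
    status = body.get("status", "todo")
    if status not in VALID_STATUSES:
        raise ValueError(f"status must be one of {VALID_STATUSES}")
    priority = body.get("priority", "medium")
    if priority not in VALID_PRIORITIES:
        raise ValueError(f"priority must be one of {VALID_PRIORITIES}")
    return {"title": title, "description": description,
            "status": status, "priority": priority}
```

Writing the contract down as code like this also gives you a checklist for the frontend: every rule here should have a matching fast-feedback check in the form.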
Data Flow Diagrams
For complex features, ask the AI to produce a data flow description before generating code:
Prompt: "Describe the complete data flow for a user uploading a profile
picture in our app. Cover the React component, the API request, the
backend processing, storage, and how the frontend displays the result."
Understanding the flow before coding helps you spot missing steps — like thumbnail generation or CDN cache invalidation — before they become bugs.
Technology Selection
AI assistants can help evaluate technology choices, but be specific about your constraints:
Prompt: "I'm choosing between SQLite and PostgreSQL for a full-stack app
that will have fewer than 1000 users. The app needs full-text search and
will be deployed on a single server. What are the tradeoffs?"
Common Pitfall: AI assistants sometimes suggest overcomplicated architectures. If you are building a simple application, push back when the AI suggests microservices, message queues, or Kubernetes. A monolithic FastAPI backend with a React frontend and a PostgreSQL database can handle the vast majority of applications. Start simple and add complexity only when you have a specific problem that demands it.
19.3 Connecting Frontend to Backend
The most fundamental integration task is getting your React frontend to communicate with your FastAPI backend. This involves three things: configuring CORS on the backend, making HTTP requests from the frontend, and handling the API responses.
CORS Configuration
Cross-Origin Resource Sharing (CORS) is a browser security feature that blocks requests from one origin (like http://localhost:5173, where Vite runs your React app) to a different origin (like http://localhost:8000, where FastAPI runs). Without CORS configuration, every API call from your frontend will fail with an error in the browser console.
Configure CORS in your FastAPI application:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
app = FastAPI()
# In development, allow requests from the Vite dev server
origins = [
"http://localhost:5173",
"http://localhost:3000",
]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
Common Pitfall: Using allow_origins=["*"] in production is a security risk. It allows any website to make requests to your API. Always list the specific origins that should be allowed. In development, listing your frontend's dev server URL is sufficient.
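One way to keep origins specific without hardcoding them is to read the list from an environment variable that differs per environment. A minimal sketch (the ALLOWED_ORIGINS variable name and its comma-separated format are conventions we are assuming, not a FastAPI feature):

```python
import os

def allowed_origins() -> list[str]:
    """Parse a comma-separated ALLOWED_ORIGINS env var into a list,
    falling back to the local dev servers."""
    raw = os.environ.get("ALLOWED_ORIGINS", "")
    origins = [o.strip() for o in raw.split(",") if o.strip()]
    return origins or ["http://localhost:5173", "http://localhost:3000"]

# Pass allowed_origins() as the allow_origins argument to CORSMiddleware.
```

In production you would set ALLOWED_ORIGINS to your real frontend domain, and the development fallback never applies.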
Making API Calls from React
Create a centralized API client rather than scattering fetch calls throughout your components. This gives you a single place to handle authentication headers, error responses, and base URL configuration:
// frontend/src/services/api.ts
const API_BASE_URL = import.meta.env.VITE_API_URL || 'http://localhost:8000';
class ApiClient {
private baseUrl: string;
constructor(baseUrl: string) {
this.baseUrl = baseUrl;
}
private async request<T>(
endpoint: string,
options: RequestInit = {}
): Promise<T> {
const token = localStorage.getItem('access_token');
const headers: HeadersInit = {
'Content-Type': 'application/json',
...options.headers,
};
if (token) {
(headers as Record<string, string>)['Authorization'] =
`Bearer ${token}`;
}
const response = await fetch(`${this.baseUrl}${endpoint}`, {
...options,
headers,
});
if (!response.ok) {
const error = await response.json().catch(() => ({}));
throw new ApiError(response.status, error.detail || 'Request failed');
}
return response.json();
}
async get<T>(endpoint: string): Promise<T> {
return this.request<T>(endpoint);
}
async post<T>(endpoint: string, data: unknown): Promise<T> {
return this.request<T>(endpoint, {
method: 'POST',
body: JSON.stringify(data),
});
}
async put<T>(endpoint: string, data: unknown): Promise<T> {
return this.request<T>(endpoint, {
method: 'PUT',
body: JSON.stringify(data),
});
}
async delete<T>(endpoint: string): Promise<T> {
return this.request<T>(endpoint, { method: 'DELETE' });
}
}
class ApiError extends Error {
status: number;
constructor(status: number, message: string) {
super(message);
this.status = status;
}
}
export const api = new ApiClient(API_BASE_URL);
This client automatically attaches JWT tokens (which we will implement in Section 19.5), handles JSON serialization, and provides a consistent error interface.
Handling Loading and Error States
Every API call has three possible states: loading, success, and error. Your components must handle all three. A custom hook simplifies this pattern:
// frontend/src/hooks/useApi.ts
import { useState, useEffect } from 'react';
import { api } from '../services/api';
interface UseApiResult<T> {
data: T | null;
loading: boolean;
error: string | null;
refetch: () => void;
}
function useApi<T>(endpoint: string): UseApiResult<T> {
const [data, setData] = useState<T | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const fetchData = async () => {
setLoading(true);
setError(null);
try {
const result = await api.get<T>(endpoint);
setData(result);
} catch (err) {
setError(err instanceof Error ? err.message : 'Unknown error');
} finally {
setLoading(false);
}
};
useEffect(() => {
fetchData();
}, [endpoint]);
return { data, loading, error, refetch: fetchData };
}
Use this hook in your components:
function TaskList() {
const { data: tasks, loading, error, refetch } = useApi<Task[]>('/api/tasks');
if (loading) return <Spinner />;
if (error) return <ErrorMessage message={error} onRetry={refetch} />;
return (
<ul>
{tasks?.map(task => <TaskItem key={task.id} task={task} />)}
</ul>
);
}
Naming Convention Translation
JavaScript uses camelCase (firstName), while Python uses snake_case (first_name). You have two options:
- Translate at the API boundary. FastAPI's Pydantic models can be configured to accept and return camelCase:
from datetime import datetime
from pydantic import BaseModel, ConfigDict
class TaskResponse(BaseModel):
model_config = ConfigDict(
alias_generator=lambda s: ''.join(
word.capitalize() if i else word
for i, word in enumerate(s.split('_'))
),
populate_by_name=True,
)
task_id: int
created_at: datetime
is_completed: bool
- Translate on the frontend. Use a utility function that converts response keys from snake_case to camelCase before they reach your components.
Either approach works. The important thing is to pick one and be consistent. When prompting your AI assistant, specify which convention you want: "Use camelCase in all API responses so the frontend can use them directly."
Best Practice: If you are using TypeScript on the frontend (which we recommend), generate TypeScript interfaces from your Pydantic models. Several tools automate this, and AI assistants can generate matching types from a Pydantic model definition. This ensures compile-time type safety across the stack.
19.4 State Synchronization
One of the hardest problems in full-stack development is keeping the frontend's view of the data consistent with what is actually in the database. This is the state synchronization problem.
The Challenge
Suppose two users are viewing the same task list. User A marks a task as "done." User B's screen still shows the task as "in-progress." How long should this inconsistency persist? What if User B tries to edit the now-completed task?
State synchronization strategies fall on a spectrum from simple to complex:
- Fetch on navigation. The simplest approach. Fetch fresh data every time the user navigates to a page. Stale data is possible within a page, but navigating away and back always shows current data.
- Polling. Fetch data at regular intervals (every 10-30 seconds). Reduces staleness but increases server load.
- Optimistic updates. Update the UI immediately when the user takes an action, then confirm with the server. If the server rejects the change, roll back the UI.
- Real-time sync. Use WebSockets to push updates from the server to all connected clients immediately (covered in Section 19.7).
For most applications, a combination of "fetch on navigation" and "optimistic updates" provides a good user experience without excessive complexity.
Implementing Optimistic Updates
Optimistic updates make the application feel instant. Here is the pattern:
async function toggleTaskComplete(taskId: number, currentStatus: boolean) {
// 1. Update the UI immediately
setTasks(prev => prev.map(task =>
task.id === taskId
? { ...task, isCompleted: !currentStatus }
: task
));
try {
// 2. Send the update to the server
await api.put(`/api/tasks/${taskId}`, {
isCompleted: !currentStatus,
});
} catch (error) {
// 3. If the server rejects, roll back the UI
setTasks(prev => prev.map(task =>
task.id === taskId
? { ...task, isCompleted: currentStatus }
: task
));
showErrorToast('Failed to update task. Please try again.');
}
}
Intuition: Optimistic updates are like saying "I'm sure this will work" and acting accordingly. Most of the time, the server confirms and everything is fine. On the rare occasion the server rejects the change, you apologize and undo your premature action. The user experience is dramatically better because they do not have to wait for a round trip to the server for every click.
Cache Invalidation Strategies
When you modify data through the API, you need to decide what cached data to invalidate. For example, after creating a new task, the task list cache is stale. Common strategies include:
- Refetch on mutation. After any POST, PUT, or DELETE, refetch the related list. Simple but can be slow for large lists.
- Update the cache directly. After the server confirms a creation, add the new item to the cached list without refetching. More complex but faster.
- Time-based expiration. Cache data for a fixed duration (such as 60 seconds) and refetch when it expires.
Libraries like React Query (TanStack Query) provide built-in cache management that handles many of these patterns. When prompting your AI assistant for data fetching code, specify whether you want vanilla React state or a library like React Query:
Prompt: "Create a task list component using TanStack Query for data fetching.
Include optimistic updates for toggling task completion. Use the API client
from our services/api.ts module."
Server-Side Validation as the Source of Truth
Regardless of your state synchronization strategy, the server is always the source of truth. Never trust the frontend to validate data correctly — always validate on the backend as well. The frontend validation is for user experience (fast feedback), while the backend validation is for data integrity (security and correctness).
# Backend validation — the authoritative check
from typing import Literal
from pydantic import BaseModel, Field
class TaskCreate(BaseModel):
title: str = Field(..., min_length=1, max_length=200)
description: str = Field(default="", max_length=5000)
status: Literal["todo", "in-progress", "done"] = "todo"
priority: Literal["low", "medium", "high"] = "medium"
The frontend should have matching validation for immediate feedback, but the backend's validation is what actually protects your data.
19.5 User Authentication End-to-End
Authentication is the feature that touches every layer of the stack. A user enters credentials in a frontend form. The backend verifies them against hashed passwords in the database. A token is issued, stored on the client, and sent with every subsequent request. Protected routes on both frontend and backend check the token before allowing access.
The JWT Authentication Flow
JSON Web Tokens (JWT) are the standard approach for API authentication. Here is the complete flow:
1. User enters email + password in React login form
2. Frontend sends POST /api/auth/login with credentials
3. Backend verifies credentials against database
4. Backend generates a JWT containing user ID and expiration time
5. Backend returns the JWT in the response body
6. Frontend stores the JWT in localStorage (or httpOnly cookie)
7. Frontend includes JWT in Authorization header for all subsequent requests
8. Backend middleware validates the JWT on protected endpoints
9. If JWT is expired or invalid, backend returns 401 Unauthorized
10. Frontend detects 401, clears stored token, redirects to login
Backend Authentication Implementation
Here is the backend side with FastAPI:
from datetime import datetime, timedelta, timezone
from typing import Optional
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt
from passlib.context import CryptContext
from pydantic import BaseModel
# Configuration
SECRET_KEY = "your-secret-key-change-in-production"
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30
# Password hashing
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
# OAuth2 scheme
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")
class TokenData(BaseModel):
user_id: int
exp: datetime
def verify_password(plain_password: str, hashed_password: str) -> bool:
return pwd_context.verify(plain_password, hashed_password)
def hash_password(password: str) -> str:
return pwd_context.hash(password)
def create_access_token(user_id: int) -> str:
expire = datetime.now(timezone.utc) + timedelta(
minutes=ACCESS_TOKEN_EXPIRE_MINUTES
)
payload = {"sub": str(user_id), "exp": expire}
return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)
async def get_current_user(token: str = Depends(oauth2_scheme)):
credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        sub = payload.get("sub")
        if sub is None:
            raise credentials_exception
        user_id = int(sub)
    except (JWTError, ValueError):
        raise credentials_exception
# Fetch user from database
user = await get_user_by_id(user_id)
if user is None:
raise credentials_exception
return user
Login and Registration Endpoints
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
# User, get_db, RegisterRequest, and LoginRequest come from your
# application's models and schemas modules
router = APIRouter(prefix="/api/auth", tags=["authentication"])
@router.post("/register")
async def register(request: RegisterRequest, db: Session = Depends(get_db)):
# Check for existing user
existing = db.query(User).filter(
(User.email == request.email) | (User.username == request.username)
).first()
if existing:
raise HTTPException(
status_code=400,
detail="Username or email already registered"
)
# Create new user
user = User(
username=request.username,
email=request.email,
hashed_password=hash_password(request.password),
)
db.add(user)
db.commit()
db.refresh(user)
# Return token so user is logged in immediately after registration
token = create_access_token(user.id)
return {"access_token": token, "token_type": "bearer"}
@router.post("/login")
async def login(request: LoginRequest, db: Session = Depends(get_db)):
user = db.query(User).filter(User.email == request.email).first()
if not user or not verify_password(request.password, user.hashed_password):
raise HTTPException(
status_code=401,
detail="Incorrect email or password"
)
token = create_access_token(user.id)
return {"access_token": token, "token_type": "bearer"}
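The RegisterRequest and LoginRequest schemas used by these endpoints are not shown above. A minimal sketch of what they might look like (the specific field constraints are illustrative choices, assuming Pydantic v2):

```python
from pydantic import BaseModel, Field

class RegisterRequest(BaseModel):
    # Constraint values here are illustrative, not mandated by the API
    username: str = Field(min_length=3, max_length=50)
    email: str = Field(max_length=254)
    password: str = Field(min_length=8)

class LoginRequest(BaseModel):
    email: str
    password: str
```

In a real application you might use Pydantic's EmailStr type for the email fields (it requires the separate email-validator package), so that malformed addresses are rejected at the schema boundary.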
Frontend Authentication
On the frontend, create an authentication context that manages the token and user state:
// frontend/src/contexts/AuthContext.tsx
import { createContext, useContext, useState, useEffect, ReactNode } from 'react';
import { api } from '../services/api';
interface User {
id: number;
username: string;
email: string;
}
interface AuthContextType {
user: User | null;
login: (email: string, password: string) => Promise<void>;
register: (username: string, email: string, password: string) => Promise<void>;
logout: () => void;
isAuthenticated: boolean;
}
const AuthContext = createContext<AuthContextType | undefined>(undefined);
export function AuthProvider({ children }: { children: ReactNode }) {
const [user, setUser] = useState<User | null>(null);
useEffect(() => {
// On mount, check if we have a stored token and fetch user data
const token = localStorage.getItem('access_token');
if (token) {
api.get<User>('/api/auth/me')
.then(setUser)
.catch(() => localStorage.removeItem('access_token'));
}
}, []);
const login = async (email: string, password: string) => {
const response = await api.post<{ access_token: string }>(
'/api/auth/login',
{ email, password }
);
localStorage.setItem('access_token', response.access_token);
const userData = await api.get<User>('/api/auth/me');
setUser(userData);
};
const register = async (
username: string, email: string, password: string
) => {
const response = await api.post<{ access_token: string }>(
'/api/auth/register',
{ username, email, password }
);
localStorage.setItem('access_token', response.access_token);
const userData = await api.get<User>('/api/auth/me');
setUser(userData);
};
const logout = () => {
localStorage.removeItem('access_token');
setUser(null);
};
return (
<AuthContext.Provider value={{
user,
login,
register,
logout,
isAuthenticated: !!user,
}}>
{children}
</AuthContext.Provider>
);
}
export function useAuth() {
const context = useContext(AuthContext);
if (!context) throw new Error('useAuth must be used within AuthProvider');
return context;
}
Protected Routes
On the frontend, wrap routes that require authentication:
// frontend/src/components/ProtectedRoute.tsx
import { ReactNode } from 'react';
import { Navigate } from 'react-router-dom';
import { useAuth } from '../contexts/AuthContext';
export function ProtectedRoute({ children }: { children: ReactNode }) {
const { isAuthenticated } = useAuth();
if (!isAuthenticated) {
return <Navigate to="/login" replace />;
}
return <>{children}</>;
}
On the backend, use the get_current_user dependency on any endpoint that requires authentication:
@router.get("/api/tasks")
async def list_tasks(current_user: User = Depends(get_current_user)):
# current_user is guaranteed to be authenticated here
tasks = get_tasks_for_user(current_user.id)
return tasks
Common Pitfall: Storing JWT tokens in localStorage is simple but vulnerable to XSS attacks. For higher security, use httpOnly cookies. This requires changing your CORS configuration to include allow_credentials=True and modifying the backend to set cookies instead of returning tokens in the response body. Ask your AI assistant: "Convert this JWT auth flow from localStorage to httpOnly cookies" for the specific changes needed.
19.6 File Uploads and Media Handling
File uploads involve coordination between a frontend file picker, an HTTP multipart request, backend file processing, and storage. This is a common source of integration bugs because it uses a different content type (multipart/form-data) than the JSON requests you have been making.
Frontend File Upload Component
// frontend/src/components/FileUpload.tsx
import { useState, useRef } from 'react';
import { api } from '../services/api';
interface FileUploadProps {
onUploadComplete: (url: string) => void;
accept?: string;
maxSizeMB?: number;
}
export function FileUpload({
onUploadComplete,
accept = "image/*",
maxSizeMB = 5
}: FileUploadProps) {
const [uploading, setUploading] = useState(false);
const [progress, setProgress] = useState(0);
const [error, setError] = useState<string | null>(null);
const fileInputRef = useRef<HTMLInputElement>(null);
const handleFileSelect = async (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (!file) return;
// Client-side validation
if (file.size > maxSizeMB * 1024 * 1024) {
setError(`File must be smaller than ${maxSizeMB}MB`);
return;
}
setUploading(true);
setError(null);
const formData = new FormData();
formData.append('file', file);
try {
const token = localStorage.getItem('access_token');
const response = await fetch(
`${import.meta.env.VITE_API_URL}/api/upload`,
{
method: 'POST',
headers: {
'Authorization': `Bearer ${token}`,
},
// Do NOT set Content-Type — the browser sets it
// automatically with the correct boundary for multipart
body: formData,
}
);
if (!response.ok) throw new Error('Upload failed');
const result = await response.json();
onUploadComplete(result.url);
} catch (err) {
setError('Upload failed. Please try again.');
} finally {
setUploading(false);
}
};
return (
<div>
<input
type="file"
ref={fileInputRef}
onChange={handleFileSelect}
accept={accept}
disabled={uploading}
/>
{uploading && <p>Uploading...</p>}
{error && <p className="error">{error}</p>}
</div>
);
}
Common Pitfall: When uploading files, do not set the Content-Type header manually. The browser must set it to multipart/form-data with the correct boundary string. If you set Content-Type: application/json or even Content-Type: multipart/form-data without the boundary, the upload will fail with a cryptic error.
Backend File Handling
import shutil
from pathlib import Path
from uuid import uuid4
from fastapi import APIRouter, File, UploadFile, Depends, HTTPException
router = APIRouter(prefix="/api", tags=["uploads"])
UPLOAD_DIR = Path("uploads")
UPLOAD_DIR.mkdir(exist_ok=True)
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
@router.post("/upload")
async def upload_file(
file: UploadFile = File(...),
current_user: User = Depends(get_current_user),
):
# Validate file extension
extension = Path(file.filename).suffix.lower()
if extension not in ALLOWED_EXTENSIONS:
raise HTTPException(
status_code=400,
detail=f"File type {extension} not allowed. "
f"Allowed: {', '.join(ALLOWED_EXTENSIONS)}"
)
# Validate file size
contents = await file.read()
if len(contents) > MAX_FILE_SIZE:
raise HTTPException(
status_code=400,
detail=f"File too large. Maximum size is "
f"{MAX_FILE_SIZE // (1024 * 1024)}MB"
)
# Generate unique filename to prevent collisions
unique_filename = f"{uuid4().hex}{extension}"
file_path = UPLOAD_DIR / unique_filename
# Write file to disk
with open(file_path, "wb") as f:
f.write(contents)
# In production, upload to cloud storage (S3, GCS) instead
file_url = f"/uploads/{unique_filename}"
return {"url": file_url, "filename": file.filename, "size": len(contents)}
Serving Uploaded Files
In development, serve uploaded files directly from FastAPI:
from fastapi.staticfiles import StaticFiles
app.mount("/uploads", StaticFiles(directory="uploads"), name="uploads")
In production, serve files from a CDN or cloud storage service. The backend should return the CDN URL rather than a local path.
Production Storage with Cloud Services
For production applications, store files in a cloud storage service rather than on the server's filesystem. This ensures files persist across deployments and can be served from a CDN for performance:
import boto3
from botocore.exceptions import ClientError
s3_client = boto3.client(
"s3",
aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
region_name=settings.AWS_REGION,
)
async def upload_to_s3(file_content: bytes, filename: str) -> str:
"""Upload a file to S3 and return the public URL."""
    key = f"uploads/{uuid4().hex}/{filename}"
s3_client.put_object(
Bucket=settings.S3_BUCKET_NAME,
Key=key,
Body=file_content,
ContentType="image/jpeg", # Set based on actual file type
)
return f"https://{settings.S3_BUCKET_NAME}.s3.amazonaws.com/{key}"
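The hardcoded ContentType above can be derived from the filename instead. A minimal sketch using the standard-library mimetypes module (the octet-stream fallback is a common convention, not something the chapter's code mandates):

```python
import mimetypes

def guess_content_type(filename: str) -> str:
    """Guess a MIME type from the file extension, with a safe fallback."""
    content_type, _ = mimetypes.guess_type(filename)
    return content_type or "application/octet-stream"

# Usage: pass the result as ContentType when calling put_object
assert guess_content_type("photo.png") == "image/png"
assert guess_content_type("file.totally-unknown-ext") == "application/octet-stream"
```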
Best Practice: Always validate uploads on the backend even if you validate on the frontend. A malicious user can bypass frontend validation entirely by sending requests directly to your API. Check file type, file size, and consider scanning for malware in high-security applications.
19.7 Real-Time Features with WebSockets
HTTP follows a request-response pattern: the client asks, the server answers. This works well for most operations, but some features need the server to push data to the client without being asked — new chat messages, live notifications, collaborative editing, real-time dashboards. WebSockets provide a persistent, bidirectional connection for exactly this purpose.
How WebSockets Differ from HTTP
| Aspect | HTTP | WebSocket |
|---|---|---|
| Connection | New connection per request | Single persistent connection |
| Direction | Client-initiated only | Bidirectional |
| Overhead | Headers sent with every request | Minimal per-message overhead |
| Use case | CRUD operations, page loads | Real-time updates, streaming |
FastAPI WebSocket Server
FastAPI has built-in WebSocket support:
from datetime import datetime, timezone
from fastapi import WebSocket, WebSocketDisconnect
from typing import Dict, Set
class ConnectionManager:
"""Manages active WebSocket connections."""
def __init__(self):
# Map of room_id -> set of connected WebSockets
self.rooms: Dict[str, Set[WebSocket]] = {}
async def connect(self, websocket: WebSocket, room_id: str):
await websocket.accept()
if room_id not in self.rooms:
self.rooms[room_id] = set()
self.rooms[room_id].add(websocket)
def disconnect(self, websocket: WebSocket, room_id: str):
self.rooms.get(room_id, set()).discard(websocket)
if room_id in self.rooms and not self.rooms[room_id]:
del self.rooms[room_id]
async def broadcast(self, room_id: str, message: dict):
"""Send a message to all connections in a room."""
for connection in self.rooms.get(room_id, set()):
try:
await connection.send_json(message)
except Exception:
pass # Connection will be cleaned up on next disconnect
manager = ConnectionManager()
@app.websocket("/ws/{room_id}")
async def websocket_endpoint(websocket: WebSocket, room_id: str):
await manager.connect(websocket, room_id)
try:
while True:
data = await websocket.receive_json()
# Process the message (validate, store in DB, etc.)
await manager.broadcast(room_id, {
"type": "message",
"content": data["content"],
"sender": data["sender"],
"timestamp": datetime.now(timezone.utc).isoformat(),
})
except WebSocketDisconnect:
manager.disconnect(websocket, room_id)
await manager.broadcast(room_id, {
"type": "system",
            "content": "A user has left the room.",
})
Frontend WebSocket Client
// frontend/src/hooks/useWebSocket.ts
import { useEffect, useRef, useState, useCallback } from 'react';
interface UseWebSocketOptions {
url: string;
onMessage: (data: any) => void;
reconnectInterval?: number;
}
export function useWebSocket({
url,
onMessage,
reconnectInterval = 3000
}: UseWebSocketOptions) {
  const wsRef = useRef<WebSocket | null>(null);
  const shouldReconnectRef = useRef(true);
  const [isConnected, setIsConnected] = useState(false);
  const connect = useCallback(() => {
    const ws = new WebSocket(url);
    ws.onopen = () => {
      setIsConnected(true);
      console.log('WebSocket connected');
    };
    ws.onmessage = (event) => {
      const data = JSON.parse(event.data);
      onMessage(data);
    };
    ws.onclose = () => {
      setIsConnected(false);
      // Reconnect after a delay — but only while the hook is still mounted.
      // Without this guard, the cleanup close below would trigger an
      // endless reconnect loop after the component unmounts.
      if (shouldReconnectRef.current) {
        setTimeout(connect, reconnectInterval);
      }
    };
    ws.onerror = (error) => {
      console.error('WebSocket error:', error);
      ws.close();
    };
    wsRef.current = ws;
  }, [url, onMessage, reconnectInterval]);
  useEffect(() => {
    shouldReconnectRef.current = true;
    connect();
    return () => {
      // Stop the reconnect loop before closing, or onclose will re-dial
      shouldReconnectRef.current = false;
      wsRef.current?.close();
    };
  }, [connect]);
const sendMessage = useCallback((data: any) => {
if (wsRef.current?.readyState === WebSocket.OPEN) {
wsRef.current.send(JSON.stringify(data));
}
}, []);
return { isConnected, sendMessage };
}
Use the hook in a chat component:
function ChatRoom({ roomId }: { roomId: string }) {
const [messages, setMessages] = useState<Message[]>([]);
const { user } = useAuth();
const { isConnected, sendMessage } = useWebSocket({
url: `ws://localhost:8000/ws/${roomId}`,
onMessage: (data) => {
setMessages(prev => [...prev, data]);
},
});
const handleSend = (content: string) => {
sendMessage({
content,
sender: user?.username,
});
};
return (
<div>
<div className="connection-status">
{isConnected ? 'Connected' : 'Reconnecting...'}
</div>
<MessageList messages={messages} />
<MessageInput onSend={handleSend} disabled={!isConnected} />
</div>
);
}
WebSocket Authentication
WebSockets do not support custom headers in the browser API, so you cannot use the Authorization header. Instead, pass the token as a query parameter:
// Frontend
const wsUrl = `ws://localhost:8000/ws/${roomId}?token=${accessToken}`;
# Backend
from fastapi import Query
from jose import jwt, JWTError  # auth utilities from Section 19.5
@app.websocket("/ws/{room_id}")
async def websocket_endpoint(
websocket: WebSocket,
room_id: str,
token: str = Query(...),
):
# Validate the token before accepting the connection
try:
payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
user_id = int(payload.get("sub"))
    except (JWTError, TypeError, ValueError):  # invalid token or missing/non-numeric "sub"
await websocket.close(code=4001)
return
await manager.connect(websocket, room_id)
# ... rest of handler
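JWT access tokens are typically URL-safe, but any value placed in a query string should still be URL-encoded. A small illustrative helper for building the connection URL (function and parameter names are assumptions, not part of the chapter's code):

```python
from urllib.parse import urlencode

def ws_url(base: str, room_id: str, token: str) -> str:
    """Build a WebSocket URL with the token safely encoded as a query parameter."""
    return f"{base}/ws/{room_id}?{urlencode({'token': token})}"

# '+' and '/' are percent-encoded so the server parses the token intact
assert ws_url("ws://localhost:8000", "room1", "abc+def/x") == \
    "ws://localhost:8000/ws/room1?token=abc%2Bdef%2Fx"
```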
Intuition: Think of HTTP as sending letters — each one is a complete, self-contained message that includes the recipient's address and the sender's return address. WebSockets are like a phone call — you establish the connection once, and then both parties can speak freely without re-dialing. Use letters (HTTP) when you need to send occasional, independent messages. Use a phone call (WebSocket) when you need a continuous conversation.
19.8 Deployment Considerations
Developing a full-stack app on localhost is one thing. Getting it running in production is quite another. This section covers the deployment architecture decisions you need to make.
Deployment Architecture Options
Option 1: Single server with reverse proxy
The simplest production setup. A single server runs both the frontend (as static files) and the backend (as a Python process), with Nginx as a reverse proxy:
Internet → Nginx → /api/* → FastAPI (Uvicorn)
→ /* → React static files
This eliminates CORS issues entirely because both frontend and backend are served from the same origin. Nginx routes API requests to FastAPI and serves static files for everything else.
Example Nginx configuration:
server {
listen 80;
server_name example.com;
# Serve React static files
location / {
root /var/www/frontend/dist;
try_files $uri $uri/ /index.html;
}
# Proxy API requests to FastAPI
location /api/ {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# WebSocket proxy
location /ws/ {
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Option 2: Separate deployments
The frontend is built into static files and served from a CDN (like Cloudflare Pages, Vercel, or Netlify). The backend runs on a separate server or platform (like Railway, Render, or AWS). This scales better but requires CORS configuration.
Option 3: Containerized deployment
Both frontend and backend are packaged as Docker containers and orchestrated with Docker Compose or Kubernetes. This provides the most consistency between development and production environments.
Docker Compose for Development
A docker-compose.yml file lets you start your entire stack with a single command:
version: '3.8'
services:
frontend:
build: ./frontend
ports:
- "5173:5173"
volumes:
- ./frontend/src:/app/src
environment:
- VITE_API_URL=http://localhost:8000
backend:
build: ./backend
ports:
- "8000:8000"
volumes:
- ./backend/app:/app/app
environment:
- DATABASE_URL=postgresql://user:password@db:5432/myapp
- SECRET_KEY=dev-secret-key
depends_on:
- db
db:
image: postgres:16
ports:
- "5432:5432"
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=myapp
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
Best Practice: Use Docker Compose for local development so that every developer on the team has an identical environment. It also makes it easy for AI assistants to reproduce your setup — you can include the docker-compose.yml in your prompt context.
Build Process
The React frontend needs to be built into static files for production:
cd frontend
npm run build
# Output is in frontend/dist/
The FastAPI backend runs directly from Python:
cd backend
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4
In production, use multiple Uvicorn workers (or Gunicorn with Uvicorn workers) to handle concurrent requests. The number of workers depends on your server's CPU cores — a common rule of thumb is 2 * num_cores + 1.
19.9 Environment Configuration
Environment configuration is the practice of separating settings that change between environments (development, staging, production) from the code itself. Done well, the same code runs everywhere with different configuration.
The .env File Pattern
Store environment-specific settings in .env files:
# backend/.env (development)
DATABASE_URL=postgresql://user:password@localhost:5432/myapp_dev
SECRET_KEY=dev-secret-key-not-for-production
CORS_ORIGINS=http://localhost:5173,http://localhost:3000
AWS_ACCESS_KEY_ID=your-dev-key
AWS_SECRET_ACCESS_KEY=your-dev-secret
S3_BUCKET_NAME=myapp-dev-uploads
# frontend/.env (development)
VITE_API_URL=http://localhost:8000
VITE_WS_URL=ws://localhost:8000
Common Pitfall: Never commit .env files to version control. They contain secrets like database passwords and API keys. Add .env to your .gitignore file. Instead, commit a .env.example file with placeholder values that shows developers what variables they need to set.
Loading Environment Variables in FastAPI
Use Pydantic's BaseSettings to load and validate environment variables:
from pydantic_settings import BaseSettings
class Settings(BaseSettings):
database_url: str
secret_key: str
cors_origins: str = "http://localhost:5173"
aws_access_key_id: str = ""
aws_secret_access_key: str = ""
s3_bucket_name: str = ""
@property
def cors_origin_list(self) -> list[str]:
return [origin.strip() for origin in self.cors_origins.split(",")]
model_config = {"env_file": ".env"}
settings = Settings()
This approach validates that all required environment variables are set when the application starts, rather than crashing at runtime when a missing variable is first accessed.
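To see what start-up validation buys you, here is the same idea hand-rolled without pydantic (illustrative only — BaseSettings performs the equivalent check automatically, raising a ValidationError when Settings() is constructed with a required field missing):

```python
REQUIRED_VARS = ["DATABASE_URL", "SECRET_KEY"]  # illustrative required settings

def validate_env(environ: dict[str, str]) -> list[str]:
    """Return the names of required variables missing from the environment."""
    return [name for name in REQUIRED_VARS if name not in environ]

# At startup: fail fast with a clear message instead of crashing later
missing = validate_env({"DATABASE_URL": "postgresql://localhost/myapp"})
assert missing == ["SECRET_KEY"]
```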
Frontend Environment Variables
In Vite-based React projects, environment variables must be prefixed with VITE_ to be exposed to the frontend code:
// These work:
const apiUrl = import.meta.env.VITE_API_URL;
const wsUrl = import.meta.env.VITE_WS_URL;
// This does NOT work (no VITE_ prefix):
const secret = import.meta.env.DATABASE_URL; // undefined
This prefix requirement is a security feature. It prevents accidental exposure of server-side secrets to the browser, where they would be visible in the JavaScript bundle.
Configuration Hierarchy
In production, environment variables typically come from multiple sources with a priority order:
- System environment variables (highest priority) — set in deployment platform
- .env file — used for local development
- Default values in code (lowest priority) — fallbacks
This hierarchy means you can override any setting in production without changing code or .env files. Your deployment platform (Railway, Render, AWS, etc.) provides an interface to set environment variables that take precedence over everything else.
Secrets Management
For production applications, do not store secrets in .env files on servers. Use a secrets management service:
- AWS Secrets Manager or AWS Systems Manager Parameter Store
- Google Cloud Secret Manager
- HashiCorp Vault
- Doppler or Infisical for smaller teams
These services provide encrypted storage, access control, audit logs, and automatic rotation for sensitive credentials.
Best Practice: When prompting AI to generate configuration code, always specify: "Use environment variables for all configuration. Never hardcode URLs, secrets, or credentials. Provide a .env.example file with placeholder values."
19.10 Building a Complete Full-Stack App
Let us bring everything together by walking through the construction of a complete full-stack task management application. We will use AI assistance at every step, showing the prompts and explaining the decisions.
Step 1: Project Scaffold
Start by asking your AI assistant to generate the project structure:
Prompt: "Scaffold a full-stack task management app with:
- React + TypeScript + Vite frontend in /frontend
- FastAPI + SQLAlchemy backend in /backend
- PostgreSQL database
- Docker Compose for local development
Include the complete directory structure and configuration files."
The AI generates the monorepo structure we described in Section 19.2, complete with package.json, requirements.txt, docker-compose.yml, Dockerfile for each service, and stub files for the main application entry points.
Step 2: Define the Data Model
Next, define the database models:
Prompt: "Create SQLAlchemy models for a task management app:
- User: id, username, email, hashed_password, created_at
- Task: id, title, description, status (enum: todo/in-progress/done),
priority (enum: low/medium/high), user_id (FK), created_at, updated_at
Include Alembic migration setup."
The AI generates models that match the database layer patterns from Chapter 18, with proper relationships, indexes, and enum types.
Step 3: Build the API
Prompt: "Create FastAPI routes for the task management app:
- POST /api/auth/register — register a new user
- POST /api/auth/login — login and receive JWT token
- GET /api/auth/me — get current user profile
- GET /api/tasks — list tasks for current user (filterable by status, priority)
- POST /api/tasks — create a new task
- GET /api/tasks/{id} — get a specific task
- PUT /api/tasks/{id} — update a task
- DELETE /api/tasks/{id} — delete a task
Include Pydantic schemas, authentication middleware, and proper error handling.
Use the auth utilities from Section 19.5."
Step 4: Build the Frontend
Prompt: "Create a React frontend for the task management app with:
- Login and registration pages
- Task dashboard showing all tasks in columns (todo, in-progress, done)
- Create/edit task modal
- Filter tasks by priority
- Protected routes that redirect to login
- AuthContext for managing authentication state
- API client that automatically includes JWT tokens
Use TypeScript, Tailwind CSS, and React Router."
Step 5: Connect the Layers
With both frontend and backend generated, the integration step is where you test that everything works together:
# Start the database
docker-compose up db -d
# Run database migrations
cd backend && alembic upgrade head
# Start the backend
uvicorn app.main:app --reload
# In a separate terminal, start the frontend
cd frontend && npm run dev
Open http://localhost:5173 in your browser. Register a user. Create some tasks. Verify that the task list updates when you create, edit, and delete tasks.
Step 6: Add Real-Time Features
Once the basic CRUD functionality works, add a WebSocket connection for real-time updates:
Prompt: "Add real-time task updates to our app. When any user creates,
updates, or deletes a task, all connected clients should see the change
immediately. Use WebSockets with FastAPI on the backend and a custom
useWebSocket hook on the frontend. Include authentication for the
WebSocket connection."
Step 7: Add File Upload
Add profile picture uploads to demonstrate file handling:
Prompt: "Add profile picture upload to the user profile page.
The frontend should show a file picker limited to images under 5MB.
The backend should validate the file, generate a unique filename,
store it in the uploads directory, and return the URL. Display the
uploaded image on the user's profile and in task cards."
Iterative Development with AI
Notice the pattern: each step builds on the previous one, and each prompt references the existing code. This is how effective AI-assisted full-stack development works in practice:
- Start with the foundation — project structure, database models, basic API.
- Build the happy path — core CRUD functionality working end to end.
- Add authentication — protect the API and add login/registration flows.
- Add advanced features — real-time updates, file uploads, search.
- Polish and deploy — error handling, loading states, production configuration.
At each step, include relevant context from previous steps in your prompts. If the AI generates code that does not integrate well with what you have, describe the specific incompatibility: "The API client expects camelCase keys but the backend returns snake_case. Update the backend Pydantic models to return camelCase."
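The snake_case-to-camelCase conversion mentioned in that example prompt is mechanical. A plain-Python sketch of the rule (Pydantic can apply this automatically via an alias generator, so you rarely write it by hand):

```python
def to_camel(snake: str) -> str:
    """Convert a snake_case identifier to camelCase, e.g. created_at → createdAt."""
    head, *rest = snake.split("_")
    return head + "".join(word.capitalize() for word in rest)

assert to_camel("created_at") == "createdAt"
assert to_camel("user_id") == "userId"
assert to_camel("id") == "id"  # single-word names pass through unchanged
```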
Intuition: Building a full-stack app with AI is like directing a construction crew. You do not lay every brick yourself, but you need to understand the blueprint, make architectural decisions, and verify that the work meets your specifications. The AI is an incredibly fast and knowledgeable builder, but you are the architect.
Testing the Full Stack
Test your application at multiple levels:
Unit tests for individual functions:
def test_hash_password():
hashed = hash_password("mysecret")
assert verify_password("mysecret", hashed)
assert not verify_password("wrongpassword", hashed)
API integration tests using FastAPI's TestClient:
from fastapi.testclient import TestClient
def test_create_task_requires_auth(client: TestClient):
response = client.post("/api/tasks", json={"title": "Test"})
assert response.status_code == 401
def test_create_task(authenticated_client: TestClient):
response = authenticated_client.post("/api/tasks", json={
"title": "Write chapter 19",
"description": "Full-stack development chapter",
"priority": "high",
})
assert response.status_code == 201
assert response.json()["title"] == "Write chapter 19"
End-to-end tests using a tool like Playwright:
def test_login_and_create_task(page):
page.goto("http://localhost:5173/login")
page.fill("[name=email]", "test@example.com")
page.fill("[name=password]", "password123")
page.click("button[type=submit]")
# Should redirect to dashboard
assert page.url.endswith("/dashboard")
# Create a task
page.click("text=New Task")
page.fill("[name=title]", "My First Task")
page.click("text=Create")
# Task should appear on the dashboard
assert page.locator("text=My First Task").is_visible()
Best Practice: Ask your AI to generate tests as you build each feature. Say "Write unit tests and integration tests for the task creation endpoint" after generating the endpoint. Testing early catches integration bugs before they compound.
Summary
Full-stack development is where individual skills in frontend, backend, and database work come together into a cohesive application. The challenges are real — CORS configuration, state synchronization, authentication flows, file handling, and environment management all require careful attention. But AI coding assistants transform these challenges from memorization exercises into guided conversations.
The key principles from this chapter are:
- Define the API contract first. Before writing code, agree on the shape of requests and responses. This prevents the most common integration bugs.
- Centralize cross-cutting concerns. API clients, authentication logic, and error handling should each live in one place, not scattered across components.
- The server is the source of truth. Frontend validation improves user experience, but backend validation protects data integrity. Always validate on both sides.
- Use environment variables for configuration. Never hardcode URLs, secrets, or environment-specific settings. Use .env files for development and your platform's secrets management for production.
- Start simple, add complexity only when needed. A monolithic backend with a React frontend handles the vast majority of use cases. Do not reach for microservices, message queues, or complex architectures until you have a specific scaling problem.
- Let AI bridge the knowledge gap. Describe features end-to-end in your prompts. The AI can generate coordinated code across all layers simultaneously, which is its greatest advantage in full-stack development.
In Chapter 20, we will extend your application's capabilities by integrating with external APIs and third-party services — payment processors, email providers, maps, and more.
Key Terms
| Term | Definition |
|---|---|
| CORS | Cross-Origin Resource Sharing. A browser security mechanism that controls which origins can make requests to your API. |
| JWT | JSON Web Token. A compact, self-contained token format used for authentication between client and server. |
| Monorepo | A project structure where frontend and backend code live in the same repository. |
| Optimistic update | Updating the UI immediately before the server confirms the change, rolling back if the server rejects it. |
| State synchronization | The process of keeping client-side data consistent with server-side data. |
| WebSocket | A protocol providing full-duplex communication over a single TCP connection, used for real-time features. |
| API contract | The agreed-upon specification for how the frontend and backend communicate, including request and response formats. |
| Reverse proxy | A server (like Nginx) that sits in front of your application, routing requests to the appropriate service. |
| Environment variable | A configuration value set outside the code, used to customize application behavior across different environments. |
| Multipart form data | An HTTP content type used for uploading files, where the request body contains multiple parts with different content types. |
| Connection manager | A server-side component that tracks active WebSocket connections and enables broadcasting messages to groups of clients. |
| Bearer token | An authentication scheme where the client includes a token in the Authorization header prefixed with "Bearer". |