Case Study 01: Zero to Production in a Day
Deploying a Full-Stack App with Docker, CI/CD, and Monitoring
Background
Priya Sharma is a freelance web developer who has been using AI coding assistants for the past year. She built a project management tool called "TaskFlow" for a client — a small marketing agency that needed a custom solution to track campaigns, assign tasks, and generate reports. Using vibe coding techniques, Priya completed the application in two weeks: a FastAPI backend with a PostgreSQL database, a React frontend, and a Redis-backed background task queue for generating PDF reports.
The application was working beautifully on her laptop. The client was impressed during the demo. Then came the question she had been dreading: "When can we start using it?"
Priya had never deployed a production application before. She had always handed off code to another developer or used simple hosting for static sites. This time, the client wanted a private deployment — no shared hosting, proper security, reliable uptime. Priya had one day to figure it out.
The Challenge
Priya's application consisted of four services that needed to work together:
- FastAPI backend — 15 API endpoints, SQLAlchemy ORM, Alembic migrations
- React frontend — Built with Vite, communicating with the API via fetch
- PostgreSQL database — 12 tables with foreign keys and indexes
- Redis + Celery worker — Background processing for PDF report generation
She needed to deploy all four services to a server, set up HTTPS, configure automated deployments so she could push updates without SSH-ing into the server every time, and establish basic monitoring so she would know if something broke.
Her constraints were real: one day, no DevOps experience, and a budget of $50/month for hosting.
Phase 1: Containerization (Morning, 9:00 AM – 11:00 AM)
Priya started by asking her AI assistant to help containerize the application. Her first prompt was deliberately comprehensive:
"I have a FastAPI application with the following structure: a main.py entry point and a requirements.txt with dependencies including SQLAlchemy, Alembic, psycopg2-binary, celery, and redis. Generate a production-ready Dockerfile with multi-stage build, non-root user, and health check. The app runs on port 8000 with Uvicorn."
The AI generated a Dockerfile that Priya reviewed carefully. She noticed the AI used python:3.12-slim as the base image, which was correct, but it did not include the psycopg2 build dependencies. She refined:
"The application uses psycopg2-binary for development but should use psycopg2 (compiled) in production for performance. Update the multi-stage Dockerfile to install build dependencies (gcc, libpq-dev) in the builder stage and compile psycopg2, but keep the final image slim."
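The kind of multi-stage Dockerfile this refinement produces looks roughly like the sketch below. It assumes the production requirements.txt lists psycopg2 (source) rather than psycopg2-binary, and the module path main:app is a guess based on the main.py entry point:

```dockerfile
# --- builder stage: compile wheels (psycopg2 needs gcc + libpq headers) ---
FROM python:3.12-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends gcc libpq-dev
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# --- final stage: slim runtime, runtime libpq only, non-root user ---
FROM python:3.12-slim
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 curl \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
RUN useradd --create-home appuser
WORKDIR /app
COPY --chown=appuser:appuser . .
USER appuser
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
    CMD curl -fsS http://localhost:8000/health || exit 1
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Building the wheels in the first stage means the final image never needs a compiler, which keeps it small.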
The updated Dockerfile worked correctly. Priya then containerized the frontend:
"Generate a Dockerfile for a React application built with Vite. Use a Node 20 builder stage to run npm ci && npm run build, then copy the built assets into an Nginx Alpine container. Include an nginx.conf that proxies /api requests to a backend service."
This produced a clean two-stage build. The nginx configuration included the critical reverse proxy rule:
location /api/ {
proxy_pass http://backend:8000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
Next, Priya created a Docker Compose file to orchestrate all four services. She asked the AI:
"Create a docker-compose.yml for production with: FastAPI backend (built from ./backend), React frontend with Nginx (built from ./frontend), PostgreSQL 16 with persistent volume and health check, Redis 7 Alpine, and a Celery worker using the same backend image but running celery -A app.tasks worker. The frontend should be the only service exposed on ports 80 and 443. All services should be on a shared network."
The AI produced a comprehensive compose file. Priya made one important addition the AI missed: she added restart: always to all services so they would restart automatically if the server rebooted.
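The overall shape of that compose file, including the restart policy Priya added, would be something like this (service and volume names are assumptions, chosen to match the hostnames used elsewhere in the chapter):

```yaml
services:
  backend:
    build: ./backend
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: always
  frontend:
    build: ./frontend
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - backend
    restart: always
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: taskflow_prod
      POSTGRES_USER: taskflow
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U taskflow -d taskflow_prod"]
      interval: 5s
      retries: 10
    restart: always
  cache:
    image: redis:7-alpine
    restart: always
  worker:
    build: ./backend
    command: celery -A app.tasks worker
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: always

volumes:
  pgdata:
```

Only the frontend publishes ports; the other services are reachable solely on the compose network under their service names.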
By 11:00 AM, Priya could run docker compose up on her laptop and the entire stack worked. She verified by running through every feature: creating tasks, assigning them, generating a PDF report, and viewing the dashboard.
Phase 2: CI/CD Pipeline (Late Morning, 11:00 AM – 12:30 PM)
With the application containerized, Priya set up a CI/CD pipeline using GitHub Actions. She organized her repository and prompted:
"Generate a GitHub Actions workflow that: (1) on every push and PR, runs Python linting with ruff and pytest for the backend, and npm lint and npm test for the frontend, in parallel; (2) on push to main, builds Docker images for backend and frontend, pushes them to GitHub Container Registry; (3) after push, SSHs into a production server and runs docker compose pull && docker compose up -d."
The AI generated a workflow with three jobs: test-backend, test-frontend, and build-and-deploy. Priya reviewed it carefully and made several adjustments:
- She added a PostgreSQL service container for the backend test job
- She added pip caching to speed up CI runs
- She changed the deploy step to use a properly secured SSH key stored in GitHub Secrets
- She added a post-deployment health check: after deploying, the workflow would curl the health endpoint and fail the job if it returned anything other than 200
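A condensed sketch of the deploy job, including that post-deployment health check, might look like the following (registry paths, secret names, and the choice of SSH action are illustrative, not Priya's exact workflow):

```yaml
jobs:
  build-and-deploy:
    needs: [test-backend, test-frontend]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push images
        run: |
          docker build -t ghcr.io/${{ github.repository }}/backend:latest ./backend
          docker push ghcr.io/${{ github.repository }}/backend:latest
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: deploy
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: cd /opt/taskflow && docker compose pull && docker compose up -d
      - name: Post-deploy health check
        run: |
          sleep 10
          code=$(curl -s -o /dev/null -w '%{http_code}' https://taskflow.clientdomain.com/health)
          test "$code" = "200"
```

The health check step fails the job (and flags the deployment) on anything other than an HTTP 200.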
The final workflow file was 95 lines long. Priya committed it and pushed to a test branch to verify the CI portion worked. Tests passed on the first try — a testament to the thorough test suite she had written with AI assistance (following Chapter 21's practices).
Phase 3: Server Setup and Deployment (Afternoon, 1:00 PM – 3:30 PM)
After lunch, Priya provisioned a server. She chose a $24/month DigitalOcean Droplet (2 vCPU, 4 GB RAM) — well within budget and sufficient for the client's 15-person team.
She SSHed into the fresh Ubuntu server and asked her AI assistant to generate a setup script:
"Generate a bash script to set up a fresh Ubuntu 22.04 server for Docker-based deployment. It should: install Docker and Docker Compose, create a non-root deploy user, set up SSH key authentication, install and configure Nginx as a reverse proxy with Let's Encrypt SSL (using certbot), configure UFW firewall to allow only ports 22, 80, and 443, and set up automatic security updates."
The AI produced a thorough setup script. Priya reviewed each section before running it, verifying that the firewall rules would not lock her out (port 22 was included) and that the SSL certificate would be requested for the correct domain.
She ran the script section by section rather than all at once, verifying each step:
- Docker installed and running — verified with docker run hello-world
- Deploy user created with SSH access — verified by logging in as the deploy user
- Nginx configured and SSL certificate obtained — verified by visiting the domain in a browser
- Firewall configured — verified with ufw status
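The firewall portion of such a setup script is short enough to show in full. This is a sketch of the configuration, not the script's exact text (it assumes ufw is present, as on stock Ubuntu 22.04):

```shell
# Allow only SSH, HTTP, and HTTPS inbound; deny everything else
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # keep SSH open BEFORE enabling, or you lock yourself out
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable  # --force skips the interactive confirmation prompt
ufw status verbose
```

The ordering matters: the port 22 rule must exist before `ufw enable` runs, which is exactly the lockout risk Priya checked for.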
Then she configured the production environment. She created a .env file on the server (never committed to Git) with production values:
DATABASE_URL=postgresql://taskflow:STRONG_RANDOM_PASSWORD@db:5432/taskflow_prod
REDIS_URL=redis://cache:6379/0
SECRET_KEY=LONG_RANDOM_STRING_GENERATED_WITH_OPENSSL
ENVIRONMENT=production
ALLOWED_ORIGINS=https://taskflow.clientdomain.com
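The SECRET_KEY placeholder refers to a value generated with a one-liner like this:

```shell
# 32 random bytes, hex-encoded: a 64-character secret suitable for SECRET_KEY
openssl rand -hex 32
```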
She cloned the repository, set up GitHub Container Registry credentials, and ran docker compose up -d. The application started. She visited the domain — and got a 502 Bad Gateway error.
The First Bug
Priya checked the logs with docker compose logs backend. The backend was crashing with a database connection error: the PostgreSQL container was not ready when the backend first tried to connect. The compose file did use depends_on with condition: service_healthy, but the backend's startup code had no retry logic for the initial database connection.
She asked her AI assistant:
"My FastAPI app crashes on startup if PostgreSQL is not ready yet, even though Docker Compose waits for the health check. Add connection retry logic to my database initialization that retries 5 times with exponential backoff."
The AI generated a wait_for_db() function that she added to the application startup. She pushed the fix, the CI pipeline ran, built new images, deployed, and the application came up successfully.
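A sketch of what such a wait_for_db() helper can look like. The real version would call SQLAlchemy's engine.connect(); here the connect function is injected as a parameter so the retry logic stands on its own:

```python
import time


def wait_for_db(connect, retries=5, base_delay=0.5):
    """Retry connect() with exponential backoff; re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return connect()
        except Exception:
            if attempt == retries - 1:
                raise  # give up and let the container crash loudly
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, 4s, ...
```

At startup this would be called as wait_for_db(engine.connect) before running migrations or serving traffic.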
Time lost to this bug: 20 minutes. Without the AI assistant, Priya estimated it would have taken her an hour to figure out the retry pattern.
Phase 4: Monitoring Setup (Afternoon, 3:30 PM – 5:00 PM)
With the application running, Priya needed to know when things went wrong without checking manually. She set up four layers of monitoring:
Layer 1: Health Checks. The application already had /health and /health/ready endpoints. She configured Uptime Robot (free tier) to ping /health every 5 minutes and email her if it failed for two consecutive checks.
Layer 2: Error Tracking. She integrated Sentry (free tier for small projects) into the FastAPI backend. The AI helped her write the integration:
"Add Sentry integration to my FastAPI application. It should capture all unhandled exceptions, include the request URL and user ID in the context, and filter out health check endpoints to avoid noise."
The integration took 15 minutes. She tested it by deliberately triggering a 500 error and confirmed the error appeared in the Sentry dashboard with full stack trace.
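Filtering health check endpoints out of Sentry is typically done with the SDK's before_send hook, which drops an event when it returns None. A sketch (the event dictionary shape here is an assumption based on how the Python SDK reports request context):

```python
# Passed to sentry_sdk.init(dsn=..., before_send=filter_health_checks)
HEALTH_PATHS = ("/health", "/health/ready")


def filter_health_checks(event, hint):
    """Drop Sentry events generated by uptime pings against health endpoints."""
    url = event.get("request", {}).get("url", "")
    if url.rstrip("/").endswith(HEALTH_PATHS):
        return None  # returning None suppresses the event
    return event
```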
Layer 3: Log Aggregation. Rather than setting up a full ELK stack (overkill for this project), Priya configured structured JSON logging and set up Docker's built-in log rotation in /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
She could view logs with docker compose logs when needed and search them with grep. For a 15-person tool, this level of log management was sufficient.
Layer 4: Basic Metrics. She added a simple middleware that logged request count, latency, and error rate to the structured logs. If the project grew, she could add Prometheus and Grafana later.
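The bookkeeping behind such a middleware can be sketched as a small, framework-agnostic recorder (the class and field names here are illustrative; the real version would be wrapped in a FastAPI HTTP middleware that times each request):

```python
class RequestMetrics:
    """Tracks request count, average latency, and error rate for periodic log lines."""

    def __init__(self):
        self.count = 0
        self.errors = 0
        self.total_latency = 0.0

    def record(self, status_code, latency_s):
        """Called once per request with the response status and elapsed seconds."""
        self.count += 1
        self.total_latency += latency_s
        if status_code >= 500:
            self.errors += 1

    def snapshot(self):
        """Summary dict suitable for emitting as one structured JSON log line."""
        avg = self.total_latency / self.count if self.count else 0.0
        rate = self.errors / self.count if self.count else 0.0
        return {
            "requests": self.count,
            "avg_latency_ms": round(avg * 1000, 2),
            "error_rate": rate,
        }
```

The middleware times each request, calls record(), and periodically logs snapshot() so the numbers land in the same structured logs as everything else.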
Phase 5: Documentation and Handoff (Evening, 5:00 PM – 6:00 PM)
Priya asked the AI to generate a runbook:
"Generate a deployment runbook for my TaskFlow application. Include: how to deploy a new version, how to rollback to a previous version, how to check application health, how to view logs, how to restart individual services, how to back up and restore the database, and common troubleshooting steps."
The AI produced a comprehensive Markdown document. Priya reviewed it, tested each procedure, and added it to the repository.
She also set up automated database backups using a cron job that ran pg_dump nightly, compressed the output, and retained 30 days of backups:
# /etc/cron.d/taskflow-backup
0 3 * * * deploy docker compose exec -T db pg_dump -U taskflow taskflow_prod | gzip > /backups/taskflow_$(date +\%Y\%m\%d).sql.gz
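The cron entry above creates backups but does not prune them, so the 30-day retention mentioned needs a companion step, for example a second cron line running find with -mtime +30. A sketch, demonstrated against a scratch directory (in production the directory would be /backups, matching the entry above; requires GNU touch for the -d flag):

```shell
# Demo of the retention command against a scratch directory
BACKUP_DIR=$(mktemp -d)
touch -d '40 days ago' "$BACKUP_DIR/taskflow_20240101.sql.gz"  # stale backup
touch "$BACKUP_DIR/taskflow_today.sql.gz"                      # fresh backup
# Delete backups last modified more than 30 days ago
find "$BACKUP_DIR" -name 'taskflow_*.sql.gz' -mtime +30 -delete
ls "$BACKUP_DIR"  # only the fresh backup remains
```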
Results
By 6:00 PM — roughly 8 hours of work — Priya had:
- A fully containerized four-service application
- Automated CI/CD pipeline (push to main triggers testing, building, and deployment)
- HTTPS with automatic certificate renewal
- Health check monitoring with email alerts
- Error tracking with Sentry
- Structured logging with rotation
- Automated daily database backups
- A comprehensive deployment runbook
The client started using TaskFlow the next morning. Over the following month:
- Zero downtime incidents
- 3 feature updates deployed via the CI/CD pipeline (total deployment time: under 5 minutes each)
- 1 bug caught by Sentry before any user reported it (a timezone issue in report generation)
- Database backups ran reliably every night
Lessons Learned
Start with containerization. Getting Docker working locally is the single most important step. Once the application runs in containers, everything else (CI/CD, deployment, scaling) becomes dramatically simpler.
AI excels at boilerplate generation. The Dockerfiles, nginx config, GitHub Actions workflow, and server setup script were all generated by AI with only minor modifications. This saved hours of reading documentation and debugging configuration syntax.
Always test in staging first. Priya skipped a staging environment due to time pressure and hit the database connection bug in production. Even a quick docker compose up test on the server before switching DNS would have caught it.
Start simple, iterate. Uptime Robot + Sentry + structured logs was enough for this project. She did not need Prometheus, Grafana, the ELK stack, or Kubernetes. When the project grows, she can add those layers incrementally.
Document everything. The runbook took 30 minutes to create (with AI help) and has already been used twice for routine maintenance. That investment pays for itself immediately.
Technical Architecture Diagram
Internet
│
┌─────▼─────┐
│ Nginx │
│ + SSL │
│ (Host) │
└─────┬─────┘
│
┌────────────┼────────────┐
│ │ │
┌─────▼─────┐ │ ┌─────▼─────┐
│ Frontend │ │ │ API │
│ (Nginx) │─────┘ │ (FastAPI) │
│ :80 │ │ :8000 │
└───────────┘ └─────┬─────┘
│
┌────────────┼────────────┐
│ │ │
┌─────▼─────┐ ┌───▼─────┐ ┌───▼─────┐
│PostgreSQL │ │ Redis │ │ Celery │
│ :5432 │ │ :6379 │ │ Worker │
└───────────┘ └─────────┘ └─────────┘
Key Takeaways for Vibe Coders
- A production deployment does not have to take weeks. With containers, CI/CD, and AI assistance, a solo developer can go from local development to production in a single day.
- AI dramatically reduces the DevOps knowledge barrier. Priya had never written a Dockerfile or GitHub Actions workflow before this project. The AI generated correct, production-quality configurations that she could understand, review, and customize.
- The key skill is not memorizing syntax — it is knowing what to ask for. Priya's effectiveness came from understanding what pieces she needed (containers, CI/CD, monitoring, backups) even though she did not know the specific syntax for each.
- Simple beats complex for small projects. A single Docker Compose deployment on a $24/month server is more appropriate than Kubernetes for a 15-user internal tool. Choose the simplest deployment that meets your requirements.
- Monitoring is not optional, even for small projects. The bug that Sentry caught before any user reported it justified the entire monitoring setup. Without it, the client would have discovered the bug first, damaging trust.