Case Study 38-1: Priya Deploys the Acme Dashboard to Render

The Problem with "Runs on Priya's Laptop"

It was a Tuesday afternoon in October. Sandra Chen was presenting Q3 results to the executive team in a conference room downtown. She pulled up the Acme dashboard on her phone, intending to show the quota attainment number in real time.

Instead of the dashboard, the page showed a browser error: "This site can't be reached. acme-dashboard-priya-laptop.local refused to connect."

Priya's laptop was in sleep mode. Priya was in a dentist's office.

The meeting continued with manually typed numbers from Sandra's memory. The dashboard, which had been working beautifully for six weeks, failed at exactly the moment it would have been most impressive.

That afternoon, Priya sent an email with the subject line: "Deployment Plan."


Planning the Deployment

Marcus Webb joined Priya's planning session the next morning. His view of the dashboard had shifted from skepticism to genuine appreciation as he watched it develop — it had reduced his own exposure to ad-hoc data requests by eliminating most of Priya's dependencies on the shared BI tool he maintained.

"The real issue is that the application lives on hardware we don't control 24/7," Marcus said. "Your laptop sleeps, goes to IT for updates, gets lost. The app needs to live on a server."

Priya outlined two options:

1. Internal server — run the application on one of Acme's intranet servers, managed by Marcus's team
2. Cloud hosting — deploy to Render (or similar), accessible from anywhere via the internet

Marcus's position on option 1: "We have a server available, but it requires our IT patching schedule, our backup processes, and someone to handle it when you leave the company. That's more overhead than I want for an internal analytics tool."

His position on option 2: "If it's behind a password and doesn't contain PII, cloud hosting is fine. The data is already in a CSV that gets emailed around, so it's not more exposed than it already is."

They chose cloud hosting on Render.


Preparing the Code

Priya's first task: making the application deployment-ready. This meant addressing everything in the deploy_checklist.md that she had not needed to worry about for a local development environment.

Writing the Dockerfile

She copied the Dockerfile from the chapter and walked through each instruction with Marcus. He had Docker experience from previous infrastructure projects and spotted one issue immediately:

COPY . .

"This copies everything in the directory into the image," Marcus said. "Including the .env file, the data/ folder with the CSV, and your venv/ if you have one."

Priya created a .dockerignore file:

.env
.env.example
venv/
__pycache__/
*.pyc
data/
.git/
tests/
*.md

The data/ exclusion was deliberate: the sales CSV in production would be populated by Acme's existing automated data pipeline (which had been writing to a shared network location since before Priya started), eventually routed to the Render persistent disk or to S3. The image itself would start with an empty data/ directory.
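Because the image ships with an empty data/ directory, a small startup guard helps the app show a friendly message on a fresh deploy instead of a pandas traceback. A stdlib-only sketch — the function name and messages are illustrative, not from the chapter:

```python
from pathlib import Path

DATA_FILE = Path("data/acme_sales.csv")

def data_file_status(path: Path = DATA_FILE) -> str:
    """Classify the sales CSV so the dashboard can react sensibly.

    Returns "missing" on a fresh deploy before the first data refresh,
    "empty" if the file exists but has no content, and "ok" otherwise.
    """
    if not path.exists():
        return "missing"
    if path.stat().st_size == 0:
        return "empty"
    return "ok"
```

The dashboard route can branch on this status and render a "no data yet — trigger a refresh" page for the first two cases.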

Handling the Data Problem

This was the more complex issue. The Flask application read from data/acme_sales.csv on every dashboard load. In production, that file needed to exist and be kept current.

Three options:

  1. Bake the CSV into the image — the CSV would be one deployment old at all times. Rejected: the dashboard's value is live data.

  2. Read from S3 instead of local CSV — replace pd.read_csv("data/acme_sales.csv") with a boto3 S3 download. More complex but more robust. Priya bookmarked this for version 2.

  3. Persistent disk + pipeline update — configure a Render Persistent Disk at /app/data, update the automated data pipeline to write to a shared S3 bucket, and have the Flask app download from S3 on startup (or per-request). Marcus favored this approach because it separated data from application code.

For the initial deployment, they chose a simpler interim: Priya wrote a password-protected /admin/refresh-data endpoint that re-copied the CSV from a shared network location over SSH. It was not elegant, but it worked within the two-day window they had before Sandra's next executive meeting.

@app.route("/admin/refresh-data", methods=["POST"])
@login_required
def refresh_data():
    """Manually trigger a data refresh from the network location.

    This is a stopgap until the automated pipeline writes to S3.
    Accessible only by authenticated users.
    """
    import subprocess
    result = subprocess.run(
        ["rsync", "-avz", "pipeline@data-server:/exports/acme_sales.csv",
         "/app/data/acme_sales.csv"],
        capture_output=True, text=True
    )
    if result.returncode == 0:
        logger.info("Data refreshed successfully")
        return jsonify({"status": "success"})
    else:
        logger.error("Data refresh failed: %s", result.stderr)
        return jsonify({"status": "error", "message": result.stderr}), 500

Marcus flagged this as a stopgap. "Get the S3 integration on the roadmap. This rsync approach is fragile and creates a dependency on the internal network from the cloud server."

The honest documentation went into the code as a comment. Technical debt is fine when it is visible and acknowledged.

Setting Up the Repository

Priya created a private GitHub repository for the application:

git init
git add app.py requirements.txt Dockerfile .dockerignore templates/ static/
git commit -m "Initial commit: Acme Corp dashboard"
git remote add origin https://github.com/priya-okonkwo/acme-dashboard.git
git push -u origin main

She explicitly did NOT add:

- .env (secrets)
- data/ (business data)
- venv/ (environment-specific)
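A matching .gitignore keeps these paths from slipping in through a later `git add .` (a sketch mirroring the .dockerignore; entries beyond those named above are conventional additions):

```
.env
venv/
__pycache__/
*.pyc
data/
```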


The Render Deployment

Marcus watched over Priya's shoulder as she set up Render. He wanted to understand the process, both to provide IT oversight and because he was genuinely curious.

Step 1: Creating the Service

At render.com:

1. New → Web Service
2. Connect GitHub account
3. Select the acme-dashboard repository
4. Configuration:
   - Name: acme-corp-dashboard
   - Environment: Docker
   - Branch: main
   - Instance Type: Starter ($7/month)

"Why not Free?" Priya asked.

"Free tier sleeps after 15 minutes of inactivity," Marcus said. "We'd be back to the original problem — the dashboard takes 50 seconds to load if Sandra hasn't used it in a while. $7/month for always-on is worth it."

Step 2: Environment Variables

In Render's "Environment" section:

| Key | Value |
| --- | --- |
| SECRET_KEY | 8f3e2a1c7b9d4f6e... (64-char hex) |
| DASHBOARD_PASSWORD | AcmeQ4FY2024! |
| FLASK_DEBUG | false |

Priya generated the SECRET_KEY with:

import secrets
print(secrets.token_hex(32))

She wrote the password in 1Password (Acme's corporate password manager) rather than texting it to Sandra. Marcus approved.
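On the application side, the three variables can be read once at startup. A stdlib-only sketch — the `load_config` helper and its fail-fast behavior for a missing SECRET_KEY are assumptions about how the app is wired, not code from the chapter:

```python
import os

def load_config(env=os.environ) -> dict:
    """Read deployment configuration from environment variables.

    Fails fast if SECRET_KEY is unset, so a misconfigured deploy is
    caught at startup rather than at the first login attempt.
    """
    secret_key = env.get("SECRET_KEY")
    if not secret_key:
        raise RuntimeError("SECRET_KEY is not set")
    return {
        "SECRET_KEY": secret_key,
        "DASHBOARD_PASSWORD": env.get("DASHBOARD_PASSWORD", ""),
        # Anything other than the literal string "true" disables debug mode
        "DEBUG": env.get("FLASK_DEBUG", "false").lower() == "true",
    }
```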

Step 3: Persistent Disk

Under "Disks" in the service settings: - Name: acme-data - Mount path: /app/data - Size: 1 GB ($0.25/month for 1GB)

Marcus noted: "This is where the CSV will live. When Priya's data pipeline eventually writes directly to this path or to S3, it will be populated automatically. Until then, she'll use the refresh endpoint."

Step 4: Initial Deploy

Render began the build process automatically when the service was created. Priya and Marcus watched the build log:

==> Cloning from https://github.com/priya-okonkwo/acme-dashboard
==> Building Docker image
[+] Building 87.3s (12/12) FINISHED
 => [1/7] FROM python:3.11-alpine
 => [2/7] RUN apk add --no-cache gcc musl-dev libffi-dev
 => [3/7] WORKDIR /app
 => [4/7] COPY requirements.txt .
 => [5/7] RUN pip install --no-cache-dir -r requirements.txt
 => [6/7] COPY . .
 => [7/7] RUN adduser -D appuser && chown -R appuser:appuser /app
==> Deploy succeeded
==> Your service is live at: https://acme-corp-dashboard.onrender.com

87 seconds from code to live URL.

Marcus was quiet for a moment. He had spent a decade managing company servers, patching operating systems, configuring reverse proxies, dealing with SSL certificate renewals, and troubleshooting network issues. What Priya had just done in 87 seconds represented infrastructure work that would have been a multi-day project on the company's internal servers.

"That's something," he said.


What Marcus Noticed

As Priya walked through the deployed application, Marcus asked questions that revealed the depth of his IT instincts:

"How do we update it?" "Push to the main branch on GitHub. Render automatically detects the push and rebuilds. Takes about 90 seconds."

"So every time you push code, it deploys automatically. What if you push something broken?"

This led to a conversation about the GitHub Actions CI setup from the chapter. Priya had not set it up yet. Marcus requested it before the application was shared with Sandra — he wanted assurance that broken code could not reach production without tests running first.
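The CI setup Marcus requested can be as small as a single workflow file. A sketch — the file path, action versions, and Python version are assumptions:

```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest
```

With this in place, every push to main runs the tests; Render's auto-deploy settings can then be checked for an option to deploy only after CI checks pass, which was the assurance Marcus wanted.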

Priya added a minimal test suite that afternoon:

# tests/test_smoke.py
def test_home_page(client):
    response = client.get("/")
    assert response.status_code == 200

def test_dashboard_requires_login(client):
    response = client.get("/dashboard")
    assert response.status_code == 302  # redirect to login
    assert "/login" in response.headers["Location"]

def test_login_with_wrong_password(client):
    response = client.post("/login", data={"password": "wrong"})
    assert b"Incorrect password" in response.data

"Where do the logs go?" Priya showed him Render's log viewer. Live application logs, accessible in the browser.

"That's fine for now," Marcus said. "If traffic grows and we need to search logs or set up alerts, we should look at Logtail or Papertrail. But for internal use, this is adequate."

"What happens if Render has an outage?" "The dashboard is unavailable until Render recovers. For a internal metrics tool, I think that's acceptable. It's not a business-critical system — Sandra can use the weekly CSV report as a fallback."

Marcus agreed. He added it to his IT risk register under "Low Risk" with a note: "No business-critical operations depend on this service."


Sandra's First Access

Priya sent Sandra a brief message:

The dashboard is now running on a cloud server — no more "Priya's laptop must be on" requirement. New URL: https://acme-corp-dashboard.onrender.com

Same password as before. Please update your bookmark.

Sandra's response, six minutes later: "This is fantastic. Accessed it from my phone on the way to the parking garage. The Q3 numbers look good. Thank you for fixing that."

She did not comment on how it worked. From her perspective, the dashboard had simply started working reliably. That was the entire point.


Three Months Later

Marcus's IT team had experienced zero support incidents related to the application. The Render service had been up for 94 days with no outages. Priya had deployed four updates — the S3 integration (which eliminated the fragile rsync approach), a new regional comparison view Sandra had requested, a date range filter, and a minor UI improvement.

Each deployment followed the same workflow: local development → test → push to main → GitHub Actions ran tests → Render deployed automatically. Total deployment time: two minutes, unattended.

The application had become genuinely load-bearing for Sandra's workflow. She checked it before every regional sales call. She had started sharing the URL with regional sales managers (Priya added their IP addresses to an allow-list).

Marcus had quietly revised his sense of what his team could safely entrust to third-party infrastructure. "If I'd known deployment could be this straightforward three years ago," he told Priya, "I'd have pushed for it sooner."


Technical Summary

| Step | What Happened | Time |
| --- | --- | --- |
| Dockerfile creation | Priya wrote + Marcus reviewed | 45 minutes |
| .dockerignore and repository setup | Priya | 30 minutes |
| Render service configuration | Priya and Marcus together | 25 minutes |
| Initial deployment | Render automated build | 87 seconds |
| Tests and GitHub Actions setup | Priya | 2 hours |
| Data pipeline integration (rsync → S3) | Priya | 1 day |

**Total cost: $7.25/month** (Render Starter $7 + Render Disk $0.25)

Total uptime in first 90 days: 99.8% (one 4-hour maintenance window when Render updated their infrastructure)


Next: Case Study 38-2 — Maya deploys her client portal to production, and her first client accesses it from their phone.