Case Study 2: The 48-Hour Hackathon

Building Ambitious Projects Under Extreme Time Pressure with AI

The Scenario

HackPDX 2025, held in October at the Portland Convention Center, is a 48-hour hackathon with 320 participants in 80 teams. The theme is "Tools for Local Communities." Teams must build a working prototype, present a live demo, and explain their technical approach. Judges evaluate on impact, technical execution, and polish.

Team Lighthouse consists of four members: Marcus Rivera (backend developer, 3 years of experience), Jenna Park (full-stack developer, 2 years), Tomoko Sato (data scientist, 4 years), and Diego Herrera (design student, 1 year of coding experience). Marcus and Jenna have been practicing vibe coding for about six months. Tomoko has used AI assistants for data analysis but not for application development. Diego has never built a web application but has strong UX design instincts and has completed the first two parts of this book.

Their project, Beacon, is a community resource locator that aggregates data on food banks, free clinics, shelter availability, job training programs, and legal aid services in Portland. The data is scattered across dozens of websites, PDF directories, and social media pages. Beacon consolidates it into a searchable, map-based interface with real-time availability information.

This case study follows Team Lighthouse through the 48 hours, examining how vibe coding shaped their strategy, their mistakes, and their ultimate success.


Hours 0-2: Planning and Architecture

Most hackathon teams lose the first few hours arguing over what to build and which technologies to use. Team Lighthouse avoids this by spending the first 90 minutes on structured planning -- a practice directly inspired by the specification-driven approach from Chapter 10.

Marcus leads the planning session. They answer three questions on a whiteboard:

  1. Who is the user? Community navigators -- social workers, librarians, and volunteers at nonprofits who help people find resources. These navigators currently maintain personal spreadsheets and rely on word-of-mouth.
  2. What is the one thing the demo must show? A community navigator searches for "food assistance near 97214" and sees a map with food banks, their hours, current wait times (simulated), and walking directions.
  3. What is the minimum architecture? A FastAPI backend, a React frontend with Mapbox integration, a PostgreSQL database, and a data ingestion pipeline that populates the database from CSV and API sources.

The architecture discussion takes 20 minutes. They explicitly reject two ideas that would add complexity without improving the demo:

  • Mobile app: A responsive web app works on phones and avoids the need for app store deployment during a hackathon.
  • Real-time availability API: Real-time data from food banks and shelters does not exist in any accessible API. Instead, they will populate the database with realistic sample data and build the infrastructure to ingest real data later.

Jenna creates the project structure using a scaffolding prompt:

Create a full-stack project structure: FastAPI backend with SQLAlchemy
and PostgreSQL, React frontend with TypeScript and Tailwind CSS,
docker-compose for local development. Include a proper .gitignore,
requirements.txt, package.json, and environment variable templates.
The project is called "beacon" and is a community resource locator.

By the end of Hour 2, they have a running development environment, a clear plan, and assigned roles: Marcus takes the backend API and database, Jenna takes the frontend and map integration, Tomoko takes data ingestion and the search algorithm, and Diego takes UI design and creates the component specifications that Jenna will implement.

Hackathon Lesson: The 90 minutes spent planning saved far more time than it cost. Teams that dive into coding immediately often discover architectural mismatches 12 hours in, when it is too late to redesign. The structured planning approach from Chapter 10 scales down to hackathon timelines as well as it scales up to enterprise projects.


Hours 2-8: Foundation Sprint

The team works in parallel, each using AI assistance intensively.

Marcus (Backend): In six hours, Marcus builds the entire backend API. His approach mirrors the TaskFlow implementation walkthrough (Section 41.3), but compressed. He generates code in phases:

  1. Database models for resources (food banks, clinics, shelters), their locations (latitude, longitude, address), operating hours, and services offered. (30 minutes)
  2. API endpoints: search by location (radius query using PostGIS), filter by resource type and service category, and a detail endpoint for individual resources. (45 minutes)
  3. A data loading script that reads a CSV file and populates the database. (20 minutes)
  4. Authentication -- he considers skipping it, then implements a minimal API key system in 15 minutes because the demo needs to show that the API can be used by third parties.

Marcus's most effective technique is what he calls "prompt chaining with context": each prompt references the output of the previous one. His third prompt reads: "Given the Resource and Location models already defined, create a search endpoint that accepts latitude, longitude, radius_km, and optional resource_type parameters. Use PostGIS ST_DWithin for the geographic query. Return results sorted by distance."
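The query the prompt describes needs a live PostGIS database, but its logic can be sketched in pure Python. The haversine function below is a portable stand-in for ST_DWithin, and the Resource dataclass is a hypothetical simplification of the SQLAlchemy model, not the team's actual schema:

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt


@dataclass
class Resource:
    name: str
    resource_type: str
    lat: float
    lon: float


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def search(resources, lat, lon, radius_km, resource_type=None):
    """Filter by radius (and optional type), return results sorted by
    distance -- the same contract the ST_DWithin endpoint exposes."""
    hits = [
        (haversine_km(lat, lon, r.lat, r.lon), r)
        for r in resources
        if resource_type is None or r.resource_type == resource_type
    ]
    return [r for d, r in sorted(hits, key=lambda pair: pair[0]) if d <= radius_km]
```

In production the distance computation and filtering happen inside PostgreSQL with a spatial index; this sketch only shows the contract the endpoint honors.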

By Hour 8, the backend has 14 API endpoints, all manually tested using the FastAPI interactive docs.

Jenna (Frontend): Jenna builds the React frontend in parallel. She starts with the map component, which is the visual centerpiece of the demo:

Create a React component using Mapbox GL JS that displays resource
markers on a map of Portland, Oregon. Each marker should be colored
by resource type (green for food, blue for health, orange for shelter,
purple for other). Clicking a marker shows a popup with the resource
name, address, hours, and a "Get Directions" link. Include a search
bar that geocodes an address and re-centers the map. Use TypeScript
and Tailwind CSS.

The initial version works within an hour, but the marker clustering needs refinement -- when zoomed out, overlapping markers become unreadable. Jenna iterates twice, first adding Mapbox's built-in clustering, then adjusting the cluster radius based on user testing feedback from Diego.

She also builds the search results list panel, the resource detail page, and the filter sidebar. Each component takes 30-60 minutes with AI assistance. The integration between components -- passing search parameters from the sidebar to the map and the results list -- requires manual coordination that the AI does not handle well. Jenna spends about an hour wiring the state management correctly using React's Context API.

Tomoko (Data): Tomoko takes on the data challenge. Portland's community resources are scattered across multiple formats: a city government CSV file of registered nonprofits, a United Way 211 API that lists services by category, and several PDF directories published by local organizations.

She builds a data pipeline inspired by DataLens (Section 41.5), but simpler:

  1. A CSV connector that reads the city's nonprofit registry and extracts food, health, and shelter resources.
  2. An API connector that queries the 211 database (using their public API) and normalizes the results to match the database schema.
  3. A geocoding step that converts street addresses to latitude/longitude coordinates using the Census Bureau's free geocoding API.
  4. A deduplication step that identifies resources listed in both sources (matching on name and address similarity).

Tomoko's data pipeline processes 847 resources, of which 312 are in the target categories (food, health, shelter, job training, legal aid). She identifies 43 duplicates across sources. The final clean dataset has 269 unique resources with accurate geocoordinates.
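Tomoko's matching step can be approximated with the standard library alone. The difflib-based similarity and the 0.85 threshold below are illustrative choices, not the team's exact parameters:

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio in [0, 1] between two strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def is_duplicate(r1: dict, r2: dict, threshold: float = 0.85) -> bool:
    """Flag two records as the same resource when both the name and
    the street address are highly similar."""
    return (
        similarity(r1["name"], r2["name"]) >= threshold
        and similarity(r1["address"], r2["address"]) >= threshold
    )


def dedupe(records: list) -> list:
    """Keep the first occurrence of each resource, drop later matches."""
    unique = []
    for record in records:
        if not any(is_duplicate(record, kept) for kept in unique):
            unique.append(record)
    return unique
```

The pairwise scan is quadratic, which is fine for a few hundred records; a real pipeline would block on, say, zip code before comparing.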

Diego (Design): Diego, the least experienced coder, contributes through design specifications rather than code. He sketches wireframes on paper, then writes detailed component specifications that Jenna uses as prompts:

The search results panel should show results as cards in a vertical
scrollable list. Each card shows: resource name (bold, 16px), resource
type as a colored badge, address (gray text), distance from search
location ("0.3 mi"), and current status (Open/Closed based on today's
hours, green/red text). Cards should have a subtle hover effect and
clicking a card should highlight the corresponding marker on the map.

Diego also designs the color system, the loading states, and the empty state ("No resources found within 5 miles. Try expanding your search radius."). His contribution demonstrates that vibe coding opens software development to people with design skills who lack traditional programming expertise. Diego writes zero lines of code directly, but his specifications are precise enough that the AI and Jenna can implement them faithfully.


Hours 8-12: The First Crisis

At Hour 8, the team attempts their first integration. Marcus's API is running, Jenna's frontend is running, and they connect them. Three problems surface immediately.

Problem 1: CORS. The frontend running on localhost:3000 cannot call the backend on localhost:8000 because Marcus has not configured CORS. The fix takes 5 minutes with a targeted prompt: "Add CORS middleware to this FastAPI app allowing requests from localhost:3000 with GET and POST methods." This is a common integration issue, but one that wastes valuable minutes if you have never encountered it.
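The resulting fix is a standard FastAPI configuration block, sketched here as what that prompt would plausibly generate:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(title="Beacon API")

# Allow the local React dev server to call the API during development.
# A production deployment would list the real frontend origin instead.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```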

Problem 2: Data format mismatch. The API returns operating hours as a JSON object ({"monday": {"open": "9:00", "close": "17:00"}, ...}), but the frontend expects a simple string ("Mon-Fri 9am-5pm"). Rather than changing the API or the frontend, they add a formatting function on the frontend that converts the structured data to a display string. Jenna generates the formatting function with AI in 10 minutes.
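The formatter Jenna generated might look something like the sketch below. Collapsing runs of identical weekday hours into "Mon-Fri" spans is the interesting part; the exact output format is an assumption:

```python
DAYS = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]
ABBREV = {"monday": "Mon", "tuesday": "Tue", "wednesday": "Wed",
          "thursday": "Thu", "friday": "Fri", "saturday": "Sat", "sunday": "Sun"}


def _to_12h(t: str) -> str:
    """'9:00' -> '9am', '17:00' -> '5pm', '17:30' -> '5:30pm'."""
    hour, minute = (int(part) for part in t.split(":"))
    suffix = "am" if hour < 12 else "pm"
    hour12 = hour % 12 or 12
    return f"{hour12}{suffix}" if minute == 0 else f"{hour12}:{minute:02d}{suffix}"


def format_hours(hours: dict) -> str:
    """Collapse {'monday': {'open': '9:00', 'close': '17:00'}, ...}
    into a display string like 'Mon-Fri 9am-5pm'."""
    runs = []  # each run: (first_day, last_day, open_time, close_time)
    for day in DAYS:
        slot = hours.get(day)
        if slot is None:
            continue
        key = (slot["open"], slot["close"])
        # Extend the current run if this day is consecutive with the same hours.
        if runs and runs[-1][2:] == key and DAYS.index(runs[-1][1]) == DAYS.index(day) - 1:
            runs[-1] = (runs[-1][0], day) + key
        else:
            runs.append((day, day) + key)
    parts = []
    for first, last, open_, close in runs:
        span = ABBREV[first] if first == last else f"{ABBREV[first]}-{ABBREV[last]}"
        parts.append(f"{span} {_to_12h(open_)}-{_to_12h(close)}")
    return ", ".join(parts)
```

Keeping the API's structured format and converting at the display layer was the right call: the structured data later supports the open/closed status badge and the seasonal-hours question from the judges.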

Problem 3: Slow geographic queries. The search endpoint takes 3-4 seconds to return results because the PostGIS query is scanning the entire table. Marcus adds a spatial index, reducing query time to under 100 milliseconds. He knows about spatial indexes from Chapter 28 (Performance Optimization) but had forgotten to create one in the initial schema.

Integration Lesson: Every integration issue the team encountered was a mismatch between components built in isolation. The CORS issue was a deployment configuration gap. The data format mismatch was a schema disagreement. The performance issue was a missing optimization. These are precisely instances of the "integration is the hard part" observation from Section 41.10. Individual components generated by AI work well in isolation; making them work together requires human attention to the boundaries.

By Hour 12 (midnight), the core application works: a user can search for resources by location, see them on a map, filter by type, and view details. The team takes a mandatory four-hour rest break (the hackathon rules require it).


Hours 16-30: Feature Development and Polish

After rest, the team works on features that distinguish Beacon from a simple map of pins.

Intelligent Search (Tomoko, Hours 16-22): Tomoko builds a search system that understands natural-language queries. When a user types "I need food for my family near downtown," the system should match food banks and food pantries near the downtown area, not just literal keyword matches.

She implements this using a lightweight approach: TF-IDF vectorization of resource descriptions combined with category matching. She prompts the AI:

Build a search function that takes a natural-language query and
returns ranked resources. Use scikit-learn's TfidfVectorizer on
resource descriptions and service lists. Also extract intent keywords
(food, health, shelter, jobs, legal) from the query and boost results
matching the detected category. Return results sorted by a combined
score of text similarity and geographic proximity.

The AI generates a clean implementation that Tomoko extends with synonym expansion (e.g., "food" also matches "meals," "groceries," "pantry"). The search quality is noticeably better than simple keyword matching -- a key differentiator in the demo.
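The production version used scikit-learn's TfidfVectorizer; the dependency-free sketch below substitutes a plain term-overlap score so the shape of the algorithm stays visible. The keyword sets and the boost weight are illustrative, not the team's tuned values:

```python
from typing import Optional

INTENT_KEYWORDS = {
    "food": {"food", "meals", "groceries", "pantry", "hungry"},
    "health": {"health", "clinic", "doctor", "medical", "dental"},
    "shelter": {"shelter", "housing", "homeless", "bed"},
}


def _tokens(text: str) -> set:
    return set(text.lower().split())


def detect_intent(query: str) -> Optional[str]:
    """Return the first category whose keywords overlap the query."""
    words = _tokens(query)
    for category, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return category
    return None


def rank(query: str, resources: list, boost: float = 0.5) -> list:
    """Score by term overlap with the description (TF-IDF cosine in the
    real implementation), boosted when the resource's category matches
    the detected intent. Returns resources in descending score order."""
    words = _tokens(query)
    intent = detect_intent(query)
    scored = []
    for r in resources:
        overlap = len(words & _tokens(r["description"])) / max(len(words), 1)
        score = overlap + (boost if r["category"] == intent else 0.0)
        scored.append((score, r))
    return [r for score, r in sorted(scored, key=lambda pair: -pair[0])]
```

Tomoko's synonym expansion slots in naturally: expanding the query token set before scoring makes "food" also hit "meals," "groceries," and "pantry."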

Accessibility Features (Diego, Hours 16-24): Diego focuses on accessibility, an often-neglected aspect that hackathon judges notice. He specifies:

  • High-contrast mode for visually impaired users
  • Screen reader-friendly labels on all map markers and interactive elements
  • Keyboard navigation for the entire interface (Tab through results, Enter to select, Escape to close)
  • A text-only mode that presents all information as a simple list without the map, for users with very slow connections or screen readers

Diego writes the accessibility specifications and Jenna implements them with AI assistance. The text-only mode takes 45 minutes to implement. The keyboard navigation takes two hours because it requires careful focus management that the AI does not get right on the first attempt.

Favorites and Sharing (Jenna, Hours 22-28): Jenna adds the ability for navigators to save favorite resources and share curated lists with clients. A community navigator can search for resources relevant to a specific client, save them to a list, and send a shareable link that the client can open on their phone to see the resources on a map.

This feature requires three pieces:

  • A new database model for saved lists (Marcus adds this in 15 minutes)
  • Frontend components for creating, viewing, and sharing lists (Jenna, 3 hours)
  • A public view that displays a shared list without requiring login (Jenna, 1 hour)

The shareable link feature becomes the centerpiece of the demo's narrative: "A social worker finds five resources for a homeless veteran, saves them to a list, and texts the link. The veteran opens it on a library computer and sees all five resources on a map with walking directions."

Data Quality Dashboard (Tomoko, Hours 24-28): Tomoko builds a simple dashboard showing data quality metrics: how many resources are in the database, when the data was last refreshed, how many resources have verified operating hours versus unverified ones, and geographic coverage (a heat map showing areas with many resources versus resource deserts).

This feature demonstrates that the team has thought about the ongoing viability of the product, not just the hackathon demo. The quality dashboard also reveals that certain neighborhoods have very few resources listed -- a finding with real social significance.


Hours 30-36: The Second Crisis

At Hour 30, the team runs through the demo script for the first time. The demo crashes.

The crash occurs when the search endpoint receives a query containing special characters (the test query was "children's health clinic"). The apostrophe in "children's" breaks a raw SQL query that Marcus had written for a full-text search feature added at Hour 26 -- and in doing so exposes a SQL injection vulnerability.

This is embarrassing and instructive. Marcus had used parameterized queries for every other endpoint (the AI generated them correctly), but for the full-text search he had written a manual query with string concatenation:

# WRONG: Vulnerable to SQL injection
query = f"SELECT * FROM resources WHERE description LIKE '%{search_term}%'"

The fix is straightforward -- use SQLAlchemy's parameterized query interface -- but it prompts a team-wide code review. They spend two hours reviewing every database query for similar issues, finding one more case of string concatenation in Tomoko's deduplication script (not user-facing, but still bad practice).
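The corrected pattern is shown below with the standard library's sqlite3 for portability; the team used SQLAlchemy's bound parameters, but the principle is identical. The placeholder passes the value out-of-band, so the driver handles quoting and an apostrophe is just data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resources (name TEXT, description TEXT)")
conn.execute("INSERT INTO resources VALUES (?, ?)",
             ("Kids' Clinic", "children's health clinic on SE Division"))

search_term = "children's health"

# RIGHT: the ? placeholder is filled by the driver, never by string
# concatenation, so the apostrophe cannot break out of the string literal.
rows = conn.execute(
    "SELECT name FROM resources WHERE description LIKE ?",
    (f"%{search_term}%",),
).fetchall()
```

With SQLAlchemy the equivalent is a `text()` query with a named bound parameter (`:term`), or simply staying inside the ORM's query interface, which parameterizes everything by default.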

Security Lesson: AI-generated code is not immune to security vulnerabilities, especially when developers add manual code alongside AI-generated code. The manually written query used string concatenation because Marcus was in a rush and bypassed the ORM. The AI-generated endpoints used parameterized queries correctly. This is a case where the AI was more disciplined than the human -- a reversal of the common concern. The lesson is to apply the same security review to human-written code as to AI-generated code. Chapter 27's security-first principles apply regardless of who wrote the code.


Hours 36-44: Integration Testing and Hardening

The remaining hours focus on stability. The team adopts a disciplined approach: no new features, only bug fixes and polish.

Marcus writes integration tests for the API endpoints, focusing on error handling:

  • What happens when the search location is outside Portland? (Return empty results with a helpful message, not an error.)
  • What happens when the database is empty? (Return an empty list, not a 500 error.)
  • What happens when the Mapbox API key is invalid? (The frontend should show a fallback message, not a blank page.)

Jenna fixes four UI bugs:

  • The filter sidebar does not reset when the search location changes.
  • The resource detail modal does not close when clicking outside it on mobile.
  • The loading spinner never stops if the API returns an error.
  • The map does not resize correctly when the browser window is resized.

Tomoko reprocesses the data pipeline one final time, fixing three geocoding errors (resources placed in the wrong location because the Census Bureau's geocoder returned a centroid rather than the exact address).

Diego prepares the demo script and the presentation slides. He rehearses the narrative: start with the problem (scattered resources, frustrated navigators), show the solution (search, map, details), demonstrate the key feature (shareable list for a specific client), and end with the data quality dashboard showing coverage.


Hours 44-48: Final Push and Demo

The last four hours are presentation preparation. The team deploys to a free-tier cloud service (Render) so the demo runs on a public URL, not localhost. The deployment takes 90 minutes -- longer than expected because the PostGIS extension requires a specific database configuration that the default Render PostgreSQL instance does not support. Marcus switches to a Supabase PostgreSQL instance (which includes PostGIS) and redeploys.

At Hour 47, they do a full dress rehearsal. The demo runs smoothly. Diego presents while Jenna drives the application. Marcus and Tomoko sit in the audience ready to answer technical questions.

The presentation runs four minutes (the limit is five). The judges ask three questions:

  1. "How would you handle resources that change their hours seasonally?" (Marcus: "The operating hours model supports effective date ranges. A resource can have different hours for summer and winter, and the API returns the currently active hours.")
  2. "What is your data update strategy?" (Tomoko: "The pipeline can be scheduled to run daily. We built connectors for two sources and the architecture supports adding more. We also plan to add a community contribution feature where navigators can update information they know to be incorrect.")
  3. "How did a four-person team build this in 48 hours?" (Jenna: "AI-assisted development. We used structured prompts, generated code iteratively, and spent our human judgment on architecture decisions, integration testing, and design quality rather than on writing boilerplate.")
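Marcus's answer about seasonal hours describes a lookup over effective date ranges, which can be sketched as follows (the field names are assumptions, not the team's actual schema; more specific ranges are listed before the open-ended default):

```python
from datetime import date
from typing import Optional


def active_hours(hour_sets: list, on: date) -> Optional[str]:
    """Return the hours whose [effective_from, effective_to] range
    contains the given date; None at either end means open-ended.
    Entries are checked in order, so specific seasonal ranges should
    precede the open-ended default."""
    for hs in hour_sets:
        starts_ok = hs["effective_from"] is None or hs["effective_from"] <= on
        ends_ok = hs["effective_to"] is None or on <= hs["effective_to"]
        if starts_ok and ends_ok:
            return hs["hours"]
    return None
```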

The Result

Team Lighthouse wins second place overall and first place in the "Community Impact" category. The first-place team built a volunteer coordination platform with more polish but less technical depth.

After the hackathon, Team Lighthouse receives interest from two local nonprofits that want to pilot Beacon with their community navigators. Marcus and Jenna continue development part-time, and three months later, the City of Portland's social services department expresses interest in funding a full deployment.


Technical Retrospective

After the hackathon, the team conducted a retrospective. Their key findings:

What worked:

  • Structured planning (Hours 0-2) prevented the team from going in four different directions. The three planning questions ("Who is the user? What must the demo show? What is the minimum architecture?") kept everyone focused.
  • Parallel development with clear interfaces allowed all four members to work simultaneously without blocking each other. The API contract defined in Hour 1 (Marcus publishes the endpoint specifications, Jenna codes to them) was essential.
  • AI assistance for boilerplate and scaffolding saved enormous amounts of time. The team estimated that without AI, they would have completed about 40% of what they shipped.
  • Diego's design-as-specification approach demonstrated that non-coders can contribute meaningfully to vibe coding teams by writing precise descriptions that AI and developers can implement.
  • Explicit "no new features" cutoff at Hour 36 prevented the common hackathon trap of adding features until the last minute and shipping something unstable.

What went wrong:

  • The SQL injection vulnerability was caught by luck during a rehearsal, not by systematic testing. In a real product, this would be a serious security incident.
  • The deployment took too long because the team had not tested the deployment configuration earlier. A practice deployment at Hour 24 would have revealed the PostGIS compatibility issue with time to spare.
  • Tomoko's data pipeline was undertested. The geocoding errors were caught manually, not by automated checks. If the team had implemented even basic quality checks (like the DataLens quality framework), these errors would have been flagged automatically.
  • The team did not write enough integration tests. Unit tests covered individual functions, but the integration between the frontend search bar, the API, and the map was tested only manually. An automated integration test would have caught the CORS issue before the first integration attempt.

Metrics:

  Metric                                 Value
  -------------------------------------  ------------------------------------------
  Total development time                 48 hours (4 people, ~40 active hours each)
  Lines of Python code (backend)         ~2,100
  Lines of TypeScript code (frontend)    ~3,400
  API endpoints                          14
  Database tables                        7
  Unit tests                             34
  Integration tests                      11
  Resources in database                  269
  Data sources integrated                2 (CSV, API)
  AI-generated code (estimated)          ~65% of total
  Estimated time savings from AI         60% (team estimate)

Lessons for Readers

The hackathon setting amplifies certain aspects of vibe coding that are always present but easier to overlook in longer projects.

1. Planning is more valuable under time pressure, not less. When you have 48 hours, spending 90 minutes on planning feels expensive. But the team that spends 48 hours building the wrong thing finishes with nothing usable. Structured planning -- defining the user, the demo goal, and the minimum architecture -- provides the focus that turns limited time into shipped software.

2. AI assistance shifts the bottleneck from implementation to integration. In the pre-AI era, a 48-hour hackathon project was bottlenecked by how fast the team could write code. With AI assistance, the team can generate components faster than they can integrate them. The bottleneck shifts to making components work together -- data formats, API contracts, state management, error handling across boundaries. This is the same pattern observed in the capstone projects (Section 41.10).

3. Non-coders can contribute through precise specifications. Diego wrote zero lines of code but significantly improved the product through detailed component specifications, accessibility requirements, and the demo narrative. Vibe coding lowers the barrier to contribution because specifications can be written in natural language and translated to code by AI and experienced developers.

4. Security discipline must be maintained under pressure. The SQL injection vulnerability occurred precisely because Marcus was rushing. Time pressure is not an excuse for skipping parameterized queries, input validation, or authentication. In fact, time pressure makes security lapses more likely, which means teams should be more vigilant, not less, when working fast.

5. Deploy early, test the deployment. The PostGIS deployment issue consumed 90 precious minutes at the worst possible time. A practice deployment at the halfway point would have cost 30 minutes and saved 60 minutes of crisis management. This principle applies to all software development: deployment is not the last step, it is a continuous activity.

6. The "no new features" cutoff is essential. The team's decision to stop adding features at Hour 36 and focus on stability, testing, and polish was one of their best decisions. Hackathon judges (like production users) prefer a polished, working application over an ambitious, crashing one. The same principle applies to product development: shipping less, well, beats shipping more, poorly.


This case study is a composite based on real hackathon experiences and team dynamics. The project concept, technical details, and timeline reflect authentic patterns observed in AI-assisted hackathon projects. Names and specific details have been fictionalized.